path (stringlengths 7 to 265) | concatenated_notebook (stringlengths 46 to 17M)
---|---|
class-week-09/TensorFlow Transform (TFT).ipynb | ###Markdown
Installing libraries
###Code
!pip install tensorflow-transform
###Output
Collecting tensorflow-transform
[?25l Downloading https://files.pythonhosted.org/packages/2d/bd/8ba8c1310cd741e0b83d8a064645a55c557df5a2f6b4beb11cd3a37457ed/tensorflow-transform-0.21.2.tar.gz (241kB)
[K |████████████████████████████████| 245kB 4.8MB/s
[?25hRequirement already satisfied: absl-py<0.9,>=0.7 in /usr/local/lib/python2.7/dist-packages (from tensorflow-transform) (0.7.1)
Collecting apache-beam[gcp]<3,>=2.17
[?25l Downloading https://files.pythonhosted.org/packages/33/02/539f40be7b4d2ba338890cc7ca18fb55617199834070856a09b47e40cabe/apache_beam-2.20.0-cp27-cp27mu-manylinux1_x86_64.whl (3.4MB)
[K |████████████████████████████████| 3.4MB 15.1MB/s
[?25hRequirement already satisfied: numpy<2,>=1.16 in /usr/local/lib/python2.7/dist-packages (from tensorflow-transform) (1.16.4)
Requirement already satisfied: protobuf<4,>=3.7 in /usr/local/lib/python2.7/dist-packages (from tensorflow-transform) (3.8.0)
Requirement already satisfied: pydot<2,>=1.2 in /usr/local/lib/python2.7/dist-packages (from tensorflow-transform) (1.3.0)
Requirement already satisfied: six<2,>=1.12 in /usr/local/lib/python2.7/dist-packages (from tensorflow-transform) (1.12.0)
Collecting tensorflow-metadata<0.22,>=0.21
Downloading https://files.pythonhosted.org/packages/57/12/213dc5982e45283591ee0cb535b08ff603200ba84643bbea0aaa2109ed7c/tensorflow_metadata-0.21.2-py2.py3-none-any.whl
Requirement already satisfied: tensorflow<2.2,>=1.15 in /usr/local/lib/python2.7/dist-packages (from tensorflow-transform) (2.1.0)
Collecting tfx-bsl<0.22,>=0.21.3
[?25l Downloading https://files.pythonhosted.org/packages/e6/a8/45fc7c95154caa82f15c047d1312f9a8e9e29392f9137c22f972359996e8/tfx_bsl-0.21.4-cp27-cp27mu-manylinux2010_x86_64.whl (1.9MB)
[K |████████████████████████████████| 1.9MB 59.0MB/s
[?25hRequirement already satisfied: enum34; python_version < "3.4" in /usr/local/lib/python2.7/dist-packages (from absl-py<0.9,>=0.7->tensorflow-transform) (1.1.6)
Collecting pyarrow<0.17.0,>=0.15.1; python_version >= "3.0" or platform_system != "Windows"
[?25l Downloading https://files.pythonhosted.org/packages/21/c7/20a5dab16391f7a31e1daf02b639c11210378a92ce66dcf07f529a491951/pyarrow-0.16.0-cp27-cp27mu-manylinux2010_x86_64.whl (20.5MB)
[K |████████████████████████████████| 20.5MB 1.4MB/s
[?25hCollecting fastavro<0.22,>=0.21.4
[?25l Downloading https://files.pythonhosted.org/packages/15/e3/5956c75f68906b119191ef30d9acff661b422cf918a29a03ee0c3ba774be/fastavro-0.21.24-cp27-cp27mu-manylinux1_x86_64.whl (1.0MB)
[K |████████████████████████████████| 1.0MB 54.2MB/s
[?25hRequirement already satisfied: future<1.0.0,>=0.16.0 in /usr/local/lib/python2.7/dist-packages (from apache-beam[gcp]<3,>=2.17->tensorflow-transform) (0.16.0)
Collecting avro<1.10.0,>=1.8.1; python_version < "3.0"
[?25l Downloading https://files.pythonhosted.org/packages/d7/0b/592692ed26de33f35bf596780e6adb85c47e3e58061369bbc99125b902ec/avro-1.9.2.tar.gz (49kB)
[K |████████████████████████████████| 51kB 8.6MB/s
[?25hRequirement already satisfied: pytz>=2018.3 in /usr/local/lib/python2.7/dist-packages (from apache-beam[gcp]<3,>=2.17->tensorflow-transform) (2018.9)
Requirement already satisfied: httplib2<=0.12.0,>=0.8 in /usr/local/lib/python2.7/dist-packages (from apache-beam[gcp]<3,>=2.17->tensorflow-transform) (0.11.3)
Requirement already satisfied: futures<4.0.0,>=3.2.0; python_version < "3.0" in /usr/local/lib/python2.7/dist-packages (from apache-beam[gcp]<3,>=2.17->tensorflow-transform) (3.2.0)
Requirement already satisfied: funcsigs<2,>=1.0.2; python_version < "3.0" in /usr/local/lib/python2.7/dist-packages (from apache-beam[gcp]<3,>=2.17->tensorflow-transform) (1.0.2)
Requirement already satisfied: grpcio<2,>=1.12.1 in /usr/local/lib/python2.7/dist-packages (from apache-beam[gcp]<3,>=2.17->tensorflow-transform) (1.15.0)
Collecting python-dateutil<3,>=2.8.0
[?25l Downloading https://files.pythonhosted.org/packages/d4/70/d60450c3dd48ef87586924207ae8907090de0b306af2bce5d134d78615cb/python_dateutil-2.8.1-py2.py3-none-any.whl (227kB)
[K |████████████████████████████████| 235kB 53.9MB/s
[?25hCollecting pyvcf<0.7.0,>=0.6.8; python_version < "3.0"
Downloading https://files.pythonhosted.org/packages/20/b6/36bfb1760f6983788d916096193fc14c83cce512c7787c93380e09458c09/PyVCF-0.6.8.tar.gz
Collecting typing-extensions<3.8.0,>=3.7.0
Downloading https://files.pythonhosted.org/packages/55/17/3f65ede2450a51ab7b8c6f9f4aa1ba07cddd980422e2409ea5d68ccdf38d/typing_extensions-3.7.4.2-py2-none-any.whl
Requirement already satisfied: crcmod<2.0,>=1.7 in /usr/local/lib/python2.7/dist-packages (from apache-beam[gcp]<3,>=2.17->tensorflow-transform) (1.7)
Collecting hdfs<3.0.0,>=2.1.0
[?25l Downloading https://files.pythonhosted.org/packages/82/39/2c0879b1bcfd1f6ad078eb210d09dbce21072386a3997074ee91e60ddc5a/hdfs-2.5.8.tar.gz (41kB)
[K |████████████████████████████████| 51kB 7.4MB/s
[?25hRequirement already satisfied: typing<3.8.0,>=3.7.0; python_version < "3.5.3" in /usr/local/lib/python2.7/dist-packages (from apache-beam[gcp]<3,>=2.17->tensorflow-transform) (3.7.4)
Requirement already satisfied: mock<3.0.0,>=1.0.1 in /usr/local/lib/python2.7/dist-packages (from apache-beam[gcp]<3,>=2.17->tensorflow-transform) (2.0.0)
Collecting dill<0.3.2,>=0.3.1.1
[?25l Downloading https://files.pythonhosted.org/packages/c7/11/345f3173809cea7f1a193bfbf02403fff250a3360e0e118a1630985e547d/dill-0.3.1.1.tar.gz (151kB)
[K |████████████████████████████████| 153kB 55.4MB/s
[?25hRequirement already satisfied: pymongo<4.0.0,>=3.8.0 in /usr/local/lib/python2.7/dist-packages (from apache-beam[gcp]<3,>=2.17->tensorflow-transform) (3.8.0)
Collecting oauth2client<4,>=2.0.1
[?25l Downloading https://files.pythonhosted.org/packages/c0/7b/bc893e35d6ca46a72faa4b9eaac25c687ce60e1fbe978993fe2de1b0ff0d/oauth2client-3.0.0.tar.gz (77kB)
[K |████████████████████████████████| 81kB 12.4MB/s
[?25hCollecting google-cloud-pubsub<1.1.0,>=0.39.0; extra == "gcp"
[?25l Downloading https://files.pythonhosted.org/packages/d3/91/07a82945a7396ea34debafd476724bb5fc267c292790fdf2138c693f95c5/google_cloud_pubsub-1.0.2-py2.py3-none-any.whl (118kB)
[K |████████████████████████████████| 122kB 47.7MB/s
[?25hCollecting google-cloud-dlp<=0.13.0,>=0.12.0; extra == "gcp"
[?25l Downloading https://files.pythonhosted.org/packages/24/65/c74f730d5c08affdb056250e601f77c54c0f7c13dfd1c865e02f98b4e7b4/google_cloud_dlp-0.13.0-py2.py3-none-any.whl (151kB)
[K |████████████████████████████████| 153kB 63.3MB/s
[?25hRequirement already satisfied: google-cloud-bigquery<=1.24.0,>=1.6.0; extra == "gcp" in /usr/local/lib/python2.7/dist-packages (from apache-beam[gcp]<3,>=2.17->tensorflow-transform) (1.14.0)
Collecting proto-google-cloud-datastore-v1<=0.90.4,>=0.90.0; python_version < "3.0" and extra == "gcp"
Downloading https://files.pythonhosted.org/packages/2a/1f/4124f15e1132a2eeeaf616d825990bb1d395b4c2c37362654ea5cd89bb42/proto-google-cloud-datastore-v1-0.90.4.tar.gz
Collecting google-cloud-language<2,>=1.3.0; extra == "gcp"
[?25l Downloading https://files.pythonhosted.org/packages/ba/b8/965a97ba60287910d342623da1da615254bded3e0965728cf7fc6339b7c8/google_cloud_language-1.3.0-py2.py3-none-any.whl (83kB)
[K |████████████████████████████████| 92kB 12.7MB/s
[?25hCollecting google-cloud-videointelligence<1.14.0,>=1.8.0; extra == "gcp"
[?25l Downloading https://files.pythonhosted.org/packages/bb/bd/9945e21aace32bc45a17b65944b5cd20efb7370985d8984425831a47ca22/google_cloud_videointelligence-1.13.0-py2.py3-none-any.whl (177kB)
[K |████████████████████████████████| 184kB 48.3MB/s
[?25hRequirement already satisfied: cachetools<4,>=3.1.0; extra == "gcp" in /usr/local/lib/python2.7/dist-packages (from apache-beam[gcp]<3,>=2.17->tensorflow-transform) (3.1.1)
Collecting google-apitools<0.5.29,>=0.5.28; extra == "gcp"
[?25l Downloading https://files.pythonhosted.org/packages/07/5e/3e04cb66f5ced9267a854184bb09863d85d199646ea8480fee26b4313a00/google_apitools-0.5.28-py2-none-any.whl (134kB)
[K |████████████████████████████████| 143kB 63.2MB/s
[?25hCollecting google-cloud-bigtable<1.1.0,>=0.31.1; extra == "gcp"
[?25l Downloading https://files.pythonhosted.org/packages/95/af/0ef7d097a1d5ad0c843867600e86de915e8ab8864740f49a4636cfb51af6/google_cloud_bigtable-1.0.0-py2.py3-none-any.whl (232kB)
[K |████████████████████████████████| 235kB 59.4MB/s
[?25hCollecting grpcio-gcp<1,>=0.2.2; extra == "gcp"
Downloading https://files.pythonhosted.org/packages/ba/83/1f1095815be0de19102df41e250ebbd7dae97d7d14e22c18da07ed5ed9d4/grpcio_gcp-0.2.2-py2.py3-none-any.whl
Collecting googledatastore<7.1,>=7.0.1; python_version < "3.0" and extra == "gcp"
Downloading https://files.pythonhosted.org/packages/3a/cf/5d90efdb2a513d5c02ba0675eefb246250b67a6ec81de610ac94d47cf1ca/googledatastore-7.0.2.tar.gz
Requirement already satisfied: google-cloud-core<2,>=0.28.1; extra == "gcp" in /usr/local/lib/python2.7/dist-packages (from apache-beam[gcp]<3,>=2.17->tensorflow-transform) (1.0.2)
Collecting google-cloud-vision<0.43.0,>=0.38.0; extra == "gcp"
[?25l Downloading https://files.pythonhosted.org/packages/eb/23/6d5a728333ce568fb484d0d7edd0b7c04b16cf6325af31d957eb51ed077d/google_cloud_vision-0.42.0-py2.py3-none-any.whl (435kB)
[K |████████████████████████████████| 440kB 51.4MB/s
[?25hCollecting google-cloud-datastore<1.8.0,>=1.7.1; extra == "gcp"
[?25l Downloading https://files.pythonhosted.org/packages/d0/aa/29cbcf8cf7d08ce2d55b9dce858f7c632b434cb6451bed17cb4275804217/google_cloud_datastore-1.7.4-py2.py3-none-any.whl (82kB)
[K |████████████████████████████████| 92kB 12.0MB/s
[?25hCollecting google-cloud-spanner<1.14.0,>=1.13.0; extra == "gcp"
[?25l Downloading https://files.pythonhosted.org/packages/9e/39/c5e470bf59ce15716490bea1945e2c03b4f08f2153285f19dc6f9337b9e9/google_cloud_spanner-1.13.0-py2.py3-none-any.whl (212kB)
[K |████████████████████████████████| 215kB 59.0MB/s
[?25hRequirement already satisfied: setuptools in /usr/local/lib/python2.7/dist-packages (from protobuf<4,>=3.7->tensorflow-transform) (44.1.0)
Requirement already satisfied: pyparsing>=2.1.4 in /usr/local/lib/python2.7/dist-packages (from pydot<2,>=1.2->tensorflow-transform) (2.4.0)
Requirement already satisfied: googleapis-common-protos in /usr/local/lib/python2.7/dist-packages (from tensorflow-metadata<0.22,>=0.21->tensorflow-transform) (1.6.0)
Requirement already satisfied: gast==0.2.2 in /usr/local/lib/python2.7/dist-packages (from tensorflow<2.2,>=1.15->tensorflow-transform) (0.2.2)
Requirement already satisfied: scipy==1.2.2; python_version < "3" in /usr/local/lib/python2.7/dist-packages (from tensorflow<2.2,>=1.15->tensorflow-transform) (1.2.2)
Requirement already satisfied: wheel; python_version < "3" in /usr/local/lib/python2.7/dist-packages (from tensorflow<2.2,>=1.15->tensorflow-transform) (0.34.2)
Requirement already satisfied: wrapt>=1.11.1 in /usr/local/lib/python2.7/dist-packages (from tensorflow<2.2,>=1.15->tensorflow-transform) (1.11.2)
Requirement already satisfied: keras-preprocessing>=1.1.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow<2.2,>=1.15->tensorflow-transform) (1.1.0)
Requirement already satisfied: backports.weakref>=1.0rc1; python_version < "3.4" in /usr/local/lib/python2.7/dist-packages (from tensorflow<2.2,>=1.15->tensorflow-transform) (1.0.post1)
Collecting tensorflow-estimator<2.2.0,>=2.1.0rc0
[?25l Downloading https://files.pythonhosted.org/packages/18/90/b77c328a1304437ab1310b463e533fa7689f4bfc41549593056d812fab8e/tensorflow_estimator-2.1.0-py2.py3-none-any.whl (448kB)
[K |████████████████████████████████| 450kB 52.4MB/s
[?25hRequirement already satisfied: keras-applications>=1.0.8 in /usr/local/lib/python2.7/dist-packages (from tensorflow<2.2,>=1.15->tensorflow-transform) (1.0.8)
Requirement already satisfied: termcolor>=1.1.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow<2.2,>=1.15->tensorflow-transform) (1.1.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /usr/local/lib/python2.7/dist-packages (from tensorflow<2.2,>=1.15->tensorflow-transform) (2.3.2)
Requirement already satisfied: functools32>=3.2.3; python_version < "3" in /usr/local/lib/python2.7/dist-packages (from tensorflow<2.2,>=1.15->tensorflow-transform) (3.2.3.post2)
Requirement already satisfied: tensorboard<2.2.0,>=2.1.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow<2.2,>=1.15->tensorflow-transform) (2.1.0)
Requirement already satisfied: google-pasta>=0.1.6 in /usr/local/lib/python2.7/dist-packages (from tensorflow<2.2,>=1.15->tensorflow-transform) (0.1.7)
Requirement already satisfied: astor>=0.6.0 in /usr/local/lib/python2.7/dist-packages (from tensorflow<2.2,>=1.15->tensorflow-transform) (0.8.0)
Collecting google-api-python-client<2,>=1.7.11
[?25l Downloading https://files.pythonhosted.org/packages/bb/7c/39028024ad733d0a6b0d5a4a5715c675f6940b7441c356336d0e97382f06/google-api-python-client-1.8.3.tar.gz (141kB)
[K |████████████████████████████████| 143kB 55.4MB/s
[?25hCollecting tensorflow-serving-api<3,>=1.15
Downloading https://files.pythonhosted.org/packages/6d/0b/be364dc6271a633629174fc02d36b2837cc802250d6a0afd96f8e7f2fae6/tensorflow_serving_api-2.1.0-py2.py3-none-any.whl
Collecting docopt
Downloading https://files.pythonhosted.org/packages/a2/55/8f8cab2afd404cf578136ef2cc5dfb50baa1761b68c9da1fb1e4eed343c9/docopt-0.6.2.tar.gz
Requirement already satisfied: requests>=2.7.0 in /usr/local/lib/python2.7/dist-packages (from hdfs<3.0.0,>=2.1.0->apache-beam[gcp]<3,>=2.17->tensorflow-transform) (2.23.0)
Requirement already satisfied: pbr>=0.11 in /usr/local/lib/python2.7/dist-packages (from mock<3.0.0,>=1.0.1->apache-beam[gcp]<3,>=2.17->tensorflow-transform) (5.4.0)
Requirement already satisfied: pyasn1>=0.1.7 in /usr/local/lib/python2.7/dist-packages (from oauth2client<4,>=2.0.1->apache-beam[gcp]<3,>=2.17->tensorflow-transform) (0.4.5)
Requirement already satisfied: pyasn1-modules>=0.0.5 in /usr/local/lib/python2.7/dist-packages (from oauth2client<4,>=2.0.1->apache-beam[gcp]<3,>=2.17->tensorflow-transform) (0.2.5)
Requirement already satisfied: rsa>=3.1.4 in /usr/local/lib/python2.7/dist-packages (from oauth2client<4,>=2.0.1->apache-beam[gcp]<3,>=2.17->tensorflow-transform) (4.0)
Collecting google-api-core[grpc]<2.0.0dev,>=1.14.0
[?25l Downloading https://files.pythonhosted.org/packages/4c/b9/c0dd70bcdf06a43d1e21f387448e7997e0ce91f10d0fbee359af4cde1571/google_api_core-1.17.0-py2.py3-none-any.whl (70kB)
[K |████████████████████████████████| 71kB 10.6MB/s
[?25hCollecting grpc-google-iam-v1<0.13dev,>=0.12.3
Downloading https://files.pythonhosted.org/packages/65/19/2060c8faa325fddc09aa67af98ffcb6813f39a0ad805679fa64815362b3a/grpc-google-iam-v1-0.12.3.tar.gz
Requirement already satisfied: google-resumable-media>=0.3.1 in /usr/local/lib/python2.7/dist-packages (from google-cloud-bigquery<=1.24.0,>=1.6.0; extra == "gcp"->apache-beam[gcp]<3,>=2.17->tensorflow-transform) (0.3.2)
Collecting fasteners>=0.14
Downloading https://files.pythonhosted.org/packages/18/bd/55eb2d6397b9c0e263af9d091ebdb756b15756029b3cededf6461481bc63/fasteners-0.15-py2.py3-none-any.whl
Requirement already satisfied: h5py in /usr/local/lib/python2.7/dist-packages (from keras-applications>=1.0.8->tensorflow<2.2,>=1.15->tensorflow-transform) (2.8.0)
Collecting google-auth-oauthlib<0.5,>=0.4.1
Downloading https://files.pythonhosted.org/packages/7b/b8/88def36e74bee9fce511c9519571f4e485e890093ab7442284f4ffaef60b/google_auth_oauthlib-0.4.1-py2.py3-none-any.whl
Requirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python2.7/dist-packages (from tensorboard<2.2.0,>=2.1.0->tensorflow<2.2,>=1.15->tensorflow-transform) (0.15.5)
Requirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python2.7/dist-packages (from tensorboard<2.2.0,>=2.1.0->tensorflow<2.2,>=1.15->tensorflow-transform) (1.7.2)
Requirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python2.7/dist-packages (from tensorboard<2.2.0,>=2.1.0->tensorflow<2.2,>=1.15->tensorflow-transform) (3.1.1)
Requirement already satisfied: google-auth-httplib2>=0.0.3 in /usr/local/lib/python2.7/dist-packages (from google-api-python-client<2,>=1.7.11->tfx-bsl<0.22,>=0.21.3->tensorflow-transform) (0.0.3)
Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /usr/local/lib/python2.7/dist-packages (from google-api-python-client<2,>=1.7.11->tfx-bsl<0.22,>=0.21.3->tensorflow-transform) (3.0.0)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python2.7/dist-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache-beam[gcp]<3,>=2.17->tensorflow-transform) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python2.7/dist-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache-beam[gcp]<3,>=2.17->tensorflow-transform) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python2.7/dist-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache-beam[gcp]<3,>=2.17->tensorflow-transform) (2019.6.16)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python2.7/dist-packages (from requests>=2.7.0->hdfs<3.0.0,>=2.1.0->apache-beam[gcp]<3,>=2.17->tensorflow-transform) (2.8)
Collecting monotonic>=0.1
Downloading https://files.pythonhosted.org/packages/ac/aa/063eca6a416f397bd99552c534c6d11d57f58f2e94c14780f3bbf818c4cf/monotonic-1.5-py2.py3-none-any.whl
Requirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python2.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.2.0,>=2.1.0->tensorflow<2.2,>=1.15->tensorflow-transform) (1.2.0)
Requirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python2.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.2.0,>=2.1.0->tensorflow<2.2,>=1.15->tensorflow-transform) (3.0.2)
Building wheels for collected packages: tensorflow-transform, avro, pyvcf, hdfs, dill, oauth2client, proto-google-cloud-datastore-v1, googledatastore, google-api-python-client, docopt, grpc-google-iam-v1
Building wheel for tensorflow-transform (setup.py) ... [?25l[?25hdone
Created wheel for tensorflow-transform: filename=tensorflow_transform-0.21.2-cp27-none-any.whl size=301095 sha256=82c0253d8fc5e56e1da2f4a0872bc172ef822e377bfe186d5963434e11deb521
Stored in directory: /root/.cache/pip/wheels/0e/fa/f9/b167f10a3392a6d90659bb821c570255458f83ad5a4a321712
Building wheel for avro (setup.py) ... [?25l[?25hdone
Created wheel for avro: filename=avro-1.9.2-cp27-none-any.whl size=41686 sha256=bb6ceb4610d41468faf8bf5c8623748a61f1606b945c3106d91f82ef7e49b2fc
Stored in directory: /root/.cache/pip/wheels/51/14/c1/d4e383d261ced6c549ea2d072cc3a3955744948d9b0d2698f6
Building wheel for pyvcf (setup.py) ... [?25l[?25hdone
Created wheel for pyvcf: filename=PyVCF-0.6.8-cp27-cp27mu-linux_x86_64.whl size=80307 sha256=ff30d868efdddbd3680cb493d2afd2c2b9cd12dc4751766fd034310eef96de8e
Stored in directory: /root/.cache/pip/wheels/81/91/41/3272543c0b9c61da9c525f24ee35bae6fe8f60d4858c66805d
Building wheel for hdfs (setup.py) ... [?25l[?25hdone
Created wheel for hdfs: filename=hdfs-2.5.8-cp27-none-any.whl size=33213 sha256=7b1e135ff23c10b81461865ac80e8632bded8e54abc76158746313ef4179559f
Stored in directory: /root/.cache/pip/wheels/fe/a7/05/23e3699975fc20f8a30e00ac1e515ab8c61168e982abe4ce70
Building wheel for dill (setup.py) ... [?25l[?25hdone
Created wheel for dill: filename=dill-0.3.1.1-cp27-none-any.whl size=78533 sha256=2700bb1a8dba12809006708f3ca3edca4a2993c3107fff7790ef7df7623f9fc7
Stored in directory: /root/.cache/pip/wheels/59/b1/91/f02e76c732915c4015ab4010f3015469866c1eb9b14058d8e7
Building wheel for oauth2client (setup.py) ... [?25l[?25hdone
Created wheel for oauth2client: filename=oauth2client-3.0.0-cp27-none-any.whl size=106382 sha256=47a61e922717636629637faccef6612d67ddd0299b3c2faa0fbe5daf8d027215
Stored in directory: /root/.cache/pip/wheels/48/f7/87/b932f09c6335dbcf45d916937105a372ab14f353a9ca431d7d
Building wheel for proto-google-cloud-datastore-v1 (setup.py) ... [?25l[?25hdone
Created wheel for proto-google-cloud-datastore-v1: filename=proto_google_cloud_datastore_v1-0.90.4-cp27-none-any.whl size=23754 sha256=7eeddaaacf606944a02a2b9a8deff9ab3c9f18bb40cad1dc40e8ad6a217ee972
Stored in directory: /root/.cache/pip/wheels/bd/ce/33/8b769968db3761c42c7a91d8a0dbbafc50acfa0750866c8abd
Building wheel for googledatastore (setup.py) ... [?25l[?25hdone
Created wheel for googledatastore: filename=googledatastore-7.0.2-cp27-none-any.whl size=18155 sha256=4f0073a74cfa507c5aa043ea971f39d9f45127493a820971a871d4ec7c244f3b
Stored in directory: /root/.cache/pip/wheels/09/61/a5/7e8f4442b3c3d406ee9eb6c06e1ecbe5625f62f8cb19c08f5b
Building wheel for google-api-python-client (setup.py) ... [?25l[?25hdone
Created wheel for google-api-python-client: filename=google_api_python_client-1.8.3-cp27-none-any.whl size=58914 sha256=97dafb5a34d6d33fbba24639bc597d5aaa2a5060ce3f97b7bcda24854885126a
Stored in directory: /root/.cache/pip/wheels/ff/d9/d5/5c685642aed9acebb10f85586a80c339d54ab921460fb09ddc
Building wheel for docopt (setup.py) ... [?25l[?25hdone
Created wheel for docopt: filename=docopt-0.6.2-py2.py3-none-any.whl size=13704 sha256=7e6023461a867fc2ee0a4ba4a19564156c16d2c5296a119085da7ba5fcad27c4
Stored in directory: /root/.cache/pip/wheels/9b/04/dd/7daf4150b6d9b12949298737de9431a324d4b797ffd63f526e
Building wheel for grpc-google-iam-v1 (setup.py) ... [?25l[?25hdone
Created wheel for grpc-google-iam-v1: filename=grpc_google_iam_v1-0.12.3-cp27-none-any.whl size=18499 sha256=dcb8a04b105e090c936d293d3e56816a7ea307e94b4bac232a5b5befa1f57469
Stored in directory: /root/.cache/pip/wheels/de/3a/83/77a1e18e1a8757186df834b86ce6800120ac9c79cd8ca4091b
Successfully built tensorflow-transform avro pyvcf hdfs dill oauth2client proto-google-cloud-datastore-v1 googledatastore google-api-python-client docopt grpc-google-iam-v1
ERROR: tfx-bsl 0.21.4 has requirement pyarrow<0.16.0,>=0.15.0, but you'll have pyarrow 0.16.0 which is incompatible.
ERROR: fastai 0.7.0 has requirement torch<0.4, but you'll have torch 1.4.0 which is incompatible.
ERROR: google-api-core 1.17.0 has requirement google-auth<2.0dev,>=1.14.0, but you'll have google-auth 1.7.2 which is incompatible.
ERROR: tensorboard 2.1.0 has requirement grpcio>=1.24.3, but you'll have grpcio 1.15.0 which is incompatible.
ERROR: google-cloud-spanner 1.13.0 has requirement google-cloud-core<2.0dev,>=1.0.3, but you'll have google-cloud-core 1.0.2 which is incompatible.
Installing collected packages: pyarrow, fastavro, avro, python-dateutil, pyvcf, typing-extensions, docopt, hdfs, dill, oauth2client, google-api-core, grpc-google-iam-v1, google-cloud-pubsub, google-cloud-dlp, proto-google-cloud-datastore-v1, google-cloud-language, google-cloud-videointelligence, monotonic, fasteners, google-apitools, google-cloud-bigtable, grpcio-gcp, googledatastore, google-cloud-vision, google-cloud-datastore, google-cloud-spanner, apache-beam, tensorflow-metadata, google-api-python-client, tensorflow-serving-api, tfx-bsl, tensorflow-transform, tensorflow-estimator, google-auth-oauthlib
Found existing installation: pyarrow 0.14.0
Uninstalling pyarrow-0.14.0:
Successfully uninstalled pyarrow-0.14.0
Found existing installation: python-dateutil 2.5.3
Uninstalling python-dateutil-2.5.3:
Successfully uninstalled python-dateutil-2.5.3
Found existing installation: dill 0.3.0
Uninstalling dill-0.3.0:
Successfully uninstalled dill-0.3.0
Found existing installation: oauth2client 4.1.3
Uninstalling oauth2client-4.1.3:
Successfully uninstalled oauth2client-4.1.3
Found existing installation: google-api-core 1.13.0
Uninstalling google-api-core-1.13.0:
Successfully uninstalled google-api-core-1.13.0
Found existing installation: google-cloud-language 1.2.0
Uninstalling google-cloud-language-1.2.0:
Successfully uninstalled google-cloud-language-1.2.0
Found existing installation: google-cloud-datastore 1.8.0
Uninstalling google-cloud-datastore-1.8.0:
Successfully uninstalled google-cloud-datastore-1.8.0
Found existing installation: tensorflow-metadata 0.14.0
Uninstalling tensorflow-metadata-0.14.0:
Successfully uninstalled tensorflow-metadata-0.14.0
Found existing installation: google-api-python-client 1.7.9
Uninstalling google-api-python-client-1.7.9:
Successfully uninstalled google-api-python-client-1.7.9
Found existing installation: tensorflow-estimator 1.15.0
Uninstalling tensorflow-estimator-1.15.0:
Successfully uninstalled tensorflow-estimator-1.15.0
Found existing installation: google-auth-oauthlib 0.4.0
Uninstalling google-auth-oauthlib-0.4.0:
Successfully uninstalled google-auth-oauthlib-0.4.0
Successfully installed apache-beam-2.20.0 avro-1.9.2 dill-0.3.1.1 docopt-0.6.2 fastavro-0.21.24 fasteners-0.15 google-api-core-1.17.0 google-api-python-client-1.8.3 google-apitools-0.5.28 google-auth-oauthlib-0.4.1 google-cloud-bigtable-1.0.0 google-cloud-datastore-1.7.4 google-cloud-dlp-0.13.0 google-cloud-language-1.3.0 google-cloud-pubsub-1.0.2 google-cloud-spanner-1.13.0 google-cloud-videointelligence-1.13.0 google-cloud-vision-0.42.0 googledatastore-7.0.2 grpc-google-iam-v1-0.12.3 grpcio-gcp-0.2.2 hdfs-2.5.8 monotonic-1.5 oauth2client-3.0.0 proto-google-cloud-datastore-v1-0.90.4 pyarrow-0.16.0 python-dateutil-2.8.1 pyvcf-0.6.8 tensorflow-estimator-2.1.0 tensorflow-metadata-0.21.2 tensorflow-serving-api-2.1.0 tensorflow-transform-0.21.2 tfx-bsl-0.21.4 typing-extensions-3.7.4.2
###Markdown
Importing libraries
###Code
from __future__ import print_function  # __future__ imports must come before any other statements

import tempfile
import pandas as pd
import tensorflow as tf
import tensorflow_transform as tft
import tensorflow_transform.beam.impl as tft_beam
import apache_beam.io.iobase  # newly added import
from tensorflow_transform.tf_metadata import dataset_metadata, dataset_schema, schema_utils  # added schema_utils
###Output
_____no_output_____
###Markdown
Preprocessing: Loading the database
###Code
from google.colab import drive
drive.mount('/content/drive')
dataset = pd.read_csv("/content/drive/My Drive/Presentations/TensorFlow on Google Cloud/polution_small.csv")
dataset.head()
###Output
_____no_output_____
###Markdown
Dropping the datetime column
###Code
features = dataset.drop("Date", axis = 1)
features.head()
###Output
_____no_output_____
###Markdown
Converting to a dictionary
###Code
dict_features = list(features.to_dict("index").values())
dict_features[0:2]
###Output
_____no_output_____
###Markdown
Defining metadata
###Code
data_metadata = dataset_metadata.DatasetMetadata(dataset_schema.from_feature_spec({
"no2": tf.io.FixedLenFeature([], tf.float32),
"pm10": tf.io.FixedLenFeature([], tf.float32),
"so2": tf.io.FixedLenFeature([], tf.float32),
"soot": tf.io.FixedLenFeature([], tf.float32),
}))
data_metadata
###Output
_____no_output_____
###Markdown
Preprocessing function
###Code
def preprocessing_fn(inputs):
no2 = inputs["no2"]
pm10 = inputs["pm10"]
so2 = inputs["so2"]
soot = inputs["soot"]
no2_normalized = no2 - tft.mean(no2)
so2_normalized = so2 - tft.mean(so2)
pm10_normalized = tft.scale_to_0_1(pm10)
soot_normalized = tft.scale_by_min_max(soot)
    return {
        "no2_normalized": no2_normalized,
        "so2_normalized": so2_normalized,
        "pm10_normalized": pm10_normalized,
        "soot_normalized": soot_normalized
    }
###Output
_____no_output_____
###Markdown
Coding

TensorFlow Transform uses **Apache Beam** in the background to perform its operations. Function parameters:

- dict_features - our database converted to a list of dicts
- data_metadata - the metadata defined above
- preprocessing_fn - the preprocessing function

Apache Beam syntax:

```
result = data_to_pass | where_to_pass_the_data
```

Explaining:

- **result** -> `transformed_dataset, transform_fn`
- **data_to_pass** -> `(dict_features, data_metadata)`
- **where_to_pass_the_data** -> `tft_beam.AnalyzeAndTransformDataset(preprocessing_fn)`

```
transformed_dataset, transform_fn = ((dict_features, data_metadata) | tft_beam.AnalyzeAndTransformDataset(preprocessing_fn))
```

Learn more:
https://beam.apache.org/documentation/programming-guide/applying-transforms
https://beam.apache.org/
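To make the generic apply (`|`) syntax concrete, here is a minimal, self-contained Beam pipeline sketch. It is illustrative only and is not part of this notebook's pipeline; it simply squares a few numbers with the default local runner:

```
# Minimal sketch of Beam's pipe/apply syntax (illustrative only, not from the original notebook)
import apache_beam as beam

with beam.Pipeline() as pipeline:  # the local DirectRunner is used by default
    squares = (
        pipeline
        | "Create" >> beam.Create([1, 2, 3])        # data_to_pass
        | "Square" >> beam.Map(lambda x: x * x)     # where_to_pass_the_data
    )
```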
###Code
def data_transform():
with tft_beam.Context(temp_dir = tempfile.mkdtemp()):
transformed_dataset, transform_fn = ((dict_features, data_metadata) | tft_beam.AnalyzeAndTransformDataset(preprocessing_fn))
transformed_data, transformed_metadata = transformed_dataset
for i in range(len(transformed_data)):
print("Initial: ", dict_features[i])
print("Transformed: ", transformed_data[i])
data_transform()
###Output
W0519 18:42:22.559719 139799407392640 impl.py:425] Tensorflow version (2.1.0) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended.
W0519 18:42:22.574245 139799407392640 interactive_environment.py:112] Interactive Beam requires Python 3.5.3+.
W0519 18:42:22.575671 139799407392640 interactive_environment.py:125] Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.
W0519 18:42:22.948338 139799407392640 impl.py:425] Tensorflow version (2.1.0) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended.
W0519 18:42:24.590297 139799407392640 deprecation.py:323] From /usr/local/lib/python2.7/dist-packages/tensorflow_core/python/saved_model/signature_def_utils_impl.py:201: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
W0519 18:42:24.600390 139799407392640 meta_graph.py:436] Issue encountered when serializing tft_analyzer_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'
W0519 18:42:24.601831 139799407392640 meta_graph.py:436] Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'
W0519 18:42:25.302077 139799407392640 meta_graph.py:436] Issue encountered when serializing tft_analyzer_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'
W0519 18:42:25.303611 139799407392640 meta_graph.py:436] Issue encountered when serializing tft_mapper_use.
Type is unsupported, or the types of the items don't match field type in CollectionDef. Note this is a warning and probably safe to ignore.
'Counter' object has no attribute 'name'
W0519 18:42:25.413558 139799407392640 impl.py:425] Tensorflow version (2.1.0) found. Note that Tensorflow Transform support for TF 2.0 is currently in beta, and features such as tf.function may not work as intended.
|
Disney Movies Data Scraping.ipynb | ###Markdown
Disney Dataset Creation

A web-scraping solution using BeautifulSoup, following along with Keith Galli's video: https://www.youtube.com/watch?v=Ewgy-G9cmbg
###Code
import pandas as pd
from bs4 import BeautifulSoup as bs
import requests
import json
import re
from datetime import datetime
import pickle
import os
import urllib
# getting the page
r = requests.get("https://en.wikipedia.org/wiki/Toy_Story_3")
# creating the soup
soup = bs(r.content)
# making it readable
contents = soup.prettify()
# getting only the infobox with the main info
info_box = soup.find(class_="infobox vevent")
info_rows = info_box.find_all("tr")
movie_info = {}
def get_content(row_data):
if row_data.find("li"):
return [li.get_text(" ", strip=True).replace("\xa0", " ") for li in row_data.find_all("li")]
else:
return row_data.get_text(" ", strip=True).replace("\xa0", " ")
for index, row in enumerate(info_rows):
if index == 0:
movie_info['title'] = row.find('th').get_text()
elif index == 1:
continue
else:
content_key = row.find("th").get_text(" ", strip=True)
content_value = get_content(row.find('td'))
movie_info[content_key] = content_value
# getting the page
r = requests.get("https://en.wikipedia.org/wiki/List_of_Walt_Disney_Pictures_films")
# creating the soup
soup = bs(r.content)
# making it readable
contents = soup.prettify()
def get_content(row_data):
if row_data.find("li"):
return [li.get_text(" ", strip=True).replace("\xa0", " ") for li in row_data.find_all("li")]
elif row_data.find("br"):
return [text for text in row_data.stripped_strings]
else:
return row_data.get_text(" ", strip=True).replace("\xa0", " ")
def clean_tags(soup):
for tag in soup.find_all("sup"):
tag.decompose()
for tag in soup.find_all("span"):
tag.decompose()
def get_info_box(url):
r = requests.get(url)
soup = bs(r.content)
clean_tags(soup)
info_box = soup.find(class_="infobox vevent")
info_rows = info_box.find_all("tr")
movie_info = {}
for index, row in enumerate(info_rows):
if index == 0:
movie_info['title'] = row.find('th').get_text()
else:
header = row.find('th')
if header:
content_key = row.find("th").get_text(" ", strip=True)
content_value = get_content(row.find('td'))
movie_info[content_key] = content_value
return movie_info
r = requests.get("https://en.wikipedia.org/wiki/List_of_Walt_Disney_Pictures_films")
soup = bs(r.content)
movies = soup.select('.wikitable.sortable i a')
base_path = 'https://en.wikipedia.org'
movie_info_list = []
for index, movie in enumerate(movies):
if index % 10 == 0:
print(index)
try:
relative_path = movie['href']
full_path = base_path + relative_path
title = movie['title']
movie_info_list.append(get_info_box(full_path))
except Exception as e:
print(movie.get_text())
print(e)
def save_data(title, data):
with open(title, 'w', encoding='utf-8') as f:
json.dump(data, f, ensure_ascii=False, indent=2)
def load_data(title):
with open(title, encoding='utf-8') as f:
return json.load(f)
save_data("disney_data_cleaned.json", movie_info_list)
###Output
_____no_output_____
###Markdown
Data Cleaning
###Code
movie_info_list = load_data("disney_data_cleaned.json")
###Output
_____no_output_____
###Markdown
List of subtasks

1. using python datetime
2. ~~convert running time and money to integer~~
3. ~~remove references [1]~~
4. ~~standardize data~~
5. ~~some 'starring' are not in a list~~
6. ~~look at what is going on at the error ones~~
###Code
# Clean up references (remove [1], [2])
# convert running time into an integer
def minutes_to_integer(running_time):
if running_time == "N/A":
return None
if isinstance(running_time, list):
return int(running_time[0].split(" ")[0])
else:
return int(running_time.split(" ")[0])
for movie in movie_info_list:
movie['Runnig time (int)'] = minutes_to_integer(movie.get('Running time', "N/A"))
print ([movie.get('Budget', 'N/A') for movie in movie_info_list])
# clean up budget & Box office
amounts = r"thousand|million|billion"
number = r"\d+(,\d{3})*\.*\d*"
value_re = rf"\${number}"
word_re = rf"\${number}(-|\sto\s|–)?({number})?\s({amounts})"
'''
Possible values:
$600,000 -> 600000 ## value syntax
$12.2 million -> 12200000 ## word syntax (million, billion, etc)
$12-13 million -> 12000000 ## word syntax with a range
$16 to 20 million -> 16000000 ## word syntax with a different range
[12]
'''
def word_to_value(word):
value_dict = {"thousand": 1000, "million": 1000000, "billion": 1000000000}
return value_dict[word]
def parse_word_syntax(string):
value_string = re.search(number, string).group()
value = float(value_string.replace(",", ""))
word = re.search(amounts, string, flags=re.I).group().lower()
total_amount = value * word_to_value(word)
return total_amount
def parse_value_syntax(string):
value_string = re.search(number, string).group()
value = float(value_string.replace(",", ""))
return value
def money_conversion(money):
if money == "N/A":
return None
if isinstance(money, list):
money = money[0]
value_syntax = re.search(value_re, money)
word_syntax = re.search(word_re, money, flags=re.I)
if word_syntax:
return parse_word_syntax(word_syntax.group())
elif value_syntax:
return parse_value_syntax(value_syntax.group())
else:
return None
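# Quick sanity checks, using the example values from the docstring above:
#   money_conversion("$600,000")          -> 600000.0
#   money_conversion("$12.2 million")     -> 12200000.0
#   money_conversion("$12-13 million")    -> 12000000.0
#   money_conversion("$16 to 20 million") -> 16000000.0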
for movie in movie_info_list:
movie['Budget (float)'] = money_conversion(movie.get('Budget', "N/A"))
movie['Box office (float)'] = money_conversion(movie.get('Box office', "N/A"))
for movie in movie_info_list:
if movie.get('Release date') != None:
print (movie.get('Release date'))
elif movie.get('Release dates') != None:
print (movie.get('Release dates'))
else:
print ("N/A")
# Transforming the date into a datetime python object
# types of date:
# July 24, 2009
# 20 July 2001
dates = [movie.get('Release date', movie.get('Release dates', "N/A")) for movie in movie_info_list]
def clean_date(date):
return date.split("(")[0].strip()
def date_conversion(date):
if isinstance(date, list):
date = date[0]
if date == "N/A":
return None
date_str = clean_date(date)
fmts = ["%B %d, %Y", "%d %B %Y"]
for fmt in fmts:
try:
return datetime.strptime(date_str,fmt)
except:
pass
return None
for movie in movie_info_list:
movie['Release date (datetime)'] = date_conversion(movie.get('Release date', movie.get('Release dates', "N/A")))
# saving the data now using pickle to keep datetime format
# therefore, creating new save and load formats
def save_data_pickle(name, data):
with open(name, 'wb') as f:
pickle.dump(data, f)
def load_data_pickle(name):
with open(name, 'rb') as f:
return pickle.load(f)
save_data_pickle("disney_movie_data_better_cleaned.pickle", movie_info_list)
movie_info_list = load_data_pickle('disney_movie_data_better_cleaned.pickle')
###Output
_____no_output_____
###Markdown
Task 4: Attach IMDb/Rotten Tomatoes/Metascore scores
###Code
# using the OMDb API
def get_omdb_info(title):
base_url = 'http://www.omdbapi.com/?'
parameters = {'apikey': os.environ['OMDB_API_KEY'], 't': title}
params_encoded = urllib.parse.urlencode(parameters)
full_url = base_url + params_encoded
return requests.get(full_url).json()
def get_rotten_tomato_score(omdb_info):
ratings = omdb_info.get('Ratings', [])
for rating in ratings:
if rating['Source'] == 'Rotten Tomatoes':
return rating['Value']
return None
for movie in movie_info_list:
title = movie['title']
omdb_info = get_omdb_info(title)
movie['imdb'] = omdb_info.get('imdbRating', None)
movie['metascore'] = omdb_info.get('Metascore', None)
movie['rotten_tomatoes'] = get_rotten_tomato_score(omdb_info)
movie_info_list[-150]
save_data_pickle('disney_movie_data_final.pickle', movie_info_list)
### Save data as json and csv
movie_info_copy = [movie.copy() for movie in movie_info_list]
for movie in movie_info_copy:
current_date = movie['Release date (datetime)']
if current_date:
movie['Release date (datetime)'] = current_date.strftime("%B %d, %Y")
else:
movie['Release date (datetime)'] = None
save_data('disney_movie_data_final.json', movie_info_copy)
import pandas as pd
df = pd.DataFrame(movie_info_list)
df.to_csv("disney_movie_data_final.csv")
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 518 entries, 0 to 517
Data columns (total 50 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 title 518 non-null object
1 Production company 214 non-null object
2 Distributed by 516 non-null object
3 Release date 339 non-null object
4 Running time 495 non-null object
5 Country 463 non-null object
6 Language 497 non-null object
7 Box office 400 non-null object
8 Runnig time (int) 495 non-null float64
9 Budget (float) 307 non-null float64
10 Box office (float) 389 non-null float64
11 Release date (datetime) 500 non-null datetime64[ns]
12 imdb 498 non-null object
13 metascore 498 non-null object
14 rotten_tomatoes 371 non-null object
15 Directed by 513 non-null object
16 Written by 226 non-null object
17 Based on 277 non-null object
18 Produced by 504 non-null object
19 Starring 479 non-null object
20 Music by 508 non-null object
21 Release dates 171 non-null object
22 Budget 316 non-null object
23 Story by 171 non-null object
24 Narrated by 68 non-null object
25 Cinematography 389 non-null object
26 Edited by 463 non-null object
27 Languages 19 non-null object
28 Screenplay by 244 non-null object
29 Countries 49 non-null object
30 Color process 4 non-null object
31 Production companies 301 non-null object
32 Japanese 5 non-null object
33 Hepburn 5 non-null object
34 Adaptation by 1 non-null object
35 Animation by 1 non-null object
36 Traditional 2 non-null object
37 Simplified 2 non-null object
38 Original title 1 non-null object
39 Layouts by 2 non-null object
40 Original concept by 1 non-null object
41 Created by 1 non-null object
42 Original work 1 non-null object
43 Owner 1 non-null object
44 Music 1 non-null object
45 Lyrics 1 non-null object
46 Book 1 non-null object
47 Basis 1 non-null object
48 Productions 1 non-null object
49 Awards 1 non-null object
dtypes: datetime64[ns](1), float64(3), object(46)
memory usage: 202.5+ KB
|
code/others/fixmatch/v2_hwkim_fixmatch_2019_fast_thr085_bs9_mu2_5e5_CusSwa2.ipynb | ###Markdown
TO-DO LIST

- Label Smoothing (a hedged sketch of a label-smoothing loss follows this list)
  - https://www.kaggle.com/chocozzz/train-cassava-starter-using-label-smoothing
  - https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/173733
- Class Imbalance
- SWA / SWAG
- Augmentation
  - https://www.kaggle.com/sachinprabhu/pytorch-resnet50-snapmix-train-pipeline
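`CFG['smoothing']` is set to 0.2 in the configuration below, but the corresponding loss definition falls outside this excerpt. As a hedged illustration only (the class name and exact formulation are assumptions, not necessarily the author's implementation), a label-smoothing cross-entropy could look like:

```
# Hedged sketch of a label-smoothing cross-entropy loss (assumed wiring, not the notebook's exact code)
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelSmoothingCrossEntropy(nn.Module):
    def __init__(self, smoothing=0.2):
        super().__init__()
        self.smoothing = smoothing

    def forward(self, logits, target):
        n_classes = logits.size(-1)
        log_probs = F.log_softmax(logits, dim=-1)
        with torch.no_grad():
            # (1 - smoothing) on the true class, smoothing spread over the remaining classes
            true_dist = torch.full_like(log_probs, self.smoothing / (n_classes - 1))
            true_dist.scatter_(1, target.unsqueeze(1), 1.0 - self.smoothing)
        return torch.mean(torch.sum(-true_dist * log_probs, dim=-1))

# e.g. loss_fn = LabelSmoothingCrossEntropy(CFG['smoothing'])
```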
###Code
import os
print(os.listdir("./input/"))
package_paths = [
'./input/pytorch-image-models/pytorch-image-models-master', #'../input/efficientnet-pytorch-07/efficientnet_pytorch-0.7.0'
'./input/pytorch-gradual-warmup-lr-master'
]
import sys;
for pth in package_paths:
sys.path.append(pth)
# from warmup_scheduler import GradualWarmupScheduler
from glob import glob
from sklearn.model_selection import GroupKFold, StratifiedKFold
import cv2
from skimage import io
import torch
from torch import nn
import os
from datetime import datetime
import time
import random
import cv2
import torchvision
from torchvision import transforms
import pandas as pd
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
from torch.utils.data import Dataset,DataLoader
from torch.utils.data.sampler import SequentialSampler, RandomSampler
from torch.cuda.amp import autocast, GradScaler
from torch.nn.modules.loss import _WeightedLoss
import torch.nn.functional as F
import timm
from adamp import AdamP
import sklearn
import warnings
import joblib
from sklearn.metrics import roc_auc_score, log_loss
from sklearn import metrics
import warnings
import cv2
#from efficientnet_pytorch import EfficientNet
from scipy.ndimage.interpolation import zoom
##SWA
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn
from torch.optim.lr_scheduler import CosineAnnealingLR
CFG = {
'fold_num': 5,
'seed': 719,
'model_arch': 'tf_efficientnet_b4_ns',
'img_size': 512,
'epochs': 7,
'train_bs': 9,
'valid_bs': 8,
'T_0': 10,
'lr': 5e-5,
'min_lr': 5e-5,
'weight_decay':1e-6,
'num_workers': 4,
    'accum_iter': 2, # support batch accumulation for backprop with an effectively larger batch size
'verbose_step': 1,
'device': 'cuda:0',
'target_size' : 5,
'smoothing' : 0.2,
'swa_start_epoch' : 2,
## Following four are related to FixMatch
'mu' : 2,
'T' : 1, # temperature
'lambda_u' : 1.,
'threshold' : 0.85,
##
'debug' : False
}
train = pd.read_csv('./input/cassava-leaf-disease-classification/train.csv')
delete_id = ['2947932468.jpg', '2252529694.jpg', '2278017076.jpg']
train = train[~train['image_id'].isin(delete_id)].reset_index(drop=True)
train.head()
###Output
_____no_output_____
###Markdown
> We could do a stratified validation split in each fold so that each fold's train and validation sets look like the whole train set in their target distribution.
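As a minimal sketch of that split (reusing the StratifiedKFold import and the CFG values defined above; the notebook's actual fold loop is not included in this excerpt):

```
# Hedged sketch: stratified K-fold indices over the label column, reusing CFG values
from sklearn.model_selection import StratifiedKFold  # already imported above

folds = StratifiedKFold(n_splits=CFG['fold_num'], shuffle=True,
                        random_state=CFG['seed']).split(np.arange(train.shape[0]), train['label'].values)
for fold, (trn_idx, val_idx) in enumerate(folds):
    # trn_idx / val_idx would feed a helper like prepare_dataloader, defined further down
    print(fold, len(trn_idx), len(val_idx))
```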
###Code
submission = pd.read_csv('./input/cassava-leaf-disease-classification/sample_submission.csv')
submission.head()
###Output
_____no_output_____
###Markdown
Helper Functions
###Code
def seed_everything(seed):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
def get_img(path):
im_bgr = cv2.imread(path)
im_rgb = im_bgr[:, :, ::-1]
#print(im_rgb)
return im_rgb
###Output
_____no_output_____
###Markdown
Dataset
###Code
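# rand_bbox is the standard CutMix helper: it samples a random box whose area is roughly a
# (1 - lam) fraction of the image. It is defined here but not used in the portion of the
# notebook shown in this excerpt.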
def rand_bbox(size, lam):
W = size[0]
H = size[1]
cut_rat = np.sqrt(1. - lam)
cut_w = np.int(W * cut_rat)
cut_h = np.int(H * cut_rat)
# uniform
cx = np.random.randint(W)
cy = np.random.randint(H)
bbx1 = np.clip(cx - cut_w // 2, 0, W)
bby1 = np.clip(cy - cut_h // 2, 0, H)
bbx2 = np.clip(cx + cut_w // 2, 0, W)
bby2 = np.clip(cy + cut_h // 2, 0, H)
return bbx1, bby1, bbx2, bby2
class CassavaDataset(Dataset):
def __init__(self, df, data_root,
transforms=None,
output_label=True,
):
super().__init__()
self.df = df.reset_index(drop=True).copy()
self.transforms = transforms
self.data_root = data_root
self.output_label = output_label
self.labels = self.df['label'].values
def __len__(self):
return self.df.shape[0]
def __getitem__(self, index: int):
# get labels
if self.output_label:
target = self.labels[index]
img = get_img("{}/{}".format(self.data_root, self.df.loc[index]['image_id']))
if self.transforms:
img = self.transforms(image=img)['image']
if self.output_label == True:
return img, target
else:
return img
###Output
_____no_output_____
###Markdown
Define Train\Validation Image Augmentations
###Code
from albumentations.core.transforms_interface import DualTransform
# from albumentations.augmentations import functional as F
class GridMask(DualTransform):
"""GridMask augmentation for image classification and object detection.
Author: Qishen Ha
Email: [email protected]
2020/01/29
Args:
num_grid (int): number of grid in a row or column.
fill_value (int, float, lisf of int, list of float): value for dropped pixels.
rotate ((int, int) or int): range from which a random angle is picked. If rotate is a single int
an angle is picked from (-rotate, rotate). Default: (-90, 90)
mode (int):
0 - cropout a quarter of the square of each grid (left top)
1 - reserve a quarter of the square of each grid (left top)
2 - cropout 2 quarter of the square of each grid (left top & right bottom)
Targets:
image, mask
Image types:
uint8, float32
Reference:
| https://arxiv.org/abs/2001.04086
| https://github.com/akuxcw/GridMask
"""
def __init__(self, num_grid=3, fill_value=0, rotate=0, mode=0, always_apply=False, p=0.5):
super(GridMask, self).__init__(always_apply, p)
if isinstance(num_grid, int):
num_grid = (num_grid, num_grid)
if isinstance(rotate, int):
rotate = (-rotate, rotate)
self.num_grid = num_grid
self.fill_value = fill_value
self.rotate = rotate
self.mode = mode
self.masks = None
self.rand_h_max = []
self.rand_w_max = []
def init_masks(self, height, width):
if self.masks is None:
self.masks = []
n_masks = self.num_grid[1] - self.num_grid[0] + 1
for n, n_g in enumerate(range(self.num_grid[0], self.num_grid[1] + 1, 1)):
grid_h = height / n_g
grid_w = width / n_g
this_mask = np.ones((int((n_g + 1) * grid_h), int((n_g + 1) * grid_w))).astype(np.uint8)
for i in range(n_g + 1):
for j in range(n_g + 1):
this_mask[
int(i * grid_h) : int(i * grid_h + grid_h / 2),
int(j * grid_w) : int(j * grid_w + grid_w / 2)
] = self.fill_value
if self.mode == 2:
this_mask[
int(i * grid_h + grid_h / 2) : int(i * grid_h + grid_h),
int(j * grid_w + grid_w / 2) : int(j * grid_w + grid_w)
] = self.fill_value
if self.mode == 1:
this_mask = 1 - this_mask
self.masks.append(this_mask)
self.rand_h_max.append(grid_h)
self.rand_w_max.append(grid_w)
def apply(self, image, mask, rand_h, rand_w, angle, **params):
h, w = image.shape[:2]
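        # Note: F.rotate below assumes albumentations' functional module, whose import is commented
        # out above; with `import torch.nn.functional as F` in effect the rotation branch would fail,
        # but it is never taken here because GridMask is instantiated with the default rotate=0.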
mask = F.rotate(mask, angle) if self.rotate[1] > 0 else mask
mask = mask[:,:,np.newaxis] if image.ndim == 3 else mask
image *= mask[rand_h:rand_h+h, rand_w:rand_w+w].astype(image.dtype)
return image
def get_params_dependent_on_targets(self, params):
img = params['image']
height, width = img.shape[:2]
self.init_masks(height, width)
mid = np.random.randint(len(self.masks))
mask = self.masks[mid]
rand_h = np.random.randint(self.rand_h_max[mid])
rand_w = np.random.randint(self.rand_w_max[mid])
angle = np.random.randint(self.rotate[0], self.rotate[1]) if self.rotate[1] > 0 else 0
return {'mask': mask, 'rand_h': rand_h, 'rand_w': rand_w, 'angle': angle}
@property
def targets_as_params(self):
return ['image']
def get_transform_init_args_names(self):
return ('num_grid', 'fill_value', 'rotate', 'mode')
from albumentations import (
HorizontalFlip, VerticalFlip, IAAPerspective, ShiftScaleRotate, CLAHE, RandomRotate90,
Transpose, ShiftScaleRotate, Blur, OpticalDistortion, GridDistortion, HueSaturationValue,
IAAAdditiveGaussianNoise, GaussNoise, MotionBlur, MedianBlur, IAAPiecewiseAffine, RandomResizedCrop,
IAASharpen, IAAEmboss, RandomBrightnessContrast, Flip, OneOf, Compose, Normalize, Cutout, CoarseDropout, ShiftScaleRotate, CenterCrop, Resize
)
from albumentations.pytorch import ToTensorV2
def get_train_transforms():
return Compose([
OneOf([
Resize(CFG['img_size'], CFG['img_size'], p=1.),
CenterCrop(CFG['img_size'], CFG['img_size'], p=1.),
RandomResizedCrop(CFG['img_size'], CFG['img_size'], p=1.)
], p=1.),
Transpose(p=0.5),
HorizontalFlip(p=0.5),
VerticalFlip(p=0.5),
ShiftScaleRotate(p=0.5),
HueSaturationValue(hue_shift_limit=0.2, sat_shift_limit=0.2, val_shift_limit=0.2, p=0.5),
RandomBrightnessContrast(brightness_limit=(-0.1,0.1), contrast_limit=(-0.1, 0.1), p=0.5),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0),
CoarseDropout(p=0.5),
GridMask(num_grid=3, p=0.5),
ToTensorV2(p=1.0),
], p=1.)
def get_valid_transforms():
return Compose([
CenterCrop(CFG['img_size'], CFG['img_size'], p=1.),
Resize(CFG['img_size'], CFG['img_size']),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0),
ToTensorV2(p=1.0),
], p=1.)
def get_inference_transforms():
return Compose([
OneOf([
Resize(CFG['img_size'], CFG['img_size'], p=1.),
CenterCrop(CFG['img_size'], CFG['img_size'], p=1.),
RandomResizedCrop(CFG['img_size'], CFG['img_size'], p=1.)
], p=1.),
Transpose(p=0.5),
HorizontalFlip(p=0.5),
VerticalFlip(p=0.5),
Resize(CFG['img_size'], CFG['img_size']),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], max_pixel_value=255.0, p=1.0),
ToTensorV2(p=1.0),
], p=1.)
###Output
_____no_output_____
###Markdown
Model
###Code
class CassvaImgClassifier(nn.Module):
def __init__(self, model_arch, n_class, pretrained=False):
super().__init__()
self.model = timm.create_model(model_arch, pretrained=pretrained)
n_features = self.model.classifier.in_features
self.model.classifier = nn.Linear(n_features, n_class)
def forward(self, x):
x = self.model(x)
return x
###Output
_____no_output_____
###Markdown
For FixMatch Unlabeled DataLoader
###Code
#######
o = os.listdir('./input/cassava-disease/all/')
o = np.array([o]).T
label_col = np.ones_like(o)
o = np.concatenate((o,label_col),axis=1)
unlabeled = pd.DataFrame(o,columns=['image_id','label'])
unlabeled.head()
# unlabeled = train
import PIL
import PIL.ImageOps
import PIL.ImageEnhance
import PIL.ImageDraw
from PIL import Image
PARAMETER_MAX = 10
def AutoContrast(img, **kwarg):
return PIL.ImageOps.autocontrast(img)
def Brightness(img, v, max_v, bias=0):
v = _float_parameter(v, max_v) + bias
return PIL.ImageEnhance.Brightness(img).enhance(v)
def Color(img, v, max_v, bias=0):
v = _float_parameter(v, max_v) + bias
return PIL.ImageEnhance.Color(img).enhance(v)
def Contrast(img, v, max_v, bias=0):
v = _float_parameter(v, max_v) + bias
return PIL.ImageEnhance.Contrast(img).enhance(v)
def Cutout(img, v, max_v, bias=0):
if v == 0:
return img
v = _float_parameter(v, max_v) + bias
v = int(v * min(img.size))
return CutoutAbs(img, v)
def CutoutAbs(img, v, **kwarg):
w, h = img.size
x0 = np.random.uniform(0, w)
y0 = np.random.uniform(0, h)
x0 = int(max(0, x0 - v / 2.))
y0 = int(max(0, y0 - v / 2.))
x1 = int(min(w, x0 + v))
y1 = int(min(h, y0 + v))
xy = (x0, y0, x1, y1)
# gray
color = (127, 127, 127)
img = img.copy()
PIL.ImageDraw.Draw(img).rectangle(xy, color)
return img
def Equalize(img, **kwarg):
return PIL.ImageOps.equalize(img)
def Identity(img, **kwarg):
return img
def Invert(img, **kwarg):
return PIL.ImageOps.invert(img)
def Posterize(img, v, max_v, bias=0):
v = _int_parameter(v, max_v) + bias
return PIL.ImageOps.posterize(img, v)
def Rotate(img, v, max_v, bias=0):
v = _int_parameter(v, max_v) + bias
if random.random() < 0.5:
v = -v
return img.rotate(v)
def Sharpness(img, v, max_v, bias=0):
v = _float_parameter(v, max_v) + bias
return PIL.ImageEnhance.Sharpness(img).enhance(v)
def ShearX(img, v, max_v, bias=0):
v = _float_parameter(v, max_v) + bias
if random.random() < 0.5:
v = -v
return img.transform(img.size, PIL.Image.AFFINE, (1, v, 0, 0, 1, 0))
def ShearY(img, v, max_v, bias=0):
v = _float_parameter(v, max_v) + bias
if random.random() < 0.5:
v = -v
return img.transform(img.size, PIL.Image.AFFINE, (1, 0, 0, v, 1, 0))
def Solarize(img, v, max_v, bias=0):
v = _int_parameter(v, max_v) + bias
return PIL.ImageOps.solarize(img, 256 - v)
def SolarizeAdd(img, v, max_v, bias=0, threshold=128):
v = _int_parameter(v, max_v) + bias
if random.random() < 0.5:
v = -v
img_np = np.array(img).astype(np.int)
img_np = img_np + v
img_np = np.clip(img_np, 0, 255)
img_np = img_np.astype(np.uint8)
img = Image.fromarray(img_np)
return PIL.ImageOps.solarize(img, threshold)
def TranslateX(img, v, max_v, bias=0):
v = _float_parameter(v, max_v) + bias
if random.random() < 0.5:
v = -v
v = int(v * img.size[0])
return img.transform(img.size, PIL.Image.AFFINE, (1, 0, v, 0, 1, 0))
def TranslateY(img, v, max_v, bias=0):
v = _float_parameter(v, max_v) + bias
if random.random() < 0.5:
v = -v
v = int(v * img.size[1])
return img.transform(img.size, PIL.Image.AFFINE, (1, 0, 0, 0, 1, v))
def _float_parameter(v, max_v):
return float(v) * max_v / PARAMETER_MAX
def _int_parameter(v, max_v):
return int(v * max_v / PARAMETER_MAX)
class RandAugmentMC(object):
def __init__(self, n, m):
assert n >= 1
assert 1 <= m <= 10
self.n = n
self.m = m
self.augment_pool = fixmatch_augment_pool()
def __call__(self, img):
ops = random.choices(self.augment_pool, k=self.n)
for op, max_v, bias in ops:
v = np.random.randint(1, self.m)
if random.random() < 0.5:
img = op(img, v=v, max_v=max_v, bias=bias)
img = CutoutAbs(img, int(CFG['img_size']*0.5))
return img
def fixmatch_augment_pool():
# FixMatch paper
augs = [(AutoContrast, None, None),
(Brightness, 0.9, 0.05),
(Color, 0.9, 0.05),
(Contrast, 0.9, 0.05),
(Equalize, None, None),
(Identity, None, None),
(Posterize, 4, 4),
(Rotate, 30, 0),
(Sharpness, 0.9, 0.05),
(ShearX, 0.3, 0),
(ShearY, 0.3, 0),
(Solarize, 256, 0),
(TranslateX, 0.3, 0),
(TranslateY, 0.3, 0)]
return augs
class TransformFixMatch(object):
def __init__(self, mean, std):
self.weak = transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.RandomCrop(size=CFG['img_size'],
padding=int(CFG['img_size']*0.125),
padding_mode='reflect')])
self.strong = transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.RandomCrop(size=CFG['img_size'],
padding=int(CFG['img_size']*0.125),
padding_mode='reflect'),
RandAugmentMC(n=2, m=10)])
self.normalize = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(mean=mean, std=std)])
def __call__(self, x):
weak = self.weak(x)
strong = self.strong(x)
return self.normalize(weak), self.normalize(strong)
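# TransformFixMatch returns a (weakly augmented, strongly augmented) pair for each unlabeled
# image: FixMatch derives pseudo-labels from the weak view and trains the model to predict
# them on the strong view.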
class CassavaDataset_ul(Dataset):
def __init__(self, df, data_root,
transforms=None,
output_label=True,
):
super().__init__()
self.df = df.reset_index(drop=True).copy()
self.transforms = transforms
self.data_root = data_root
self.output_label = output_label
self.labels = self.df['label'].values
def __len__(self):
return self.df.shape[0]
def __getitem__(self, index: int):
# get labels
if self.output_label:
target = self.labels[index]
img = Image.open("{}/{}".format(self.data_root, self.df.loc[index]['image_id']))
if self.transforms:
img = self.transforms(img)
        if self.output_label:
return img, target
else:
return img
from torch.utils.data import RandomSampler
######################## Change this to the 2019 dataset!!!
# unlabeled_dataset = CassavaDataset_ul(unlabeled, './input/cassava-disease/all', transforms=TransformFixMatch(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]))
unlabeled_dataset = CassavaDataset_ul(unlabeled, './input/cassava-disease/all/', transforms=TransformFixMatch(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]))
train_loader_ul = torch.utils.data.DataLoader(
unlabeled_dataset,
sampler = RandomSampler(unlabeled_dataset),
batch_size=CFG['train_bs'] * CFG['mu'],
pin_memory=False,
drop_last=True,
num_workers=CFG['num_workers'],
)
def interleave(x, size):
s = list(x.shape)
return x.reshape([-1, size] + s[1:]).transpose(0, 1).reshape([-1] + s[1:])
def de_interleave(x, size):
s = list(x.shape)
return x.reshape([size, -1] + s[1:]).transpose(0, 1).reshape([-1] + s[1:])
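# Note (added for clarity): interleave/de_interleave reorder the concatenated
# labelled + unlabelled batch before the forward pass and restore the original order
# afterwards, so that batch-norm statistics are computed over a well-mixed batch
# (the same trick appears in the reference FixMatch implementation). Round-trip sketch
# with illustrative values:
#   x = torch.arange(6).reshape(6, 1)
#   assert torch.equal(de_interleave(interleave(x, 3), 3), x)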
# train_loader_ul = iter(train_loader_ul)
# (inputs_u_w, inputs_u_s), _ = train_loader_ul.next()
# print(len(inputs_u_s), len(inputs_u_w))
###Output
_____no_output_____
###Markdown
Training APIs
###Code
def prepare_dataloader(df, trn_idx, val_idx, data_root='./input/cassava-leaf-disease-classification/train_images/'):
# from catalyst.data.sampler import BalanceClassSampler
train_ = df.loc[trn_idx,:].reset_index(drop=True)
valid_ = df.loc[val_idx,:].reset_index(drop=True)
train_ds = CassavaDataset(train_, data_root, transforms=get_train_transforms(), output_label=True)
valid_ds = CassavaDataset(valid_, data_root, transforms=get_valid_transforms(), output_label=True)
train_loader = torch.utils.data.DataLoader(
train_ds,
batch_size=CFG['train_bs'],
pin_memory=False,
drop_last=True,###
shuffle=True,
num_workers=CFG['num_workers'],
#sampler=BalanceClassSampler(labels=train_['label'].values, mode="downsampling")
)
val_loader = torch.utils.data.DataLoader(
valid_ds,
batch_size=CFG['valid_bs'],
num_workers=CFG['num_workers'],
shuffle=False,
pin_memory=False,
)
return train_loader, val_loader
def train_one_epoch(epoch, model, loss_fn, optimizer, train_loader, unlabeled_trainloader, device, scheduler=None, swa_scheduler=None, schd_batch_update=False):
model.train()
t = time.time()
running_loss = None
# pbar = tqdm(enumerate(train_loader), total=len(train_loader))
for step, (imgs, image_labels) in enumerate(train_loader):
imgs = imgs.float()
image_labels = image_labels.to(device).long()
        try:
            # TransformFixMatch returns (weak, strong); unpack in that order so the
            # pseudo-labels below really come from the weakly augmented views
            (inputs_u_w, inputs_u_s), _ = next(unlabeled_iter)
        except (NameError, StopIteration):
            # (re)create the iterator on the first step or once it is exhausted
            unlabeled_iter = iter(unlabeled_trainloader)
            (inputs_u_w, inputs_u_s), _ = next(unlabeled_iter)
inputs = interleave(
torch.cat((imgs, inputs_u_w, inputs_u_s)), 2*CFG['mu']+1).contiguous().to(device)
with autocast():
image_preds = model(inputs) #output = model(input)
logits = de_interleave(image_preds, 2*CFG['mu']+1)
logits_x = logits[:CFG['train_bs']]
logits_u_w, logits_u_s = logits[CFG['train_bs']:].chunk(2)
del logits
            # supervised loss on the labelled batch
            Lx = loss_fn(logits_x, image_labels)
            # pseudo-labels from the weakly augmented views, sharpened by temperature T
            pseudo_label = torch.softmax(logits_u_w.detach()/CFG['T'], dim=-1)
            max_probs, targets_u = torch.max(pseudo_label, dim=-1)
            # keep only pseudo-labels whose confidence exceeds the FixMatch threshold
            mask = max_probs.ge(CFG['threshold']).float()
            # Lu = (F.cross_entropy(logits_u_s, targets_u, reduction='none') * mask).mean()
            # unsupervised loss on the strongly augmented views, masked by confidence
            Lu = (loss_fn(logits_u_s, targets_u, reduction='none')*mask).mean()
            loss = Lx + CFG['lambda_u'] * Lu
scaler.scale(loss).backward()
if running_loss is None:
running_loss = loss.item()
else:
running_loss = running_loss * .99 + loss.item() * .01
if ((step + 1) % CFG['accum_iter'] == 0) or ((step + 1) == len(train_loader)):
scaler.step(optimizer)
scaler.update()
optimizer.zero_grad()
if scheduler is not None and schd_batch_update:
scheduler.step()
# if ((step + 1) % CFG['verbose_step'] == 0) or ((step + 1) == len(train_loader)):
# description = f'epoch {epoch} loss: {running_loss:.4f}'
# print(description)
# pbar.set_description(description)
if scheduler is not None and not schd_batch_update:
if epoch >= CFG['swa_start_epoch']:
swa_scheduler.step()
else:
scheduler.step()
def valid_one_epoch(epoch, model, loss_fn, val_loader, device, scheduler=None, schd_loss_update=False):
model.eval()
t = time.time()
loss_sum = 0
sample_num = 0
image_preds_all = []
image_targets_all = []
# pbar = tqdm(enumerate(val_loader), total=len(val_loader))
for step, (imgs, image_labels) in enumerate(val_loader):
imgs = imgs.to(device).float()
image_labels = image_labels.to(device).long()
image_preds = model(imgs) #output = model(input)
image_preds_all += [torch.argmax(image_preds, 1).detach().cpu().numpy()]
image_targets_all += [image_labels.detach().cpu().numpy()]
loss = loss_fn(image_preds, image_labels)
loss_sum += loss.item()*image_labels.shape[0]
sample_num += image_labels.shape[0]
# if ((step + 1) % CFG['verbose_step'] == 0) or ((step + 1) == len(val_loader)):
# description = f'epoch {epoch} loss: {loss_sum/sample_num:.4f}'
# pbar.set_description(description)
image_preds_all = np.concatenate(image_preds_all)
image_targets_all = np.concatenate(image_targets_all)
print('epoch = {}'.format(epoch+1), 'validation multi-class accuracy = {:.4f}'.format((image_preds_all==image_targets_all).mean()))
if scheduler is not None:
if schd_loss_update:
scheduler.step(loss_sum/sample_num)
else:
scheduler.step()
def inference_one_epoch(model, data_loader, device):
model.eval()
image_preds_all = []
# pbar = tqdm(enumerate(data_loader), total=len(data_loader))
with torch.no_grad():
for step, (imgs, image_labels) in enumerate(data_loader):
imgs = imgs.to(device).float()
image_preds = model(imgs) #output = model(input)
image_preds_all += [torch.softmax(image_preds, 1).detach().cpu().numpy()]
image_preds_all = np.concatenate(image_preds_all, axis=0)
return image_preds_all
# reference: https://www.kaggle.com/c/siim-isic-melanoma-classification/discussion/173733
class MyCrossEntropyLoss(_WeightedLoss):
def __init__(self, weight=None, reduction='mean'):
super().__init__(weight=weight, reduction=reduction)
self.weight = weight
self.reduction = reduction
def forward(self, inputs, targets):
lsm = F.log_softmax(inputs, -1)
if self.weight is not None:
lsm = lsm * self.weight.unsqueeze(0)
loss = -(targets * lsm).sum(-1)
if self.reduction == 'sum':
loss = loss.sum()
elif self.reduction == 'mean':
loss = loss.mean()
return loss
# ====================================================
# Label Smoothing
# ====================================================
class LabelSmoothingLoss(nn.Module):
def __init__(self, classes, smoothing=0.0, dim=-1):
super(LabelSmoothingLoss, self).__init__()
self.confidence = 1.0 - smoothing
self.smoothing = smoothing
self.cls = classes
self.dim = dim
def forward(self, pred, target, reduction = 'mean'):
pred = pred.log_softmax(dim=self.dim)
with torch.no_grad():
true_dist = torch.zeros_like(pred)
true_dist.fill_(self.smoothing / (self.cls - 1))
true_dist.scatter_(1, target.data.unsqueeze(1), self.confidence)
if reduction == 'mean':
return torch.mean(torch.sum(-true_dist * pred, dim=self.dim))
else:
return torch.sum(-true_dist * pred, dim=self.dim)
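# Note (added for clarity): for each sample the smoothed target distribution puts
# (1 - smoothing) on the true class and smoothing / (classes - 1) on every other class,
# and the loss is the cross-entropy between that distribution and the log-softmax
# predictions.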
###Output
_____no_output_____
###Markdown
Main Loop
###Code
from sklearn.metrics import accuracy_score
os.environ['CUDA_VISIBLE_DEVICES'] = '0' # specify GPUs locally
# #debug
# train = pd.read_csv('./input/cassava-leaf-disease-classification/train_debug.csv')
# CFG['epochs']=7
# model_path = 'temporary'
# !mkdir -p temporary
model_path='v2_hwkim_fixmatch_2019_fast_thr085_bs9_mu2_5e5_CusSwa2'
# !mkdir -p v2_hwkim_fixmatch_2019_fast_thr085_bs9_mu2_5e5_CusSwa2
if __name__ == '__main__':
for c in range(5):
train[c] = 0
folds = StratifiedKFold(n_splits=CFG['fold_num'], shuffle=True, random_state=CFG['seed']).split(np.arange(train.shape[0]), train.label.values)
for fold, (trn_idx, val_idx) in enumerate(folds):
print('Training with {} started'.format(fold))
print(len(trn_idx), len(val_idx))
train_loader, val_loader = prepare_dataloader(train, trn_idx, val_idx, data_root='./input/cassava-leaf-disease-classification/train_images/')
unlabeled_trainloader = train_loader_ul
device = torch.device(CFG['device'])
model = CassvaImgClassifier(CFG['model_arch'], train.label.nunique(), pretrained=True).to(device)
swa_model = AveragedModel(model)
scaler = GradScaler()
optimizer = AdamP(model.parameters(), lr=CFG['lr'], weight_decay=CFG['weight_decay'])
scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=CFG['swa_start_epoch']+1, T_mult=1, eta_min=CFG['min_lr'], last_epoch=-1)
swa_scheduler = SWALR(optimizer, swa_lr = CFG['min_lr'], anneal_epochs=1)
loss_tr = LabelSmoothingLoss(classes=CFG['target_size'], smoothing=CFG['smoothing']).to(device)
loss_fn = nn.CrossEntropyLoss().to(device)
for epoch in range(CFG['epochs']):
print(optimizer.param_groups[0]["lr"])
train_one_epoch(epoch, model, loss_tr, optimizer, train_loader, unlabeled_trainloader, device, scheduler=scheduler, swa_scheduler=swa_scheduler, schd_batch_update=False)
if epoch > CFG['swa_start_epoch']:
if epoch-1 == CFG['swa_start_epoch']:
swa_model = AveragedModel(model)
else:
swa_model.update_parameters(model)
with torch.no_grad():
print('non swa')
valid_one_epoch(epoch, model, loss_fn, val_loader, device, scheduler=None, schd_loss_update=False)
if epoch > CFG['swa_start_epoch']:
print('swa')
valid_one_epoch(epoch, swa_model, loss_fn, val_loader, device, scheduler=None, schd_loss_update=False)
# torch.save(model.state_dict(),'./model9_2/{}_fold_{}_{}_{}'.format(CFG['model_arch'], fold, epoch, seed))
del unlabeled_trainloader, model
with torch.no_grad():
# valid_one_epoch(epoch, swa_model, loss_fn, val_loader, device, scheduler=None, schd_loss_update=False)
torch.save(swa_model.state_dict(),'./'+model_path+'/swa_{}_fold_{}_{}'.format(CFG['model_arch'], fold, epoch))
print('swa_BN')
update_bn(train_loader, swa_model, device=device)
valid_one_epoch(epoch, swa_model, loss_fn, val_loader, device, scheduler=None, schd_loss_update=False)
torch.save(swa_model.state_dict(),'./'+model_path+'/noBN_swa_{}_fold_{}_{}'.format(CFG['model_arch'], fold, epoch))
tst_preds = []
for tta in range(5):
tst_preds += [inference_one_epoch(swa_model, val_loader, device)]
train.loc[val_idx, [0, 1, 2, 3, 4]] = np.mean(tst_preds, axis=0)
del swa_model, optimizer, train_loader, val_loader, scaler, scheduler
torch.cuda.empty_cache()
train['pred'] = np.array(train[[0, 1, 2, 3, 4]]).argmax(axis=1)
print(accuracy_score(train['label'].values, train['pred'].values))
###Output
Training with 0 started
37 10
|
TeachingDocs/Templates/Assignment_Skeleton.ipynb | ###Markdown
This Notebook - Goals - FOR EDINA **What?:** - Provides a skeleton which can be used as a base to copy/paste other formats of worksheets into. - Specifically for use with nbgrader. **Who?:** - Teachers. **Why?:** - Allows quick transfer from a written worksheet/pdf/latex straight to Noteable. **Noteable features to exploit:** - Markdown options. **How?:** - Provides a skeleton assignment. How to use this template - By default, this worksheet is set up to contain 3 code questions followed by 3 written answer questions. You can delete, copy and paste cells as appropriate for your worksheet. - To delete a cell, click on it and press the scissor button in the toolbar above. - Copy and paste a cell using the two buttons to the right of the scissor button in the toolbar above. - Click on a cell to select it, then press Enter to switch to edit mode. - In edit mode, type or paste questions in question cells. - To get out of edit mode, press Ctrl + Enter. - Pressing Ctrl + Enter from a code cell will execute the cell. - Include links to resources with the following syntax: [text to display](https://www.google.com). - "Written answers" (Markdown cells) can include text, basic tables and latex (see the markdown reference). They are written in markdown cells. - "Code answers" (code cells) can optionally include a commented code skeleton to prompt students in the answer cell below. They are written in code cells. Assignment Title Assignment due date: {INSERT DUE DATE}. This assignment will make up {INSERT ASSIGNMENT WEIGHTING}% of your overall grade in this class. Instructions to students If the assignment was fetched from the assignments tab, do not change the name of the assignment file(s). Cells which are left blank for your responses will either require a text response or a code response. This will be clear from the question, but you should check that a text response is written in a markdown cell, and a code response is written in a code cell (as indicated in the toolbar). Code answers In questions that require you to write code, there will usually be a code cell containing `YOUR CODE HERE` and `raise NotImplementedError()`. When you are ready to write your answer, delete `raise NotImplementedError()` and write your code. Text answers For questions with a text answer, there will be a markdown cell following the question. There will usually be an indication that the cell is intended for your answer, such as "YOUR ANSWER HERE". Submitting your work You should save your work before you submit ("Save" icon in the top menu). Before you submit, ensure that the notebook can be run from start to finish by pressing the "Restart & Run All" option in the "Kernel" menu above. Once you are ready, go to the assignments tab on the Noteable landing page and click "Submit" on the relevant assignment.
###Code
# IMPORT LIBRARIES HERE
# this cell can usually be ignored by students
# Common libraries include:
import math
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib is a magic command - see IPython documentation
%matplotlib inline
# hide unnecessary warnings
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Assignment Introduction*{INSERT EXPLANATION OF ASSIGNMENT}* Question 1 - codeThis question has a code answer. *{PASTE/WRITE QUESTION HERE}*
###Code
# include any comments or code stub here
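# A hypothetical answer-cell stub (illustrative only -- the function name is not from
# the template) might look like:
#
#     def add(a, b):
#         # YOUR CODE HERE
#         raise NotImplementedError()
#
# A matching test in the cell below could then be as simple as `assert add(1, 2) == 3`.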
# Tests for answer
# You can make these visible/invisible from Formgrader
###Output
_____no_output_____
###Markdown
Question 2 - codeThis question has a code answer. *{PASTE/WRITE QUESTION HERE}*
###Code
# include any comments or code stub here
# Tests for answer
# You can make these visible/invisible from Formgrader
###Output
_____no_output_____
###Markdown
Question 3 - codeThis question has a code answer. *{PASTE/WRITE QUESTION HERE}*
###Code
# include any comments or code stub here
# Tests for answer
# You can make these visible/invisible from Formgrader
###Output
_____no_output_____ |
notebooks/ensemble_bagging.ipynb | ###Markdown
Bagging This notebook introduces a very natural strategy to build ensembles of machine learning models named "bagging". "Bagging" stands for Bootstrap AGGregatING. It uses bootstrap resampling (random sampling with replacement) to learn several models on random variations of the training set. At predict time, the predictions of each learner are aggregated to give the final predictions. First, we will generate a simple synthetic dataset to get insights regarding bootstrapping.
###Code
import pandas as pd
import numpy as np
# create a random number generator that will be used to set the randomness
rng = np.random.RandomState(1)
def generate_data(n_samples=30):
"""Generate synthetic dataset. Returns `data_train`, `data_test`,
`target_train`."""
x_min, x_max = -3, 3
x = rng.uniform(x_min, x_max, size=n_samples)
noise = 4.0 * rng.randn(n_samples)
y = x ** 3 - 0.5 * (x + 1) ** 2 + noise
y /= y.std()
data_train = pd.DataFrame(x, columns=["Feature"])
data_test = pd.DataFrame(
np.linspace(x_max, x_min, num=300), columns=["Feature"])
target_train = pd.Series(y, name="Target")
return data_train, data_test, target_train
import matplotlib.pyplot as plt
import seaborn as sns
data_train, data_test, target_train = generate_data(n_samples=30)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
_ = plt.title("Synthetic regression dataset")
data_train, data_test, target_train
###Output
_____no_output_____
###Markdown
The relationship between our feature and the target to predict is non-linear. However, a decision tree is capable of approximating such a non-linear dependency:
###Code
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(data_train, target_train)
y_pred = tree.predict(data_test)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test, y_pred, label="Fitted tree")
plt.legend()
_ = plt.title("Predictions by a single decision tree")
###Output
_____no_output_____
###Markdown
Let's see how we can use bootstrapping to learn several trees. Bootstrap resampling A bootstrap sample corresponds to a resampling, with replacement, of the original dataset, a sample that is the same size as the original dataset. Thus, the bootstrap sample will contain some data points several times while some of the original data points will not be present. We will create a function that, given `data` and `target`, will return a resampled variation `data_bootstrap` and `target_bootstrap`.
###Code
def bootstrap_sample(data, target):
# Indices corresponding to a sampling with replacement of the same sample
# size than the original data
bootstrap_indices = rng.choice(
np.arange(target.shape[0]), size=target.shape[0], replace=True,
)
# In pandas, we need to use `.iloc` to extract rows using an integer
# position index:
data_bootstrap = data.iloc[bootstrap_indices]
target_bootstrap = target.iloc[bootstrap_indices]
return data_bootstrap, target_bootstrap
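# Illustrative note (added): sampling with replacement means some indices repeat and
# others are dropped; e.g. rng.choice(np.arange(5), size=5, replace=True) could return
# array([0, 3, 3, 1, 4]), so sample 2 would be missing while sample 3 appears twice.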
###Output
_____no_output_____
###Markdown
We will generate 3 bootstrap samples and qualitatively check the difference with the original dataset.
###Code
n_bootstraps = 3
for bootstrap_idx in range(n_bootstraps):
# draw a bootstrap from the original data
data_bootstrap, target_booststrap = bootstrap_sample(
data_train, target_train,
)
plt.figure()
plt.scatter(data_bootstrap["Feature"], target_booststrap,
color="tab:blue", facecolors="none",
alpha=0.5, label="Resampled data", s=180, linewidth=5)
plt.scatter(data_train["Feature"], target_train,
color="black", s=60,
alpha=1, label="Original data")
plt.title(f"Resampled data #{bootstrap_idx}")
plt.legend()
###Output
_____no_output_____
###Markdown
Observe that the 3 variations all share common points with the original dataset. Some of the points are randomly resampled several times and appear as darker blue circles. The 3 generated bootstrap samples are all different from the original dataset and from each other. To confirm this intuition, we can check the number of unique samples in the bootstrap samples.
###Code
data_train_huge, data_test_huge, target_train_huge = generate_data(
n_samples=100_000)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train_huge, target_train_huge)
ratio_unique_sample = (np.unique(data_bootstrap_sample).size /
data_bootstrap_sample.size)
print(
f"Percentage of samples present in the original dataset: "
f"{ratio_unique_sample * 100:.1f}%"
)
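# Why ~63.2%? (added note) A given point is missed by a single draw with probability
# (1 - 1/n); over n draws with replacement it never appears with probability
# (1 - 1/n)**n, which tends to exp(-1) ~ 0.368 as n grows, so about
# 1 - exp(-1) ~ 63.2% of the original points show up at least once, e.g.
#   1 - (1 - 1 / 100_000) ** 100_000  # ~0.632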
###Output
Percentage of samples present in the original dataset: 63.2%
###Markdown
On average, ~63.2% of the original data points of the original dataset will be present in a given bootstrap sample. The other ~36.8% are repeated samples. We are able to generate many datasets, all slightly different. Now, we can fit a decision tree for each of these datasets and they all shall be slightly different as well.
###Code
bag_of_trees = []
for bootstrap_idx in range(n_bootstraps):
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train, target_train)
tree.fit(data_bootstrap_sample, target_bootstrap_sample)
bag_of_trees.append(tree)
###Output
_____no_output_____
###Markdown
Now that we created a bag of different trees, we can use each of the trees to predict on the testing data. They shall give slightly different predictions.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
plt.legend()
_ = plt.title("Predictions of trees trained on different bootstraps")
###Output
_____no_output_____
###Markdown
Aggregating Once our trees are fitted, we are able to get predictions for each of them. In regression, the most straightforward way to combine those predictions is just to average them: for a given test data point, we feed the input feature values to each of the `n` trained models in the ensemble and as a result compute `n` predicted values for the target variable. The final prediction of the ensemble for the test data point is the average of those `n` values. We can plot the averaged predictions from the previous example.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bag_predictions = []
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
bag_predictions.append(tree_predictions)
bag_predictions = np.mean(bag_predictions, axis=0)
plt.plot(data_test, bag_predictions, label="Averaged predictions",
linestyle="-")
plt.legend()
_ = plt.title("Predictions of bagged trees")
###Output
_____no_output_____
###Markdown
The unbroken red line shows the averaged predictions, which would be the final predictions given by our 'bag' of decision tree regressors. Note that the predictions of the ensemble are more stable because of the averaging operation. As a result, the bag of trees as a whole is less likely to overfit than the individual trees. Bagging in scikit-learn Scikit-learn implements the bagging procedure as a "meta-estimator", that is, an estimator that wraps another estimator: it takes a base model that is cloned several times and trained independently on each bootstrap sample. The following code snippet shows how to build a bagging ensemble of decision trees. We set `n_estimators=100` instead of 3 in our manual implementation above to get a stronger smoothing effect.
###Code
from sklearn.ensemble import BaggingRegressor
bagged_trees = BaggingRegressor(
base_estimator=DecisionTreeRegressor(max_depth=3),
n_estimators=100,
)
_ = bagged_trees.fit(data_train, target_train)
###Output
_____no_output_____
###Markdown
Let us visualize the predictions of the ensemble on the same test data:
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test, bagged_trees_predictions)
_ = plt.title("Predictions from a bagging classifier")
###Output
_____no_output_____
###Markdown
Because we use 100 trees in the ensemble, the average prediction is indeed slightly smoother but very similar to our previous average plot. It is possible to access the internal models of the ensemble stored as a Python list in the `bagged_trees.estimators_` attribute after fitting. Let us compare the base model predictions with their average:
###Code
for tree_idx, tree in enumerate(bagged_trees.estimators_):
label = "Predictions of individual trees" if tree_idx == 0 else None
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.1,
color="tab:blue", label=label)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test, bagged_trees_predictions,
color="tab:orange", label="Predictions of ensemble")
_ = plt.legend()
###Output
_____no_output_____
###Markdown
We used a low value of the opacity parameter `alpha` to better appreciate the overlap in the prediction functions of the individual trees. This visualization gives some insights on the uncertainty in the predictions in different areas of the feature space. Bagging complex pipelines While we used a decision tree as a base model, nothing prevents us from using any other type of model. As we know that the original data generating function is a noisy polynomial transformation of the input variable, let us try to fit a bagged polynomial regression pipeline on this dataset:
###Code
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
polynomial_regressor = make_pipeline(
MinMaxScaler(),
PolynomialFeatures(degree=4),
Ridge(alpha=1e-10),
)
###Output
_____no_output_____
###Markdown
This pipeline first scales the data to the 0-1 range with `MinMaxScaler`. Then it extracts degree-4 polynomial features. The resulting features will all stay in the 0-1 range by construction: if `x` lies in the 0-1 range then `x ** n` also lies in the 0-1 range for any value of `n`. Then the pipeline feeds the resulting non-linear features to a regularized linear regression model for the final prediction of the target variable. Note that we intentionally use a small value for the regularization parameter `alpha` as we expect the bagging ensemble to work well with slightly overfit base models. The ensemble itself is simply built by passing the resulting pipeline as the `base_estimator` parameter of the `BaggingRegressor` class:
###Code
bagging = BaggingRegressor(
base_estimator=polynomial_regressor,
n_estimators=100,
random_state=0,
)
_ = bagging.fit(data_train, target_train)
for i, regressor in enumerate(bagging.estimators_):
regressor_predictions = regressor.predict(data_test)
base_model_line = plt.plot(
data_test, regressor_predictions, linestyle="--", alpha=0.2,
label="Predictions of base models" if i == 0 else None,
color="tab:blue"
)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagging_predictions = bagging.predict(data_test)
plt.plot(data_test, bagging_predictions,
color="tab:orange", label="Predictions of ensemble")
plt.ylim(target_train.min(), target_train.max())
plt.legend()
_ = plt.title("Bagged polynomial regression")
###Output
_____no_output_____
###Markdown
BaggingThis notebook introduces a very natural strategy to build ensembles ofmachine learning models named "bagging"."Bagging" stands for Bootstrap AGGregatING. It uses bootstrap resampling(random sampling with replacement) to learn several models on randomvariations of the training set. At predict time, the predictions of eachlearner are aggregated to give the final predictions.First, we will generate a simple synthetic dataset to get insights regardingbootstraping.
###Code
import pandas as pd
import numpy as np
# create a random number generator that will be used to set the randomness
rng = np.random.RandomState(1)
def generate_data(n_samples=30):
"""Generate synthetic dataset. Returns `data_train`, `data_test`,
`target_train`."""
x_min, x_max = -3, 3
x = rng.uniform(x_min, x_max, size=n_samples)
noise = 4.0 * rng.randn(n_samples)
y = x ** 3 - 0.5 * (x + 1) ** 2 + noise
y /= y.std()
data_train = pd.DataFrame(x, columns=["Feature"])
data_test = pd.DataFrame(
np.linspace(x_max, x_min, num=300), columns=["Feature"])
target_train = pd.Series(y, name="Target")
return data_train, data_test, target_train
import matplotlib.pyplot as plt
import seaborn as sns
data_train, data_test, target_train = generate_data(n_samples=30)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
_ = plt.title("Synthetic regression dataset")
###Output
_____no_output_____
###Markdown
The relationship between our feature and the target to predict is non-linear.However, a decision tree is capable of approximating such a non-lineardependency:
###Code
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(data_train, target_train)
y_pred = tree.predict(data_test)
###Output
_____no_output_____
###Markdown
Remember that the term "test" here refers to data that was not used for training, and computing an evaluation metric on such a synthetic test set would be meaningless.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test, y_pred, label="Fitted tree")
plt.legend()
_ = plt.title("Predictions by a single decision tree")
###Output
_____no_output_____
###Markdown
Let's see how we can use bootstraping to learn several trees. Bootstrap resamplingA bootstrap sample corresponds to a resampling with replacement, of theoriginal dataset, a sample that is the same size as the original dataset.Thus, the bootstrap sample will contain some data points several times whilesome of the original data points will not be present.We will create a function that given `data` and `target` will return aresampled variation `data_bootstrap` and `target_bootstrap`.
###Code
def bootstrap_sample(data, target):
# Indices corresponding to a sampling with replacement of the same sample
# size than the original data
bootstrap_indices = rng.choice(
np.arange(target.shape[0]), size=target.shape[0], replace=True,
)
# In pandas, we need to use `.iloc` to extract rows using an integer
# position index:
data_bootstrap = data.iloc[bootstrap_indices]
target_bootstrap = target.iloc[bootstrap_indices]
return data_bootstrap, target_bootstrap
###Output
_____no_output_____
###Markdown
We will generate 3 bootstrap samples and qualitatively check the differencewith the original dataset.
###Code
n_bootstraps = 3
for bootstrap_idx in range(n_bootstraps):
# draw a bootstrap from the original data
data_bootstrap, target_booststrap = bootstrap_sample(
data_train, target_train,
)
plt.figure()
plt.scatter(data_bootstrap["Feature"], target_booststrap,
color="tab:blue", facecolors="none",
alpha=0.5, label="Resampled data", s=180, linewidth=5)
plt.scatter(data_train["Feature"], target_train,
color="black", s=60,
alpha=1, label="Original data")
plt.title(f"Resampled data #{bootstrap_idx}")
plt.legend()
###Output
_____no_output_____
###Markdown
Observe that the 3 variations all share common points with the originaldataset. Some of the points are randomly resampled several times and appearas darker blue circles.The 3 generated bootstrap samples are all different from the original datasetand from each other. To confirm this intuition, we can check the number ofunique samples in the bootstrap samples.
###Code
data_train_huge, data_test_huge, target_train_huge = generate_data(
n_samples=100_000)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train_huge, target_train_huge)
ratio_unique_sample = (np.unique(data_bootstrap_sample).size /
data_bootstrap_sample.size)
print(
f"Percentage of samples present in the original dataset: "
f"{ratio_unique_sample * 100:.1f}%"
)
###Output
_____no_output_____
###Markdown
On average, ~63.2% of the original data points of the original dataset willbe present in a given bootstrap sample. The other ~36.8% are repeatedsamples.We are able to generate many datasets, all slightly different.Now, we can fit a decision tree for each of these datasets and they all shallbe slightly different as well.
###Code
bag_of_trees = []
for bootstrap_idx in range(n_bootstraps):
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train, target_train)
tree.fit(data_bootstrap_sample, target_bootstrap_sample)
bag_of_trees.append(tree)
###Output
_____no_output_____
###Markdown
Now that we created a bag of different trees, we can use each of the trees to predict the samples within the range of data. They shall give slightly different predictions.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
plt.legend()
_ = plt.title("Predictions of trees trained on different bootstraps")
###Output
_____no_output_____
###Markdown
AggregatingOnce our trees are fitted and we are able to get predictions for each ofthem. In regression, the most straightforward way to combine thosepredictions is just to average them: for a given test data point, we feed theinput feature values to each of the `n` trained models in the ensemble and asa result compute `n` predicted values for the target variable. The finalprediction of the ensemble for the test data point is the average of those`n` values.We can plot the averaged predictions from the previous example.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bag_predictions = []
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
bag_predictions.append(tree_predictions)
bag_predictions = np.mean(bag_predictions, axis=0)
plt.plot(data_test, bag_predictions, label="Averaged predictions",
linestyle="-")
plt.legend()
_ = plt.title("Predictions of bagged trees")
###Output
_____no_output_____
###Markdown
The unbroken red line shows the averaged predictions, which would be the final predictions given by our 'bag' of decision tree regressors. Note that the predictions of the ensemble are more stable because of the averaging operation. As a result, the bag of trees as a whole is less likely to overfit than the individual trees. Bagging in scikit-learn Scikit-learn implements the bagging procedure as a "meta-estimator", that is, an estimator that wraps another estimator: it takes a base model that is cloned several times and trained independently on each bootstrap sample. The following code snippet shows how to build a bagging ensemble of decision trees. We set `n_estimators=100` instead of 3 in our manual implementation above to get a stronger smoothing effect.
###Code
from sklearn.ensemble import BaggingRegressor
bagged_trees = BaggingRegressor(
base_estimator=DecisionTreeRegressor(max_depth=3),
n_estimators=100,
)
_ = bagged_trees.fit(data_train, target_train)
###Output
_____no_output_____
###Markdown
Let us visualize the predictions of the ensemble on the same interval of data:
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test, bagged_trees_predictions)
_ = plt.title("Predictions from a bagging classifier")
###Output
_____no_output_____
###Markdown
Because we use 100 trees in the ensemble, the average prediction is indeedslightly smoother but very similar to our previous average plot.It is possible to access the internal models of the ensemble stored as aPython list in the `bagged_trees.estimators_` attribute after fitting.Let us compare the based model predictions with their average:
###Code
import warnings
with warnings.catch_warnings():
# ignore scikit-learn warning when accessing bagged estimators
warnings.filterwarnings(
"ignore",
message="X has feature names, but DecisionTreeRegressor was fitted without feature names",
)
for tree_idx, tree in enumerate(bagged_trees.estimators_):
label = "Predictions of individual trees" if tree_idx == 0 else None
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.1,
color="tab:blue", label=label)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test, bagged_trees_predictions,
color="tab:orange", label="Predictions of ensemble")
_ = plt.legend()
###Output
_____no_output_____
###Markdown
We used a low value of the opacity parameter `alpha` to better appreciate theoverlap in the prediction functions of the individual trees.This visualization gives some insights on the uncertainty in the predictionsin different areas of the feature space. Bagging complex pipelinesWhile we used a decision tree as a base model, nothing prevents us of usingany other type of model.As we know that the original data generating function is a noisy polynomialtransformation of the input variable, let us try to fit a bagged polynomialregression pipeline on this dataset:
###Code
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
polynomial_regressor = make_pipeline(
MinMaxScaler(),
PolynomialFeatures(degree=4),
Ridge(alpha=1e-10),
)
###Output
_____no_output_____
###Markdown
This pipeline first scales the data to the 0-1 range with `MinMaxScaler`.Then it extracts degree-4 polynomial features. The resulting features willall stay in the 0-1 range by construction: if `x` lies in the 0-1 range then`x ** n` also lies in the 0-1 range for any value of `n`.Then the pipeline feeds the resulting non-linear features to a regularizedlinear regression model for the final prediction of the target variable.Note that we intentionally use a small value for the regularization parameter`alpha` as we expect the bagging ensemble to work well with slightly overfitbase models.The ensemble itself is simply built by passing the resulting pipeline as the`base_estimator` parameter of the `BaggingRegressor` class:
###Code
bagging = BaggingRegressor(
base_estimator=polynomial_regressor,
n_estimators=100,
random_state=0,
)
_ = bagging.fit(data_train, target_train)
for i, regressor in enumerate(bagging.estimators_):
regressor_predictions = regressor.predict(data_test)
base_model_line = plt.plot(
data_test, regressor_predictions, linestyle="--", alpha=0.2,
label="Predictions of base models" if i == 0 else None,
color="tab:blue"
)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagging_predictions = bagging.predict(data_test)
plt.plot(data_test, bagging_predictions,
color="tab:orange", label="Predictions of ensemble")
plt.ylim(target_train.min(), target_train.max())
plt.legend()
_ = plt.title("Bagged polynomial regression")
###Output
_____no_output_____
###Markdown
Bagging In this notebook, we will present the first ensemble using bootstrap samples, called bagging. Bagging stands for Bootstrap AGGregatING. It uses bootstrap (random sampling with replacement) to learn several models. At predict time, the predictions of each learner are aggregated to give the final predictions. First, we will generate a simple synthetic dataset to get insights regarding bootstrapping.
###Code
import pandas as pd
import numpy as np
# create a random number generator that will be used to set the randomness
rng = np.random.RandomState(0)
def generate_data(n_samples=50):
"""Generate synthetic dataset. Returns `data_train`, `data_test`,
`target_train`."""
x_max, x_min = 1.4, -1.4
len_x = x_max - x_min
x = rng.rand(n_samples) * len_x - len_x / 2
noise = rng.randn(n_samples) * 0.3
y = x ** 3 - 0.5 * x ** 2 + noise
data_train = pd.DataFrame(x, columns=["Feature"])
data_test = pd.DataFrame(
np.linspace(x_max, x_min, num=300), columns=["Feature"])
target_train = pd.Series(y, name="Target")
return data_train, data_test, target_train
import matplotlib.pyplot as plt
import seaborn as sns
data_train, data_test, target_train = generate_data(n_samples=50)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
_ = plt.title("Synthetic regression dataset")
###Output
_____no_output_____
###Markdown
The link between our feature and the target to predict is non-linear. However, a decision tree is capable of fitting such data.
###Code
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(data_train, target_train)
y_pred = tree.predict(data_test)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test, y_pred, label="Fitted tree")
plt.legend()
_ = plt.title("Predictions by a single decision tree")
###Output
_____no_output_____
###Markdown
Let's see how we can use bootstrapping to learn several trees. Bootstrap sample A bootstrap sample corresponds to a resampling, with replacement, of the original dataset, a sample that is the same size as the original dataset. Thus, the bootstrap sample will contain some data points several times while some of the original data points will not be present. We will create a function that, given `data` and `target`, will return a bootstrap sample `data_bootstrap` and `target_bootstrap`.
###Code
def bootstrap_sample(data, target):
# indices corresponding to a sampling with replacement of the same sample
# size than the original data
bootstrap_indices = rng.choice(
np.arange(target.shape[0]), size=target.shape[0], replace=True,
)
data_bootstrap_sample = data.iloc[bootstrap_indices]
target_bootstrap_sample = target.iloc[bootstrap_indices]
return data_bootstrap_sample, target_bootstrap_sample
###Output
_____no_output_____
###Markdown
We will generate 3 bootstrap samples and qualitatively check the difference with the original dataset.
###Code
bootstraps_illustration = pd.DataFrame()
bootstraps_illustration["Original"] = data_train["Feature"]
n_bootstrap = 3
for bootstrap_idx in range(n_bootstrap):
# draw a bootstrap from the original data
bootstrap_data, target_data = bootstrap_sample(data_train, target_train)
# store only the bootstrap sample
    bootstraps_illustration[f"Bootstrap sample #{bootstrap_idx + 1}"] = \
bootstrap_data["Feature"].to_numpy()
###Output
_____no_output_____
###Markdown
In the cell above, we generated three bootstrap samples and we stored only the feature values. In this manner, we will plot the feature values from each set and check how different they are. Note In the next cell, we transform the dataframe from wide to long format. The column names become per-row information. pd.melt is in charge of doing this transformation. We make this transformation because we will use the seaborn function sns.swarmplot, which expects a long-format dataframe.
###Code
bootstraps_illustration = bootstraps_illustration.melt(
var_name="Type of data", value_name="Feature")
sns.swarmplot(x=bootstraps_illustration["Feature"],
y=bootstraps_illustration["Type of data"])
_ = plt.title("Feature values for the different sets")
###Output
_____no_output_____
###Markdown
We observe that the 3 generated bootstrap samples are all different from the original dataset. The sampling with replacement is the cause of this fluctuation. To confirm this intuition, we can check the number of unique samples in the bootstrap samples.
###Code
data_train_huge, data_test_huge, target_train_huge = generate_data(
n_samples=100_000)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train_huge, target_train_huge)
ratio_unique_sample = (np.unique(data_bootstrap_sample).size /
data_bootstrap_sample.size)
print(
f"Percentage of samples present in the original dataset: "
f"{ratio_unique_sample * 100:.1f}%"
)
###Output
_____no_output_____
###Markdown
On average, ~63.2% of the original data points of the original dataset will be present in the bootstrap sample. The other ~36.8% are just repeated samples. So, we are able to generate many datasets, all slightly different. Now, we can fit a decision tree for each of these datasets and they all shall be slightly different as well.
###Code
forest = []
for bootstrap_idx in range(n_bootstrap):
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train, target_train)
tree.fit(data_bootstrap_sample, target_bootstrap_sample)
forest.append(tree)
###Output
_____no_output_____
###Markdown
Now that we created a forest with many different trees, we can use each of the trees to predict on the testing data. They shall give slightly different results.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
for tree_idx, tree in enumerate(forest):
target_predicted = tree.predict(data_test)
plt.plot(data_test, target_predicted, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
plt.legend()
_ = plt.title("Predictions of trees trained on different bootstraps")
###Output
_____no_output_____
###Markdown
Aggregating Once our trees are fitted and we are able to get predictions for each of them, we need to combine them. In regression, the most straightforward approach is to average the different predictions from all learners. We can plot the averaged predictions from the previous example.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
target_predicted_forest = []
for tree_idx, tree in enumerate(forest):
target_predicted = tree.predict(data_test)
plt.plot(data_test, target_predicted, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
target_predicted_forest.append(target_predicted)
target_predicted_forest = np.mean(target_predicted_forest, axis=0)
plt.plot(data_test, target_predicted_forest, label="Averaged predictions",
linestyle="-")
plt.legend()
plt.title("Predictions of individual and combined tree")
###Output
_____no_output_____
###Markdown
The unbroken red line shows the averaged predictions, which would be the final predictions given by our 'bag' of decision tree regressors. Bagging in scikit-learn Scikit-learn implements bagging estimators. It takes a base model that is the model trained on each bootstrap sample.
###Code
from sklearn.ensemble import BaggingRegressor
bagging = BaggingRegressor(base_estimator=DecisionTreeRegressor(),
n_estimators=3)
bagging.fit(data_train, target_train)
target_predicted_forest = bagging.predict(data_test)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test, target_predicted_forest, label="Bag of decision trees")
plt.legend()
_ = plt.title("Predictions from a bagging classifier")
###Output
_____no_output_____
###Markdown
While we used a decision tree as a base model, nothing prevents us from using any other type of model. We will give an example that will use a linear regression.
###Code
from sklearn.linear_model import LinearRegression
bagging = BaggingRegressor(base_estimator=LinearRegression(),
n_estimators=3)
bagging.fit(data_train, target_train)
target_predicted_linear = bagging.predict(data_test)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test, target_predicted_forest, label="Bag of decision trees")
plt.plot(data_test, target_predicted_linear, label="Bag of linear regression")
plt.legend()
_ = plt.title("Bagging classifiers using \ndecision trees and linear models")
###Output
_____no_output_____
###Markdown
BaggingThis notebook introduces a very natural strategy to build ensembles ofmachine learning models named "bagging"."Bagging" stands for Bootstrap AGGregatING. It uses bootstrap resampling(random sampling with replacement) to learn several models on randomvariations of the training set. At predict time, the predictions of eachlearner are aggregated to give the final predictions.First, we will generate a simple synthetic dataset to get insights regardingbootstraping.
###Code
import pandas as pd
import numpy as np
# create a random number generator that will be used to set the randomness
rng = np.random.RandomState(1)
def generate_data(n_samples=30):
"""Generate synthetic dataset. Returns `data_train`, `data_test`,
`target_train`."""
x_min, x_max = -3, 3
x = rng.uniform(x_min, x_max, size=n_samples)
noise = 4.0 * rng.randn(n_samples)
y = x ** 3 - 0.5 * (x + 1) ** 2 + noise
y /= y.std()
data_train = pd.DataFrame(x, columns=["Feature"])
data_test = pd.DataFrame(
np.linspace(x_max, x_min, num=300), columns=["Feature"])
target_train = pd.Series(y, name="Target")
return data_train, data_test, target_train
import matplotlib.pyplot as plt
import seaborn as sns
data_train, data_test, target_train = generate_data(n_samples=30)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
_ = plt.title("Synthetic regression dataset")
###Output
_____no_output_____
###Markdown
The relationship between our feature and the target to predict is non-linear.However, a decision tree is capable of approximating such a non-lineardependency:
###Code
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(data_train, target_train)
y_pred = tree.predict(data_test)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test, y_pred, label="Fitted tree")
plt.legend()
_ = plt.title("Predictions by a single decision tree")
###Output
_____no_output_____
###Markdown
Let's see how we can use bootstraping to learn several trees. Bootstrap resamplingA bootstrap sample corresponds to a resampling with replacement, of theoriginal dataset, a sample that is the same size as the original dataset.Thus, the bootstrap sample will contain some data points several times whilesome of the original data points will not be present.We will create a function that given `data` and `target` will return aresampled variation `data_bootstrap` and `target_bootstrap`.
###Code
def bootstrap_sample(data, target):
# Indices corresponding to a sampling with replacement of the same sample
# size than the original data
bootstrap_indices = rng.choice(
np.arange(target.shape[0]), size=target.shape[0], replace=True,
)
# In pandas, we need to use `.iloc` to extract rows using an integer
# position index:
data_bootstrap = data.iloc[bootstrap_indices]
target_bootstrap = target.iloc[bootstrap_indices]
return data_bootstrap, target_bootstrap
###Output
_____no_output_____
###Markdown
We will generate 3 bootstrap samples and qualitatively check the differencewith the original dataset.
###Code
n_bootstraps = 3
for bootstrap_idx in range(n_bootstraps):
# draw a bootstrap from the original data
data_bootstrap, target_booststrap = bootstrap_sample(
data_train, target_train,
)
plt.figure()
plt.scatter(data_bootstrap["Feature"], target_booststrap,
color="tab:blue", facecolors="none",
alpha=0.5, label="Resampled data", s=180, linewidth=5)
plt.scatter(data_train["Feature"], target_train,
color="black", s=60,
alpha=1, label="Original data")
plt.title(f"Resampled data #{bootstrap_idx}")
plt.legend()
###Output
_____no_output_____
###Markdown
Observe that the 3 variations all share common points with the originaldataset. Some of the points are randomly resampled several times and appearas darker blue circles.The 3 generated bootstrap samples are all different from the original datasetand from each other. To confirm this intuition, we can check the number ofunique samples in the bootstrap samples.
###Code
data_train_huge, data_test_huge, target_train_huge = generate_data(
n_samples=100_000)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train_huge, target_train_huge)
ratio_unique_sample = (np.unique(data_bootstrap_sample).size /
data_bootstrap_sample.size)
print(
f"Percentage of samples present in the original dataset: "
f"{ratio_unique_sample * 100:.1f}%"
)
###Output
_____no_output_____
###Markdown
On average, ~63.2% of the original data points of the original dataset willbe present in a given bootstrap sample. The other ~36.8% are repeatedsamples.We are able to generate many datasets, all slightly different.Now, we can fit a decision tree for each of these datasets and they all shallbe slightly different as well.
###Code
bag_of_trees = []
for bootstrap_idx in range(n_bootstraps):
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train, target_train)
tree.fit(data_bootstrap_sample, target_bootstrap_sample)
bag_of_trees.append(tree)
###Output
_____no_output_____
###Markdown
Now that we created a bag of different trees, we can use each of the tree topredict on the testing data. They shall give slightly different predictions.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
plt.legend()
_ = plt.title("Predictions of trees trained on different bootstraps")
###Output
_____no_output_____
###Markdown
AggregatingOnce our trees are fitted and we are able to get predictions for each ofthem. In regression, the most straightforward way to combine thosepredictions is just to average them: for a given test data point, we feed theinput feature values to each of the `n` trained models in the ensemble and asa result compute `n` predicted values for the target variable. The finalprediction of the ensemble for the test data point is the average of those`n` values.We can plot the averaged predictions from the previous example.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bag_predictions = []
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
bag_predictions.append(tree_predictions)
bag_predictions = np.mean(bag_predictions, axis=0)
plt.plot(data_test, bag_predictions, label="Averaged predictions",
linestyle="-")
plt.legend()
_ = plt.title("Predictions of bagged trees")
###Output
_____no_output_____
###Markdown
The unbroken red line shows the averaged predictions, which would be thefinal predictions given by our 'bag' of decision tree regressors. Note thatthe predictions of the ensemble is more stable because of the averagingoperation. As a result, the bag of trees as a whole is less likely to overfitthan the individual trees. Bagging in scikit-learnScikit-learn implements the bagging procedure as a "meta-estimator", that isan estimator that wraps another estimator: it takes a base model that iscloned several times and trained independently on each bootstrap sample.The following code snippet shows how to build a bagging ensemble of decisiontrees. We set `n_estimators=100` instead of 3 in our manual implementationabove to get a stronger smoothing effect.
###Code
from sklearn.ensemble import BaggingRegressor
bagged_trees = BaggingRegressor(
base_estimator=DecisionTreeRegressor(max_depth=3),
n_estimators=100,
)
_ = bagged_trees.fit(data_train, target_train)
###Output
_____no_output_____
###Markdown
Let us visualize the predictions of the ensemble on the same test data:
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test, bagged_trees_predictions)
_ = plt.title("Predictions from a bagging classifier")
###Output
_____no_output_____
###Markdown
Because we use 100 trees in the ensemble, the average prediction is indeedslightly smoother but very similar to our previous average plot.It is possible to access the internal models of the ensemble stored as aPython list in the `bagged_trees.estimators_` attribute after fitting.Let us compare the based model predictions with their average:
###Code
for tree_idx, tree in enumerate(bagged_trees.estimators_):
label = "Predictions of individual trees" if tree_idx == 0 else None
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.1,
color="tab:blue", label=label)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test, bagged_trees_predictions,
color="tab:orange", label="Predictions of ensemble")
_ = plt.legend()
###Output
_____no_output_____
###Markdown
We used a low value of the opacity parameter `alpha` to better appreciate theoverlap in the prediction functions of the individual trees.This visualization gives some insights on the uncertainty in the predictionsin different areas of the feature space. Bagging complex pipelinesWhile we used a decision tree as a base model, nothing prevents us of usingany other type of model.As we know that the original data generating function is a noisy polynomialtransformation of the input variable, let us try to fit a bagged polynomialregression pipeline on this dataset:
###Code
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
polynomial_regressor = make_pipeline(
MinMaxScaler(),
PolynomialFeatures(degree=4),
Ridge(alpha=1e-10),
)
###Output
_____no_output_____
###Markdown
This pipeline first scales the data to the 0-1 range with `MinMaxScaler`.Then it extracts degree-4 polynomial features. The resulting features willall stay in the 0-1 range by construction: if `x` lies in the 0-1 range then`x ** n` also lies in the 0-1 range for any value of `n`.Then the pipeline feeds the resulting non-linear features to a regularizedlinear regression model for the final prediction of the target variable.Note that we intentionally use a small value for the regularization parameter`alpha` as we expect the bagging ensemble to work well with slightly overfitbase models.The ensemble itself is simply built by passing the resulting pipeline as the`base_estimator` parameter of the `BaggingRegressor` class:
###Code
bagging = BaggingRegressor(
base_estimator=polynomial_regressor,
n_estimators=100,
random_state=0,
)
_ = bagging.fit(data_train, target_train)
for i, regressor in enumerate(bagging.estimators_):
regressor_predictions = regressor.predict(data_test)
base_model_line = plt.plot(
data_test, regressor_predictions, linestyle="--", alpha=0.2,
label="Predictions of base models" if i == 0 else None,
color="tab:blue"
)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagging_predictions = bagging.predict(data_test)
plt.plot(data_test, bagging_predictions,
color="tab:orange", label="Predictions of ensemble")
plt.ylim(target_train.min(), target_train.max())
plt.legend()
_ = plt.title("Bagged polynomial regression")
###Output
_____no_output_____
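###Markdown
As a final side-by-side view, the cell below is a minimal sketch (assuming `polynomial_regressor`, `bagging` and the data defined above) that contrasts the bagged ensemble with a single, unbagged copy of the same pipeline fitted on the full training set:
###Code
# A sketch: the single, weakly regularized pipeline tends to be wigglier
# than the bagged average.
single_polynomial = polynomial_regressor.fit(data_train, target_train)
single_predictions = single_polynomial.predict(data_test)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
                alpha=0.5)
plt.plot(data_test["Feature"], single_predictions,
         color="tab:green", label="Single pipeline")
plt.plot(data_test["Feature"], bagging.predict(data_test),
         color="tab:orange", label="Bagged pipelines")
plt.ylim(target_train.min(), target_train.max())
plt.legend()
_ = plt.title("Single vs bagged polynomial regression")
###Output
_____no_output_____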
###Markdown
Bagging

In this notebook, we will present the first ensemble method using bootstrap samples, called bagging. Bagging stands for Bootstrap AGGregatING. It uses bootstrap resampling (random sampling with replacement) to learn several models. At predict time, the predictions of each learner are aggregated to give the final predictions. First, we will generate a simple synthetic dataset to get insights regarding bootstrapping.
###Code
import pandas as pd
import numpy as np
# create a random number generator that will be used to set the randomness
rng = np.random.RandomState(0)
def generate_data(n_samples=50):
"""Generate synthetic dataset. Returns `data_train`, `data_test`,
`target_train`."""
x_max, x_min = 1.4, -1.4
len_x = x_max - x_min
x = rng.rand(n_samples) * len_x - len_x / 2
noise = rng.randn(n_samples) * 0.3
y = x ** 3 - 0.5 * x ** 2 + noise
data_train = pd.DataFrame(x, columns=["Feature"])
data_test = pd.DataFrame(
np.linspace(x_max, x_min, num=300), columns=["Feature"])
target_train = pd.Series(y, name="Target")
return data_train, data_test, target_train
import matplotlib.pyplot as plt
import seaborn as sns
data_train, data_test, target_train = generate_data(n_samples=50)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
_ = plt.title("Synthetic regression dataset")
###Output
_____no_output_____
###Markdown
The link between our feature and the target to predict is non-linear. However, a decision tree is capable of fitting such data.
###Code
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(data_train, target_train)
y_pred = tree.predict(data_test)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test, y_pred, label="Fitted tree")
plt.legend()
_ = plt.title("Predictions by a single decision tree")
###Output
_____no_output_____
###Markdown
Let's see how we can use bootstrapping to learn several trees.

Bootstrap sample

A bootstrap sample corresponds to a resampling, with replacement, of the original dataset: a sample that is the same size as the original dataset. Thus, the bootstrap sample will contain some data points several times while some of the original data points will not be present. We will create a function that, given `data` and `target`, will return a bootstrap sample `data_bootstrap_sample` and `target_bootstrap_sample`.
###Code
def bootstrap_sample(data, target):
# indices corresponding to a sampling with replacement of the same sample
    # size as the original data
bootstrap_indices = rng.choice(
np.arange(target.shape[0]), size=target.shape[0], replace=True,
)
data_bootstrap_sample = data.iloc[bootstrap_indices]
target_bootstrap_sample = target.iloc[bootstrap_indices]
return data_bootstrap_sample, target_bootstrap_sample
###Output
_____no_output_____
###Markdown
We will generate 3 bootstrap samples and qualitatively check the difference with the original dataset.
###Code
bootstraps_illustration = pd.DataFrame()
bootstraps_illustration["Original"] = data_train["Feature"]
n_bootstrap = 3
for bootstrap_idx in range(n_bootstrap):
# draw a bootstrap from the original data
bootstrap_data, target_data = bootstrap_sample(data_train, target_train)
# store only the bootstrap sample
bootstraps_illustration[f"Boostrap sample #{bootstrap_idx + 1}"] = \
bootstrap_data["Feature"].to_numpy()
###Output
_____no_output_____
###Markdown
In the cell above, we generated three bootstrap samples and we stored only the feature values. We will now plot the feature values from each set and check how different they are.

Note: in the next cell, we transform the dataframe from wide to long format, so that the column names become per-row information. `pd.melt` is in charge of this transformation. We make this transformation because the seaborn function `sns.swarmplot`, used below, expects a long-format dataframe.
###Code
bootstraps_illustration = bootstraps_illustration.melt(
var_name="Type of data", value_name="Feature")
sns.swarmplot(x=bootstraps_illustration["Feature"],
y=bootstraps_illustration["Type of data"])
_ = plt.title("Feature values for the different sets")
###Output
_____no_output_____
###Markdown
We observe that the 3 generated bootstrap samples are all different from the original dataset. The sampling with replacement is the cause of this fluctuation. To confirm this intuition, we can check the number of unique samples in the bootstrap samples.
###Code
data_train_huge, data_test_huge, target_train_huge = generate_data(
n_samples=100_000)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train_huge, target_train_huge)
ratio_unique_sample = (np.unique(data_bootstrap_sample).size /
data_bootstrap_sample.size)
print(
f"Percentage of samples present in the original dataset: "
f"{ratio_unique_sample * 100:.1f}%"
)
###Output
_____no_output_____
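###Markdown
The printed value hovers around 63.2%. As a side note, this figure can be derived analytically: the probability that a given point is drawn at least once in `n` draws with replacement is `1 - (1 - 1/n) ** n`, which tends to `1 - 1/e` as `n` grows. The cell below is a minimal sketch of this calculation.
###Code
# A sketch: compare the analytical value with its limit 1 - 1/e.
n = 100_000
prob_at_least_once = 1 - (1 - 1 / n) ** n
print(f"Analytical fraction of unique points: {prob_at_least_once:.3f}")
print(f"Limit 1 - 1/e: {1 - 1 / np.e:.3f}")
###Output
_____no_output_____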
###Markdown
On average, roughly 63.2% of the original data points will be present in a given bootstrap sample; the remaining draws are repetitions of points already selected. So, we are able to generate many datasets, all slightly different. Now, we can fit a decision tree for each of these datasets and they all shall be slightly different as well.
###Code
forest = []
for bootstrap_idx in range(n_bootstrap):
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train, target_train)
tree.fit(data_bootstrap_sample, target_bootstrap_sample)
forest.append(tree)
###Output
_____no_output_____
###Markdown
Now that we created a forest with many different trees, we can use each of the trees to predict on the testing data. They shall give slightly different results.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
for tree_idx, tree in enumerate(forest):
target_predicted = tree.predict(data_test)
plt.plot(data_test, target_predicted, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
plt.legend()
_ = plt.title("Predictions of trees trained on different bootstraps")
###Output
_____no_output_____
###Markdown
Aggregating

Once our trees are fitted and we are able to get predictions for each of them, we need to combine them. In regression, the most straightforward approach is to average the different predictions from all learners. We can plot the averaged predictions from the previous example.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
target_predicted_forest = []
for tree_idx, tree in enumerate(forest):
target_predicted = tree.predict(data_test)
plt.plot(data_test, target_predicted, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
target_predicted_forest.append(target_predicted)
target_predicted_forest = np.mean(target_predicted_forest, axis=0)
plt.plot(data_test, target_predicted_forest, label="Averaged predictions",
linestyle="-")
plt.legend()
plt.title("Predictions of individual and combined tree")
###Output
_____no_output_____
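###Markdown
For reuse, the averaging step can be wrapped in a small helper. The cell below is a minimal sketch (the name `bag_predict` is ours), assuming the `forest` list fitted above:
###Code
# A sketch: average the predictions of a list of fitted regressors.
def bag_predict(trees, data):
    """Return the mean prediction of the fitted `trees` on `data`."""
    return np.mean([tree.predict(data) for tree in trees], axis=0)

manual_bag_predictions = bag_predict(forest, data_test)
print(f"Shape of the averaged predictions: {manual_bag_predictions.shape}")
###Output
_____no_output_____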
###Markdown
The unbroken red line shows the averaged predictions, which would be the final predictions given by our 'bag' of decision tree regressors.

Bagging in scikit-learn

Scikit-learn implements bagging estimators. They take a base model that is cloned and trained on each bootstrap sample.
###Code
from sklearn.ensemble import BaggingRegressor
bagging = BaggingRegressor(base_estimator=DecisionTreeRegressor(),
n_estimators=3)
bagging.fit(data_train, target_train)
target_predicted_forest = bagging.predict(data_test)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test, target_predicted_forest, label="Bag of decision trees")
plt.legend()
_ = plt.title("Predictions from a bagging classifier")
###Output
_____no_output_____
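###Markdown
As a side note, the fitted meta-estimator records which training points each base model saw, mirroring our manual `bootstrap_sample` function. The cell below is a minimal sketch, assuming the `bagging` estimator fitted above:
###Code
# A sketch: `estimators_samples_` holds the bootstrap indices used to train
# each base model.
first_sample_indices = bagging.estimators_samples_[0]
n_unique_points = np.unique(first_sample_indices).size
print(f"The first base model was trained on {n_unique_points} unique points "
      f"out of {data_train.shape[0]}.")
###Output
_____no_output_____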
###Markdown
While we used a decision tree as a base model, nothing prevents us from using any other type of model. We will give an example using a linear regression.
###Code
from sklearn.linear_model import LinearRegression
bagging = BaggingRegressor(base_estimator=LinearRegression(),
n_estimators=3)
bagging.fit(data_train, target_train)
target_predicted_linear = bagging.predict(data_test)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test, target_predicted_forest, label="Bag of decision trees")
plt.plot(data_test, target_predicted_linear, label="Bag of linear regression")
plt.legend()
_ = plt.title("Bagging classifiers using \ndecision trees and linear models")
###Output
_____no_output_____
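###Markdown
To quantify the visual comparison above, the cell below is a minimal sketch that cross-validates both ensembles on the training data, using the estimators imported above. With such a small dataset this only gives a rough estimate.
###Code
# A sketch: compare the two bagged ensembles with 5-fold cross-validation.
from sklearn.model_selection import cross_val_score

for name, base_model in [("decision trees", DecisionTreeRegressor()),
                         ("linear regressions", LinearRegression())]:
    model = BaggingRegressor(base_estimator=base_model, n_estimators=3,
                             random_state=0)
    scores = cross_val_score(model, data_train, target_train, cv=5)
    print(f"Bag of {name}: mean R^2 = {scores.mean():.3f}")
###Output
_____no_output_____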
###Markdown
Bagging

This notebook introduces a very natural strategy to build ensembles of machine learning models, named "bagging". "Bagging" stands for Bootstrap AGGregatING. It uses bootstrap resampling (random sampling with replacement) to learn several models on random variations of the training set. At predict time, the predictions of each learner are aggregated to give the final predictions. First, we will generate a simple synthetic dataset to get insights regarding bootstrapping.
###Code
import pandas as pd
import numpy as np
# create a random number generator that will be used to set the randomness
rng = np.random.RandomState(1)
def generate_data(n_samples=30):
"""Generate synthetic dataset. Returns `data_train`, `data_test`,
`target_train`."""
x_min, x_max = -3, 3
x = rng.uniform(x_min, x_max, size=n_samples)
noise = 4.0 * rng.randn(n_samples)
y = x ** 3 - 0.5 * (x + 1) ** 2 + noise
y /= y.std()
data_train = pd.DataFrame(x, columns=["Feature"])
data_test = pd.DataFrame(
np.linspace(x_max, x_min, num=300), columns=["Feature"])
target_train = pd.Series(y, name="Target")
return data_train, data_test, target_train
import matplotlib.pyplot as plt
import seaborn as sns
data_train, data_test, target_train = generate_data(n_samples=30)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
_ = plt.title("Synthetic regression dataset")
###Output
_____no_output_____
###Markdown
The relationship between our feature and the target to predict is non-linear. However, a decision tree is capable of approximating such a non-linear dependency:
###Code
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(data_train, target_train)
y_pred = tree.predict(data_test)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test, y_pred, label="Fitted tree")
plt.legend()
_ = plt.title("Predictions by a single decision tree")
###Output
_____no_output_____
###Markdown
Let's see how we can use bootstrapping to learn several trees. Bootstrap resampling A bootstrap sample corresponds to a resampling, with replacement, of the original dataset: a sample that is the same size as the original dataset. As a consequence, the bootstrap sample will contain some data points several times while some of the original data points will not be present. We will create a function that, given `data` and `target`, will return a resampled variation `data_bootstrap` and `target_bootstrap`.
###Code
def bootstrap_sample(data, target):
# Indices corresponding to a sampling with replacement of the same sample
    # size as the original data
bootstrap_indices = rng.choice(
np.arange(target.shape[0]), size=target.shape[0], replace=True,
)
# In pandas, we need to use `.iloc` to extract rows using an integer
# position index:
data_bootstrap = data.iloc[bootstrap_indices]
target_bootstrap = target.iloc[bootstrap_indices]
return data_bootstrap, target_bootstrap
###Output
_____no_output_____
###Markdown
We will generate 3 bootstrap samples and qualitatively check the difference with the original dataset.
###Code
n_bootstraps = 3
for bootstrap_idx in range(n_bootstraps):
# draw a bootstrap from the original data
    data_bootstrap, target_bootstrap = bootstrap_sample(
data_train, target_train,
)
plt.figure()
    plt.scatter(data_bootstrap["Feature"], target_bootstrap,
color="tab:blue", facecolors="none",
alpha=0.5, label="Resampled data", s=180, linewidth=5)
plt.scatter(data_train["Feature"], target_train,
color="black", s=60,
alpha=1, label="Original data")
plt.title(f"Resampled data #{bootstrap_idx}")
plt.legend()
###Output
_____no_output_____
###Markdown
Observe that the 3 variations all share common points with the original dataset. Some of the points are randomly resampled several times and appear as darker blue circles. The 3 generated bootstrap samples are all different from the original dataset and from each other. To confirm this intuition, we can check the number of unique samples in the bootstrap samples.
###Code
data_train_huge, data_test_huge, target_train_huge = generate_data(
n_samples=100_000)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train_huge, target_train_huge)
ratio_unique_sample = (np.unique(data_bootstrap_sample).size /
data_bootstrap_sample.size)
print(
f"Percentage of samples present in the original dataset: "
f"{ratio_unique_sample * 100:.1f}%"
)
###Output
_____no_output_____
###Markdown
On average, roughly 63.2% of the unique data points of the original dataset are present in a given bootstrap sample; the remaining draws (about 36.8% of the sample) are repeats of points that were already selected. We are able to generate many datasets, all slightly different. Now, we can fit a decision tree on each of these datasets, and the resulting trees will all be slightly different as well.
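The 63.2% figure is not specific to this dataset: for a bootstrap sample of size `n`, the probability that any given original point is never drawn is `(1 - 1/n) ** n`, which approaches `1 / e ≈ 0.368` as `n` grows. The short check below is an added illustration (it is not part of the original notebook).
###Code
# Added illustration: compare the finite-n probability that a given point is
# absent from a bootstrap sample with its limit 1/e.
import numpy as np
n = 100_000  # same order of magnitude as the "huge" dataset above
prob_absent = (1 - 1 / n) ** n
print(f"P(point absent from one bootstrap sample): {prob_absent:.4f}")
print(f"Limit 1/e: {1 / np.e:.4f}")
print(f"Expected fraction of unique points: {1 - prob_absent:.4f}")
###Output
_____no_output_____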
###Code
bag_of_trees = []
for bootstrap_idx in range(n_bootstraps):
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train, target_train)
tree.fit(data_bootstrap_sample, target_bootstrap_sample)
bag_of_trees.append(tree)
###Output
_____no_output_____
###Markdown
Now that we created a bag of different trees, we can use each of the trees to predict on the testing data. They shall give slightly different predictions.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
plt.legend()
_ = plt.title("Predictions of trees trained on different bootstraps")
###Output
_____no_output_____
###Markdown
Aggregating Once our trees are fitted, we are able to get predictions from each of them. In regression, the most straightforward way to combine those predictions is just to average them: for a given test data point, we feed the input feature values to each of the `n` trained models in the ensemble and, as a result, compute `n` predicted values for the target variable. The final prediction of the ensemble for the test data point is the average of those `n` values. We can plot the averaged predictions from the previous example.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bag_predictions = []
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
bag_predictions.append(tree_predictions)
bag_predictions = np.mean(bag_predictions, axis=0)
plt.plot(data_test, bag_predictions, label="Averaged predictions",
linestyle="-")
plt.legend()
_ = plt.title("Predictions of bagged trees")
###Output
_____no_output_____
###Markdown
The unbroken red line shows the averaged predictions, which would be the final predictions given by our 'bag' of decision tree regressors. Note that the predictions of the ensemble are more stable because of the averaging operation. As a result, the bag of trees as a whole is less likely to overfit than the individual trees. Bagging in scikit-learn Scikit-learn implements the bagging procedure as a "meta-estimator", that is, an estimator that wraps another estimator: it takes a base model that is cloned several times and trained independently on each bootstrap sample. The following code snippet shows how to build a bagging ensemble of decision trees. We set `n_estimators=100` instead of 3 in our manual implementation above to get a stronger smoothing effect.
###Code
from sklearn.ensemble import BaggingRegressor
bagged_trees = BaggingRegressor(
base_estimator=DecisionTreeRegressor(max_depth=3),
n_estimators=100,
)
_ = bagged_trees.fit(data_train, target_train)
###Output
_____no_output_____
###Markdown
Let us visualize the predictions of the ensemble on the same test data:
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test, bagged_trees_predictions)
_ = plt.title("Predictions from a bagging classifier")
###Output
_____no_output_____
###Markdown
Because we use 100 trees in the ensemble, the average prediction is indeed slightly smoother but very similar to our previous average plot. It is possible to access the internal models of the ensemble, stored as a Python list in the `bagged_trees.estimators_` attribute after fitting. Let us compare the base model predictions with their average:
###Code
for tree_idx, tree in enumerate(bagged_trees.estimators_):
label = "Predictions of individual trees" if tree_idx == 0 else None
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.1,
color="tab:blue", label=label)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test, bagged_trees_predictions,
color="tab:orange", label="Predictions of ensemble")
_ = plt.legend()
###Output
_____no_output_____
###Markdown
We used a low value of the opacity parameter `alpha` to better appreciate the overlap in the prediction functions of the individual trees. This visualization gives some insights on the uncertainty in the predictions in different areas of the feature space. Bagging complex pipelines While we used a decision tree as a base model, nothing prevents us from using any other type of model. As we know that the original data generating function is a noisy polynomial transformation of the input variable, let us try to fit a bagged polynomial regression pipeline on this dataset:
###Code
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
polynomial_regressor = make_pipeline(
MinMaxScaler(),
PolynomialFeatures(degree=4),
Ridge(alpha=1e-10),
)
###Output
_____no_output_____
###Markdown
This pipeline first scales the data to the 0-1 range with `MinMaxScaler`. Then it extracts degree-4 polynomial features. The resulting features will all stay in the 0-1 range by construction: if `x` lies in the 0-1 range then `x ** n` also lies in the 0-1 range for any value of `n`. The pipeline then feeds the resulting non-linear features to a regularized linear regression model for the final prediction of the target variable. Note that we intentionally use a small value for the regularization parameter `alpha`, as we expect the bagging ensemble to work well with slightly overfit base models. The ensemble itself is simply built by passing the resulting pipeline as the `base_estimator` parameter of the `BaggingRegressor` class:
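As a quick aside before building the ensemble, the 0-1 claim can be checked empirically. This is an added sketch (not part of the original notebook) and it assumes a scikit-learn version recent enough to support pipeline slicing.
###Code
# Added sketch: apply only the scaler and the polynomial expansion, then
# inspect the range of the resulting feature values.
scaled_features = polynomial_regressor[:-1].fit_transform(data_train)
print(f"feature values range from {scaled_features.min():.3f} "
      f"to {scaled_features.max():.3f}")
###Output
_____no_output_____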
###Code
bagging = BaggingRegressor(
base_estimator=polynomial_regressor,
n_estimators=100,
random_state=0,
)
_ = bagging.fit(data_train, target_train)
for i, regressor in enumerate(bagging.estimators_):
regressor_predictions = regressor.predict(data_test)
base_model_line = plt.plot(
data_test, regressor_predictions, linestyle="--", alpha=0.2,
label="Predictions of base models" if i == 0 else None,
color="tab:blue"
)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagging_predictions = bagging.predict(data_test)
plt.plot(data_test, bagging_predictions,
color="tab:orange", label="Predictions of ensemble")
plt.ylim(target_train.min(), target_train.max())
plt.legend()
_ = plt.title("Bagged polynomial regression")
###Output
_____no_output_____
###Markdown
Bagging This notebook introduces a very natural strategy to build ensembles of machine learning models named "bagging". "Bagging" stands for Bootstrap AGGregatING. It uses bootstrap resampling (random sampling with replacement) to learn several models on random variations of the training set. At predict time, the predictions of each learner are aggregated to give the final predictions. First, we will generate a simple synthetic dataset to get insights regarding bootstrapping.
###Code
import pandas as pd
import numpy as np
# create a random number generator that will be used to set the randomness
rng = np.random.RandomState(1)
def generate_data(n_samples=30):
"""Generate synthetic dataset. Returns `data_train`, `data_test`,
`target_train`."""
x_min, x_max = -3, 3
x = rng.uniform(x_min, x_max, size=n_samples)
noise = 4.0 * rng.randn(n_samples)
y = x ** 3 - 0.5 * (x + 1) ** 2 + noise
y /= y.std()
data_train = pd.DataFrame(x, columns=["Feature"])
data_test = pd.DataFrame(
np.linspace(x_max, x_min, num=300), columns=["Feature"])
target_train = pd.Series(y, name="Target")
return data_train, data_test, target_train
import matplotlib.pyplot as plt
import seaborn as sns
data_train, data_test, target_train = generate_data(n_samples=30)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
_ = plt.title("Synthetic regression dataset")
###Output
_____no_output_____
###Markdown
The relationship between our feature and the target to predict is non-linear.However, a decision tree is capable of approximating such a non-lineardependency:
###Code
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(data_train, target_train)
y_pred = tree.predict(data_test)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test, y_pred, label="Fitted tree")
plt.legend()
_ = plt.title("Predictions by a single decision tree")
###Output
_____no_output_____
###Markdown
Let's see how we can use bootstrapping to learn several trees. Bootstrap resampling A bootstrap sample corresponds to a resampling, with replacement, of the original dataset: a sample that is the same size as the original dataset. As a consequence, the bootstrap sample will contain some data points several times while some of the original data points will not be present. We will create a function that, given `data` and `target`, will return a resampled variation `data_bootstrap` and `target_bootstrap`.
###Code
def bootstrap_sample(data, target):
# Indices corresponding to a sampling with replacement of the same sample
    # size as the original data
bootstrap_indices = rng.choice(
np.arange(target.shape[0]), size=target.shape[0], replace=True,
)
# In pandas, we need to use `.iloc` to extract rows using an integer
# position index:
data_bootstrap = data.iloc[bootstrap_indices]
target_bootstrap = target.iloc[bootstrap_indices]
return data_bootstrap, target_bootstrap
###Output
_____no_output_____
###Markdown
We will generate 3 bootstrap samples and qualitatively check the differencewith the original dataset.
###Code
n_bootstraps = 3
for bootstrap_idx in range(n_bootstraps):
# draw a bootstrap from the original data
data_bootstrap, target_booststrap = bootstrap_sample(
data_train, target_train,
)
plt.figure()
plt.scatter(data_bootstrap["Feature"], target_booststrap,
color="tab:blue", facecolors="none",
alpha=0.5, label="Resampled data", s=180, linewidth=5)
plt.scatter(data_train["Feature"], target_train,
color="black", s=60,
alpha=1, label="Original data")
plt.title(f"Resampled data #{bootstrap_idx}")
plt.legend()
###Output
_____no_output_____
###Markdown
Observe that the 3 variations all share common points with the originaldataset. Some of the points are randomly resampled several times and appearas darker blue circles.The 3 generated bootstrap samples are all different from the original datasetand from each other. To confirm this intuition, we can check the number ofunique samples in the bootstrap samples.
###Code
data_train_huge, data_test_huge, target_train_huge = generate_data(
n_samples=100_000)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train_huge, target_train_huge)
ratio_unique_sample = (np.unique(data_bootstrap_sample).size /
data_bootstrap_sample.size)
print(
f"Percentage of samples present in the original dataset: "
f"{ratio_unique_sample * 100:.1f}%"
)
###Output
_____no_output_____
###Markdown
On average, roughly 63.2% of the unique data points of the original dataset are present in a given bootstrap sample; the remaining draws (about 36.8% of the sample) are repeats of points that were already selected. We are able to generate many datasets, all slightly different. Now, we can fit a decision tree on each of these datasets, and the resulting trees will all be slightly different as well.
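To complement the analytical value, the fraction of unique points can also be estimated empirically by drawing many bootstrap samples. The snippet below is an added illustration (not part of the original notebook); it uses its own random state so that the notebook's `rng` is left untouched.
###Code
# Added illustration: Monte Carlo estimate of the unique-point fraction.
import numpy as np
check_rng = np.random.RandomState(0)
n = len(data_train)
fractions = []
for _ in range(500):
    indices = check_rng.choice(np.arange(n), size=n, replace=True)
    fractions.append(np.unique(indices).size / n)
print(f"average fraction of unique points over 500 draws: {np.mean(fractions):.3f}")
###Output
_____no_output_____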
###Code
bag_of_trees = []
for bootstrap_idx in range(n_bootstraps):
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train, target_train)
tree.fit(data_bootstrap_sample, target_bootstrap_sample)
bag_of_trees.append(tree)
###Output
_____no_output_____
###Markdown
Now that we created a bag of different trees, we can use each of the trees to predict on the testing data. They shall give slightly different predictions.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
plt.legend()
_ = plt.title("Predictions of trees trained on different bootstraps")
###Output
_____no_output_____
###Markdown
Aggregating Once our trees are fitted, we are able to get predictions from each of them. In regression, the most straightforward way to combine those predictions is just to average them: for a given test data point, we feed the input feature values to each of the `n` trained models in the ensemble and, as a result, compute `n` predicted values for the target variable. The final prediction of the ensemble for the test data point is the average of those `n` values. We can plot the averaged predictions from the previous example.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bag_predictions = []
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
bag_predictions.append(tree_predictions)
bag_predictions = np.mean(bag_predictions, axis=0)
plt.plot(data_test, bag_predictions, label="Averaged predictions",
linestyle="-")
plt.legend()
_ = plt.title("Predictions of bagged trees")
###Output
_____no_output_____
###Markdown
The unbroken red line shows the averaged predictions, which would be the final predictions given by our 'bag' of decision tree regressors. Note that the predictions of the ensemble are more stable because of the averaging operation. As a result, the bag of trees as a whole is less likely to overfit than the individual trees. Bagging in scikit-learn Scikit-learn implements the bagging procedure as a "meta-estimator", that is, an estimator that wraps another estimator: it takes a base model that is cloned several times and trained independently on each bootstrap sample. The following code snippet shows how to build a bagging ensemble of decision trees. We set `n_estimators=100` instead of 3 in our manual implementation above to get a stronger smoothing effect.
###Code
from sklearn.ensemble import BaggingRegressor
bagged_trees = BaggingRegressor(
base_estimator=DecisionTreeRegressor(max_depth=3),
n_estimators=100,
)
_ = bagged_trees.fit(data_train, target_train)
###Output
_____no_output_____
###Markdown
Let us visualize the predictions of the ensemble on the same test data:
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test, bagged_trees_predictions)
_ = plt.title("Predictions from a bagging classifier")
###Output
_____no_output_____
###Markdown
Because we use 100 trees in the ensemble, the average prediction is indeed slightly smoother but very similar to our previous average plot. It is possible to access the internal models of the ensemble, stored as a Python list in the `bagged_trees.estimators_` attribute after fitting. Let us compare the base model predictions with their average:
###Code
for tree_idx, tree in enumerate(bagged_trees.estimators_):
label = "Predictions of individual trees" if tree_idx == 0 else None
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.1,
color="tab:blue", label=label)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test, bagged_trees_predictions,
color="tab:orange", label="Predictions of ensemble")
_ = plt.legend()
###Output
_____no_output_____
###Markdown
We used a low value of the opacity parameter `alpha` to better appreciate the overlap in the prediction functions of the individual trees. This visualization gives some insights on the uncertainty in the predictions in different areas of the feature space. Bagging complex pipelines While we used a decision tree as a base model, nothing prevents us from using any other type of model. As we know that the original data generating function is a noisy polynomial transformation of the input variable, let us try to fit a bagged polynomial regression pipeline on this dataset:
###Code
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
polynomial_regressor = make_pipeline(
MinMaxScaler(),
PolynomialFeatures(degree=4),
Ridge(alpha=1e-10),
)
###Output
_____no_output_____
###Markdown
This pipeline first scales the data to the 0-1 range with `MinMaxScaler`. Then it extracts degree-4 polynomial features. The resulting features will all stay in the 0-1 range by construction: if `x` lies in the 0-1 range then `x ** n` also lies in the 0-1 range for any value of `n`. The pipeline then feeds the resulting non-linear features to a regularized linear regression model for the final prediction of the target variable. Note that we intentionally use a small value for the regularization parameter `alpha`, as we expect the bagging ensemble to work well with slightly overfit base models. The ensemble itself is simply built by passing the resulting pipeline as the `base_estimator` parameter of the `BaggingRegressor` class:
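As an added, hypothetical check (not in the original notebook), the expansion can be inspected on its own: with a single input column, a degree-4 polynomial expansion yields five columns (the bias term plus `x`, `x**2`, `x**3` and `x**4`), all within the 0-1 range after scaling.
###Code
# Added check: scaler + polynomial expansion only, without the Ridge step.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, PolynomialFeatures
feature_maker = make_pipeline(MinMaxScaler(), PolynomialFeatures(degree=4))
expanded = feature_maker.fit_transform(data_train)
print(f"expanded feature matrix shape: {expanded.shape}")
print(f"value range: [{expanded.min():.2f}, {expanded.max():.2f}]")
###Output
_____no_output_____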
###Code
bagging = BaggingRegressor(
base_estimator=polynomial_regressor,
n_estimators=100,
random_state=0,
)
_ = bagging.fit(data_train, target_train)
for i, regressor in enumerate(bagging.estimators_):
regressor_predictions = regressor.predict(data_test)
base_model_line = plt.plot(
data_test, regressor_predictions, linestyle="--", alpha=0.2,
label="Predictions of base models" if i == 0 else None,
color="tab:blue"
)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagging_predictions = bagging.predict(data_test)
plt.plot(data_test, bagging_predictions,
color="tab:orange", label="Predictions of ensemble")
plt.ylim(target_train.min(), target_train.max())
plt.legend()
_ = plt.title("Bagged polynomial regression")
###Output
_____no_output_____
###Markdown
Bagging This notebook introduces a very natural strategy to build ensembles of machine learning models named "bagging". "Bagging" stands for Bootstrap AGGregatING. It uses bootstrap resampling (random sampling with replacement) to learn several models on random variations of the training set. At predict time, the predictions of each learner are aggregated to give the final predictions. First, we will generate a simple synthetic dataset to get insights regarding bootstrapping.
###Code
import pandas as pd
import numpy as np
# create a random number generator that will be used to set the randomness
rng = np.random.RandomState(1)
def generate_data(n_samples=30):
"""Generate synthetic dataset. Returns `data_train`, `data_test`,
`target_train`."""
x_min, x_max = -3, 3
x = rng.uniform(x_min, x_max, size=n_samples)
noise = 4.0 * rng.randn(n_samples)
y = x ** 3 - 0.5 * (x + 1) ** 2 + noise
y /= y.std()
data_train = pd.DataFrame(x, columns=["Feature"])
data_test = pd.DataFrame(
np.linspace(x_max, x_min, num=300), columns=["Feature"])
target_train = pd.Series(y, name="Target")
return data_train, data_test, target_train
import matplotlib.pyplot as plt
import seaborn as sns
data_train, data_test, target_train = generate_data(n_samples=30)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
_ = plt.title("Synthetic regression dataset")
###Output
_____no_output_____
###Markdown
The relationship between our feature and the target to predict is non-linear.However, a decision tree is capable of approximating such a non-lineardependency:
###Code
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(data_train, target_train)
y_pred = tree.predict(data_test)
###Output
_____no_output_____
###Markdown
Remember that the term "test" here simply refers to data that was not used for training; since this synthetic test set comes without target values, computing an evaluation metric on it would be meaningless.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test["Feature"], y_pred, label="Fitted tree")
plt.legend()
_ = plt.title("Predictions by a single decision tree")
###Output
_____no_output_____
###Markdown
Let's see how we can use bootstrapping to learn several trees. Bootstrap resampling A bootstrap sample corresponds to a resampling, with replacement, of the original dataset: a sample that is the same size as the original dataset. As a consequence, the bootstrap sample will contain some data points several times while some of the original data points will not be present. We will create a function that, given `data` and `target`, will return a resampled variation `data_bootstrap` and `target_bootstrap`.
###Code
def bootstrap_sample(data, target):
# Indices corresponding to a sampling with replacement of the same sample
    # size as the original data
bootstrap_indices = rng.choice(
np.arange(target.shape[0]), size=target.shape[0], replace=True,
)
# In pandas, we need to use `.iloc` to extract rows using an integer
# position index:
data_bootstrap = data.iloc[bootstrap_indices]
target_bootstrap = target.iloc[bootstrap_indices]
return data_bootstrap, target_bootstrap
###Output
_____no_output_____
###Markdown
We will generate 3 bootstrap samples and qualitatively check the differencewith the original dataset.
###Code
n_bootstraps = 3
for bootstrap_idx in range(n_bootstraps):
# draw a bootstrap from the original data
data_bootstrap, target_bootstrap = bootstrap_sample(
data_train, target_train,
)
plt.figure()
plt.scatter(data_bootstrap["Feature"], target_bootstrap,
color="tab:blue", facecolors="none",
alpha=0.5, label="Resampled data", s=180, linewidth=5)
plt.scatter(data_train["Feature"], target_train,
color="black", s=60,
alpha=1, label="Original data")
plt.title(f"Resampled data #{bootstrap_idx}")
plt.legend()
###Output
_____no_output_____
###Markdown
Observe that the 3 variations all share common points with the originaldataset. Some of the points are randomly resampled several times and appearas darker blue circles.The 3 generated bootstrap samples are all different from the original datasetand from each other. To confirm this intuition, we can check the number ofunique samples in the bootstrap samples.
###Code
data_train_huge, data_test_huge, target_train_huge = generate_data(
n_samples=100_000)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train_huge, target_train_huge)
ratio_unique_sample = (np.unique(data_bootstrap_sample).size /
data_bootstrap_sample.size)
print(
f"Percentage of samples present in the original dataset: "
f"{ratio_unique_sample * 100:.1f}%"
)
###Output
_____no_output_____
###Markdown
On average, roughly 63.2% of the unique data points of the original dataset are present in a given bootstrap sample; the remaining draws (about 36.8% of the sample) are repeats of points that were already selected. We are able to generate many datasets, all slightly different. Now, we can fit a decision tree on each of these datasets, and the resulting trees will all be slightly different as well.
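The flip side of this is that, for each bootstrap sample, roughly a third of the original points are left out entirely (they are said to be "out of bag"). The snippet below is an added illustration (not part of the original notebook) and uses its own random state so the notebook's `rng` is not affected.
###Code
# Added illustration: count how many original points are out of bag for one draw.
import numpy as np
oob_rng = np.random.RandomState(42)
n = len(data_train)
drawn_indices = oob_rng.choice(np.arange(n), size=n, replace=True)
out_of_bag = np.setdiff1d(np.arange(n), drawn_indices)
print(f"{out_of_bag.size} of the {n} original points are out of bag "
      f"({out_of_bag.size / n:.1%})")
###Output
_____no_output_____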
###Code
bag_of_trees = []
for bootstrap_idx in range(n_bootstraps):
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train, target_train)
tree.fit(data_bootstrap_sample, target_bootstrap_sample)
bag_of_trees.append(tree)
###Output
_____no_output_____
###Markdown
Now that we created a bag of different trees, we can use each of the trees topredict the samples within the range of data. They shall give slightlydifferent predictions.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test["Feature"], tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
plt.legend()
_ = plt.title("Predictions of trees trained on different bootstraps")
###Output
_____no_output_____
###Markdown
AggregatingOnce our trees are fitted, we are able to get predictions for each ofthem. In regression, the most straightforward way to combine thosepredictions is just to average them: for a given test data point, we feed theinput feature values to each of the `n` trained models in the ensemble and asa result compute `n` predicted values for the target variable. The finalprediction of the ensemble for the test data point is the average of those`n` values.We can plot the averaged predictions from the previous example.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bag_predictions = []
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test["Feature"], tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
bag_predictions.append(tree_predictions)
bag_predictions = np.mean(bag_predictions, axis=0)
plt.plot(data_test["Feature"], bag_predictions, label="Averaged predictions",
linestyle="-")
plt.legend(bbox_to_anchor=(1.05, 0.8), loc="upper left")
_ = plt.title("Predictions of bagged trees")
###Output
_____no_output_____
###Markdown
The unbroken red line shows the averaged predictions, which would be the final predictions given by our 'bag' of decision tree regressors. Note that the predictions of the ensemble are more stable because of the averaging operation. As a result, the bag of trees as a whole is less likely to overfit than the individual trees. Bagging in scikit-learn Scikit-learn implements the bagging procedure as a "meta-estimator", that is, an estimator that wraps another estimator: it takes a base model that is cloned several times and trained independently on each bootstrap sample. The following code snippet shows how to build a bagging ensemble of decision trees. We set `n_estimators=100` instead of 3 in our manual implementation above to get a stronger smoothing effect.
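As an added aside (a sketch that is not part of the original notebook), this reduced overfitting can be quantified with the out-of-bag (OOB) score: `BaggingRegressor` can evaluate each training point using only the trees that did not see it during fitting.
###Code
# Added sketch: request the out-of-bag R2 score when fitting a bagged ensemble.
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
oob_bagged_trees = BaggingRegressor(
    base_estimator=DecisionTreeRegressor(max_depth=3),
    n_estimators=100,
    oob_score=True,
    random_state=0,
)
oob_bagged_trees.fit(data_train, target_train)
print(f"out-of-bag R2 score: {oob_bagged_trees.oob_score_:.3f}")
###Output
_____no_output_____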
###Code
from sklearn.ensemble import BaggingRegressor
bagged_trees = BaggingRegressor(
base_estimator=DecisionTreeRegressor(max_depth=3),
n_estimators=100,
)
_ = bagged_trees.fit(data_train, target_train)
###Output
_____no_output_____
###Markdown
Let us visualize the predictions of the ensemble on the same interval of data:
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test["Feature"], bagged_trees_predictions)
_ = plt.title("Predictions from a bagging classifier")
###Output
_____no_output_____
###Markdown
Because we use 100 trees in the ensemble, the average prediction is indeed slightly smoother but very similar to our previous average plot. It is possible to access the internal models of the ensemble, stored as a Python list in the `bagged_trees.estimators_` attribute after fitting. Let us compare the base model predictions with their average:
###Code
for tree_idx, tree in enumerate(bagged_trees.estimators_):
label = "Predictions of individual trees" if tree_idx == 0 else None
# we convert `data_test` into a NumPy array to avoid a warning raised in scikit-learn
tree_predictions = tree.predict(data_test.to_numpy())
plt.plot(data_test["Feature"], tree_predictions, linestyle="--", alpha=0.1,
color="tab:blue", label=label)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test["Feature"], bagged_trees_predictions,
color="tab:orange", label="Predictions of ensemble")
_ = plt.legend()
###Output
_____no_output_____
###Markdown
We used a low value of the opacity parameter `alpha` to better appreciate the overlap in the prediction functions of the individual trees. This visualization gives some insights on the uncertainty in the predictions in different areas of the feature space. Bagging complex pipelines While we used a decision tree as a base model, nothing prevents us from using any other type of model. As we know that the original data generating function is a noisy polynomial transformation of the input variable, let us try to fit a bagged polynomial regression pipeline on this dataset:
###Code
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
polynomial_regressor = make_pipeline(
MinMaxScaler(),
PolynomialFeatures(degree=4),
Ridge(alpha=1e-10),
)
###Output
_____no_output_____
###Markdown
This pipeline first scales the data to the 0-1 range with `MinMaxScaler`. Then it extracts degree-4 polynomial features. The resulting features will all stay in the 0-1 range by construction: if `x` lies in the 0-1 range then `x ** n` also lies in the 0-1 range for any value of `n`. The pipeline then feeds the resulting non-linear features to a regularized linear regression model for the final prediction of the target variable. Note that we intentionally use a small value for the regularization parameter `alpha`, as we expect the bagging ensemble to work well with slightly overfit base models. The ensemble itself is simply built by passing the resulting pipeline as the `base_estimator` parameter of the `BaggingRegressor` class:
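A small added note (not from the original notebook): `make_pipeline` names each step automatically after its lowercased class name, which is how parameters such as the Ridge regularization strength can be addressed, e.g. as `ridge__alpha`.
###Code
# Added illustration: inspect the automatically generated step names.
for step_name, step in polynomial_regressor.named_steps.items():
    print(step_name, "->", step.__class__.__name__)
###Output
_____no_output_____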
###Code
bagging = BaggingRegressor(
base_estimator=polynomial_regressor,
n_estimators=100,
random_state=0,
)
_ = bagging.fit(data_train, target_train)
for i, regressor in enumerate(bagging.estimators_):
# we convert `data_test` into a NumPy array to avoid a warning raised in scikit-learn
regressor_predictions = regressor.predict(data_test.to_numpy())
base_model_line = plt.plot(
data_test["Feature"], regressor_predictions, linestyle="--", alpha=0.2,
label="Predictions of base models" if i == 0 else None,
color="tab:blue"
)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagging_predictions = bagging.predict(data_test)
plt.plot(data_test["Feature"], bagging_predictions,
color="tab:orange", label="Predictions of ensemble")
plt.ylim(target_train.min(), target_train.max())
plt.legend()
_ = plt.title("Bagged polynomial regression")
###Output
_____no_output_____
###Markdown
Bagging This notebook introduces a very natural strategy to build ensembles of machine learning models named "bagging". "Bagging" stands for Bootstrap AGGregatING. It uses bootstrap resampling (random sampling with replacement) to learn several models on random variations of the training set. At predict time, the predictions of each learner are aggregated to give the final predictions. First, we will generate a simple synthetic dataset to get insights regarding bootstrapping.
###Code
import pandas as pd
import numpy as np
# create a random number generator that will be used to set the randomness
rng = np.random.RandomState(1)
def generate_data(n_samples=30):
"""Generate synthetic dataset. Returns `data_train`, `data_test`,
`target_train`."""
x_min, x_max = -3, 3
x = rng.uniform(x_min, x_max, size=n_samples)
noise = 4.0 * rng.randn(n_samples)
y = x ** 3 - 0.5 * (x + 1) ** 2 + noise
y /= y.std()
data_train = pd.DataFrame(x, columns=["Feature"])
data_test = pd.DataFrame(
np.linspace(x_max, x_min, num=300), columns=["Feature"])
target_train = pd.Series(y, name="Target")
return data_train, data_test, target_train
import matplotlib.pyplot as plt
import seaborn as sns
data_train, data_test, target_train = generate_data(n_samples=30)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
_ = plt.title("Synthetic regression dataset")
###Output
_____no_output_____
###Markdown
The relationship between our feature and the target to predict is non-linear.However, a decision tree is capable of approximating such a non-lineardependency:
###Code
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(data_train, target_train)
y_pred = tree.predict(data_test)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test, y_pred, label="Fitted tree")
plt.legend()
_ = plt.title("Predictions by a single decision tree")
###Output
_____no_output_____
###Markdown
Let's see how we can use bootstrapping to learn several trees. Bootstrap resampling A bootstrap sample corresponds to a resampling, with replacement, of the original dataset: a sample that is the same size as the original dataset. As a consequence, the bootstrap sample will contain some data points several times while some of the original data points will not be present. We will create a function that, given `data` and `target`, will return a resampled variation `data_bootstrap` and `target_bootstrap`.
###Code
def bootstrap_sample(data, target):
# Indices corresponding to a sampling with replacement of the same sample
    # size as the original data
bootstrap_indices = rng.choice(
np.arange(target.shape[0]), size=target.shape[0], replace=True,
)
# In pandas, we need to use `.iloc` to extract rows using an integer
# position index:
data_bootstrap = data.iloc[bootstrap_indices]
target_bootstrap = target.iloc[bootstrap_indices]
return data_bootstrap, target_bootstrap
###Output
_____no_output_____
###Markdown
We will generate 3 bootstrap samples and qualitatively check the differencewith the original dataset.
###Code
n_bootstraps = 3
for bootstrap_idx in range(n_bootstraps):
# draw a bootstrap from the original data
data_bootstrap, target_booststrap = bootstrap_sample(
data_train, target_train,
)
plt.figure()
plt.scatter(data_bootstrap["Feature"], target_booststrap,
color="tab:blue", facecolors="none",
alpha=0.5, label="Resampled data", s=180, linewidth=5)
plt.scatter(data_train["Feature"], target_train,
color="black", s=60,
alpha=1, label="Original data")
plt.title(f"Resampled data #{bootstrap_idx}")
plt.legend()
###Output
_____no_output_____
###Markdown
Observe that the 3 variations all share common points with the originaldataset. Some of the points are randomly resampled several times and appearas darker blue circles.The 3 generated bootstrap samples are all different from the original datasetand from each other. To confirm this intuition, we can check the number ofunique samples in the bootstrap samples.
###Code
data_train_huge, data_test_huge, target_train_huge = generate_data(
n_samples=100_000)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train_huge, target_train_huge)
ratio_unique_sample = (np.unique(data_bootstrap_sample).size /
data_bootstrap_sample.size)
print(
f"Percentage of samples present in the original dataset: "
f"{ratio_unique_sample * 100:.1f}%"
)
###Output
_____no_output_____
###Markdown
On average, roughly 63.2% of the unique data points of the original dataset are present in a given bootstrap sample; the remaining draws (about 36.8% of the sample) are repeats of points that were already selected. We are able to generate many datasets, all slightly different. Now, we can fit a decision tree on each of these datasets, and the resulting trees will all be slightly different as well.
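A related added illustration (not part of the original notebook): among the points that do appear in a bootstrap sample, each appears on average about `1 / (1 - 1/e) ≈ 1.58` times.
###Code
# Added illustration: average multiplicity of the points that are present.
import numpy as np
mult_rng = np.random.RandomState(0)  # local RNG, keeps the notebook's `rng` untouched
n = 100_000
drawn = mult_rng.choice(np.arange(n), size=n, replace=True)
_, counts = np.unique(drawn, return_counts=True)
print(f"average multiplicity of the points present: {counts.mean():.2f}")
###Output
_____no_output_____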
###Code
bag_of_trees = []
for bootstrap_idx in range(n_bootstraps):
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train, target_train)
tree.fit(data_bootstrap_sample, target_bootstrap_sample)
bag_of_trees.append(tree)
###Output
_____no_output_____
###Markdown
Now that we created a bag of different trees, we can use each of the trees to predict on the testing data. They shall give slightly different predictions.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
plt.legend()
_ = plt.title("Predictions of trees trained on different bootstraps")
###Output
_____no_output_____
###Markdown
Aggregating Once our trees are fitted, we are able to get predictions from each of them. In regression, the most straightforward way to combine those predictions is just to average them: for a given test data point, we feed the input feature values to each of the `n` trained models in the ensemble and, as a result, compute `n` predicted values for the target variable. The final prediction of the ensemble for the test data point is the average of those `n` values. We can plot the averaged predictions from the previous example.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bag_predictions = []
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
bag_predictions.append(tree_predictions)
bag_predictions = np.mean(bag_predictions, axis=0)
plt.plot(data_test, bag_predictions, label="Averaged predictions",
linestyle="-")
plt.legend()
_ = plt.title("Predictions of bagged trees")
###Output
_____no_output_____
###Markdown
The unbroken red line shows the averaged predictions, which would be the final predictions given by our 'bag' of decision tree regressors. Note that the predictions of the ensemble are more stable because of the averaging operation. As a result, the bag of trees as a whole is less likely to overfit than the individual trees. Bagging in scikit-learn Scikit-learn implements the bagging procedure as a "meta-estimator", that is, an estimator that wraps another estimator: it takes a base model that is cloned several times and trained independently on each bootstrap sample. The following code snippet shows how to build a bagging ensemble of decision trees. We set `n_estimators=100` instead of 3 in our manual implementation above to get a stronger smoothing effect.
###Code
from sklearn.ensemble import BaggingRegressor
bagged_trees = BaggingRegressor(
base_estimator=DecisionTreeRegressor(max_depth=3),
n_estimators=100,
)
_ = bagged_trees.fit(data_train, target_train)
###Output
_____no_output_____
###Markdown
Let us visualize the predictions of the ensemble on the same test data:
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test, bagged_trees_predictions)
_ = plt.title("Predictions from a bagging classifier")
###Output
_____no_output_____
###Markdown
Because we use 100 trees in the ensemble, the average prediction is indeed slightly smoother but very similar to our previous average plot. It is possible to access the internal models of the ensemble, stored as a Python list in the `bagged_trees.estimators_` attribute after fitting. Let us compare the base model predictions with their average:
###Code
for tree_idx, tree in enumerate(bagged_trees.estimators_):
label = "Predictions of individual trees" if tree_idx == 0 else None
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.1,
color="tab:blue", label=label)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test, bagged_trees_predictions,
color="tab:orange", label="Predictions of ensemble")
_ = plt.legend()
###Output
_____no_output_____
###Markdown
We used a low value of the opacity parameter `alpha` to better appreciate the overlap in the prediction functions of the individual trees. This visualization gives some insights on the uncertainty in the predictions in different areas of the feature space. Bagging complex pipelines While we used a decision tree as a base model, nothing prevents us from using any other type of model. As we know that the original data generating function is a noisy polynomial transformation of the input variable, let us try to fit a bagged polynomial regression pipeline on this dataset:
###Code
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
polynomial_regressor = make_pipeline(
MinMaxScaler(),
PolynomialFeatures(degree=4),
Ridge(alpha=1e-10),
)
###Output
_____no_output_____
###Markdown
This pipeline first scales the data to the 0-1 range with `MinMaxScaler`.Then it extracts degree-4 polynomial features. The resulting features willall stay in the 0-1 range by construction: if `x` lies in the 0-1 range then`x ** n` also lies in the 0-1 range for any value of `n`.Then the pipeline feeds the resulting non-linear features to a regularizedlinear regression model for the final prediction of the target variable.Note that we intentionally use a small value for the regularization parameter`alpha` as we expect the bagging ensemble to work well with slightly overfitbase models.The ensemble itself is simply built by passing the resulting pipeline as the`base_estimator` parameter of the `BaggingRegressor` class:
###Code
bagging = BaggingRegressor(
base_estimator=polynomial_regressor,
n_estimators=100,
random_state=0,
)
_ = bagging.fit(data_train, target_train)
for i, regressor in enumerate(bagging.estimators_):
regressor_predictions = regressor.predict(data_test)
base_model_line = plt.plot(
data_test, regressor_predictions, linestyle="--", alpha=0.2,
label="Predictions of base models" if i == 0 else None,
color="tab:blue"
)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagging_predictions = bagging.predict(data_test)
plt.plot(data_test, bagging_predictions,
color="tab:orange", label="Predictions of ensemble")
plt.ylim(target_train.min(), target_train.max())
plt.legend()
_ = plt.title("Bagged polynomial regression")
###Output
_____no_output_____
###Markdown
Bagging This notebook introduces a very natural strategy to build ensembles of machine learning models named "bagging". "Bagging" stands for Bootstrap AGGregatING. It uses bootstrap resampling (random sampling with replacement) to learn several models on random variations of the training set. At predict time, the predictions of each learner are aggregated to give the final predictions. First, we will generate a simple synthetic dataset to get insights regarding bootstrapping.
###Code
import pandas as pd
import numpy as np
# create a random number generator that will be used to set the randomness
rng = np.random.RandomState(1)
def generate_data(n_samples=30):
"""Generate synthetic dataset. Returns `data_train`, `data_test`,
`target_train`."""
x_min, x_max = -3, 3
x = rng.uniform(x_min, x_max, size=n_samples)
noise = 4.0 * rng.randn(n_samples)
y = x ** 3 - 0.5 * (x + 1) ** 2 + noise
y /= y.std()
data_train = pd.DataFrame(x, columns=["Feature"])
data_test = pd.DataFrame(
np.linspace(x_max, x_min, num=300), columns=["Feature"])
target_train = pd.Series(y, name="Target")
return data_train, data_test, target_train
import matplotlib.pyplot as plt
import seaborn as sns
data_train, data_test, target_train = generate_data(n_samples=30)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
_ = plt.title("Synthetic regression dataset")
###Output
_____no_output_____
###Markdown
The relationship between our feature and the target to predict is non-linear.However, a decision tree is capable of approximating such a non-lineardependency:
###Code
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(data_train, target_train)
y_pred = tree.predict(data_test)
###Output
_____no_output_____
###Markdown
Remember that the term "test" here simply refers to data that was not used for training; since this synthetic test set comes without target values, computing an evaluation metric on it would be meaningless.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test, y_pred, label="Fitted tree")
plt.legend()
_ = plt.title("Predictions by a single decision tree")
###Output
_____no_output_____
###Markdown
Let's see how we can use bootstrapping to learn several trees. Bootstrap resampling A bootstrap sample corresponds to a resampling, with replacement, of the original dataset: a sample that is the same size as the original dataset. As a consequence, the bootstrap sample will contain some data points several times while some of the original data points will not be present. We will create a function that, given `data` and `target`, will return a resampled variation `data_bootstrap` and `target_bootstrap`.
###Code
def bootstrap_sample(data, target):
# Indices corresponding to a sampling with replacement of the same sample
    # size as the original data
bootstrap_indices = rng.choice(
np.arange(target.shape[0]), size=target.shape[0], replace=True,
)
# In pandas, we need to use `.iloc` to extract rows using an integer
# position index:
data_bootstrap = data.iloc[bootstrap_indices]
target_bootstrap = target.iloc[bootstrap_indices]
return data_bootstrap, target_bootstrap
###Output
_____no_output_____
###Markdown
We will generate 3 bootstrap samples and qualitatively check the differencewith the original dataset.
###Code
n_bootstraps = 3
for bootstrap_idx in range(n_bootstraps):
# draw a bootstrap from the original data
data_bootstrap, target_booststrap = bootstrap_sample(
data_train, target_train,
)
plt.figure()
plt.scatter(data_bootstrap["Feature"], target_booststrap,
color="tab:blue", facecolors="none",
alpha=0.5, label="Resampled data", s=180, linewidth=5)
plt.scatter(data_train["Feature"], target_train,
color="black", s=60,
alpha=1, label="Original data")
plt.title(f"Resampled data #{bootstrap_idx}")
plt.legend()
###Output
_____no_output_____
###Markdown
Observe that the 3 variations all share common points with the originaldataset. Some of the points are randomly resampled several times and appearas darker blue circles.The 3 generated bootstrap samples are all different from the original datasetand from each other. To confirm this intuition, we can check the number ofunique samples in the bootstrap samples.
###Code
data_train_huge, data_test_huge, target_train_huge = generate_data(
n_samples=100_000)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train_huge, target_train_huge)
ratio_unique_sample = (np.unique(data_bootstrap_sample).size /
data_bootstrap_sample.size)
print(
f"Percentage of samples present in the original dataset: "
f"{ratio_unique_sample * 100:.1f}%"
)
###Output
_____no_output_____
###Markdown
On average, roughly 63.2% of the unique data points of the original dataset are present in a given bootstrap sample; the remaining draws (about 36.8% of the sample) are repeats of points that were already selected. We are able to generate many datasets, all slightly different. Now, we can fit a decision tree on each of these datasets, and the resulting trees will all be slightly different as well.
###Code
bag_of_trees = []
for bootstrap_idx in range(n_bootstraps):
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
data_bootstrap_sample, target_bootstrap_sample = bootstrap_sample(
data_train, target_train)
tree.fit(data_bootstrap_sample, target_bootstrap_sample)
bag_of_trees.append(tree)
###Output
_____no_output_____
###Markdown
Now that we created a bag of different trees, we can use each of the trees topredict the samples within the range of data. They shall give slightlydifferent predictions.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
plt.legend()
_ = plt.title("Predictions of trees trained on different bootstraps")
###Output
_____no_output_____
###Markdown
Aggregating Once our trees are fitted, we are able to get predictions from each of them. In regression, the most straightforward way to combine those predictions is just to average them: for a given test data point, we feed the input feature values to each of the `n` trained models in the ensemble and, as a result, compute `n` predicted values for the target variable. The final prediction of the ensemble for the test data point is the average of those `n` values. We can plot the averaged predictions from the previous example.
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bag_predictions = []
for tree_idx, tree in enumerate(bag_of_trees):
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.8,
label=f"Tree #{tree_idx} predictions")
bag_predictions.append(tree_predictions)
bag_predictions = np.mean(bag_predictions, axis=0)
plt.plot(data_test, bag_predictions, label="Averaged predictions",
linestyle="-")
plt.legend()
_ = plt.title("Predictions of bagged trees")
###Output
_____no_output_____
###Markdown
The unbroken red line shows the averaged predictions, which would be the final predictions given by our 'bag' of decision tree regressors. Note that the predictions of the ensemble are more stable because of the averaging operation. As a result, the bag of trees as a whole is less likely to overfit than the individual trees. Bagging in scikit-learn Scikit-learn implements the bagging procedure as a "meta-estimator", that is, an estimator that wraps another estimator: it takes a base model that is cloned several times and trained independently on each bootstrap sample. The following code snippet shows how to build a bagging ensemble of decision trees. We set `n_estimators=100` instead of 3 in our manual implementation above to get a stronger smoothing effect.
###Code
from sklearn.ensemble import BaggingRegressor
bagged_trees = BaggingRegressor(
base_estimator=DecisionTreeRegressor(max_depth=3),
n_estimators=100,
)
_ = bagged_trees.fit(data_train, target_train)
###Output
_____no_output_____
###Markdown
Let us visualize the predictions of the ensemble on the same interval of data:
###Code
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test, bagged_trees_predictions)
_ = plt.title("Predictions from a bagging classifier")
###Output
_____no_output_____
###Markdown
Because we use 100 trees in the ensemble, the average prediction is indeed slightly smoother but very similar to our previous average plot. It is possible to access the internal models of the ensemble, stored as a Python list in the `bagged_trees.estimators_` attribute after fitting. Let us compare the base model predictions with their average:
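Before the visual comparison, a small added numeric check (not part of the original notebook): the ensemble prediction is, by construction, the average of the predictions of the individual fitted trees.
###Code
# Added check: averaging the trees' predictions reproduces the ensemble output.
import numpy as np
tree_matrix = np.array(
    [tree.predict(data_test.to_numpy()) for tree in bagged_trees.estimators_])
print(np.allclose(tree_matrix.mean(axis=0), bagged_trees.predict(data_test)))
###Output
_____no_output_____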
###Code
import warnings
with warnings.catch_warnings():
# ignore scikit-learn warning when accesing bagged estimators
warnings.filterwarnings(
"ignore",
message="X has feature names, but DecisionTreeRegressor was fitted without feature names",
)
for tree_idx, tree in enumerate(bagged_trees.estimators_):
label = "Predictions of individual trees" if tree_idx == 0 else None
tree_predictions = tree.predict(data_test)
plt.plot(data_test, tree_predictions, linestyle="--", alpha=0.1,
color="tab:blue", label=label)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagged_trees_predictions = bagged_trees.predict(data_test)
plt.plot(data_test, bagged_trees_predictions,
color="tab:orange", label="Predictions of ensemble")
_ = plt.legend()
###Output
_____no_output_____
###Markdown
We used a low value of the opacity parameter `alpha` to better appreciate the overlap in the prediction functions of the individual trees. This visualization gives some insights on the uncertainty in the predictions in different areas of the feature space. Bagging complex pipelines While we used a decision tree as a base model, nothing prevents us from using any other type of model. As we know that the original data generating function is a noisy polynomial transformation of the input variable, let us try to fit a bagged polynomial regression pipeline on this dataset:
###Code
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
polynomial_regressor = make_pipeline(
MinMaxScaler(),
PolynomialFeatures(degree=4),
Ridge(alpha=1e-10),
)
###Output
_____no_output_____
###Markdown
This pipeline first scales the data to the 0-1 range with `MinMaxScaler`.Then it extracts degree-4 polynomial features. The resulting features willall stay in the 0-1 range by construction: if `x` lies in the 0-1 range then`x ** n` also lies in the 0-1 range for any value of `n`.Then the pipeline feeds the resulting non-linear features to a regularizedlinear regression model for the final prediction of the target variable.Note that we intentionally use a small value for the regularization parameter`alpha` as we expect the bagging ensemble to work well with slightly overfitbase models.The ensemble itself is simply built by passing the resulting pipeline as the`base_estimator` parameter of the `BaggingRegressor` class:
###Code
bagging = BaggingRegressor(
base_estimator=polynomial_regressor,
n_estimators=100,
random_state=0,
)
_ = bagging.fit(data_train, target_train)
for i, regressor in enumerate(bagging.estimators_):
regressor_predictions = regressor.predict(data_test)
base_model_line = plt.plot(
data_test, regressor_predictions, linestyle="--", alpha=0.2,
label="Predictions of base models" if i == 0 else None,
color="tab:blue"
)
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
bagging_predictions = bagging.predict(data_test)
plt.plot(data_test, bagging_predictions,
color="tab:orange", label="Predictions of ensemble")
plt.ylim(target_train.min(), target_train.max())
plt.legend()
_ = plt.title("Bagged polynomial regression")
###Output
_____no_output_____
data-types/data_types.ipynb | ###Markdown
Types of things. Every value in Python has a type. We can show what type of thing something is by calling `type`, like this:
###Code
type(1)
a = 1
type(a)
###Output
_____no_output_____
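###Markdown
As a brief illustrative sketch, `type` works the same way for other kinds of literal values, such as floating point numbers, strings, booleans and lists:
###Code
# A few more calls to the built-in type() function on literal values.
print(type(3.14))       # float
print(type("hello"))    # str
print(type(True))       # bool
print(type([1, 2, 3]))  # list
###Output
_____no_output_____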
Clase4.ipynb | ###Markdown
###Code
###Output
_____no_output_____
###Markdown
**Continuation of iterative control structures**---**Accumulators** This is the name given to variables that are in charge of storing some kind of information. For example: the case of buying groceries at the store:
###Code
nombre=input("nombre del consumidor: ")
listaComp=""
print(nombre, "escribe los siguientes viveres para su compra en el supermercado: ")
listaComp=listaComp + " 1 paca de papel higenico,"
print("----Compras que tengo que hacer----")
print(listaComp)
listaComp=listaComp+" shampoo,"
print(listaComp)
listaComp=listaComp+" pañales,"
print(listaComp)
###Output
nombre del consumidor: gisela
gisela escribe los siguientes viveres para su compra en el supermercado:
----Compras que tengo que hacer----
1 paca de papel higenico,
1 paca de papel higenico, shampoo,
1 paca de papel higenico, shampoo, pañales,
###Markdown
la variable "listaComp" nos esta sirviendo para acumular informacion de la lista de compras, podemos observar que **NO** estamos creando una variable para cada item, sino una variable definida que nos sirve para almacenar información.a coninuacion observemos un ejemplo donde se ponga en practica el uso de una acumulacion en una variable usando cantidades y precios
###Code
preciopapel= 14000 #precio paca palpel higenico
cantidadpapel= 2 #cantidad de papel que se va a comprar
precioshampoo= 18000
cantidadshampoo= 4
preciopanal=17000 #precio de pañales
cantidadpanal= 3 #cantidad pañales
subtotal=0
print("calculando el total de la compra")
total_papel= preciopapel * cantidadpapel
print("el valor total de papel higenico es de: $", total_papel )
subtotal= subtotal + total_papel
print("---- subtotal es: $", subtotal)
total_shampoo= precioshampoo * cantidadshampoo
print("el valor total del shampoo es de: $", total_shampoo)
subtotal= subtotal + total_shampoo
print("---- subtotal es: $", subtotal)
total_panal= preciopanal * cantidadpanal
print("el valor total de los pañales es es de: $", total_panal)
subtotal= subtotal + total_panal
print("---- subtotal es: $", subtotal)
###Output
calculando el total de la compra
el valor total de papel higenico es de: $ 28000
---- subtotal es: $ 28000
el valor total del shampoo es de: $ 72000
---- subtotal es: $ 100000
el valor total de los pañales es es de: $ 51000
---- subtotal es: $ 151000
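###Markdown
As a minimal complementary sketch (using the same prices and quantities as above), Python also offers the augmented assignment operator `+=`, which is the usual shorthand for updating an accumulator such as `subtotal`:
###Code
# The same accumulation pattern written with the += shorthand.
subtotal = 0
subtotal += 14000 * 2   # toilet paper: price * quantity
subtotal += 18000 * 4   # shampoo
subtotal += 17000 * 3   # diapers
print("Total of the purchase: $", subtotal)
###Output
_____no_output_____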
###Markdown
**Counters**---Closely related to the "accumulators" seen in the previous section. These variables are characterized as control variables, that is, they control the **number** of times a given action is executed. Using the previous example and modifying it a little, we can develop the following algorithm:
###Code
#ahora se va a comprar solo pañales por unidad
#conteo
conteop=0 #almacena valores numericos.
print("se realizara la compra de pañales etapa 3... se ha iniciado la compra de asignacion en el carrito, en total hay: ",conteop, "de pañales: ")
#cuenta de 1 en 1
conteop= conteop + 1
print("ahora hay: ",conteop, " pañales ")
conteop= conteop + 1
print("ahora hay: ",conteop, " pañales ")
conteop= conteop + 1
print("ahora hay: ",conteop, " pañales ")
conteop= conteop + 1
print("ahora hay: ",conteop, " pañales")
conteop= conteop + 1
print("ahora hay: ",conteop, " pañales ")
conteop= conteop + 1
###Output
se realizara la compra de pañales etapa 3... se ha iniciado la compra de asignacion en el carrito, en total hay: 0 de pañales:
ahora hay: 1 pañales
ahora hay: 2 pañales
ahora hay: 3 pañales
ahora hay: 4 pañales
ahora hay: 5 pañales
###Markdown
**CONDITION-CONTROLLED LOOPS**---*WHILE*---Remember that control variables let us manage states. Moving from one state to another is, for example, a variable going from holding no elements to holding them, or a variable with a particular element (accumulator or counter) being replaced completely (flag). These control variables are the basis of control loops; in plainer terms, they take us from a manual addition to something automated. We start with the "while" loop ("mientras" in Spanish). This loop is made of a **condition** and its **code block**: the code block is executed **while** the condition evaluates to True.
###Code
lapiz= 5
conteo= 0
print("se ha iniciado la compra. En total hay:", conteo, lapiz)
while (conteo < lapiz):
conteo=conteo + 1
print("se ha realizado la compra de lapices, ahora hay: ", conteo, "lapices")
###Output
se ha iniciado la compra. En total hay: 0 5
se ha realizado la compra de lapices, ahora hay: 1 lapices
se ha realizado la compra de lapices, ahora hay: 2 lapices
se ha realizado la compra de lapices, ahora hay: 3 lapices
se ha realizado la compra de lapices, ahora hay: 4 lapices
se ha realizado la compra de lapices, ahora hay: 5 lapices
###Markdown
Keep in mind that, inside the while loop, the variables involved in the condition must be updated. In the previous example, the variable `conteo` has to change so that the condition (conteo < lapiz) eventually becomes false and the loop ends; otherwise we would have an infinite loop. **The for loop**---A loop specialized and optimized for count-controlled iteration. It is made of three elements: 1. the iteration variable 2. the iterable element 3. the code block to iterate. Why use FOR? In Python it is very important and is considered a quite flexible and powerful tool, because it accepts complex data structures, character strings, ranges, and more. The iterables used with this structure must have one characteristic: 1. a defined quantity (this is what completely distinguishes it from while). The while loop starts from a truth condition, whereas **FOR** starts from a defined quantity.
###Code
#retomamos el ejemplo de los lapices.
print("se ha iniciado la compra, en total hay: 0 lapices.")
#el i, es la variable de iteracion, (1,6) es el elemento de iteracion
for i in range(1, 6): #en los rangos, la funcion range, maneja un intervalo abierto a la derecha y cerrado a la izquiera
#por ejemplo ahi empieza con 1, pero termina antes del 6, osea en 5.
print("se ha realizado la compra de lapices. Ahora hay:", i, "lapices")
###Output
se ha iniciado la compra, en total hay: 0 lapices.
se ha realizado la compra de lapices. Ahora hay: 1 lapices
se ha realizado la compra de lapices. Ahora hay: 2 lapices
se ha realizado la compra de lapices. Ahora hay: 3 lapices
se ha realizado la compra de lapices. Ahora hay: 4 lapices
se ha realizado la compra de lapices. Ahora hay: 5 lapices
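###Markdown
Since the notes point out that `for` also accepts data structures, here is a small illustrative sketch that iterates directly over a Python list holding the shopping items (the list itself is an assumed example, not part of the class exercise):
###Code
# The for loop can iterate directly over a list, not only over range().
shopping_list = ["papel higienico", "shampoo", "pañales"]
for item in shopping_list:
    print("Item to buy:", item)
###Output
_____no_output_____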
###Markdown
**Continuation of iterative control structures**---**Accumulators** This is the name given to the variables in charge of storing some kind of information. **Example:** the case of buying groceries at the store.
###Code
nombre = input("Nombre del comprador")
Listacompra = "";
print(nombre, "escribe los siguientes niveles para su compra ene el supermercado:")
listacompra = (listacompra , + "1 paca de papel de higienico")
print("----compras que tengo que hacer----")
print(listacompra)
listacompra=(listacompra ,+ "Shampoo pantene 2 and 1")
listacompra=(listacompra, +"2 pacas de pañales pequeñin etapa 3")
print(listacompra)
###Output
_____no_output_____
###Markdown
la variable "listacompra" nos esta sirviendooppara acumular informacion de la lista de compra.podemos observar, que **NO** estamos creando una variable por cada item, sino una variable definida nos sirve para almacenar la informacionA continuacion observemos un ejemplo en donde se pone en practica el uso de acumulacion en una variable usando cantidades y precios
###Code
ppph=14000 #precio de papel higienico
cpph =2 #cantidad de pacas de papel
pshampoo = 18000 #Precio de shampoo pantene 2 and 1
cshampoo =4 #Cantidad de shampoo
ppbebe = 17000 #precio de pacas de pañales pequeña
cpbebe = 3 #cantidad de pañales pequeños
subtotal = 0
print("Calculando el total de la compra...")
total_ppph=ppph*cpph
print("el valor de la compra del papel higiencio es", total_ppph)
subtotal=subtotal + total_ppph
print("---el subtotal es:",subtotal)
total_shampoo = pshampoo *cshampoo
print("El valor del total de Shampoo es:$",total_shampoo )
subtotal = subtotal+ total_shampoo
print("---el subtotal es:$",subtotal)
total_ppbebe = ppbebe*cpbebe
print("el valor total de pañales es:$",total_ppbebe)
subtotal = subtotal + total_ppbebe
print("el total de su compra es:$",subtotal)
###Output
Calculando el total de la compra...
el valor de la compra del papel higiencio es 28000
---el subtotal es: 28000
El valor del total de Shampoo es:$ 72000
---el subtotal es:$ 100000
el valor total de pañales es:$ 51000
el total de su compra es:$ 151000
###Markdown
**Counters** Closely related to the "accumulators" seen in the previous section. These variables are control variables: they control the **number** of times a given action is executed. Using the previous example with a small modification, we can develop the following algorithm:
###Code
#Se comprara pañales por unidad en este caso.
contp = 0
print("Se realizara la compra de pañales etapa 3... se ha iniciado la compra de asignacion en el carrito. En total hay :", contp, "pañales")
contp = contp+1
print("Se realizara la compra de pañales etapa 3... se ha iniciado la compra de asignacion en el carrito. Ahora hay :", contp, "pañales")
contp = contp+1
print("Ahora hay:",contp,"pañal1")
contp = contp+1
print("Ahora hay:",contp,"pañal1")
contp = contp+1
print("Ahora hay:",contp,"pañal1")
contp = contp+1
print("Ahora hay:",contp,"pañal1")
###Output
Se realizara la compra de pañales etapa 3... se ha iniciado la compra de asignacion en el carrito. En total hay : 0 pañales
Se realizara la compra de pañales etapa 3... se ha iniciado la compra de asignacion en el carrito. Ahora hay : 1 pañales
Ahora hay: 2 pañal1
Ahora hay: 3 pañal1
Ahora hay: 4 pañal1
Ahora hay: 5 pañal1
###Markdown
**Condition-controlled loops** **WHILE**---Remember that control variables let us manage states; moving from one state to another is, for example, a variable going from holding no elements to holding them, or a variable with a particular element (accumulator or counter) being replaced completely (flag). These control variables are the basis of control loops: in plainer terms, going from a manual addition to something more automated. We start with the "WHILE" loop ("mientras" in Spanish). This loop is made of a condition and its code block, and the code block is executed **while** the condition evaluates to True.
###Code
lapiz = 5
contlapiz = 0
print("Se ha iniciado la compra. en total hay :", contlapiz,lapiz)
while (contlapiz < lapiz):
contlapiz = contlapiz+1
print("Se ha realizado la compra de lapices ahora hay",str(contlapiz) + "lapiz")
a = str(contlapiz)
print(type(contlapiz))
print(type(a))
###Output
Se ha iniciado la compra. en total hay : 0 5
Se ha realizado la compra de lapices ahora hay 1lapiz
Se ha realizado la compra de lapices ahora hay 2lapiz
Se ha realizado la compra de lapices ahora hay 3lapiz
Se ha realizado la compra de lapices ahora hay 4lapiz
Se ha realizado la compra de lapices ahora hay 5lapiz
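###Markdown
As an extra illustrative sketch, a while loop can also count downwards; the same rule applies: the condition variable must change on every pass so the loop eventually stops (the variable names below are illustrative assumptions):
###Code
# Count down from 5 pencils; the loop stops once the condition becomes False.
pencils_left = 5
while pencils_left > 0:
    print("Pencils left:", pencils_left)
    pencils_left = pencils_left - 1
print("No pencils left")
###Output
_____no_output_____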
###Markdown
Keep in mind that inside the WHILE loop the variables involved in the condition must be updated. In the previous example, the variable `contlapiz` has to change so that the condition (contlapiz < lapiz) eventually becomes false and the loop ends; otherwise the loop would never stop, leaving us with an endless loop. **THE FOR LOOP**---A loop specialized and optimized for count-controlled iteration. It is made of three elements: 1. the iteration variable 2. the iterable element 3. the code block to iterate. **Why use FOR?** In Python it is very important and considered a flexible and powerful tool, because it accepts complex data structures, character strings, ranges, and more. The iterables used with this structure must have one characteristic: 1. a defined quantity (this is what distinguishes it from WHILE). WHILE starts from a truth condition, whereas **FOR** starts from a defined quantity.
###Code
##Retomando el ejemplo de la compra de lapices
print("se ha iniciado la compra. En total hay:0 lapices.")
for i in range(1,6): # en los rangos, la funcion range maneja un intervalo abierto a la derecha y cerrado al a izquierda
print("Se ha realizado la ocmpra de lapices. Ahora hay",i,"lapices")
###Output
se ha iniciado la compra. En total hay:0 lapices.
Se ha realizado la ocmpra de lapices. Ahora hay 1 lapices
Se ha realizado la ocmpra de lapices. Ahora hay 2 lapices
Se ha realizado la ocmpra de lapices. Ahora hay 3 lapices
Se ha realizado la ocmpra de lapices. Ahora hay 4 lapices
Se ha realizado la ocmpra de lapices. Ahora hay 5 lapices
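###Markdown
The notes mention that `for` also accepts character strings; a short illustrative sketch shows that iterating over a string visits it one character at a time (the word used is an assumed example):
###Code
# Iterating over a string yields one character per pass of the loop.
word = "lapiz"
for letter in word:
    print("Letter:", letter)
###Output
_____no_output_____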
###Markdown
**Continuation of iterative control structures**---**ACCUMULATORS** This is the name given to the variables in charge of "storing" some kind of information. Example: the case of buying groceries at the store:
###Code
nombre=input("Nombre del consumidor")
listacomp=""
print(nombre, "escribe los siguientes viveres para su compra en el supermercado")
listacomp=listacomp+",1 paca de papel higienico"
print("---Compras que tengo que hacer---")
listacomp=listacomp+",2 shampoo pantene2 and 1"
listacomp=listacomp+",2 pacas de pañales pequeñin etapa 3"
print(listacomp)
###Output
Nombre del consumidorana
ana escribe los siguientes viveres para su compra en el supermercado
---Compras que tengo que hacer---
1 paca de papel higienico,2 shampoo pantene2 and 1,2 pacas de pañales pequeñin etapa 3
###Markdown
La variable "lista comp nos esta sirviendo para acumular informacion de la lista de compras. Podemos observar, que no estamos creando una variable por cada item, sino una variable definida nos sirve para almacenar la informacion.A continuacion observemos un ejemplo donde se ponga en practica el uso de acumulacion en una variable usando cantidades y precio.
###Code
ppph=14000 # precio de paquete de papel higienico
cpph=2 # Cantidad de paquete de papel higienico
pshampoo=18000 # precio unidad de shampoo pantene 2 and 1
cshampoo=4# Cantidad shampoo pantene 2 and 1
ppbebe=17000 # precio de pacas de pañales pequeñin
cpbebe=3#cantidad de pacas de pañales pequeñin
subtotal=0
print("Calculando el total de la compra...")
total_pph=ppph*cpph
print("El valor total del papel higienico es:$", total_pph)
subtotal=subtotal+total_pph
print("--El subtotal es:$ ",subtotal)
total_shampoo=pshampoo*cshampoo
print("El valor total del shampoo es:$",total_shampoo)
subtotal=subtotal+total_shampoo
print("---EL subtotal es:$",subtotal)
total_pbebe=ppbebe*cpbebe
print("El valor total para pañales es:$",total_pbebe)
subtotal=subtotal+total_pbebe
print("El total de su compra es:$",subtotal)
###Output
Calculando el total de la compra...
El valor total del papel higienico es:$ 28000
--El subtotal es:$ 28000
El valor total del shampoo es:$ 72000
---EL subtotal es:$ 100000
El valor total para pañales es:$ 51000
El total de su compra es:$ 151000
###Markdown
**Counters**---Closely related to the "accumulators" seen in the previous section. These are control variables, that is, they control the **number** of times a given action is executed. Using the previous example with a small modification, we can develop the following algorithm:
###Code
#Se comprara pañales por unidad en este caso.
contp=0
print("Se realizara la compra de pañales etapa 3... Se ha iniciado la compra y asignacion en el carrito. En total hay:",contp,"pañales")
contp=contp+1
print("Ahora hay:",contp ,"pañal")
contp=contp+1
print("Ahora hay:",contp ,"pañal")
contp=contp+1
print("Ahora hay:",contp ,"pañal")
contp=contp+1
print("Ahora hay:",contp ,"pañal")
###Output
Se realizara la compra de pañales etapa 3... Se ha iniciado la compra y asignacion en el carrito. En total hay: 0 pañales
Ahora hay: 1 pañal
Ahora hay: 2 pañal
Ahora hay: 3 pañal
Ahora hay: 4 pañal
###Markdown
**Condition-controlled loops** "WHILE"---Remember that control variables let us move from one state to another: for example, a variable going from empty to holding elements, or a variable with a particular element (accumulator or counter) being replaced completely (flag). These control variables are the basis of control loops; put more plainly, they take us from a manual addition to something more automated. We start with the "While" loop ("mientras" in Spanish). This loop consists of a **condition** and its **code block**: the code block is executed **while** the condition evaluates to True.
###Code
lapiz=5
contlapiz=0
print("Se ha iniciado la compra. En total hay:", contlapiz, lapiz)
while (contlapiz <lapiz):
contlapiz=contlapiz+1
print("Se ha realizado la compra de Lapices. Ahora hay" + str(contlapiz)+ " lapiz")
a=str(contlapiz)
print(type(contlapiz))
print(type(a))
###Output
Se ha iniciado la compra. En total hay: 0 5
Se ha realizado la compra de Lapices. Ahora hay1 lapiz
Se ha realizado la compra de Lapices. Ahora hay2 lapiz
Se ha realizado la compra de Lapices. Ahora hay3 lapiz
Se ha realizado la compra de Lapices. Ahora hay4 lapiz
Se ha realizado la compra de Lapices. Ahora hay5 lapiz
<class 'int'>
<class 'str'>
###Markdown
Keep in mind that inside the WHILE loop the variables involved in the loop condition must be updated. In the previous example, the variable contlapiz has to change so that the condition (contlapiz<lapiz) eventually becomes false and the loop ends; otherwise the loop would never stop, turning into an endless loop. **THE FOR LOOP**---A loop specialized and optimized for count-controlled iteration. It consists of three elements: 1. the iteration variable 2. the iterable element 3. the code block to iterate. **Advantages of using FOR**: in Python it is very important and considered a flexible and powerful tool, since it accepts complex data structures, character strings, ranges, and more. The iterables used with this structure need one characteristic: 1. a defined quantity (this is what completely distinguishes it from while). The while loop starts from a truth condition, but **FOR** starts from a defined quantity.
###Code
#Retomando el ejemplo de la compra de los lapices
print("se ha iniciado la compra. En total hay: 0 lapices,")
for i in range(1,10): #En los rangos, la funcion range manejan un intervalo abierto a la derecha y cerrado a la izquierda
print("Se ha realizado la compra de lapices, Ahora hay",i,"lapices")
###Output
se ha iniciado la compra. En total hay: 0 lapices,
Se ha realizado la compra de lapices, Ahora hay 1 lapices
Se ha realizado la compra de lapices, Ahora hay 2 lapices
Se ha realizado la compra de lapices, Ahora hay 3 lapices
Se ha realizado la compra de lapices, Ahora hay 4 lapices
Se ha realizado la compra de lapices, Ahora hay 5 lapices
Se ha realizado la compra de lapices, Ahora hay 6 lapices
Se ha realizado la compra de lapices, Ahora hay 7 lapices
Se ha realizado la compra de lapices, Ahora hay 8 lapices
Se ha realizado la compra de lapices, Ahora hay 9 lapices
###Markdown
**Continuation of iterative control structures**---**ACCUMULATORS** This is the name given to the variables in charge of "storing" some kind of information. *Example*: the case of buying groceries at the store:
###Code
nombre=input("Nombre del consumidor ")
listacomp=""
print(nombre, "escribe los siguientes viveres para su compra en el supermercado")
listacomp=listacomp+"Paca de papel higiénico, "
print("-----Compras que tengo que hacer-----")
listacomp=listacomp+"2 Shampoo Pantene 2 and 1, "
listacomp=listacomp+"2 pacasde pañales Pequeñin etapa 3 "
print(listacomp)
###Output
Nombre del consumidor july
july escribe los siguientes viveres para su compra en el supermercado
-----Compras que tengo que hacer-----
Paca de papel higiénico, 2 Shampoo Pantene 2 and 1, 2 pacasde pañales Pequeñin etapa 3
###Markdown
La variable "listacomp" nos esta sirviendo para acumular información de lista de compras.Podemos observar, que **No** estamos creando una variabe por cada ictem, sino una variable definida nos sirve para almacenar la información.---A continuación observamos un ejemplo donde se ponga en práctica el uso de acumulacioón en una variable usando cantidades y precios.
###Code
ppph=14000 #precio de paquete de papel higiénico
cpph=2 #cantidad de paquete de papel higiénico
pshampoo=18000 #precio de shampoo pantene 2 and 1
cshampoo=4 #unidades de shampoo
ppbebe=17000 #precio de pacas de pañales pequeñin
cpbebe=3 #cantidad de pacas de pañales pequenin
subtotal=0
print("Calculando el total de la compra...")
total_pph=ppph*cpph
print("El valor total del papel higienico es: ", total_pph)
subtotal=subtotal+total_pph
print("--- El subtotal es: $", subtotal)
total_shampoo=pshampoo*cshampoo
print("El valor total de Shampoo es:$", total_shampoo)
subtotal=subtotal+total_shampoo
print("---El subtotal es: $", subtotal)
total_pbebe=ppbebe*cpbebe
print("El valor total de Pañales es:$", subtotal)
subtotal=subtotal+total_pbebe
print("El total de su compra es:$", subtotal)
###Output
Calculando el total de la compra...
El valor total del papel higienico es: 14002
--- El subtotal es: $ 14002
El valor total de Shampoo es:$ 72000
---El subtotal es: $ 86002
El valor total de Pañales es:$ 86002
El total de su compra es:$ 137002
###Markdown
**Counters**---Closely related to the *accumulators* seen in the previous section. These variables are characterized as control variables, that is, they control the **number** of times a given action is executed. Using the previous example with a small modification, we can develop the following algorithm:
###Code
#Se comprará pañales por unidad en este caso.
contp=0 #la variable se llama conteo de variables, se declara vacia, proque sera de tipo cadena, y alcacenara datos de tipo númerico
print("Se realizara la compra de pañales etapa 3--- Se ha iniciado la compra y asignación en el carrito. En total hay:", contp, "pañales")
contp=contp+1
print("Se realizara la compra de pañales etapa 3--- Se ha iniciado la compra y asignación en el carrito. Ahora hay:", contp, "pañal(es)")
contp=contp+1
print("Ahora hay:", contp, "pañal(es)")
contp=contp+1
print("Ahora hay:", contp, "pañal(es)")
contp=contp+1
print("Ahora hay:", contp, "pañal(es)")
contp=contp+1
print("Ahora hay:", contp, "pañal(es)")
contp=contp+1
###Output
Se realizara la compra de pañales etapa 3--- Se ha iniciado la compra y asignación en el carrito. En total hay: 0 pañales
Se realizara la compra de pañales etapa 3--- Se ha iniciado la compra y asignación en el carrito. Ahora hay: 1 pañal(es)
Ahora hay: 2 pañal(es)
Ahora hay: 3 pañal(es)
Ahora hay: 4 pañal(es)
Ahora hay: 5 pañal(es)
###Markdown
**CONDITION-CONTROLLED LOOPS** *WHILE*---Remember that control variables let us manage states; moving from one state to another is, for example, a variable going from holding no elements to holding them, or a variable with a particular element (accumulator or counter) being replaced completely (flag). These control variables are the basis of control loops: in plainer terms, going from a manual addition to something more automated. We start with the "WHILE" loop ("mientras" in Spanish). This loop consists of a **condition** and its **code block**: the code block will be executed **while** the condition evaluates to True.
###Code
lapiz=5
contlapiz=0
print("Se ha iniciado la compra. En total hay : ", contlapiz,lapiz)
while (contlapiz<lapiz):
contlapiz=contlapiz+1
print("Se ha realizado la compra de Lapices. Ahora hay ", contlapiz, "lapiz")
###Output
Se ha iniciado la compra. En total hay : 0 5
Se ha realizado la compra de Lapices. Ahora hay 1 lapiz
Se ha realizado la compra de Lapices. Ahora hay 2 lapiz
Se ha realizado la compra de Lapices. Ahora hay 3 lapiz
Se ha realizado la compra de Lapices. Ahora hay 4 lapiz
Se ha realizado la compra de Lapices. Ahora hay 5 lapiz
###Markdown
**Note**---Keep in mind that inside the WHILE loop the variables involved in the loop condition must be updated. In the previous example, the variable contlapiz has to change so that the condition (contlapiz<lapiz) eventually becomes false and the loop ends; otherwise the loop would never stop, which would turn into an endless loop. In this case, while the counter is smaller than the number of pencils (5), the counting continues; afterwards it stops. **THE FOR LOOP**---A loop specialized and optimized for count-controlled iteration. It consists of three elements: 1. the iteration variable 2. the iterable element 3. the code block to iterate. **Advantages of using FOR**: in Python it is very important and considered a quite flexible and powerful tool, since it accepts complex data structures, character strings, ranges, and more. The iterables used with this structure need one characteristic: 1. a defined quantity (this is what completely distinguishes it from *while*). *Why?* The while loop starts from a truth condition, whereas **FOR** starts from a defined quantity.
###Code
##Retomando el ejemplo de la compra de lapices
print("Se ha iniciado la compra. En total hay: 0 lapices.")
for i in range(1,6): ##En los rangos, la función range manejan un intervalo abierto a la derecha y cerrado a la izquierda
print("Se ha realizado la compra de lapices. Ahora hay ", i , "lapices.")
# la iteración se representa pro la letra i
###Output
_____no_output_____
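###Markdown
To complement the note about `range` being closed on the left and open on the right, here is a short illustrative sketch showing a different start value and the optional step argument (the specific numbers are just examples):
###Code
# range(start, stop) stops just before `stop`; an optional third argument sets the step.
print(list(range(1, 6)))     # [1, 2, 3, 4, 5]
print(list(range(11, 16)))   # [11, 12, 13, 14, 15]
print(list(range(1, 6, 2)))  # [1, 3, 5]
for i in range(0, 10, 2):
    print("Even number:", i)
###Output
_____no_output_____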
###Markdown
Continuation of iterative control structures. ACCUMULATORS: this is the name given to the variables in charge of "storing" some kind of information. Example: the case of buying groceries at the store
###Code
nombre= input("nombre del consumidor")
listacomp=""
print(nombre, "escribe los siguientes viveres para su compra en el supermercado:")
listacomp= listacomp+",1 Paca de papel higienico"
print("----Copras que tengo que hacer----")
print(listacomp)
listacomp=listacomp+",Shampoo Pantene 2 en 1"
listacomp=listacomp+",2 pacas de pañales pequeñin"
print(listacomp)
###Output
_____no_output_____
###Markdown
La variable "listacomp" nos esta sirviendo para acumular informacion de la lista de compras.Podemos observar, que No estamos creando una variable por cada item,sino una variable definida nos sirve para almacenar la informacion.A continuacion obsrrvemmos un ejemplo donde se ponga en practica el uso de acumulacion en una variable usando cantidades y precios.
###Code
ppph=14000 #precio papel higienico
cpph=2 #cantidad de paquetes de papel higienico
pshampoo=18000 #Precio de Shampoo Ppantene 2 en !
cshampoo=4 #unidades por Shampoo
ppbebe=17000 #precio de paca de pañales pequeñin
cpbebe=3 #cantidad de pacas de pañales pequeñin
subtotal=0
print("calculando el total de la compra...")
total_pph=ppph*cpph
print("el valor total de papel higienico es: $", total_pph)
subtotal=subtotal+total_pph
print("----El lsubtotal es: $", subtotal)
total_shampoo=pshampoo*cshampoo
print("el valor total de Shampoo es:",total_shampoo)
subtotal=subtotal+total_shampoo
print("----El subtotal es:$:",subtotal)
total_ppbebe=ppbebe*cpbebe
print("El valor total para pañales es:$",total_ppbebe)
subtotal=subtotal+total_ppbebe
print("el total de su compra es:$",subtotal)
###Output
_____no_output_____
###Markdown
**COUNTERS**---Closely related to the "accumulators" seen in the previous section. These variables are characterized as control variables, that is, they control the **number** of times a given action is executed. Using the previous example with a small modification, we can develop the following algorithm:
###Code
# se comprará pañakes por unidad
contp=0
print("Se realizara la compra de pañales etapa 3... Se ha iniciado la compra de asignacion en el carrito. En total hay:",contp, "pañales")
contp=contp+1
print("Se realizara la compra de pañales etapa 3... Se ha iniciado la compra de asignacion en el carrito. Ahora hay:",contp, "pañales")
contp=contp+1
print("ahora hay:", contp, "pañal")
contp=contp+1
print("ahora hay:", contp, "pañal")
contp=contp+1
print("ahora hay:", contp, "pañal")
contp=contp+1
###Output
_____no_output_____
###Markdown
CONDITION-CONTROLLED LOOPS "WHILE"---Remember that control variables let us manage states; moving from one state to another is, for example, a variable going from holding no elements to holding them, or a variable with a particular element (accumulator or counter) being replaced completely (flag). These control variables are the basis of control loops; put more plainly, going from a manual addition to something more automated. We start with the WHILE loop ("mientras" in Spanish). This loop consists of a **condition** and its **code block**: the code block will be executed **while** the condition evaluates to True.
###Code
lapiz=5
contlapiz=0
print("Se ha iniciado la compra. En total hay:", contlapiz,lapiz)
while (contlapiz <lapiz):
contlapiz=contlapiz+1
print("se ha realizado la compra de Lapices. Ahora hay"+str(contlapiz)+"lapiz")
print("se ha realizado la compra de Lapices. Ahora hay",contlapiz,"lapiz")
a=str(contlapiz)
print(type(contlapiz))
print(a)
###Output
_____no_output_____
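###Markdown
Combining the two ideas from these notes, an accumulator can be updated inside a while loop; a small illustrative sketch that adds up a subtotal item by item (the price list is an assumed example):
###Code
# A while loop that counts the items and accumulates their total price.
prices = [14000, 18000, 17000]
index = 0
subtotal = 0
while index < len(prices):
    subtotal = subtotal + prices[index]
    index = index + 1
    print("Items added:", index, "- running subtotal: $", subtotal)
print("Final total: $", subtotal)
###Output
_____no_output_____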
###Markdown
Keep in mind that inside the while loop the variables involved in the condition must be updated. In the previous example, the variable contlapiz has to change so that the condition (contlapiz<lapiz) eventually becomes false and the loop ends; otherwise we would have a loop that never stops, turning into an endless loop. **THE FOR LOOP**---A loop specialized and optimized for count-controlled iteration. It consists of three elements: 1. the iteration variable 2. the iterable element 3. the code block to iterate. **Advantages of using FOR**: in Python it is very important and considered a quite flexible and powerful tool, since it accepts complex data structures, character strings, ranges, and more. The iterables used with this structure need one characteristic: 1. a defined quantity (this is what completely distinguishes it from While). While starts from a truth condition, but FOR starts from a defined quantity.
###Code
##Retomando el ejemplo de la compra de lapices
print("se ha iniciado la compra. En total hay: 0 lapices.")
for i in range(1,6): #en los rango, la funcion range maneja un intervalo abierto a la derecha y cerrado a la izquierda.
print("Se ha realizado la compra de lapices. Ahora hay", i, "lapices.")
###Output
_____no_output_____
###Markdown
**Continuation of iterative control structures**---**ACCUMULATORS** This is the name given to the variables in charge of "storing" some kind of information. *Example*: the case of buying groceries at the store:
###Code
nombre=input("Nombre del consumidor: ")
listacomp=""
print(nombre,"Escribe los siguientes viveres para su compra en el supermercado:")
listacomp=listacomp+"1 Paca de papel higiénico"
print("-----Compras que tengo que hacer-----")
print(listacomp)
listacomp=listacomp+". Shampoo Pantene 2 and 1"
print(listacomp)
listacomp=listacomp+". 2 pacas de pañales pequeñin etapa 3"
print(listacomp)
###Output
Nombre del consumidor: sandra
sandra Escribe los siguientes viveres para su compra en el supermercado:
-----Compras que tengo que hacer-----
1 Paca de papel higiénico
('1 Paca de papel higiénico', 'Shampoo Pantene 2 and 1')
(('1 Paca de papel higiénico', 'Shampoo Pantene 2 and 1'), ' 2 pacas de pañales pequeñin etapa 3')
###Markdown
The variable *listacomp* is being used to accumulate information for the shopping list. Notice that we are **NOT** creating one variable per item; a single defined variable stores the information. Next, let us look at an example that puts accumulation into practice in one variable using quantities and prices.
###Code
ppph=14000 #precio paca de papel higiénico
cpph=2 #Cantidad de pacas de papel higiénico
pshampo=18000 #precio del shampoo pantene 2 and 1
cshampo=4 #cantidad del shampoo pantene 2 and 1
ppbebe=17000 #Precio de pacas de pañales
cpbebe=3 #Precio de pacas de pañales
subtotal=0
print("Calculando el total de la compra...")
total_pph=ppph*cpph
print("El valor total de papel higiénico es: $",total_pph)
subtotal=subtotal+total_pph
print("----El subtotal es: $",subtotal)
total_shampo=pshampo*cshampo
print("El valor total del shampoo es: $",total_shampo)
subtotal=subtotal+total_shampo
print("----El subtotal es: $", subtotal)
total_pbebe=ppbebe*cpbebe
print("El valor total para pañales es: $",total_pbebe)
subtotal=subtotal+total_pbebe
print("----El total de su compra es: $", subtotal)
###Output
Calculando el total de la compra...
El valor total de papel higiénico es: $ 28000
----El subtotal es: $ 28000
El valor total del shampoo es: $ 72000
----El subtotal es: $ 100000
El valor total para pañales es: $ 51000
----El total de su compra es: $ 151000
###Markdown
**Counters**---Closely related to the "Accumulators" seen in the previous section; these variables are characterized as control variables, that is, they control the **number** of times a given action is executed. Using the previous example with a small modification, we can develop the following algorithm:
###Code
#Se comprará pañales por unidad en este caso
contp=0
print("Se realizará la compra de pañales etapa 3... se ha iniciado la compra y asignación en el carrito")
print("En total hay",contp,"pañales")
contp=contp+1
print (" Ahora hay:",contp, "pañal")
contp=contp+1
print (" Ahora hay:",contp, "pañal")
contp=contp+1
print (" Ahora hay:",contp, "pañal")
contp=contp+1
print (" Ahora hay:",contp, "pañal")
contp=contp+1
print (" Ahora hay:",contp, "pañal")
###Output
Se realizará la compra de pañales etapa 3... se ha iniciado la compra y asignación en el carrito
En total hay 0 pañales
Ahora hay: 1 pañal
Ahora hay: 2 pañal
Ahora hay: 3 pañal
Ahora hay: 4 pañal
Ahora hay: 5 pañal
###Markdown
**CONDITION-CONTROLLED LOOPS**---**WHILE:** Remember that control variables let us manage states; moving from one state to another is, for example, a variable going from holding no elements to holding them, or a variable with a particular element (accumulator or counter) being replaced completely (flag). These control variables are the basis of control loops. Put more clearly, going from a manual addition to something more automated. We start with the *WHILE* loop ("mientras" in Spanish); this loop consists of a **condition** and its **code block**: the code block will be executed while the condition evaluates to *True*.
###Code
lapiz=5
contlapiz=0
print("Se ha iniciado la compra. En total hay:",contlapiz)
while (contlapiz<lapiz):
contlapiz=contlapiz+1
print("Se ha realizado la compra de lapices, ahora hay",contlapiz,"lapiz")
###Output
Se ha iniciado la compra. En total hay: 0
Se ha realizado la compra de lapices, ahora hay 1 lapiz
Se ha realizado la compra de lapices, ahora hay 2 lapiz
Se ha realizado la compra de lapices, ahora hay 3 lapiz
Se ha realizado la compra de lapices, ahora hay 4 lapiz
Se ha realizado la compra de lapices, ahora hay 5 lapiz
###Markdown
**Note:** keep in mind that inside the WHILE loop the variables involved in the condition must be updated. In the previous example, the variable *contlapiz* has to change so that the condition (contlapiz<lapiz) eventually becomes false and the loop ends; otherwise we would have a loop that never stops, turning into an endless loop. **THE FOR LOOP**---A loop specialized and optimized for count-controlled iteration; it consists of three elements: 1. the iteration variable 2. the iterable element 3. the code block to iterate. **Advantages of using FOR**: in Python it is very important and considered a quite flexible and powerful tool, since it accepts complex data structures, character strings, ranges, and more. The iterables used with this structure need one characteristic: 1. a defined quantity (this distinguishes it from WHILE). The *while* loop starts from a truth condition, but **FOR** starts from a defined quantity. *Example:*
###Code
#Retomando el ejemplo de la compra de lápices
print("Se ha iniciado la compra, en total hay: 0 lápices")
for i in range(1,6): #La función range maneja un intervalo abierto a la derecha y cerrado a la izquierda
print("Se ha realizado la compra de lapices, ahora hay",i,"lapiz")
###Output
Se ha iniciado la compra, en total hay: 0 lápices
Se ha realizado la compra de lapices, ahora hay 1 lapiz
Se ha realizado la compra de lapices, ahora hay 2 lapiz
Se ha realizado la compra de lapices, ahora hay 3 lapiz
Se ha realizado la compra de lapices, ahora hay 4 lapiz
Se ha realizado la compra de lapices, ahora hay 5 lapiz
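###Markdown
An accumulator can also be updated inside a for loop that iterates directly over a data structure; a brief illustrative sketch using an assumed list of prices:
###Code
# An accumulator updated inside a for loop over a list of prices.
prices = [14000, 18000, 17000]
total = 0
for price in prices:
    total += price  # add each price to the running total
print("Total of the purchase: $", total)
###Output
_____no_output_____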
###Markdown
**Continuation of iterative control structures**---**ACCUMULATORS** This is the name given to the variables in charge of "storing" some kind of information. *Example*: the case of buying groceries at the store:
###Code
nombre= input("Nombre del consumidor ")
listacompra=""
print(nombre,"escribe los siguientes viveres para su compra en el supermercado: ")
listacompra=listacompra + "Paca papel higiénico"
print("\n----Compras que tengo que hacer----")
listacompra = listacompra + ", 2 Shampoo Pantene 2 en 1"
listacompra= listacompra + ", 2 pacas de pañales pequeñin etapa 3 "
print(listacompra)
###Output
Nombre del consumidor f
f escribe los siguientes viveres para su compra en el supermercado:
----Compras que tengo que hacer----
Paca papel higiénico, 2 Shampoo Pantene 2 en 1, 2 pacas de pañales pequeñin etapa 3
###Markdown
La variable "listacomp" nos esta sirviendo para acumular información de la lista de comrpas.Podemos observar que "**NO**" estamos creando una variable para cada item, sino una variable definida nos sirve para almacenar la información.A continuación observemos un ejemplo donde se ponga en práctica el uso de acumulación en una variable usando cantidades y precios.
###Code
ppph=14000 #precio paca de papel higiénico
cpph=2 #cantidad pacas de papel higiénico
pshampoo=18000 #precio del shampoo
cshampoo=4 #cantidad de shampoo
ppbebe=17000 #precio paca pañales
cpbebe= 3 #cantidad de pacas de pañales
subtotal=0 #Subtotal de la compra
print("Calculando el total de la compra...")
total_pph=ppph*cpph #total papel higienico
print("\nEl valor total de papel higiénico es: $",total_pph)
subtotal=subtotal+ total_pph
print ("---El subtotal es: $", subtotal )
total_shampoo= pshampoo*cshampoo
print("\nEl valor total de Shampoo es: $",total_shampoo)
subtotal=subtotal + total_shampoo
print("---El subtotal es: $", subtotal)
total_pbebe=ppbebe*cpbebe
print("\nEl valor total para pañales es: $",total_pbebe)
subtotal=subtotal + total_pbebe
print("\n---El Total de su compra es: $", subtotal)
###Output
Calculando el total de la compra...
El valor total de papel higiénico es: $ 28000
---El subtotal es: $ 28000
El valor total de Shampoo es: $ 72000
---El subtotal es: $ 100000
El valor total para pañales es: $ 51000
---El Total de su compra es: $ 151000
###Markdown
**COUNTERS**---Closely related to the *accumulators* seen in the previous section. These variables are characterized as control variables, that is, they control the **number** of times a given action is executed. Using the previous example with a small modification, we can develop the following algorithm:
###Code
#Se comprará pañales por unidad en este caso.
contp=0 #declara la variable de almacenamiento de control vacia, si es numeros se usa 0, pero si es de cadena se usa ""
print("Se realizará la compra de pañales etapa 3... Se ha iniciado la compra y asignación en el carrito. En total hay:",contp,"pañales")
contp=contp+1
print("Ahora hay:",contp,"pañal")
contp=contp+1
print("Ahora hay:",contp,"pañales")
contp=contp+1
print("Ahora hay:",contp,"pañales")
contp=contp+1
print("Ahora hay:",contp,"pañales")
contp=contp+1
print("Ahora hay:",contp,"pañales") #es conteo porque va de 1 en 1, si cambio de 1 es un acumulador
###Output
Se realizará la compra de pañales etapa 3... Se ha iniciado la compra y asignación en el carrito. En total hay: 0 pañales
Ahora hay: 1 pañal
Ahora hay: 2 pañales
Ahora hay: 3 pañales
Ahora hay: 4 pañales
Ahora hay: 5 pañales
###Markdown
**CONDITION-CONTROLLED LOOPS** *WHILE*---Remember that control variables let us manage states; moving from one state to another is, for example, a variable going from holding no elements to holding them, or a variable with a particular element (accumulator or counter) being replaced completely (flag). These control variables are the basis of control loops. Put more clearly, going from a manual addition to something more automated. We start with the "WHILE" loop ("mientras" in Spanish). This loop consists of a **condition** and its **code block**: the code block will be executed **while** the condition evaluates to True.
###Code
lapiz=5 # la cantidad que voy a comprar
contlapiz=0 #es el contador de los lappices
print("Se ha iniciado la comrpa. En total hay:",contlapiz, lapiz)
while (contlapiz <lapiz): #condición
contlapiz+=1 # añade de 1 en 1
print("Se ha realizado la compra de lapices. Ahora hay: "+ str(contlapiz) +" lapiz") #convierte la varible int a tipo cadena str, si se deja int no deja imprimir, sale ERROR
print("Se ha realizado la compra de lapices. Ahora hay:", contlapiz,"lapiz") #esta es la varible int
###Output
Se ha iniciado la comrpa. En total hay: 0 5
Se ha realizado la compra de lapices. Ahora hay: 1 lapiz
Se ha realizado la compra de lapices. Ahora hay: 1 lapiz
Se ha realizado la compra de lapices. Ahora hay: 2 lapiz
Se ha realizado la compra de lapices. Ahora hay: 2 lapiz
Se ha realizado la compra de lapices. Ahora hay: 3 lapiz
Se ha realizado la compra de lapices. Ahora hay: 3 lapiz
Se ha realizado la compra de lapices. Ahora hay: 4 lapiz
Se ha realizado la compra de lapices. Ahora hay: 4 lapiz
Se ha realizado la compra de lapices. Ahora hay: 5 lapiz
Se ha realizado la compra de lapices. Ahora hay: 5 lapiz
###Markdown
Keep in mind that inside the WHILE loop the variables involved in the condition must be updated. In the previous example, the variable ***contlapiz*** has to change so that the condition (contlapiz < lapiz) eventually becomes false and the loop ends; otherwise we would have a loop that never stops, turning into an endless loop. **THE FOR LOOP**---A loop specialized and optimized for count-controlled iteration. It consists of three elements, which map onto the syntax `for i in range(1,6):` as 1. the iteration variable (`i`) 2. the iterable element (`range(1,6)`) 3. the code block to iterate. **Advantages of using FOR**: in Python it is very important and considered a quite flexible and powerful tool, since it accepts complex data structures, character strings, ranges, and more. The iterables used with this structure need one characteristic: 1. a defined quantity (this distinguishes it from WHILE). WHILE starts from a truth condition, but **FOR** starts from a defined quantity.
###Code
#Retomando el ejemplo de la comrpa de los lapices
print("Se ha iniciado la compra. En total hay: 0 lapices")
for i in range(1,6): # itera 5 veces porque en los rangos la función range manejan un intervalo abierto a la derecha y cerrado a la izquierda (se resta uno 6-1=5)
# si pongo 11,16 inicia en 11 hasta 15
# si pongo un tercer valor en el rango range(1,6,2) hace un salto de 2 arranca en 1, 3, 5
print("Se ha realizado la compra de lapices. Ahora hay",i,"lapices")
###Output
Se ha iniciado la compra. En total hay: 0 lapices
Se ha realizado la compra de lapices. Ahora hay 1 lapices
Se ha realizado la compra de lapices. Ahora hay 2 lapices
Se ha realizado la compra de lapices. Ahora hay 3 lapices
Se ha realizado la compra de lapices. Ahora hay 4 lapices
Se ha realizado la compra de lapices. Ahora hay 5 lapices
###Markdown
**Continuation of iterative control structures**---**ACCUMULATORS** This is the name given to the variables in charge of "storing" some kind of information. Example: the case of buying groceries at the store.
###Code
nombre = input("Nombre del consumidor")
listacomp = ""
print(nombre, "Escribe los siguientes viveres para su compra en el supermercado")
listacomp = listacomp + "Paca de papel higienico"
print("---------compras que tengo que hacer------")
print(listacomp)
listacomp = listacomp + "Shampoo Pantene 2 and 1"
listacomp = listacomp + "2 pacas de pañales pequeñin etapa 3"
print(listacomp)
###Output
Nombre del consumidorh
h Escribe los siguientes viveres para su compra en el supermercado
---------compras que tengo que hacer------
Paca de papel higienico
Paca de papel higienicoShampoo Pantene 2 and 12 pacas de pañales pequeñin etapa 3
###Markdown
La variable "listacomp" nos esta sirviendo para acumular informacion de la lista de compra.Podemos observar que **NO** estamos creando una variable por cada itemm,sino una variable definida nos sirve para almacenar la información.A continuacion observemos un ejemplo donde se ponga en practica el uso de la acumulacion en una variable usando cantidades y usos
###Code
ppph = 14000 # paca de papel higienico
cpph = 2 #cantidad de paquete de papel higienico
pshampoo = 18000 # precio de shampoo
cshampoo = 4 #unidadaes shampoo
ppbebe = 17000 #precio de pacas de pañales
cpbebe = 3 #cantidad
subtotal = 0
print("Calculando el total de la compra...")
total_pph = ppph * cpph
print("El valor total del papel higienico es: ", total_pph)
subtotal = subtotal + total_pph
print("------El subtotal es: ", subtotal)
total_shampoo = pshampoo * cshampoo
print("El valor total del shampoo es: ", total_shampoo)
subtotal = subtotal + total_shampoo
print("------El subtotal es: ", subtotal)
total_pbebe = ppbebe * cpbebe
print("El valor total para pañales es: ", total_pbebe)
subtotal = subtotal + total_pbebe
print("------El total de su compra es: ", subtotal)
###Output
Calculando el total de la compra...
El valor total del papel higienico es: 28000
------El subtotal es: 28000
El valor total del shampoo es: 72000
------El subtotal es: 100000
El valor total para pañales es: 51000
------El total de su compra es: 151000
###Markdown
**COUNTERS**---Closely related to the "accumulators" seen in the previous section. These variables are characterized as control variables, that is, they control the **number** of times a given action is executed. Using the previous example with a small modification, we can develop the following algorithm:
###Code
#Se comprará pañales por unidad en este caso
contp = 0
print("Se realizará la compra de pañales etapa 3... Se ha iniciado la compra y asignación en el carrito. En total hay ", contp, "de pañales")
contp = contp + 1
print("Ahora hay: ", contp, "de pañales")
contp = contp + 1
print("Ahora hay: ", contp, "de pañales")
contp = contp + 1
print("Ahora hay: ", contp, "de pañales")
contp = contp + 1
print("Ahora hay: ", contp, "de pañales")
contp = contp + 1
print("Ahora hay: ", contp, "de pañales")
###Output
Se realizará la compra de pañales etapa 3... Se ha iniciado la compra y asignación en el carrito. En total hay 0 de pañales
Ahora hay: 1 de pañales
Ahora hay: 2 de pañales
Ahora hay: 3 de pañales
Ahora hay: 4 de pañales
Ahora hay: 5 de pañales
###Markdown
**CONDITION-CONTROLLED LOOPS** *WHILE*---Remember that control variables let us manage states; moving from one state to another is, for example, a variable going from holding no elements to holding them, or a variable with a particular element (accumulator or counter) being replaced completely (flag). These control variables are the basis of the control loop. Put more clearly, going from a manual addition to something more automated. We start with the "WHILE" loop ("mientras" in Spanish). This loop consists of a **condition** and its **code block**: the code block will be executed **while** the condition evaluates to True.
###Code
lapiz = 5
contlapiz = 0
print("Se ha iniciado la compra. En total hay: ", contlapiz)
while(contlapiz < lapiz):
contlapiz = contlapiz + 1
print("Se ha realizado la compra de lapices. Ahora hay " + str(contlapiz) + " lapiz")
###Output
Se ha iniciado la compra. En total hay: 0
Se ha realizado la compra de lapices. Ahora hay 1 lapiz
Se ha realizado la compra de lapices. Ahora hay 2 lapiz
Se ha realizado la compra de lapices. Ahora hay 3 lapiz
Se ha realizado la compra de lapices. Ahora hay 4 lapiz
Se ha realizado la compra de lapices. Ahora hay 5 lapiz
###Markdown
Keep in mind that inside the WHILE loop the variables involved in the condition must be updated. In the previous example, the variable contlapiz has to change so that the condition (contlapiz<lapiz) eventually becomes false and the loop ends; otherwise we would have a loop that never stops, turning into an endless loop. **THE FOR LOOP**---A loop specialized and optimized for count-controlled iteration. It consists of three elements: 1. the iteration variable 2. the iterable element 3. the code block to iterate. **Advantages of using FOR?** In Python it is very important and considered a quite flexible and powerful tool, since it accepts complex data structures, character strings, ranges, and more. The iterables used with this structure need one characteristic: 1. a defined quantity (this completely distinguishes it from WHILE). While starts from a truth condition, but **FOR** starts from a defined quantity.
###Code
## Retomando el ejemplo de la compra de lapices
print("Se ha iniciado la compra. En total hay: 0 lapices")
for i in range(1,6): # En los rangos, la función Range maneja un intervalo abierto a la derecha y cerrado a la izquierda
print("Se ha realizado la compra de lapices. Ahora hay",i,"lapices.")
###Output
Se ha iniciado la compra. En total hay: 0 lapices
Se ha realizado la compra de lapices. Ahora hay 1 lapices.
Se ha realizado la compra de lapices. Ahora hay 2 lapices.
Se ha realizado la compra de lapices. Ahora hay 3 lapices.
Se ha realizado la compra de lapices. Ahora hay 4 lapices.
Se ha realizado la compra de lapices. Ahora hay 5 lapices.
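###Markdown
A string accumulator like `listacomp` can also be filled inside a for loop; a short illustrative sketch using an assumed list of item names:
###Code
# Building the shopping-list string with an accumulator inside a for loop.
items = ["papel higienico", "shampoo", "pañales"]
listacomp = ""
for item in items:
    listacomp = listacomp + item + ", "
print("Shopping list:", listacomp)
###Output
_____no_output_____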
###Markdown
**Continuation of iterative control structures**---**ACCUMULATORS** This is the name given to the variables in charge of "storing" some kind of information. Example: the case of buying groceries at the store.
###Code
nombre = input("Nombre del consumidor: ")
listacomp = ""
print(nombre, "Escribe los siguientes viveres para su compra en el supermercado: ")
listacomp = listacomp + "Paca de pale higienico"
print("-----------compra que tengo que hacer-----------")
print(listacomp)
listacomp = listacomp + ", ShampooPantene 2 en 1"
listacomp = listacomp + ", paca de pañales pequeñin estapa 2"
print(listacomp)
###Output
Nombre del consumidor: ty
ty Escribe los siguientes viveres para su compra en el supermercado:
-----------compra que tengo que hacer-----------
Paca de pale higienico
Paca de pale higienicoShampooPantene 2 en 1paca de pañales pequeñin estapa 2
###Markdown
la variable "listacomp" nos esta sirviendo para acumular informacion de la lista de compra.Podemos observar, que NO estamos creando una variable por cada item, sino una variable definida nos sirve para almacenar la informacion.Acontinuacion ponemos un ejemplo donde se ponga en practica el uso de acumulacion de una variable usando cantidades y precios.
###Code
ppph = 14000 #precio
cpph = 2 #cantidad
pshampoo = 18000
cshampoo = 4
pppañales = 17000
cpañales = 3
subtotal = 0
print("Calculando el total de la compra...")
total_pph = ppph * cpph
print("el valor total del papel higienico es: ", total_pph)
subtotal = subtotal + total_pph
print ("--- el subtotal es: $",subtotal)
total_shampo = pshampoo * cshampoo
print("el valor total del Shampo es: ", total_shampo)
subtotal = subtotal + total_shampo
print ("--- el subtotal es: $",subtotal)
total_ppañales = pppañales * cpañales
print("el valor total de las pacas de pañales es: ", total_ppañales)
subtotal = subtotal + total_ppañales
print ("--- el subtotal es: $",subtotal)
print("El total de suc compra es: ",subtotal)
###Output
Calculando el total de la compra...
el valor total del papel higienico es: 28000
--- el subtotal es: $ 28000
el valor total del Shampo es: 72000
--- el subtotal es: $ 100000
el valor total de las pacas de pañales es: 51000
--- el subtotal es: $ 151000
El total de suc compra es: 151000
###Markdown
**Counters**---Closely related to the accumulators seen in the previous section. These variables are characterized as control variables, that is, they control the **number** of times a given action is executed. Using the previous example with a small modification, we can develop the following algorithm.
###Code
#se comprara pañales por unidad
contp = 0
print("Se realizara la compra de pañales etapa 3. se ha iniciado la compra de asignacion en el carrito. En total hay: ", contp ," pañales")
contp = contp + 1
print("Se realizara la compra de pañales etapa 3. se ha iniciado la compra de asignacion en el carrito. Ahora hay: ",contp," pañales")
contp = contp + 1
print("Ahora hay: ",contp," pañales")
contp = contp + 1
print("Ahora hay: ",contp," pañales")
contp = contp + 1
print("Ahora hay: ",contp," pañales")
contp = contp + 1
print("Ahora hay: ",contp," pañales")
###Output
Se realizara la compra de pañales etapa 3. se ha iniciado la compra de asignacion en el carrito. En total hay: 0 pañales
Se realizara la compra de pañales etapa 3. se ha iniciado la compra de asignacion en el carrito. Ahora hay: 1 pañales
Ahora hay: 2 pañales
Ahora hay: 3 pañales
Ahora hay: 4 pañales
Ahora hay: 5 pañales
###Markdown
**CONDITION-CONTROLLED LOOPS** *WHILE*---Remember that control variables let us manage states; moving from one state to another is, for example, a variable going from holding no elements to holding them, or a variable with a particular element (accumulating or counting) being replaced completely (flag). These control variables are the basis of the control loop. Put more clearly, going from a manual addition to something more automated. We start with the "WHILE" loop ("MIENTRAS" in Spanish). This loop consists of a condition and its code block: the code block will be executed while the condition evaluates to True.
###Code
lapiz = 5
contlapiz = 0
print("Se ha iniciado la compra. En total hay: ",contlapiz, lapiz)
while (contlapiz < lapiz):
contlapiz = contlapiz + 1
print("Se ha realizado de lapices. Ahora hay: ",contlapiz, "lapiz")
###Output
Se ha iniciado la compra. En total hay: 0 5
Se ha realizado de lapices. Ahora hay: 1 lapiz
Se ha realizado de lapices. Ahora hay: 2 lapiz
Se ha realizado de lapices. Ahora hay: 3 lapiz
Se ha realizado de lapices. Ahora hay: 4 lapiz
Se ha realizado de lapices. Ahora hay: 5 lapiz
###Markdown
Keep in mind that inside the while loop we must update the variables involved in the condition for the loop to finish. In the previous example the variable contlapiz has to change so that at some point the condition (contlapiz < lapiz) becomes false and the loop ends; otherwise we would have a loop that never stops, an endless loop. **FOR LOOP**---This is a loop designed and optimized for count-controlled iteration. It is made of 3 elements: 1. The iteration variable. 2. The element to iterate over. 3. The code block to iterate. **Why use for?** In Python it is a very important, flexible and powerful tool, because it accepts complex data structures, strings, ranges, and more. The iterables used with this structure must have the following property: 1. A defined quantity (this is what completely sets it apart from while). A while loop starts from a truth condition, whereas **FOR** starts from a defined quantity.
###Code
print("Se ha iniciado la compara. En total hay: 0 lapiz")
for i in range(1,6): #en los range se maneja un intervalo abierto a la derecha y cerrado a la izquierda
print("Sehe ha realizado la compra de lapices. Ahora hay: ",i," lapices")
###Output
Se ha iniciado la compara. En total hay: 0 lapiz
Sehe ha realizado la compra de lapices. Ahora hay: 1 lapices
Sehe ha realizado la compra de lapices. Ahora hay: 2 lapices
Sehe ha realizado la compra de lapices. Ahora hay: 3 lapices
Sehe ha realizado la compra de lapices. Ahora hay: 4 lapices
Sehe ha realizado la compra de lapices. Ahora hay: 5 lapices
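###Markdown
A brief aside on `range` (not part of the original exercise): `range(start, stop)` produces the integers from `start` up to, but not including, `stop`, which is why `range(1, 6)` yields exactly the five values 1 through 5 used above.
```python
print(list(range(1, 6)))      # [1, 2, 3, 4, 5]
print(list(range(1, 10, 2)))  # an optional step argument: [1, 3, 5, 7, 9]
```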
###Markdown
**Iterative control structures, continued****ACCUMULATORS**This is the name given to the variables in charge of "storing" some kind of information. Example: the case of buying groceries at the store.
###Code
nombre=input("Nombre del consumidor")
listacomp=""
print(nombre, "escribe los siguientes viveres para su compra en el supermercado:")
listacomp=listacomp+"1 paca de papel higienico"
print("----Compras que tengo que hacer----")
listacomp=listacomp+",2 shampoo Pantene 2 and 1"
listacomp=listacomp+ ",2 pacas de pañales Pequeñin etapa 3"
print(listacomp)
###Output
Nombre del consumidorivon
ivon escribe los siguientes viveres para su compra en el supermercado:
----Compras que tengo que hacer----
1 paca de papel higienico,2 shampoo Pantene 2 and 1,2 pacas de pañales Pequeñin etapa 3
###Markdown
La variable "listacomp" nos esta sirviendo para acumular información de la lista de compras. Podemos observar que **no** estamos creando una variable por cada item, sino que una variable definida nosr sirve para almacenar la información.A continuación observamos un ejemplo donde se ponga en practica el uso de acumulación de variables usando cantidades y precios.
###Code
ppph= 14000 # Precio de paquete papel higiénico
cpph= 2 #cantidad de paquete de papel higiénico
pshampoo=18000 #Precio de shampoo Pantene 2 and 1
csshampoo=4 #unidades de shampoo
pcbebe=17000 #Precio de pacas de pañales pequeñin
cpbebe=3 #cantidad de pacas de pañales pequeñin
subtotal=0
print("Calculando el total de la compra...")
total_pph=ppph*cpph
print("El valor total de papel higiénico es: $", total_pph)
subtotal=subtotal+total_pph
print("----El subtotal es: $", subtotal)
total_shampoo=pshampoo*csshampoo
print("El valor total del shampoo es: $", total_shampoo)
subtotal=subtotal+total_shampoo
print("----El subtotal es: $", subtotal)
total_pbebe=pcbebe*cpbebe
print("El valor total para pañales es: $", total_pbebe)
subtotal=subtotal+total_pbebe
print("----El total de su compra es: $", subtotal)
###Output
Calculando el total de la compra...
El valor total de papel higiénico es: $ 28000
----El subtotal es: $ 28000
El valor total del shampoo es: $ 72000
----El subtotal es: $ 100000
El valor total para pañales es: $ 51000
----El subtotal es: $ 151000
###Markdown
**COUNTERS**They are closely related to the accumulators seen in the previous section. These variables are characterized as control variables, that is, they control the number of times a given action is executed. Taking the previous example and modifying it a little, we can build the following algorithm:
###Code
# Se comprara por unidad en este caso
contp=0
print("Se realizará la compra de pañales etapa 3 ... se ha iniciado la compra y asignación en el carrito. En total hay :"), contp, "pañales"
contp=contp+1
print("Ahora hay",contp , "")
###Output
Se realizará la compra de pañales etapa 3 ... se ha iniciado la compra y asignación en el carrito. En total hay : 0 pañales
Ahora hay 1
###Markdown
**CONDITION-CONTROLLED LOOPS*****WHILE***Recall that control variables let us manage states. Moving from one state to another means, for example, a variable going from holding no elements to holding some, or a variable taking a particular value (accumulator or counter) or being replaced entirely (a flag). These control variables are the basis of controlled loops; put plainly, they take us from manual addition to something more automated. We start with the **"WHILE"** loop (**"mientras"** in Spanish). This loop consists of a **condition** and its **code block**: the block is executed repeatedly as long as the condition evaluates to True.
###Code
lapiz=5
contlapiz=0
print("Se ha iniciado la compra. En total hay:", contlapiz,lapiz)
while(contlapiz <lapiz):
contlapiz=contlapiz+1
print("Se ha realizado la compra de Lapices. Ahora hay", contlapiz, "lapiz")
a=str(contlapiz)
print(type(contlapiz))
print(type(a))
###Output
Se ha iniciado la compra. En total hay: 0 5
Se ha realizado la compra de Lapices. Ahora hay 1 lapiz
Se ha realizado la compra de Lapices. Ahora hay 2 lapiz
Se ha realizado la compra de Lapices. Ahora hay 3 lapiz
Se ha realizado la compra de Lapices. Ahora hay 4 lapiz
Se ha realizado la compra de Lapices. Ahora hay 5 lapiz
<class 'int'>
<class 'str'>
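###Markdown
The last three lines of the cell above are a small aside on type conversion: `str(contlapiz)` builds a text version of the counter without changing the original integer, and `type()` confirms that `contlapiz` is still an `int` while `a` is a `str`. A minimal sketch of the same idea (the variable names here are illustrative):
```python
n = 5
s = str(n)          # "5" as text
print(type(n))      # <class 'int'>
print(type(s))      # <class 'str'>
print("Tengo " + s + " lapices")   # string concatenation needs strings
```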
###Markdown
**Note:** Keep in mind that inside the **WHILE** loop we must update the variables involved in the condition that governs the loop. In the previous example the variable "contlapiz" has to change so that at some point the condition (contlapiz < lapiz) becomes false and the loop ends; otherwise we would have a loop that never stops, an endless (infinite) loop. **FOR LOOP**This is a specialized loop, optimized for count-controlled iteration. It is made of 3 elements: 1. The iteration variable, 2. The element to iterate over, 3. The code block to iterate. **Why use FOR?**In Python it is a very important, flexible and powerful tool, because it accepts complex data structures, strings, ranges, and more. The iterables used with this structure must have the following property: 1. A defined quantity (this is what completely sets it apart from WHILE). A while loop starts from a truth condition, whereas **FOR** starts from a defined quantity.
###Code
#Retomando el ejemplo de la compra de lapices
print("Se ha iniciado la compra. En total hay: 0 lapices.")
for i in range(1,6): # En los rangos, la función range maneja un intervalo abierto a la derecha y cerrando a la izquierda
print("Se ha realizado la compra de lapices: Ahora hay",i,"lapices.")
###Output
se ha iniciado la compra. En total hay: 0 lapices.
se ha realizado la compra de lapices: Ahora hay 1 lapices.
se ha realizado la compra de lapices: Ahora hay 2 lapices.
se ha realizado la compra de lapices: Ahora hay 3 lapices.
se ha realizado la compra de lapices: Ahora hay 4 lapices.
se ha realizado la compra de lapices: Ahora hay 5 lapices.
###Markdown
***Iterative control structures, continued***------**ACCUMULATORS**---This is the name given to the variables in charge of "storing" some kind of information. Example: the case of buying groceries at the store:
###Code
nombre = input("Nombre del consumidor ")
listacomp = ""
print(nombre, "escribe los siguientes viveres para su compra en el supermercado: ")
listacomp = listacomp + "1 paca papel higienico, "
print("----- compras que tengo que hacer------")
listacomp = listacomp + " 2 Shampoo pantene 2 en 1, "
listacomp = listacomp + " 2 pacas pañales pequeñin etapa 5 "
print(listacomp)
###Output
Nombre del consumidor angie
angie escribe los siguientes viveres para su compra en el supermercado:
----- compras que tengo que hacer------
1 paca papel higienico, 2 Shampoo pantene 2 en 1, 2 pacas pañales pequeñin etapa 5
###Markdown
La variable "listacomp" nos esta sirviendo para acumular información de la lista de compras, podemos observar que no estamos creando una variable por cada item, sono una variables definida nos sirve para almacenar la información.Acontinuación observemos un ejemplo donde se ponga en practica el uso de acumulación en una variables usando cantidades y precios.
###Code
ppph = 140000 #precio de paquetes papel higienico
cpph = 2 #Cantidad de paquetes papel higienico
pshampoo = 18000 #Precio shampoo Pantene 2 en 1
cshampoo= 4 #Unidades de shampoo
ppbebe = 17000 #Precio de pacas de pañales pequeñin
cpbebe = 3 #Cantidad de pacas de pañales pequeñin
subtotal = 0
print("calculando el total de la compra")
total_pph = ppph*cpph
print("El valor total de papel higienico es: $",total_pph)
subtotal = subtotal+total_pph
print("---El subtotal es: $",subtotal)
total_shampoo = pshampoo * cshampoo
print("El valor total de Shampoo es: $",total_shampoo)
subtotal = subtotal+total_shampoo
print("---El subtotal es: $",subtotal)
total_pbebe= ppbebe*cpbebe
print("El valor total de pañales es: $",total_pbebe)
subtotal = subtotal+total_pbebe
print("---El total de su compra es: $",subtotal)
###Output
calculando el total de la compra
El valor total de papel higienico es: $ 280000
---El subtotal es: $ 280000
El valor total de Shampoo es: $ 72000
---El subtotal es: $ 352000
El valor total de pañales es: $ 51000
---El total de su compra es: $ 403000
###Markdown
**COUNTERS**---They are closely related to the accumulators seen in the previous section. These variables are characterized as control variables, that is, they control the number of times a given action is executed. Taking the previous example and modifying it a little, we can build the following algorithm.
###Code
#se comprará pañales por unidad
contp = 0
print("se realizara la compra de pañales etapa 3... se ha iniciado la compra en el carrito. En total hay ",contp, "pañales")
contp =contp+1
print("ahora hay ",contp," pañal")
contp =contp+1
print("ahora hay ",contp," pañal")
contp =contp+1
print("ahora hay ",contp," pañal")
contp =contp+1
print("ahora hay ",contp," pañal")
contp =contp+1
print("ahora hay ",contp," pañal")
###Output
se realizara la compra de pañales etapa 3... se ha iniciado la compra en el carrito. En total hay 0 pañales
ahora hay 1 pañal
ahora hay 2 pañal
ahora hay 3 pañal
ahora hay 4 pañal
ahora hay 5 pañal
###Markdown
**CONDITION-CONTROLLED LOOPS** **WHILE**---Recall that control variables let us manage states. Moving from one state to another means, for example: a variable going from holding no elements to holding some, or a variable taking a particular value (accumulator or counter) or being replaced entirely (a flag). These control variables are the basis of controlled loops; put more plainly, they take us from manual addition to something more automated. We start with the "WHILE" loop ("mientras" in Spanish). This loop consists of a **condition** and its **code block**: the block is executed repeatedly as long as the condition evaluates to True.
###Code
lapiz = 5
contlapiz = 0
print("se ha iniciado la compra. En total hay: ",contlapiz,lapiz)
while(contlapiz<lapiz):
contlapiz =contlapiz+1
print("Se ha realizado la compra de lapices, ahora hay "+str(contlapiz)+" lapiz")
###Output
se ha iniciado la compra. En total hay: 0 5
Se ha realizado la compra de lapices, ahora hay 1 lapiz
Se ha realizado la compra de lapices, ahora hay 2 lapiz
Se ha realizado la compra de lapices, ahora hay 3 lapiz
Se ha realizado la compra de lapices, ahora hay 4 lapiz
Se ha realizado la compra de lapices, ahora hay 5 lapiz
###Markdown
Keep in mind that inside the WHILE loop we must update the variables involved in the condition that governs the loop. In the previous example the variable contlapiz has to change so that at some point the condition (contlapiz < lapiz) becomes false and the loop ends; otherwise we would have a loop that never stops. **FOR loop**---This is a specialized loop, optimized for count-controlled iteration, made of three elements: 1. The iteration variable, 2. The element to iterate over, 3. The code block to iterate. **Advantages of using FOR**In Python it is a very important, flexible and powerful tool, because it accepts complex data structures, strings, ranges, and more. The iterables used with this structure must have the following property: 1. A defined quantity (this is what completely sets it apart from WHILE). A while loop starts from a truth condition, while **FOR** starts from a defined quantity.
###Code
#retomando el ejemplo de la compra de lapices
print("se ha iniciado la compra. en total hay: 0 lapices ")
for i in range(1,6): #En los rangos la función RANGE manejan un intervalo abierto a la derecha, cerrado a la izquierda
print("se ha realizado la compra de lapices. ahora hay",i,"lapices")
###Output
se ha iniciado la compra. en total hay: 0 lapices
se ha realizado la compra de lapices. ahora hay 1 lapices
se ha realizado la compra de lapices. ahora hay 2 lapices
se ha realizado la compra de lapices. ahora hay 3 lapices
se ha realizado la compra de lapices. ahora hay 4 lapices
se ha realizado la compra de lapices. ahora hay 5 lapices
###Markdown
**Iterative control structures, continued**---**Accumulators**This is the name given to the variables in charge of storing some kind of information.**Example**The case of buying groceries at the store.
###Code
nombre=input("Nombre del comprador")
listacompra = "";
print(nombre, "escribe los siguientes niveles para su compra en el supermercado:")
listacompra= listacompra+ "1 paca de papel de higienico"
print("----compras que tengo que hacer----")
listacompra=listacompra+ ", 1 Shampoo pantene 2 en 1"
listacompra=listacompra+" ,2 pacas de pañales pequeñin etapa 3"
print(listacompra)
###Output
Nombre del compradorgeral
geral escribe los siguientes niveles para su compra en el supermercado:
----compras que tengo que hacer----
1 paca de papel de higienico, 1 Shampoo pantene 2 en 1 ,2 pacas de pañales pequeñin etapa 3
###Markdown
la variable "listacompra" nos esta sirviendooppara acumular informacion de la lista de compra.podemos observar, que **NO** estamos creando una variable por cada item, sino una variable definida nos sirve para almacenar la informacionA continuacion observemos un ejemplo en donde se pone en practica el uso de acumulacion en una variable usando cantidades y precios
###Code
ppph=14000 #precio de papel higienico
cpph =3 #cantidad de pacas de papel
pshampoo =18000 #Precio de shampoo pantene 2 and 1
cshampoo =5 #Cantidad de shampoo
ppbebe = 17000 #precio de pacas de pañales pequeña
cpbebe = 4 #cantidad de pañales pequeños
subtotal =0
print("Calculando el total de la compra...")
total_ppph=ppph*cpph
print("el valor de la compra del papel higiencio es", total_ppph)
subtotal=subtotal + total_ppph
print("---el subtotal es:",subtotal)
total_shampoo = pshampoo *cshampoo
print("El valor del total de Shampoo es:$",total_shampoo )
subtotal = subtotal+ total_shampoo
print("---el subtotal es:$",subtotal)
total_ppbebe = ppbebe*cpbebe
print("el valor total de pañales es:$",total_ppbebe)
subtotal = subtotal + total_ppbebe
print("el total de su compra es:$",subtotal)
###Output
Calculando el total de la compra...
el valor de la compra del papel higiencio es 42000
---el subtotal es: 42000
El valor del total de Shampoo es:$ 90000
---el subtotal es:$ 132000
el valor total de pañales es:$ 68000
el total de su compra es:$ 200000
###Markdown
**Counters**They are closely related to the "accumulators" seen in the previous section. These variables are characterized as control variables, that is, they control the **number** of times a given action is executed. Taking the previous example and modifying it a little, we can build the following algorithm.
###Code
#Se comprara pañales por unidad en este caso.
contp = 0
print("Se realizara la compra de pañales etapa 3... se ha iniciado la compra de asignacion en el carrito. En total hay :", contp, "pañales")
contp = contp+1
print("Se realizara la compra de pañales etapa 3... se ha iniciado la compra de asignacion en el carrito. Ahora hay :", contp, "pañales")
contp = contp+1
print("Ahora hay:",contp,"pañal1")
contp = contp+1
print("Ahora hay:",contp,"pañal1")
contp = contp+1
print("Ahora hay:",contp,"pañal1")
contp = contp+1
print("Ahora hay:",contp,"pañal1")
###Output
_____no_output_____
###Markdown
**Condition-controlled loops****WHILE**---Recall that control variables let us manage states. Moving from one state to another means, for example: a variable going from holding no elements to holding some, or a variable taking a particular value (accumulator or counter) or being replaced entirely (a flag). These control variables are the basis of controlled loops; put more plainly, they take us from a manual action to something more automated. We start with the "WHILE" loop ("mientras" in Spanish). This loop consists of a condition and its code block: the block is executed repeatedly as long as the condition evaluates to True.
###Code
lapiz= 5
contlapiz=0
print("Se ha iniciado la compra. en total hay :", contlapiz,lapiz)
while (contlapiz < lapiz):
contlapiz = contlapiz+1
print("Se ha realizado la compra de lapices ahora hay",contlapiz," lapiz")
a=str(contlapiz)
print(type(contlapiz))
print(type(a))
###Output
Se ha iniciado la compra. en total hay : 0 5
Se ha realizado la compra de lapices ahora hay 1 lapiz
Se ha realizado la compra de lapices ahora hay 2 lapiz
Se ha realizado la compra de lapices ahora hay 3 lapiz
Se ha realizado la compra de lapices ahora hay 4 lapiz
Se ha realizado la compra de lapices ahora hay 5 lapiz
<class 'int'>
<class 'str'>
###Markdown
Keep in mind that inside the WHILE loop we must update the variables involved in the condition that governs the loop. In the previous example the variable contlapiz has to change so that at some point the condition (contlapiz < lapiz) becomes false and the loop ends; otherwise we would have a loop that never stops, an endless loop. **FOR LOOP**This is a specialized loop, optimized for count-controlled iteration. It is made of three elements: 1. The iteration variable, 2. The element to iterate over, 3. The code block to iterate. **Advantages of using FOR**In Python it is a very important, flexible and powerful tool, because it accepts complex data structures, strings, ranges, and more. The iterables used with this structure must have the following property: 1. A defined quantity (this is what completely sets it apart from WHILE). A WHILE loop starts from a truth condition, whereas FOR starts from a defined quantity.
###Code
##Retomando el ejemplo de la compra de lapices
print("se ha iniciado la compra. En total hay:0 lapices.")
for i in range(1,10): # en los rangos, la funcion range maneja un intervalo abierto a la derecha y cerrado al a izquierda
print("Se ha realizado la ocmpra de lapices. Ahora hay",i,"lapices")
###Output
se ha iniciado la compra. En total hay:0 lapices.
Se ha realizado la ocmpra de lapices. Ahora hay 1 lapices
Se ha realizado la ocmpra de lapices. Ahora hay 2 lapices
Se ha realizado la ocmpra de lapices. Ahora hay 3 lapices
Se ha realizado la ocmpra de lapices. Ahora hay 4 lapices
Se ha realizado la ocmpra de lapices. Ahora hay 5 lapices
Se ha realizado la ocmpra de lapices. Ahora hay 6 lapices
Se ha realizado la ocmpra de lapices. Ahora hay 7 lapices
Se ha realizado la ocmpra de lapices. Ahora hay 8 lapices
Se ha realizado la ocmpra de lapices. Ahora hay 9 lapices
###Markdown
**Iterative control structures, continued**---**Accumulators**This is the name given to the variables in charge of storing some kind of information.**Example**The case of buying groceries at the store.
###Code
nombre = input("Nombre del comprador")
Listacompra = "";
print(nombre, "escribe los siguientes niveles para su compra ene el supermercado:")
listacompra = (listacompra , + "1 paca de papel de higienico")
print("----compras que tengo que hacer----")
print(listacompra)
listacompra=(listacompra ,+ "Shampoo pantene 2 and 1")
listacompra=(listacompra, +"2 pacas de pañales pequeñin etapa 3")
print(listacompra)
###Output
_____no_output_____
###Markdown
la variable "listacompra" nos esta sirviendooppara acumular informacion de la lista de compra.podemos observar, que **NO** estamos creando una variable por cada item, sino una variable definida nos sirve para almacenar la informacionA continuacion observemos un ejemplo en donde se pone en practica el uso de acumulacion en una variable usando cantidades y precios
###Code
ppph=14000 #precio de papel higienico
cpph =2 #cantidad de pacas de papel
pshampoo = 18000 #Precio de shampoo pantene 2 and 1
cshampoo =4 #Cantidad de shampoo
ppbebe = 17000 #precio de pacas de pañales pequeña
cpbebe = 3 #cantidad de pañales pequeños
subtotal = 0
print("Calculando el total de la compra...")
total_ppph=ppph*cpph
print("el valor de la compra del papel higiencio es", total_ppph)
subtotal=subtotal + total_ppph
print("---el subtotal es:",subtotal)
total_shampoo = pshampoo *cshampoo
print("El valor del total de Shampoo es:$",total_shampoo )
subtotal = subtotal+ total_shampoo
print("---el subtotal es:$",subtotal)
total_ppbebe = ppbebe*cpbebe
print("el valor total de pañales es:$",total_ppbebe)
subtotal = subtotal + total_ppbebe
print("el total de su compra es:$",subtotal)
###Output
_____no_output_____
###Markdown
**Counters**They are closely related to the "accumulators" seen in the previous section. These variables are characterized as control variables, that is, they control the **number** of times a given action is executed. Taking the previous example and modifying it a little, we can build the following algorithm.
###Code
#Se comprara pañales por unidad en este caso.
contp = 0
print("Se realizara la compra de pañales etapa 3... se ha iniciado la compra de asignacion en el carrito. En total hay :", contp, "pañales")
contp = contp+1
print("Se realizara la compra de pañales etapa 3... se ha iniciado la compra de asignacion en el carrito. Ahora hay :", contp, "pañales")
contp = contp+1
print("Ahora hay:",contp,"pañal1")
contp = contp+1
print("Ahora hay:",contp,"pañal1")
contp = contp+1
print("Ahora hay:",contp,"pañal1")
contp = contp+1
print("Ahora hay:",contp,"pañal1")
###Output
_____no_output_____
###Markdown
**Condition-controlled loops****WHILE**---Recall that control variables let us manage states. Moving from one state to another means, for example: a variable going from holding no elements to holding some, or a variable taking a particular value (accumulator or counter) or being replaced entirely (a flag). These control variables are the basis of controlled loops; put more plainly, they take us from a manual action to something more automated. We start with the "WHILE" loop ("mientras" in Spanish). This loop consists of a condition and its code block: the block is executed repeatedly as long as the condition evaluates to True.
###Code
lapiz = 5
contlapiz = 0
print("Se ha iniciado la compra. en total hay :", contlapiz,lapiz)
while (contlapiz < lapiz):
contlapiz = contlapiz+1
print("Se ha realizado la compra de lapices ahora hay",str(contlapiz) + "lapiz")
a = str(contlapiz)
print(type(contlapiz))
print(type(a))
###Output
_____no_output_____
###Markdown
Keep in mind that inside the WHILE loop we must update the variables involved in the condition that governs the loop. In the previous example the variable contlapiz has to change so that at some point the condition (contlapiz < lapiz) becomes false and the loop ends; otherwise we would have a loop that never stops, an endless loop. FOR LOOPThis is a specialized loop, optimized for count-controlled iteration. It is made of three elements: the iteration variable, the element to iterate over, and the code block to iterate. Why use FOR? In Python it is a very important, flexible and powerful tool, because it accepts complex data structures, strings, ranges, and more. The iterables used with this structure must have the following property: a defined quantity (this is what completely sets it apart from WHILE). A WHILE loop starts from a truth condition, whereas FOR starts from a defined quantity.
###Code
##Retomando el ejemplo de la compra de lapices
print("se ha iniciado la compra. En total hay:0 lapices.")
for i in range(1,6): # en los rangos, la funcion range maneja un intervalo abierto a la derecha y cerrado al a izquierda
print("Se ha realizado la ocmpra de lapices. Ahora hay",i,"lapices")
###Output
_____no_output_____ |
docs/examples/driver_examples/QCodes example with Rigol DG1062.ipynb | ###Markdown
Example notebook for the Rigol DG 1062 instrument
###Code
import time
from qcodes.instrument_drivers.rigol.DG1062 import DG1062
###Output
_____no_output_____
###Markdown
Instantiate the driver
###Code
gd = DG1062("gd", "TCPIP0::169.254.187.99::INSTR")
###Output
Connected to: Rigol Technologies DG1062Z (serial:DG1ZA195006397, firmware:03.01.12) in 0.18s
###Markdown
Basic usage Accessing the channels
###Code
gd.channels[0]
# Or...
gd.ch1
###Output
_____no_output_____
###Markdown
Turn the output for channel 1 to "on"
###Code
gd.channels[0].state(1)
# This is identical to
gd.ch1.state(1)
###Output
_____no_output_____
###Markdown
With `apply` we can check which waveform is being generated now, for example on channel 1
###Code
gd.channels[0].current_waveform()
###Output
_____no_output_____
###Markdown
We can also change the waveform
###Code
gd.channels[0].apply(waveform="SIN", freq=2000, ampl=0.5, offset=0.0, phase=0.0)
###Output
_____no_output_____
###Markdown
Change individual settings like so:
###Code
gd.channels[0].offset(0.1)
###Output
_____no_output_____
###Markdown
This works for every setting, except waveform, which is read-only
###Code
gd.channels[0].waveform()
try:
gd.channels[0].waveform("SIN")
except NotImplementedError:
print("We cannot set a waveform like this ")
###Output
We cannot set a waveform like this
###Markdown
We can however do this:
###Code
gd.channels[0].sin(freq=1E3, ampl=1.0, offset=0, phase=0)
###Output
_____no_output_____
###Markdown
To find out which arguments are applicable to a waveform: Find out which waveforms are available
###Code
print(gd.waveforms)
###Output
['HARM', 'NOIS', 'RAMP', 'SIN', 'SQU', 'TRI', 'USER', 'DC', 'ARB']
###Markdown
Setting the impedance
###Code
gd.channels[1].impedance(50)
gd.channels[1].impedance()
gd.channels[1].impedance("HighZ")
###Output
_____no_output_____
###Markdown
Alternatively, we can do ```pythongd.channels[1].impedance("INF")```
###Code
gd.channels[1].impedance()
###Output
_____no_output_____
###Markdown
Sync commands
###Code
gd.channels[0].sync()
gd.channels[0].sync("OFF")
###Output
_____no_output_____
###Markdown
Alternatively we can do ```python gd.channels[0].sync(0) ```
###Code
gd.channels[0].sync()
gd.channels[0].sync(1)
###Output
_____no_output_____
###Markdown
Alternatively we can do ```python gd.channels[0].sync("ON") ```
###Code
gd.channels[0].sync()
###Output
_____no_output_____
###Markdown
Burst commands Internally triggered burst
###Code
# Internal triggering only works if the trigger source is manual
gd.channels[0].burst.source("MAN")
# The number of cycles is infinite
gd.channels[0].burst.mode("INF")
###Output
_____no_output_____
###Markdown
If we want a finite number of cycles: ```pythongd.channels[0].burst.mode("TRIG")gd.channels[0].burst.ncycles(10000)```Setting a period for each cycle: ```pythongd.channels[0].burst.period(1E-3)```
###Code
# Put channel 1 in burst mode
gd.channels[0].burst.on(1)
# Turn on the channel. For some reason, if we turn on the channel
# immediately after turning on the burst, we trigger immediately.
time.sleep(0.1)
gd.channels[0].state(1)
# Finally, trigger the AWG
gd.channels[0].burst.trigger()
###Output
_____no_output_____
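###Markdown
For a finite burst, the same sequence can be combined with the `TRIG` mode shown earlier. This is only a sketch assembled from the calls already demonstrated in this notebook; the cycle count and period are illustrative values, not recommended settings.
```python
gd.channels[0].burst.source("MAN")   # manual source so we can trigger from software
gd.channels[0].burst.mode("TRIG")    # finite number of cycles
gd.channels[0].burst.ncycles(10)     # illustrative cycle count
gd.channels[0].burst.period(1E-3)    # illustrative period per cycle
gd.channels[0].burst.on(1)
time.sleep(0.1)                      # same settling delay as above
gd.channels[0].state(1)
gd.channels[0].burst.trigger()
```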
###Markdown
Externally triggered burst
###Code
gd.channels[0].burst.source("EXT")
###Output
_____no_output_____
###Markdown
Setting the idle level
###Code
# Set the idle level to First PoinT
gd.channels[0].burst.idle("FPT")
# We can also give a number
gd.channels[0].burst.idle(0)
###Output
_____no_output_____
###Markdown
QCoDeS Example with the Rigol DG 1062 Instrument
###Code
import time
from qcodes.instrument_drivers.rigol.DG1062 import DG1062
###Output
_____no_output_____
###Markdown
Instantiate the driver
###Code
gd = DG1062("gd", "TCPIP0::169.254.187.99::INSTR")
###Output
Connected to: Rigol Technologies DG1062Z (serial:DG1ZA195006397, firmware:03.01.12) in 0.18s
###Markdown
Basic usage Accessing the channels
###Code
gd.channels[0]
# Or...
gd.ch1
###Output
_____no_output_____
###Markdown
Turn the output for channel 1 to "on"
###Code
gd.channels[0].state(1)
# This is identical to
gd.ch1.state(1)
###Output
_____no_output_____
###Markdown
With `apply` we can check which waveform is being generated now, for example on channel 1
###Code
gd.channels[0].current_waveform()
###Output
_____no_output_____
###Markdown
We can also change the waveform
###Code
gd.channels[0].apply(waveform="SIN", freq=2000, ampl=0.5, offset=0.0, phase=0.0)
###Output
_____no_output_____
###Markdown
Change individual settings like so:
###Code
gd.channels[0].offset(0.1)
###Output
_____no_output_____
###Markdown
This works for every setting, except waveform, which is read-only
###Code
gd.channels[0].waveform()
try:
gd.channels[0].waveform("SIN")
except NotImplementedError:
print("We cannot set a waveform like this ")
###Output
We cannot set a waveform like this
###Markdown
We can however do this:
###Code
gd.channels[0].sin(freq=1E3, ampl=1.0, offset=0, phase=0)
###Output
_____no_output_____
###Markdown
To find out which arguments are applicable to a waveform: Find out which waveforms are available
###Code
print(gd.waveforms)
###Output
['HARM', 'NOIS', 'RAMP', 'SIN', 'SQU', 'TRI', 'USER', 'DC', 'ARB']
###Markdown
Setting the impedance
###Code
gd.channels[1].impedance(50)
gd.channels[1].impedance()
gd.channels[1].impedance("HighZ")
###Output
_____no_output_____
###Markdown
Alternatively, we can do ```pythongd.channels[1].impedance("INF")```
###Code
gd.channels[1].impedance()
###Output
_____no_output_____
###Markdown
Sync commands
###Code
gd.channels[0].sync()
gd.channels[0].sync("OFF")
###Output
_____no_output_____
###Markdown
Alternatively we can do ```python gd.channels[0].sync(0) ```
###Code
gd.channels[0].sync()
gd.channels[0].sync(1)
###Output
_____no_output_____
###Markdown
Alternatively we can do ```python gd.channels[0].sync("ON") ```
###Code
gd.channels[0].sync()
###Output
_____no_output_____
###Markdown
Burst commands Internally triggered burst
###Code
# Internal triggering only works if the trigger source is manual
gd.channels[0].burst.source("MAN")
# The number of cycles is infinite
gd.channels[0].burst.mode("INF")
###Output
_____no_output_____
###Markdown
If we want a finite number of cycles: ```pythongd.channels[0].burst.mode("TRIG")gd.channels[0].burst.ncycles(10000)```Setting a period for each cycle: ```pythongd.channels[0].burst.period(1E-3)```
###Code
# Put channel 1 in burst mode
gd.channels[0].burst.on(1)
# Turn on the channel. For some reason, if we turn on the channel
# immediately after turning on the burst, we trigger immediately.
time.sleep(0.1)
gd.channels[0].state(1)
# Finally, trigger the AWG
gd.channels[0].burst.trigger()
###Output
_____no_output_____
###Markdown
Externally triggered burst
###Code
gd.channels[0].burst.source("EXT")
###Output
_____no_output_____
###Markdown
Setting the idle level
###Code
# Set the idle level to First PoinT
gd.channels[0].burst.idle("FPT")
# We can also give a number
gd.channels[0].burst.idle(0)
###Output
_____no_output_____ |
arrays_strings/hash_map/hash_map_solution.ipynb | ###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Implement a hash table with set, get, and remove methods.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* For simplicity, are the keys integers only? * Yes* For collision resolution, can we use linked lists? * Yes* Do we have to worry about load factors? * No Test Cases* get on an empty hash table index* set on an empty hash table index* set on a non empty hash table index* set on a key that already exists* remove on a key with an entry* remove on a key without an entry Algorithm Hash Function* Return key % table sizeComplexity:* Time: O(1)* Space: O(1) Set* Get hash index for lookup* If key exists, replace* Else, addComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) space for newly added element Get* Get hash index for lookup* If key exists, return value* Else, return NoneComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) Remove* Get hash index for lookup* If key exists, delete the itemComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) Code
###Code
class Item(object):
def __init__(self, key, value):
self.key = key
self.value = value
class HashTable(object):
def __init__(self, size):
self.size = size
self.table = [[] for _ in range(self.size)]
def hash_function(self, key):
return key % self.size
def set(self, key, value):
hash_index = self.hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
item.value = value
return
self.table[hash_index].append(Item(key, value))
def get(self, key):
hash_index = self.hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
return item.value
return None
def remove(self, key):
hash_index = self.hash_function(key)
for i, item in enumerate(self.table[hash_index]):
if item.key == key:
del self.table[hash_index][i]
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_hash_map.py
from nose.tools import assert_equal
class TestHashMap(object):
# TODO: It would be better if we had unit tests for each
# method in addition to the following end-to-end test
def test_end_to_end(self):
hash_table = HashTable(10)
print("Test: get on an empty hash table index")
assert_equal(hash_table.get(0), None)
print("Test: set on an empty hash table index")
hash_table.set(0, 'foo')
assert_equal(hash_table.get(0), 'foo')
hash_table.set(1, 'bar')
assert_equal(hash_table.get(1), 'bar')
print("Test: set on a non empty hash table index")
hash_table.set(10, 'foo2')
assert_equal(hash_table.get(0), 'foo')
assert_equal(hash_table.get(10), 'foo2')
print("Test: set on a key that already exists")
hash_table.set(10, 'foo3')
assert_equal(hash_table.get(0), 'foo')
assert_equal(hash_table.get(10), 'foo3')
print("Test: remove on a key that already exists")
hash_table.remove(10)
assert_equal(hash_table.get(0), 'foo')
assert_equal(hash_table.get(10), None)
print("Test: remove on a key that doesn't exist")
hash_table.remove(-1)
print('Success: test_end_to_end')
def main():
test = TestHashMap()
test.test_end_to_end()
if __name__ == '__main__':
main()
run -i test_hash_map.py
###Output
Test: get on an empty hash table index
Test: set on an empty hash table index
Test: set on a non empty hash table index
Test: set on a key that already exists
Test: remove on a key that already exists
Test: remove on a key that doesn't exist
Success: test_end_to_end
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Implement a hash table with set, get, and remove methods.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* For simplicity, are the keys integers only? * Yes* For collision resolution, can we use chaining? * Yes* Do we have to worry about load factors? * No* Do we have to validate inputs? * No* Can we assume this fits memory? * Yes Test Cases* `get` no matching key -> KeyError exception* `get` matching key -> value* `set` no matching key -> new key, value* `set` matching key -> update value* `remove` no matching key -> KeyError exception* `remove` matching key -> remove key, value Algorithm Hash Function* Return key % table sizeComplexity:* Time: O(1)* Space: O(1) Set* Get hash index for lookup* If key exists, replace* Else, addComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) space for newly added element Get* Get hash index for lookup* If key exists, return value* Else, raise KeyErrorComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) Remove* Get hash index for lookup* If key exists, delete the item* Else, raise KeyErrorComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) Code
###Code
class Item(object):
def __init__(self, key, value):
self.key = key
self.value = value
class HashTable(object):
def __init__(self, size):
self.size = size
self.table = [[] for _ in range(self.size)]
def _hash_function(self, key):
return key % self.size
def set(self, key, value):
hash_index = self._hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
item.value = value
return
self.table[hash_index].append(Item(key, value))
def get(self, key):
hash_index = self._hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
return item.value
raise KeyError('Key not found')
def remove(self, key):
hash_index = self._hash_function(key)
for index, item in enumerate(self.table[hash_index]):
if item.key == key:
del self.table[hash_index][index]
return
raise KeyError('Key not found')
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_hash_map.py
import unittest
class TestHashMap(unittest.TestCase):
# TODO: It would be better if we had unit tests for each
# method in addition to the following end-to-end test
def test_end_to_end(self):
hash_table = HashTable(10)
print("Test: get on an empty hash table index")
self.assertRaises(KeyError, hash_table.get, 0)
print("Test: set on an empty hash table index")
hash_table.set(0, 'foo')
self.assertEqual(hash_table.get(0), 'foo')
hash_table.set(1, 'bar')
self.assertEqual(hash_table.get(1), 'bar')
print("Test: set on a non empty hash table index")
hash_table.set(10, 'foo2')
self.assertEqual(hash_table.get(0), 'foo')
self.assertEqual(hash_table.get(10), 'foo2')
print("Test: set on a key that already exists")
hash_table.set(10, 'foo3')
self.assertEqual(hash_table.get(0), 'foo')
self.assertEqual(hash_table.get(10), 'foo3')
print("Test: remove on a key that already exists")
hash_table.remove(10)
self.assertEqual(hash_table.get(0), 'foo')
self.assertRaises(KeyError, hash_table.get, 10)
print("Test: remove on a key that doesn't exist")
self.assertRaises(KeyError, hash_table.remove, -1)
print('Success: test_end_to_end')
def main():
test = TestHashMap()
test.test_end_to_end()
if __name__ == '__main__':
main()
run -i test_hash_map.py
###Output
Test: get on an empty hash table index
Test: set on an empty hash table index
Test: set on a non empty hash table index
Test: set on a key that already exists
Test: remove on a key that already exists
Test: remove on a key that doesn't exist
Success: test_end_to_end
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Implement a hash table with set, get, and remove methods.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* For simplicity, are the keys integers only? * Yes* For collision resolution, can we use chaining? * Yes* Do we have to worry about load factors? * No* Do we have to validate inputs? * No* Can we assume this fits memory? * Yes Test Cases* `get` no matching key -> KeyError exception* `get` matching key -> value* `set` no matching key -> new key, value* `set` matching key -> update value* `remove` no matching key -> KeyError exception* `remove` matching key -> remove key, value Algorithm Hash Function* Return key % table sizeComplexity:* Time: O(1)* Space: O(1) Set* Get hash index for lookup* If key exists, replace* Else, addComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) space for newly added element Get* Get hash index for lookup* If key exists, return value* Else, raise KeyErrorComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) Remove* Get hash index for lookup* If key exists, delete the item* Else, raise KeyErrorComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) Code
###Code
class Item(object):
def __init__(self, key, value):
self.key = key
self.value = value
class HashTable(object):
def __init__(self, size):
self.size = size
self.table = [[] for _ in range(self.size)]
def _hash_function(self, key):
return key % self.size
def set(self, key, value):
hash_index = self._hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
item.value = value
return
self.table[hash_index].append(Item(key, value))
def get(self, key):
hash_index = self._hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
return item.value
raise KeyError('Key not found')
def remove(self, key):
hash_index = self._hash_function(key)
for index, item in enumerate(self.table[hash_index]):
if item.key == key:
del self.table[hash_index][index]
return
raise KeyError('Key not found')
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_hash_map.py
from nose.tools import assert_equal, assert_raises
class TestHashMap(object):
# TODO: It would be better if we had unit tests for each
# method in addition to the following end-to-end test
def test_end_to_end(self):
hash_table = HashTable(10)
print("Test: get on an empty hash table index")
assert_raises(KeyError, hash_table.get, 0)
print("Test: set on an empty hash table index")
hash_table.set(0, 'foo')
assert_equal(hash_table.get(0), 'foo')
hash_table.set(1, 'bar')
assert_equal(hash_table.get(1), 'bar')
print("Test: set on a non empty hash table index")
hash_table.set(10, 'foo2')
assert_equal(hash_table.get(0), 'foo')
assert_equal(hash_table.get(10), 'foo2')
print("Test: set on a key that already exists")
hash_table.set(10, 'foo3')
assert_equal(hash_table.get(0), 'foo')
assert_equal(hash_table.get(10), 'foo3')
print("Test: remove on a key that already exists")
hash_table.remove(10)
assert_equal(hash_table.get(0), 'foo')
assert_raises(KeyError, hash_table.get, 10)
print("Test: remove on a key that doesn't exist")
assert_raises(KeyError, hash_table.remove, -1)
print('Success: test_end_to_end')
def main():
test = TestHashMap()
test.test_end_to_end()
if __name__ == '__main__':
main()
run -i test_hash_map.py
###Output
Test: get on an empty hash table index
Test: set on an empty hash table index
Test: set on a non empty hash table index
Test: set on a key that already exists
Test: remove on a key that already exists
Test: remove on a key that doesn't exist
Success: test_end_to_end
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Implement a hash table with set, get, and remove methods.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* For simplicity, are the keys integers only? * Yes* For collision resolution, can we use linked lists? * Yes* Do we have to worry about load factors? * No Test Cases* get on an empty hash table index* set on an empty hash table index* set on a non empty hash table index* set on a key that already exists* remove on a key with an entry* remove on a key without an entry Algorithm Hash Function* Return key % table sizeComplexity:* Time: O(1)* Space: O(1) Set* Get hash index for lookup* If key exists, replace* Else, addComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) space for newly added element Get* Get hash index for lookup* If key exists, return value* Else, return NoneComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) Remove* Get hash index for lookup* If key exists, delete the itemComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) Code
###Code
class Item(object):
def __init__(self, key, value):
self.key = key
self.value = value
class HashTable(object):
def __init__(self, size):
self.size = size
self.table = [[] for _ in range(self.size)]
def hash_function(self, key):
return key % self.size
def set(self, key, value):
hash_index = self.hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
item.value = value
return
self.table[hash_index].append(Item(key, value))
def get(self, key):
hash_index = self.hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
return item.value
return None
def remove(self, key):
hash_index = self.hash_function(key)
for i, item in enumerate(self.table[hash_index]):
if item.key == key:
del self.table[hash_index][i]
return
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_hash_map.py
from nose.tools import assert_equal
class TestHashMap(object):
# TODO: It would be better if we had unit tests for each
# method in addition to the following end-to-end test
def test_end_to_end(self):
hash_table = HashTable(10)
print("Test: get on an empty hash table index")
assert_equal(hash_table.get(0), None)
print("Test: set on an empty hash table index")
hash_table.set(0, 'foo')
assert_equal(hash_table.get(0), 'foo')
hash_table.set(1, 'bar')
assert_equal(hash_table.get(1), 'bar')
print("Test: set on a non empty hash table index")
hash_table.set(10, 'foo2')
assert_equal(hash_table.get(0), 'foo')
assert_equal(hash_table.get(10), 'foo2')
print("Test: set on a key that already exists")
hash_table.set(10, 'foo3')
assert_equal(hash_table.get(0), 'foo')
assert_equal(hash_table.get(10), 'foo3')
print("Test: remove on a key that already exists")
hash_table.remove(10)
assert_equal(hash_table.get(0), 'foo')
assert_equal(hash_table.get(10), None)
print("Test: remove on a key that doesn't exist")
hash_table.remove(-1)
print('Success: test_end_to_end')
def main():
test = TestHashMap()
test.test_end_to_end()
if __name__ == '__main__':
main()
run -i test_hash_map.py
###Output
Test: get on an empty hash table index
Test: set on an empty hash table index
Test: set on a non empty hash table index
Test: set on a key that already exists
Test: remove on a key that already exists
Test: remove on a key that doesn't exist
Success: test_end_to_end
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Implement a hash table with set, get, and remove methods.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* For simplicity, are the keys integers only? * Yes* For collision resolution, can we use chaining? * Yes* Do we have to worry about load factors? * No* Do we have to validate inputs? * No* Can we assume this fits memory? * Yes Test Cases* `get` no matching key -> KeyError exception* `get` matching key -> value* `set` no matching key -> new key, value* `set` matching key -> update value* `remove` no matching key -> KeyError exception* `remove` matching key -> remove key, value Algorithm Hash Function* Return key % table sizeComplexity:* Time: O(1)* Space: O(1) Set* Get hash index for lookup* If key exists, replace* Else, addComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) space for newly added element Get* Get hash index for lookup* If key exists, return value* Else, raise KeyErrorComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) Remove* Get hash index for lookup* If key exists, delete the item* Else, raise KeyErrorComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) Code
###Code
class Item(object):
def __init__(self, key, value):
self.key = key
self.value = value
class HashTable(object):
def __init__(self, size):
self.size = size
self.table = [[] for _ in range(self.size)]
def _hash_function(self, key):
return key % self.size
def set(self, key, value):
hash_index = self._hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
item.value = value
return
self.table[hash_index].append(Item(key, value))
def get(self, key):
hash_index = self._hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
return item.value
raise KeyError('Key not found')
def remove(self, key):
hash_index = self._hash_function(key)
for index, item in enumerate(self.table[hash_index]):
if item.key == key:
del self.table[hash_index][index]
return
raise KeyError('Key not found')
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_hash_map.py
import unittest
class TestHashMap(unittest.TestCase):
# TODO: It would be better if we had unit tests for each
# method in addition to the following end-to-end test
def test_end_to_end(self):
hash_table = HashTable(10)
print("Test: get on an empty hash table index")
self.assertRaises(KeyError, hash_table.get, 0)
print("Test: set on an empty hash table index")
hash_table.set(0, 'foo')
self.assertEqual(hash_table.get(0), 'foo')
hash_table.set(1, 'bar')
self.assertEqual(hash_table.get(1), 'bar')
print("Test: set on a non empty hash table index")
hash_table.set(10, 'foo2')
self.assertEqual(hash_table.get(0), 'foo')
self.assertEqual(hash_table.get(10), 'foo2')
print("Test: set on a key that already exists")
hash_table.set(10, 'foo3')
self.assertEqual(hash_table.get(0), 'foo')
self.assertEqual(hash_table.get(10), 'foo3')
print("Test: remove on a key that already exists")
hash_table.remove(10)
self.assertEqual(hash_table.get(0), 'foo')
self.assertRaises(KeyError, hash_table.get, 10)
print("Test: remove on a key that doesn't exist")
self.assertRaises(KeyError, hash_table.remove, -1)
print('Success: test_end_to_end')
def main():
test = TestHashMap()
test.test_end_to_end()
if __name__ == '__main__':
main()
run -i test_hash_map.py
###Output
Test: get on an empty hash table index
Test: set on an empty hash table index
Test: set on a non empty hash table index
Test: set on a key that already exists
Test: remove on a key that already exists
Test: remove on a key that doesn't exist
Success: test_end_to_end
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Implement a hash table with set, get, and remove methods.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* For simplicity, are the keys integers only? * Yes* For collision resolution, can we use chaining? * Yes* Do we have to worry about load factors? * No* Do we have to validate inputs? * No* Can we assume this fits memory? * Yes Test Cases* `get` no matching key -> KeyError exception* `get` matching key -> value* `set` no matching key -> new key, value* `set` matching key -> update value* `remove` no matching key -> KeyError exception* `remove` matching key -> remove key, value Algorithm Hash Function* Return key % table sizeComplexity:* Time: O(1)* Space: O(1) Set* Get hash index for lookup* If key exists, replace* Else, addComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) space for newly added element Get* Get hash index for lookup* If key exists, return value* Else, raise KeyErrorComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) Remove* Get hash index for lookup* If key exists, delete the item* Else, raise KeyErrorComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) Code
###Code
class Item(object):
def __init__(self, key, value):
self.key = key
self.value = value
class HashTable(object):
def __init__(self, size):
self.size = size
self.table = [[] for _ in range(self.size)]
def _hash_function(self, key):
return key % self.size
def set(self, key, value):
hash_index = self._hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
item.value = value
return
self.table[hash_index].append(Item(key, value))
def get(self, key):
hash_index = self._hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
return item.value
raise KeyError('Key not found')
def remove(self, key):
hash_index = self._hash_function(key)
for index, item in enumerate(self.table[hash_index]):
if item.key == key:
del self.table[hash_index][index]
return
raise KeyError('Key not found')
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_hash_map.py
import unittest
class TestHashMap(unittest.TestCase):
# TODO: It would be better if we had unit tests for each
# method in addition to the following end-to-end test
def test_end_to_end(self):
hash_table = HashTable(10)
print("Test: get on an empty hash table index")
self.assertRaises(KeyError, hash_table.get, 0)
print("Test: set on an empty hash table index")
hash_table.set(0, 'foo')
self.assertEqual(hash_table.get(0), 'foo')
hash_table.set(1, 'bar')
self.assertEqual(hash_table.get(1), 'bar')
print("Test: set on a non empty hash table index")
hash_table.set(10, 'foo2')
self.assertEqual(hash_table.get(0), 'foo')
self.assertEqual(hash_table.get(10), 'foo2')
print("Test: set on a key that already exists")
hash_table.set(10, 'foo3')
self.assertEqual(hash_table.get(0), 'foo')
self.assertEqual(hash_table.get(10), 'foo3')
print("Test: remove on a key that already exists")
hash_table.remove(10)
self.assertEqual(hash_table.get(0), 'foo')
self.assertRaises(KeyError, hash_table.get, 10)
print("Test: remove on a key that doesn't exist")
self.assertRaises(KeyError, hash_table.remove, -1)
print('Success: test_end_to_end')
def main():
test = TestHashMap()
test.test_end_to_end()
if __name__ == '__main__':
main()
run -i test_hash_map.py
###Output
Test: get on an empty hash table index
Test: set on an empty hash table index
Test: set on a non empty hash table index
Test: set on a key that already exists
Test: remove on a key that already exists
Test: remove on a key that doesn't exist
Success: test_end_to_end
###Markdown
This notebook was prepared by [Donne Martin](http://donnemartin.com). Source and license info is on [GitHub](https://github.com/donnemartin/interactive-coding-challenges). Solution Notebook Problem: Implement a hash table with set, get, and remove methods.* [Constraints](Constraints)* [Test Cases](Test-Cases)* [Algorithm](Algorithm)* [Code](Code)* [Unit Test](Unit-Test) Constraints* For simplicity, are the keys integers only? * Yes* For collision resolution, can we use chaining? * Yes* Do we have to worry about load factors? * No* Do we have to validate inputs? * No* Can we assume this fits memory? * Yes Test Cases* `get` no matching key -> KeyError exception* `get` matching key -> value* `set` no matching key -> new key, value* `set` matching key -> update value* `remove` no matching key -> KeyError exception* `remove` matching key -> remove key, value Algorithm Hash Function* Return key % table sizeComplexity:* Time: O(1)* Space: O(1) Set* Get hash index for lookup* If key exists, replace* Else, addComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) space for newly added element Get* Get hash index for lookup* If key exists, return value* Else, raise KeyErrorComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) Remove* Get hash index for lookup* If key exists, delete the item* Else, raise KeyErrorComplexity:* Time: O(1) average and best, O(n) worst* Space: O(1) Code
###Code
class Item(object):
def __init__(self, key, value):
self.key = key
self.value = value
class HashTable(object):
def __init__(self, size):
self.size = size
self.table = [[] for _ in range(self.size)]
def _hash_function(self, key):
return key % self.size
def set(self, key, value):
hash_index = self._hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
item.value = value
return
self.table[hash_index].append(Item(key, value))
def get(self, key):
hash_index = self._hash_function(key)
for item in self.table[hash_index]:
if item.key == key:
return item.value
raise KeyError('Key not found')
def remove(self, key):
hash_index = self._hash_function(key)
for index, item in enumerate(self.table[hash_index]):
if item.key == key:
del self.table[hash_index][index]
return
raise KeyError('Key not found')
###Output
_____no_output_____
###Markdown
Unit Test
###Code
%%writefile test_hash_map.py
from nose.tools import assert_equal, assert_raises
class TestHashMap(object):
# TODO: It would be better if we had unit tests for each
# method in addition to the following end-to-end test
def test_end_to_end(self):
hash_table = HashTable(10)
print("Test: get on an empty hash table index")
assert_raises(KeyError, hash_table.get, 0)
print("Test: set on an empty hash table index")
hash_table.set(0, 'foo')
assert_equal(hash_table.get(0), 'foo')
hash_table.set(1, 'bar')
assert_equal(hash_table.get(1), 'bar')
print("Test: set on a non empty hash table index")
hash_table.set(10, 'foo2')
assert_equal(hash_table.get(0), 'foo')
assert_equal(hash_table.get(10), 'foo2')
print("Test: set on a key that already exists")
hash_table.set(10, 'foo3')
assert_equal(hash_table.get(0), 'foo')
assert_equal(hash_table.get(10), 'foo3')
print("Test: remove on a key that already exists")
hash_table.remove(10)
assert_equal(hash_table.get(0), 'foo')
assert_raises(KeyError, hash_table.get, 10)
print("Test: remove on a key that doesn't exist")
assert_raises(KeyError, hash_table.remove, -1)
print('Success: test_end_to_end')
def main():
test = TestHashMap()
test.test_end_to_end()
if __name__ == '__main__':
main()
run -i test_hash_map.py
###Output
Test: get on an empty hash table index
Test: set on an empty hash table index
Test: set on a non empty hash table index
Test: set on a key that already exists
Test: remove on a key that already exists
Test: remove on a key that doesn't exist
Success: test_end_to_end
|
machine_learning/3_classification/assigment/week6/module-9-precision-recall-assignment-blank.ipynb | ###Markdown
Exploring precision and recallThe goal of this second notebook is to understand precision-recall in the context of classifiers. * Use Amazon review data in its entirety. * Train a logistic regression model. * Explore various evaluation metrics: accuracy, confusion matrix, precision, recall. * Explore how various metrics can be combined to produce a cost of making an error. * Explore precision and recall curves. Because we are using the full Amazon review dataset (not a subset of words or reviews), in this assignment we return to using GraphLab Create for its efficiency. As usual, let's start by **firing up GraphLab Create**.Make sure you have the latest version of GraphLab Create (1.8.3 or later). If you don't find the decision tree module, then you would need to upgrade graphlab-create using``` pip install graphlab-create --upgrade```See [this page](https://dato.com/download/) for detailed instructions on upgrading.
###Code
import graphlab
from __future__ import division
import numpy as np
graphlab.canvas.set_target('ipynb')
###Output
_____no_output_____
###Markdown
Load amazon review dataset
###Code
products = graphlab.SFrame('amazon_baby.gl/')
###Output
_____no_output_____
###Markdown
Extract word counts and sentiments As in the first assignment of this course, we compute the word counts for individual words and extract positive and negative sentiments from ratings. To summarize, we perform the following:1. Remove punctuation.2. Remove reviews with "neutral" sentiment (rating 3).3. Set reviews with rating 4 or more to be positive and those with 2 or less to be negative.
###Code
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
# Remove punctuation.
review_clean = products['review'].apply(remove_punctuation)
# Count words
products['word_count'] = graphlab.text_analytics.count_words(review_clean)
# Drop neutral sentiment reviews.
products = products[products['rating'] != 3]
# Positive sentiment to +1 and negative sentiment to -1
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
###Output
_____no_output_____
###Markdown
Now, let's remember what the dataset looks like by taking a quick peek:
###Code
products
###Output
_____no_output_____
###Markdown
Split data into training and test setsWe split the data into a 80-20 split where 80% is in the training set and 20% is in the test set.
###Code
train_data, test_data = products.random_split(.8, seed=1)
###Output
_____no_output_____
###Markdown
Train a logistic regression classifierWe will now train a logistic regression classifier with **sentiment** as the target and **word_count** as the features. We will set `validation_set=None` to make sure everyone gets exactly the same results. Remember, even though we now know how to implement logistic regression, we will use GraphLab Create for its efficiency at processing this Amazon dataset in its entirety. The focus of this assignment is instead on the topic of precision and recall.
###Code
model = graphlab.logistic_classifier.create(train_data, target='sentiment',
features=['word_count'],
validation_set=None)
###Output
_____no_output_____
###Markdown
Model Evaluation We will explore the advanced model evaluation concepts that were discussed in the lectures. Accuracy One performance metric we will use for our more advanced exploration is accuracy, which we have seen many times in past assignments. Recall that the accuracy is given by$$\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}$$To obtain the accuracy of our trained models using GraphLab Create, simply pass the option `metric='accuracy'` to the `evaluate` function. We compute the **accuracy** of our logistic regression model on the **test_data** as follows:
###Code
accuracy= model.evaluate(test_data, metric='accuracy')['accuracy']
print "Test Accuracy: %s" % accuracy
###Output
_____no_output_____
###Markdown
Baseline: Majority class predictionRecall from an earlier assignment that we used the **majority class classifier** as a baseline (i.e reference) model for a point of comparison with a more sophisticated classifier. The majority classifier model predicts the majority class for all data points. Typically, a good model should beat the majority class classifier. Since the majority class in this dataset is the positive class (i.e., there are more positive than negative reviews), the accuracy of the majority class classifier can be computed as follows:
###Code
baseline = len(test_data[test_data['sentiment'] == 1])/len(test_data)
print "Baseline accuracy (majority class classifier): %s" % baseline
###Output
_____no_output_____
###Markdown
**Quiz Question:** Using accuracy as the evaluation metric, was our **logistic regression model** better than the baseline (majority class classifier)? Confusion Matrix The accuracy, while convenient, does not tell the whole story. For a fuller picture, we turn to the **confusion matrix**. In the case of binary classification, the confusion matrix is a 2-by-2 matrix laying out correct and incorrect predictions made in each label as follows:
```
              +---------------------------------------------+
              |                Predicted label               |
              +----------------------+----------------------+
              |         (+1)         |         (-1)         |
+-------+-----+----------------------+----------------------+
| True  |(+1) | # of true positives  | # of false negatives |
| label +-----+----------------------+----------------------+
|       |(-1) | # of false positives | # of true negatives  |
+-------+-----+----------------------+----------------------+
```
To print out the confusion matrix for a classifier, use `metric='confusion_matrix'`:
###Code
confusion_matrix = model.evaluate(test_data, metric='confusion_matrix')['confusion_matrix']
confusion_matrix
###Output
_____no_output_____
###Markdown
**Quiz Question**: How many predicted values in the **test set** are **false positives**? Computing the cost of mistakes Put yourself in the shoes of a manufacturer that sells a baby product on Amazon.com and you want to monitor your product's reviews in order to respond to complaints. Even a few negative reviews may generate a lot of bad publicity about the product. So you don't want to miss any reviews with negative sentiments --- you'd rather put up with false alarms about potentially negative reviews instead of missing negative reviews entirely. In other words, **false positives cost more than false negatives**. (It may be the other way around for other scenarios, but let's stick with the manufacturer's scenario for now.) Suppose you know the costs involved in each kind of mistake: 1. \$100 for each false positive. 2. \$1 for each false negative. 3. Correctly classified reviews incur no cost. **Quiz Question**: Given the stipulation, what is the cost associated with the logistic regression classifier's performance on the **test set**? Precision and Recall You may not have exact dollar amounts for each kind of mistake. Instead, you may simply prefer to reduce the percentage of false positives to be less than, say, 3.5% of all positive predictions. This is where **precision** comes in:$$[\text{precision}] = \frac{[\text{# positive data points with positive predictions}]}{[\text{# all data points with positive predictions}]} = \frac{[\text{# true positives}]}{[\text{# true positives}] + [\text{# false positives}]}$$ So to keep the percentage of false positives below 3.5% of positive predictions, we must raise the precision to 96.5% or higher. **First**, let us compute the precision of the logistic regression classifier on the **test_data**.
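(A hedged aside, not the graded solution: the cost in the quiz question above can be tallied straight from the counts of each mistake type. The sketch below assumes GraphLab's default class-prediction output and the `sentiment` column used elsewhere in this notebook.)
```python
# Sketch only: count false positives / false negatives and weight them by the stated costs.
predictions = model.predict(test_data)                              # class predictions in {+1, -1}
false_positives = ((predictions == +1) * (test_data['sentiment'] == -1)).sum()
false_negatives = ((predictions == -1) * (test_data['sentiment'] == +1)).sum()
print 'cost = $%d' % (100 * false_positives + 1 * false_negatives)  # $100 per FP, $1 per FN, $0 otherwise
```
The next cell returns to the precision calculation described above.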
###Code
precision = model.evaluate(test_data, metric='precision')['precision']
print "Precision on test data: %s" % precision
###Output
_____no_output_____
###Markdown
**Quiz Question**: Out of all reviews in the **test set** that are predicted to be positive, what fraction of them are **false positives**? (Round to the second decimal place e.g. 0.25) **Quiz Question:** Based on what we learned in lecture, if we wanted to reduce this fraction of false positives to be below 3.5%, we would: (see the quiz) A complementary metric is **recall**, which measures the ratio between the number of true positives and that of (ground-truth) positive reviews:$$[\text{recall}] = \frac{[\text{# positive data points with positive predictions}]}{[\text{# all positive data points}]} = \frac{[\text{# true positives}]}{[\text{# true positives}] + [\text{# false negatives}]}$$Let us compute the recall on the **test_data**.
###Code
recall = model.evaluate(test_data, metric='recall')['recall']
print "Recall on test data: %s" % recall
###Output
_____no_output_____
###Markdown
**Quiz Question**: What fraction of the positive reviews in the **test_set** were correctly predicted as positive by the classifier?**Quiz Question**: What is the recall value for a classifier that predicts **+1** for all data points in the **test_data**? Precision-recall tradeoffIn this part, we will explore the trade-off between precision and recall discussed in the lecture. We first examine what happens when we use a different threshold value for making class predictions. We then explore a range of threshold values and plot the associated precision-recall curve. Varying the thresholdFalse positives are costly in our example, so we may want to be more conservative about making positive predictions. To achieve this, instead of thresholding class probabilities at 0.5, we can choose a higher threshold. Write a function called `apply_threshold` that accepts two things* `probabilities` (an SArray of probability values)* `threshold` (a float between 0 and 1).The function should return an array, where each element is set to +1 or -1 depending whether the corresponding probability exceeds `threshold`.
###Code
def apply_threshold(probabilities, threshold):
### YOUR CODE GOES HERE
# +1 if >= threshold and -1 otherwise.
...
###Output
_____no_output_____
###Markdown
Run prediction with `output_type='probability'` to get the list of probability values. Then use thresholds set at 0.5 (default) and 0.9 to make predictions from these probability values.
###Code
probabilities = model.predict(test_data, output_type='probability')
predictions_with_default_threshold = apply_threshold(probabilities, 0.5)
predictions_with_high_threshold = apply_threshold(probabilities, 0.9)
print "Number of positive predicted reviews (threshold = 0.5): %s" % (predictions_with_default_threshold == 1).sum()
print "Number of positive predicted reviews (threshold = 0.9): %s" % (predictions_with_high_threshold == 1).sum()
###Output
_____no_output_____
###Markdown
**Quiz Question**: What happens to the number of positive predicted reviews as the threshold increased from 0.5 to 0.9? Exploring the associated precision and recall as the threshold varies By changing the probability threshold, it is possible to influence precision and recall. We can explore this as follows:
###Code
# Threshold = 0.5
precision_with_default_threshold = graphlab.evaluation.precision(test_data['sentiment'],
predictions_with_default_threshold)
recall_with_default_threshold = graphlab.evaluation.recall(test_data['sentiment'],
predictions_with_default_threshold)
# Threshold = 0.9
precision_with_high_threshold = graphlab.evaluation.precision(test_data['sentiment'],
predictions_with_high_threshold)
recall_with_high_threshold = graphlab.evaluation.recall(test_data['sentiment'],
predictions_with_high_threshold)
print "Precision (threshold = 0.5): %s" % precision_with_default_threshold
print "Recall (threshold = 0.5) : %s" % recall_with_default_threshold
print "Precision (threshold = 0.9): %s" % precision_with_high_threshold
print "Recall (threshold = 0.9) : %s" % recall_with_high_threshold
###Output
_____no_output_____
###Markdown
**Quiz Question (variant 1)**: Does the **precision** increase with a higher threshold? **Quiz Question (variant 2)**: Does the **recall** increase with a higher threshold? Precision-recall curve Now, we will explore various threshold values, compute the precision and recall scores, and then plot the precision-recall curve.
###Code
threshold_values = np.linspace(0.5, 1, num=100)
print threshold_values
###Output
_____no_output_____
###Markdown
For each of the values of threshold, we compute the precision and recall scores.
###Code
precision_all = []
recall_all = []
probabilities = model.predict(test_data, output_type='probability')
for threshold in threshold_values:
predictions = apply_threshold(probabilities, threshold)
precision = graphlab.evaluation.precision(test_data['sentiment'], predictions)
recall = graphlab.evaluation.recall(test_data['sentiment'], predictions)
precision_all.append(precision)
recall_all.append(recall)
###Output
_____no_output_____
###Markdown
Now, let's plot the precision-recall curve to visualize the precision-recall tradeoff as we vary the threshold.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def plot_pr_curve(precision, recall, title):
plt.rcParams['figure.figsize'] = 7, 5
plt.locator_params(axis = 'x', nbins = 5)
plt.plot(precision, recall, 'b-', linewidth=4.0, color = '#B0017F')
plt.title(title)
plt.xlabel('Precision')
plt.ylabel('Recall')
plt.rcParams.update({'font.size': 16})
plot_pr_curve(precision_all, recall_all, 'Precision recall curve (all)')
###Output
_____no_output_____
###Markdown
**Quiz Question**: Among all the threshold values tried, what is the **smallest** threshold value that achieves a precision of 96.5% or better? Round your answer to 3 decimal places. **Quiz Question**: Using `threshold` = 0.98, how many **false negatives** do we get on the **test_data**? (**Hint**: You may use the `graphlab.evaluation.confusion_matrix` function implemented in GraphLab Create.) This is the number of false negatives (i.e the number of reviews to look at when not needed) that we have to deal with using this classifier. Evaluating specific search terms So far, we looked at the number of false positives for the **entire test set**. In this section, let's select reviews using a specific search term and optimize the precision on these reviews only. After all, a manufacturer would be interested in tuning the false positive rate just for their products (the reviews they want to read) rather than that of the entire set of products on Amazon. Precision-Recall on all baby related itemsFrom the **test set**, select all the reviews for all products with the word 'baby' in them.
###Code
baby_reviews = test_data[test_data['name'].apply(lambda x: 'baby' in x.lower())]
###Output
_____no_output_____
###Markdown
Now, let's predict the probability of classifying these reviews as positive:
###Code
probabilities = model.predict(baby_reviews, output_type='probability')
###Output
_____no_output_____
###Markdown
Let's plot the precision-recall curve for the **baby_reviews** dataset.**First**, let's consider the following `threshold_values` ranging from 0.5 to 1:
###Code
threshold_values = np.linspace(0.5, 1, num=100)
###Output
_____no_output_____
###Markdown
**Second**, as we did above, let's compute precision and recall for each value in `threshold_values` on the **baby_reviews** dataset. Complete the code block below.
###Code
precision_all = []
recall_all = []
for threshold in threshold_values:
# Make predictions. Use the `apply_threshold` function
## YOUR CODE HERE
predictions = ...
# Calculate the precision.
# YOUR CODE HERE
precision = ...
# YOUR CODE HERE
recall = ...
# Append the precision and recall scores.
precision_all.append(precision)
recall_all.append(recall)
###Output
_____no_output_____
###Markdown
**Quiz Question**: Among all the threshold values tried, what is the **smallest** threshold value that achieves a precision of 96.5% or better for the reviews of data in **baby_reviews**? Round your answer to 3 decimal places. **Quiz Question:** Is this threshold value smaller or larger than the threshold used for the entire dataset to achieve the same specified precision of 96.5%?**Finally**, let's plot the precision recall curve.
###Code
plot_pr_curve(precision_all, recall_all, "Precision-Recall (Baby)")
###Output
_____no_output_____ |
code/dipping-regional/data.ipynb | ###Markdown
Data of a dipping model with induced magnetization This notebook generates total-field anomaly (TFA) data from a dipping model along flight lines.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import cPickle as pickle
from IPython.display import Image as img
from fatiando.gravmag import polyprism
from fatiando.vis import mpl
###Output
/home/leo/anaconda2/lib/python2.7/site-packages/fatiando/vis/mpl.py:76: UserWarning: This module will be removed in v0.6. We recommend the use of matplotlib.pyplot module directly. Some of the fatiando specific functions will remain.
"specific functions will remain.")
###Markdown
Auxiliary functions
###Code
import sys
sys.path.insert(0, '../../code')
import mag_polyprism_functions as mfun
###Output
_____no_output_____
###Markdown
The model
###Code
img(filename='../dipping/model.png')
###Output
_____no_output_____
###Markdown
Importing model and grid
###Code
model_dir = '../dipping/model.pickle'
with open(model_dir) as w:
model = pickle.load(w)
df = pd.read_csv('../anitapolis/anitapolis_large_mag.txt', header=0, sep=' ')
df['X'] -= np.mean(df['X'])
df['Y'] -= np.mean(df['Y'])
df['GPSALT'] = - df['GPSALT'] + 800
df.loc[df['GPSALT'] > 0., 'GPSALT'] = np.mean(df['GPSALT'])
mask = (df['GPSALT'].get_values()<0.)
df = df[mask]
df['GPSALT'].get_values().size
data = dict()
data['x'] = df['X'].get_values()
data['y'] = df['Y'].get_values()
data['z'] = df['GPSALT'].get_values()
data['N'] = data['x'].size
model['prisms'][0].props
###Output
_____no_output_____
###Markdown
Generating data
###Code
# main field
data['main_field'] = [-21.5, -18.7]
# TFA data
data['tfa'] = polyprism.tf(data['x'], data['y'], data['z'],
model['prisms'], data['main_field'][0], data['main_field'][1]) # predict data
data['regional'] = df['reg'].get_values() + 500.
amp_noise = 5.
data['tfa_obs'] = data['tfa'] + data['regional'] + np.random.normal(loc=0., scale=amp_noise,
size=data['N']) # noise corrupted data
###Output
_____no_output_____
###Markdown
Data ploting
###Code
plt.figure(figsize=(13,5))
plt.subplot(121)
plt.title('Predicted TFA', fontsize=20)
plt.tricontour(data['y'], data['x'], data['tfa'], 20, colors='k', linewidths=0.5).ax.tick_params(labelsize=12)
plt.tricontourf(data['y'], data['x'], data['tfa'], 20, cmap='RdBu_r', vmax=-np.min(data['tfa']), vmin=np.min(data['tfa'])).ax.tick_params(labelsize=12)
plt.plot(data['y'], data['x'], '.k', markersize=0.3)
plt.xlabel('$y$(km)', fontsize=18)
plt.ylabel('$x$(km)', fontsize=18)
clb = plt.colorbar(pad=0.025, aspect=40, shrink=1)
clb.ax.tick_params(labelsize=13)
clb.ax.set_title('nT')
mpl.m2km()
plt.subplot(122)
plt.title('Observed TFA', fontsize=20)
plt.tricontour(data['y'], data['x'], data['tfa_obs'], 20, colors='k', linewidths=0.5).ax.tick_params(labelsize=12)
plt.tricontourf(data['y'], data['x'], data['tfa_obs'], 20, cmap='RdBu_r', vmax=np.max(data['tfa_obs']), vmin=-np.max(data['tfa_obs'])).ax.tick_params(labelsize=12)
plt.plot(data['y'], data['x'], '.k', markersize=0.3)
plt.xlabel('$y$(km)', fontsize=18)
plt.ylabel('$x$(km)', fontsize=18)
clb = plt.colorbar(pad=0.025, aspect=40, shrink=1)
clb.ax.tick_params(labelsize=13)
clb.ax.set_title('nT')
mpl.m2km()
plt.show()
###Output
_____no_output_____
###Markdown
Saving in an outer file
###Code
file_name = 'data.pickle'
with open(file_name, 'w') as f:
pickle.dump(data, f)
###Output
_____no_output_____ |
28septiembre.ipynb | ###Markdown
Section 1 In this section we will learn to program in Python Example code__Bold__`edad = 10print(edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('kiwi')
print(frutas)
archivo = open('prueba_daa.txt','wt')
archivo.write('Hola Mundo Jupyter')
archivo.close
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code**bold**_italics_`edad=10print=edad`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo= open('prueba_daa.txt','wt')
archivo.write("Hola mundo jupyter")
archivo.close
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google tool colab.research. We will also learn to save our changes to our github.com repository. Example code**bold**_italic_`edad = 10print(edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('kiwi')
print(frutas)
archivo = open('prueba_daa.txt','wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool; we will also learn to save our changes to our GitHub repository. Example code`edad = 10 print(edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('kiwi')
print(frutas)
archivo = open('prueba_daa.txt','wt')
archivo.write("Hola mundo bb")
archivo.close()
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code**bold**_italic_`valor = 10 print(valor)`
###Code
frutas =[]
frutas.append('piña')
frutas.append('manzana')
frutas.append('kiwi')
print(frutas)
archivo=open('archivo_prueba.txt','wt')
archivo.write('Hola mundo Jupyter')
archivo.close()
###Output
_____no_output_____
###Markdown
**Section 1** In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo = open('prueba_daa.txt','wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
#Revisar la parte de documentos
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo = open('prueba_daa.txt','wt')
archivo.write("Hola mundo Jupyter");
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
**Section 1** In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code`edad = 10 print(edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Pina')
frutas.append('kiwi')
print(frutas)
archivo = open ('prueba_daa.txt', 'wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google tool colab.research. We will also learn to save to our github.com repository. Example code**bold** _italic_edad = 10 print(edad)
###Code
frutas = []
frutas.append('Manzana')
frutas.append('piña')
frutas.append('kiwi')
print(frutas)
archivo = open('prueba_daa.txt','wt')
archivo.write('Hola mundo Jupyter')
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google tool colab.research. We will also learn to save our changes to our github.com repository. Example code`edad = 10print(edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo = open('prueba_daa.txt', 'wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google tool. We will also learn to save it to our GitHub repository **Hello**_italic_`edad=10print(edad)`
###Code
frutas =[]
frutas.append('manzana')
frutas.append('piña')
frutas.append('kiwwi')
print(frutas)
archivo = open('prueba_diseño_analisis_algoritmos.txt','wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code**bold**_italic_`edad = 10 print ("edad")`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo = open('prueba_daa.txt', 'wt')
archivo.write('Hola Mundo Madrugador')
archivo.close()
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code**Bold** *Italic*Edad = 10
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo = open('prueba_daa.txt', 'wt')
archivo.write('Hola mundo Jupyter')
archivo.close()
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab tool. We will also learn to save the changes to a GitHub repository. *Italic text***Example test**`const es6_sintax = number => console.log(number)`
###Code
fruits = []
fruits.append('Apple')
fruits.append('Pinneapple')
fruits.append('Kiwii')
print(fruits)
file = open('test_data.txt', 'wt')
file.write('Hola inadaptados!!')
file.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Research tool; we will also learn to save changes to our GitHub repository. Example code**bold**_italic_`edad = 10 print (edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo= open('prueba_daa.txt','wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google tool colab.research. We will also learn to save our changes to our github.com repository. Example code**bold**_italic_`edad = 10print (edad) `
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('kiwi')
print (frutas)
archivo =open('prueba_daa.txt', 'wt')
archivo.write("Hola Mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1 New section In this file we will learn to program in Python with the Google Colab Research tool. Example code **bold**_italic_`edad = 10 print(edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('kiwi')
print(frutas)
archivo = open('prueba_daa.txt','wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code""bold""_italic_`edad = 10print(edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo = open('prueba_daa.txt','wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Colab Research tool. We will also learn to save our changes to a github.com repository. Example code`print(hola)`
###Code
frutas=[]
frutas.append("manzana")
frutas.append("piña")
frutas.append("kiwi")
print(frutas)
archivo=open("prueba_data.txt","wt")
archivo.write("hola mundo")
archivo.close()
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code`edad=10print(edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('kiwi')
print(frutas)
archivo=open('prueba_daa.txt','wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google tool colab.research. We will also learn to save our changes to our github.com repository. Example code**bold**_italic_```pythons = "Python syntax highlighting"print s```
###Code
frutas=[]
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo=open('prueba_daa.txt','wt')
archivo.write('Hola mundo Jupyter')
archivo.close()
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code`edad = 10print(edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Pina')
frutas.append('Kiwi')
print(frutas)
archivo = open('prueba_daa.txt','wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with this tool; we will also save to GitHub. Example code**bold**_italic_`edad=10print(edad)`
###Code
frutas=[]
frutas.append('manzana')
frutas.append('piña')
frutas.append('kiwi')
print(frutas)
archivo=open('prueba_daa.txt','wt')
archivo.write("hola jupiter")
archivo.close()
###Output
_____no_output_____
###Markdown
Section One In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code**bold**_italic_`edad = 10print(edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo = open('prueba_daa.txt','wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
**Section 1** Learn to program in Python with the Colab Research tool, and save changes to our GitHub repository. Example code*bold*_Italic_edad=10print(edad)
###Code
frutas=[]
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo=open('prueba_daa.txt','wt')
archivo.write("hola mundo")
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1 EXAMPLE CODE`edad=10print=(edad)` We will use this file to create the Python projects
###Code
frutas=[]
frutas.append("manzana")
frutas.append("piña")
frutas.append("kiwi")
print(frutas)
archivo=open('Prueba_daa.txt','wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code**bold**_italic_`edad = 10print(edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo = open('prueba_daa.txt','wt')
archivo.write("Hola Mundo Jupiter")
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code**bold** _italic_`edad = 10print(edad)`
###Code
frutas = []
frutas.append('Manzaana')
frutas.append('Piña')
frutas.append('kiwi')
print(frutas)
archivo = open('prueba_daa.txt', 'wt')
archivo.write("hola jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code*bold*_italic_`edad=10print(edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo = open('prueba_daa.txt','wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code**bold**_italic_`edad=10print(edad)`
###Code
frutas=[]
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo=open('prueba_daa.txt','wt')
archivo.write('Hola mundo Jupyter')
archivo.close()
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our GitHub repository. Example code **Bold**_italics_`edad=10print(edad)`
###Code
frutas=[]
frutas.append("Manzana")
frutas.append("Piña")
frutas.append("Kiwi")
print(frutas)
archivo=open('prueba_daa.txt','wt')
archivo.write("Hola Mundo Jupiter")
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
frutas = []
frutas.append('Manzana')
frutas.append('piña')
frutas.append('kiwi')
print(frutas)
archivo = open('prueba_daa.txt','wt')
archivo.write('Hola mundo Jupyter')
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1: In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes in our Github.com repository. Example code**bold**_italic_`Edad = 10print(edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('Piña')
frutas.append('Kiwi')
print(frutas)
archivo = open('Prueba_daa.txt', 'wt')
archivo.write('Hola mundo Jupyter')
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
In this file we will learn to program in Python with the Google tool colab.research**bold text**We will also learn to save our changes to our github.com repository. Example code**bold**_italic_`edad = 10print(edad)`
###Code
frutas = []
frutas.append('Manzana')
frutas.append('piña')
frutas.append('kiwi')
print(frutas)
archivo = open('prueba_daa.txt', 'wt')
archivo.write("Hola mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool. We will also learn to save our changes to our github.com repository. Example code**bold**_italics_`edad=10print(edad)`
###Code
frutas=[]
frutas.append('manzana')
frutas.append('piña')
frutas.append('kiwi')
print(frutas)
archivo=open('prueba_daa.txt','wt')
archivo.write("hola mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
Section 1 In this file we will learn to program in Python with the Google Colab Research tool; we will also learn to save our changes to our github.com repository. Example code**Bold**_Italic_`Edad= 10print(edad)`
###Code
frutas=[]
frutas.append('Manzana')
frutas.append('piña')
frutas.append('kiwi')
print(frutas)
archivo= open('prueba_daa.txt','wt')
archivo.write("Hola , mundo Jupyter")
archivo.close()
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
Section 1 In this file we will also learn to program with Python in Google, and we will also learn how to upload to our GitHub repository New section``
###Code
frutas = []
frutas.append('manzana')
frutas.append('kiwi')
frutas.append('piña')
print(frutas)
archivo = open('prueba.txt','wt')
archivo.write("hola mundo jupyter")
archivo.close()
###Output
_____no_output_____ |
004_problem.ipynb | ###Markdown
004 There is a grid with H rows and W columns. The cell (i, j) in the i-th row (1 ≤ i ≤ H) from the top and the j-th column (1 ≤ j ≤ W) from the left contains an integer A[i][j]. For every cell (i, j) [1 ≤ i ≤ H, 1 ≤ j ≤ W], compute the following value: the sum of all integers written in cells that are in the same row or the same column as cell (i, j), including the cell itself. Constraints: 1 ≤ H ≤ 2000, 1 ≤ W ≤ 2000, 1 ≤ A[i][j] ≤ 99, and all inputs are integers. Input format: H W, followed by A[1][1] A[1][2] ... A[1][W], A[2][1] A[2][2] ... A[2][W], ..., A[H][1] A[H][2] ... A[H][W]
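A short solution sketch (one possible approach, not the official answer file): each answer is row_sum[i] + col_sum[j] − A[i][j], because the cell itself is counted in both its row and its column, so precomputing row and column sums gives an O(HW) algorithm that easily handles H, W ≤ 2000.
```python
# Sketch: reads the input format described above from stdin and prints the H x W answer grid.
import sys

def main():
    data = sys.stdin.read().split()
    h, w = int(data[0]), int(data[1])
    a = [[int(data[2 + i * w + j]) for j in range(w)] for i in range(h)]
    row_sums = [sum(row) for row in a]
    col_sums = [sum(a[i][j] for i in range(h)) for j in range(w)]
    out = []
    for i in range(h):
        # subtract A[i][j] once, since cell (i, j) belongs to both its row and its column
        out.append(' '.join(str(row_sums[i] + col_sums[j] - a[i][j]) for j in range(w)))
    print('\n'.join(out))

if __name__ == '__main__':
    main()
```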
###Code
# 入力例 1
3 3
1 1 1
1 1 1
1 1 1
# 出力例 1
5 5 5
5 5 5
5 5 5
# 入力例 2
4 4
3 1 4 1
5 9 2 6
5 3 5 8
9 7 9 3
# 出力例 2
28 28 25 26
39 33 40 34
38 38 36 31
41 41 39 43
# 入力例 3
2 10
31 41 59 26 53 58 97 93 23 84
62 64 33 83 27 95 2 88 41 97
# 出力例 3
627 629 598 648 592 660 567 653 606 662
623 633 651 618 645 650 689 685 615 676
# 入力例 4
10 10
83 86 77 65 93 85 86 92 99 71
62 77 90 59 63 76 90 76 72 86
61 68 67 79 82 80 62 73 67 85
79 52 72 58 69 67 93 56 61 92
79 73 71 69 84 87 98 74 65 70
63 76 91 80 56 73 62 70 96 81
55 75 84 77 86 55 96 79 63 57
74 95 82 95 64 67 84 64 93 50
87 58 76 78 88 84 53 51 54 99
82 60 76 68 89 62 76 86 94 89
# 出力例 4
1479 1471 1546 1500 1518 1488 1551 1466 1502 1546
1414 1394 1447 1420 1462 1411 1461 1396 1443 1445
1388 1376 1443 1373 1416 1380 1462 1372 1421 1419
1345 1367 1413 1369 1404 1368 1406 1364 1402 1387
1416 1417 1485 1429 1460 1419 1472 1417 1469 1480
1410 1392 1443 1396 1466 1411 1486 1399 1416 1447
1397 1372 1429 1378 1415 1408 1431 1369 1428 1450
1419 1393 1472 1401 1478 1437 1484 1425 1439 1498
1366 1390 1438 1378 1414 1380 1475 1398 1438 1409
1425 1442 1492 1442 1467 1456 1506 1417 1452 1473
###Output
_____no_output_____ |
notebooks/bnn_mnist_sgld_whitejax.ipynb | ###Markdown
Bayesian MLP for MNIST using preconditioned SGLDWe use the [Jax Bayes](https://github.com/jamesvuc/jax-bayes) library by James Vuckovic to fit an MLP to MNIST using SGD, and SGLD (with RMS preconditioning).Code is based on:1. https://github.com/jamesvuc/jax-bayes/blob/master/examples/deep/mnist/mnist.ipynb2. https://github.com/jamesvuc/jax-bayes/blob/master/examples/deep/mnist/mnist_mcmc.ipynb Setup
###Code
%%capture
!pip install git+https://github.com/jamesvuc/jax-bayes
!pip install SGMCMCJax
!pip install distrax
import jax.numpy as jnp
from jax.experimental import optimizers
import jax
import jax_bayes
import sys, os, math, time
import numpy as np
from functools import partial
from matplotlib import pyplot as plt
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
import tensorflow_datasets as tfds
import sgmcmcjax
from jax import jit, vmap
from jax.random import split, PRNGKey
import distrax
from tqdm.auto import tqdm
import tensorflow_probability.substrates.jax.distributions as tfd
###Output
_____no_output_____
###Markdown
Data
###Code
def load_dataset(split, is_training, batch_size):
if batch_size == -1:
ds = tfds.load('mnist:3.*.*', split=split, batch_size=-1)
else:
ds = tfds.load('mnist:3.*.*', split=split).cache().repeat()
if is_training and batch_size > 0:
ds = ds.shuffle(10 * batch_size, seed=0)
if batch_size > 0:
ds = ds.batch(batch_size)
return iter(tfds.as_numpy(ds)) if batch_size > 0 else tfds.as_numpy(ds)
# load the data into memory and create batch iterators
train_batches = load_dataset("train", is_training=True, batch_size=1_000)
val_batches = load_dataset("train", is_training=False, batch_size=10_000)
test_batches = load_dataset("test", is_training=False, batch_size=10_000)
###Output
_____no_output_____
###Markdown
The Bayesian NN is taken from [SGMCMCJAX](https://github.com/jeremiecoullon/SGMCMCJax/blob/7da21c0c79606e908c2292533c176349d9349cd0/docs/nbs/models/bayesian_NN/NN_model.py), with a couple of changes:
1. The random_layer function initialises the weights from a truncated normal distribution rather than a normal distribution.
2. The random_layer function initialises the biases with zeros rather than sampling them from a normal distribution.
3. The activation function can be passed in as an argument instead of being fixed to the softmax function.
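As a quick illustrative sketch of the first two changes (a toy snippet added for illustration, mirroring the initialisation in `random_layer` below, not code from the SGMCMCJax source): the truncated-normal draw is clipped to ±2 standard deviations before scaling, and the bias vector starts at zero.
```python
# Illustration only: weight entries lie in roughly [-2 * scale, 2 * scale]; biases are exactly zero.
import jax
import jax.numpy as jnp

key = jax.random.PRNGKey(0)
scale = 1e-2
w = scale * jax.random.truncated_normal(key, -2, 2, (5, 3))
b = jnp.zeros((5,))
print(float(w.min()), float(w.max()), b)
```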
###Code
# ==========
# Functions to initialise parameters
# initialise params: list of tuples (W, b) for each layer
def random_layer(key, m, n, scale=1e-2):
key, subkey = jax.random.split(key)
return (scale * jax.random.truncated_normal(key, -2, 2, (n,m)), jnp.zeros((n, )))
def init_network(key, sizes):
keys = jax.random.split(key, len(sizes))
return [random_layer(k,m,n) for k,m,n in zip(keys, sizes[:-1], sizes[1:])]
# ===========
# predict and accuracy functions
@partial(jit, static_argnames=("activation_fn"))
def predict(params, x, activation_fn):
# per-example predictions
activations = x
for w, b in params[:-1]:
outputs = activations @ w.T + b
activations = activation_fn(outputs)
final_w, final_b = params[-1]
logits = activations @ final_w.T + final_b
return logits
# =================
# Log-posterior
@partial(jit, static_argnames=("activation_fn"))
def loglikelihood(params, X, y, activation_fn):
return jnp.sum(y*jax.nn.log_softmax(predict(params, X, activation_fn)))
def logprior(params):
logP = 0.0
dist = distrax.Normal(0, 1)
for w, b in params:
logP += jnp.sum(dist.log_prob(w))
logP += jnp.sum(dist.log_prob(b))
return logP
# Accuracy for a single sample
batch_predict = vmap(predict, in_axes=(None, 0, None))
@partial(jit, static_argnames=("activation_fn"))
def accuracy(params, batch, activation_fn):
X, target_class = batch["image"].reshape((-1, D)), batch["label"]
predicted_class = jnp.argmax(batch_predict(params, X, activation_fn), axis=1)
return jnp.mean(predicted_class == target_class)
batch = next(train_batches)
nclasses = 10
x = batch["image"]
D = np.prod(x.shape[1:]) # 784
sizes = [D, 300, 100, nclasses]
###Output
_____no_output_____
###Markdown
Model SGD
###Code
def loss(params, batch, activation_fn):
logits = predict(params, batch["image"].reshape((-1, D)), activation_fn)
labels = jax.nn.one_hot(batch['label'], nclasses)
l2_loss = 0.5 * sum(jnp.sum(jnp.square(p))
for p in jax.tree_leaves(params))
softmax_crossent = - jnp.mean(labels * jax.nn.log_softmax(logits))
return softmax_crossent + reg * l2_loss
@partial(jit, static_argnames=("activation_fn"))
def train_step(i, opt_state, batch, activation_fn):
params = opt_get_params(opt_state)
dx = jax.grad(loss)(params, batch, activation_fn)
opt_state = opt_update(i, dx, opt_state)
return opt_state
reg = 1e-3
lr = 1e-3
opt_init, opt_update, opt_get_params = optimizers.rmsprop(lr)
initial_params = init_network(PRNGKey(0), sizes)
opt_state = opt_init(initial_params)
activation_fn = jax.nn.relu
%%time
accuracy_list_train, accuracy_list_test = [], []
nsteps = 2000
print_every = 100
for step in tqdm(range(nsteps+1)):
opt_state = train_step(step, opt_state, next(train_batches), activation_fn)
params_sgd = opt_get_params(opt_state)
if step % print_every == 0:
# Periodically evaluate classification accuracy on train & test sets.
train_accuracy = accuracy(params_sgd, next(val_batches), activation_fn)
test_accuracy = accuracy(params_sgd, next(test_batches), activation_fn)
accuracy_list_train.append(train_accuracy)
accuracy_list_test.append(test_accuracy)
fig, axes = plt.subplots(nrows = 1, ncols=2, sharex=True, sharey=True, figsize=(20, 5))
for ls, ax in zip([accuracy_list_train, accuracy_list_test], axes.flatten()):
ax.plot(ls[:])
ax.set_title(f"Final accuracy: {100*ls[-1]:.1f}%")
###Output
_____no_output_____
###Markdown
SGLD
###Code
from sgmcmcjax.kernels import build_sgld_kernel
from sgmcmcjax.util import progress_bar_scan
lr = 5e-5
activation_fn = jax.nn.softmax
data = load_dataset("train", is_training=True, batch_size=-1)
data = (jnp.array(data["image"].reshape((-1, D)) /255.), jax.nn.one_hot(jnp.array(data["label"]), nclasses))
batch_size = int(0.01*len(data[0]))
init_fn, my_kernel, get_params = build_sgld_kernel(lr, partial(loglikelihood, activation_fn=activation_fn), logprior, data, batch_size)
my_kernel = jit(my_kernel)
# define the inital state
key = jax.random.PRNGKey(10)
key, subkey = jax.random.split(key,2)
params_IC = init_network(subkey, sizes)
%%time
# iterate the the Markov chain
nsteps = 2000
Nsamples = 10
@partial(jit, static_argnums=(1,))
def sampler(key, Nsamples, params):
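    # Starting from `params`, re-initialise the SGLD kernel state and take `Nsamples` steps
    # with jax.lax.scan; returns the sampled parameter trajectory and the final kernel state.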
def body(carry, i):
key, state = carry
key, subkey = jax.random.split(key)
state = my_kernel(i, subkey, state)
return (key, state), get_params(state)
key, subkey = jax.random.split(key)
state = init_fn(subkey, params)
(_, state), samples = jax.lax.scan(body, (key, state), jnp.arange(Nsamples))
return samples, state
accuracy_list_test, accuracy_list_val = [], []
params = params_IC
for step in tqdm(range(nsteps)):
key, sample_key = jax.random.split(key, 2)
samples, state = sampler(sample_key, Nsamples, params)
params = get_params(state)
if step % print_every == 0:
test_acc, val_acc = accuracy(params,next(test_batches), activation_fn), accuracy(params,next(val_batches), activation_fn)
accuracy_list_test.append(test_acc)
accuracy_list_val.append(val_acc)
fig, axes = plt.subplots(nrows = 1, ncols=2, sharex=True, sharey=True, figsize=(20, 5))
for ls, ax in zip([accuracy_list_test, accuracy_list_val], axes.flatten()):
ax.plot(ls[:])
ax.set_title(f"Final accuracy: {100*ls[-1]:.2f}%")
###Output
_____no_output_____
###Markdown
Uncertainty analysis We select the predictions above a confidence threshold, and compute the predictive accuracy on that subset. As we increase the threshold, the accuracy should increase, but fewer examples will be selected. The following two functions are taken from [JaxBayes](https://github.com/jamesvuc/jax-bayes/blob/master/jax_bayes/utils.py)
###Code
def certainty_acc(pp, targets, cert_threshold=0.5):
""" Calculates the accuracy-at-certainty from the predictive probabilites pp
on the targets.
Args:
pp: (batch_size, n_classes) array of probabilities
targets: (batch_size, n_calsses) array of label class indices
cert_threhsold: (float) minimum probability for making a prediction
Returns:
accuracy at certainty, indicies of those prediction instances for which
the model is certain.
"""
preds = jnp.argmax(pp, axis=1)
pred_probs = jnp.max(pp, axis=1)
certain_idxs = pred_probs >= cert_threshold
acc_at_certainty = jnp.mean(targets[certain_idxs] == preds[certain_idxs])
return acc_at_certainty, certain_idxs
@jit
@vmap
def entropy(p):
""" computes discrete Shannon entropy.
p: (n_classes,) array of probabilities corresponding to each class
"""
p += 1e-12 #tolerance to avoid nans while ensuring 0log(0) = 0
return - jnp.sum(p * jnp.log(p))
test_batch = next(test_batches)
def plot_acc_vs_confidence(predict_fn, test_batch):
# plot how accuracy changes as we increase the required level of certainty
preds = predict_fn(test_batch) #(batch_size, n_classes) array of probabilities
acc, mask = certainty_acc(preds, test_batch['label'], cert_threshold=0)
thresholds = [0.1 * i for i in range(11)]
cert_accs, pct_certs = [], []
for t in thresholds:
cert_acc, cert_mask = certainty_acc(preds, test_batch['label'], cert_threshold=t)
cert_accs.append(cert_acc)
pct_certs.append(cert_mask.mean())
fig, ax = plt.subplots(1)
line1 = ax.plot(thresholds, cert_accs, label='accuracy at certainty', marker='x')
line2 = ax.axhline(y=acc, label='regular accuracy', color='black')
ax.set_ylabel('accuracy')
ax.set_xlabel('certainty threshold')
axb = ax.twinx()
line3 = axb.plot(thresholds, pct_certs, label='pct of certain preds',
color='green', marker='x')
axb.set_ylabel('pct certain')
lines = line1 + [line2] + line3
labels = [l.get_label() for l in lines]
ax.legend(lines, labels, loc=6)
return fig, ax
###Output
_____no_output_____
###Markdown
SGDFor the plugin estimate, the model is very confident on nearly all of the points.
###Code
# plugin approximation to posterior predictive
@partial(jit, static_argnames=("activation_fn"))
def posterior_predictive_plugin(params, X, activation_fn):
logit_pp = predict(params, X, activation_fn)
return jax.nn.softmax(logit_pp, axis=-1)
def pred_fn_sgd(batch):
X= batch["image"].reshape((-1, D))
return posterior_predictive_plugin(params_sgd, X, jax.nn.relu)
fig, ax = plot_acc_vs_confidence(pred_fn_sgd, test_batch)
plt.savefig('acc-vs-conf-sgd.pdf')
plt.show()
###Output
_____no_output_____
###Markdown
SGLD
###Code
def posterior_predictive_bayes(params_sampled, batch, activation_fn):
"""computes the posterior_predictive P(class = c | inputs, params) using a histogram
"""
X= batch["image"].reshape((-1, D))
y= batch["label"]
pred_fn = lambda p: predict(p, X, activation_fn)
pred_fn = jax.vmap(pred_fn)
logit_samples = pred_fn(params_sampled) # n_samples x batch_size x n_classes
pred_samples = jnp.argmax(logit_samples, axis=-1) #n_samples x batch_size
n_classes = logit_samples.shape[-1]
batch_size = logit_samples.shape[1]
probs = np.zeros((batch_size, n_classes))
for c in range(n_classes):
idxs = pred_samples == c
probs[:,c] = idxs.sum(axis=0)
return probs / probs.sum(axis=1, keepdims=True)
def pred_fn_sgld(batch):
return posterior_predictive_bayes(samples, batch, jax.nn.softmax)
fig, ax = plot_acc_vs_confidence(pred_fn_sgld, test_batch)
plt.savefig('acc-vs-conf-sgld.pdf')
plt.show()
###Output
_____no_output_____
###Markdown
Distribution shiftWe now examine the behavior of the models on the Fashion MNIST dataset.We expect the predictions to be much less confident, since the inputs are now 'out of distribution'. We will see that this is true for the Bayesian approach, but not for the plugin approximation.
###Code
fashion_ds = tfds.load('fashion_mnist:3.*.*', split="test").cache().repeat()
fashion_test_batches = tfds.as_numpy(fashion_ds.batch(10_000))
fashion_test_batches = iter(fashion_test_batches)
fashion_batch = next(fashion_test_batches)
###Output
_____no_output_____
###Markdown
SGD
###Code
fig, ax = plot_acc_vs_confidence(pred_fn_sgd, fashion_batch)
plt.savefig('acc-vs-conf-sgd-fashion.pdf')
plt.show()
###Output
_____no_output_____
###Markdown
SGLD
###Code
fig, ax = plot_acc_vs_confidence(pred_fn_sgld, fashion_batch)
plt.savefig('acc-vs-conf-sgld-fashion.pdf')
plt.show()
###Output
_____no_output_____ |
020 Neuronale Netze.ipynb | ###Markdown
Neural Networks: Neurons, Artificial Neurons, Activation Functions
###Code
import torch
import torch.nn as nn
import numpy as np
import matplotlib.pyplot as plt
act_x = torch.tensor(np.linspace(-6, 6, 100))
plt.figure(figsize=(16, 12))
plt.subplot(3, 2, 1)
plt.plot(act_x, nn.Sigmoid()(act_x))
plt.subplot(3, 2, 2)
plt.plot(act_x, nn.Tanh()(act_x))
plt.subplot(3, 2, 3)
plt.plot(act_x, nn.ReLU()(act_x))
plt.subplot(3, 2, 4)
plt.plot(act_x, - nn.ReLU()(act_x + 2))
plt.subplot(3, 2, 5)
plt.plot(act_x, nn.ReLU()(act_x) - nn.ReLU()(act_x + 2))
plt.subplot(3, 2, 6)
plt.plot(act_x, nn.Tanh()(act_x) - 1.5 * nn.Tanh()(act_x - 2))
import torch
import torch.nn as nn
neuron = lambda x: nn.Tanh()(nn.Linear(4, 1)(x))
neuron(torch.tensor([1.0, 2.0, 3.0, 4.0]))
neuron = nn.Sequential(
nn.Linear(4, 1),
nn.Tanh()
)
neuron(torch.tensor([1.0, 2.0, 3.0, 4.0]))
###Output
_____no_output_____
###Markdown
Neural Networks
###Code
seq_model = nn.Sequential(
nn.Linear(2, 4),
nn.ReLU(),
nn.Linear(4, 3),
nn.ReLU(),
nn.Linear(3, 2)
)
seq_model(torch.tensor([1.0, 2.0]))
###Output
_____no_output_____
###Markdown
Reminder: Training. Training Neural Networks. How do we update the parameters? MNIST
###Code
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
input_size = 28 * 28
num_classes = 10
num_epochs = 5
batch_size = 100
learning_rate = 0.005
mnist_transforms = transforms.Compose([
transforms.Resize((28, 28)),
transforms.ToTensor()
])
train_dataset = torchvision.datasets.MNIST(root='./data',
train=True,
transform=mnist_transforms,
download=True)
test_dataset = torchvision.datasets.MNIST(root='./data',
train=False,
transform=mnist_transforms,
download=True)
it = iter(train_dataset)
next(it)[0].shape, next(it)[1]
next(it)[0].shape, next(it)[1]
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=False)
def create_model(hidden_size):
model = nn.Sequential(
nn.Linear(input_size, hidden_size),
nn.ReLU(),
nn.Linear(hidden_size, num_classes)
)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
return model, optimizer
loss_fn = nn.CrossEntropyLoss()
m = torch.randn(2, 3, 2, 5)
m.reshape(-1, 30).shape
def training_loop(n_epochs, optimizer, model, loss_fn, device, train_loader, val_loader):
model.to(device)  # make sure the model parameters live on the same device as the batches
all_losses = []
for epoch in range(1, n_epochs + 1):
accumulated_loss = 0
for i, (images, labels) in enumerate(train_loader):
images = images.reshape(-1, input_size).to(device)
labels = labels.to(device)
output = model(images)
loss = loss_fn(output, labels)
with torch.no_grad():
accumulated_loss += loss
all_losses.append(loss)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i + 1) % 100 == 0:
print(f"Epoch {epoch:3}/{n_epochs:3}, step {i + 1}: "
f"training loss = {accumulated_loss.item():8.3f}")
accumulated_loss = 0
return all_losses
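# To make the "how do we update the parameters?" question above concrete: for plain SGD,
# optimizer.step() essentially performs the update below for every parameter. This is an
# illustrative sketch only; the Adam optimizer used in this notebook additionally keeps
# running moment estimates and rescales the step per parameter.
def manual_sgd_step(model, lr):
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                p -= lr * p.grad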
def run_model(hidden_size, num_epochs=num_epochs):
model, optimizer = create_model(hidden_size)
losses = training_loop(
n_epochs=num_epochs,
optimizer=optimizer,
model=model,
loss_fn=loss_fn,
device=torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu'),
train_loader=train_loader,
val_loader=test_loader
)
return losses
losses = run_model(128, num_epochs=5)
from matplotlib import pyplot
pyplot.figure(figsize=(16, 5))
pyplot.plot(range(len(losses)), losses);
run_model(32)
from matplotlib import pyplot
pyplot.figure(figsize=(16, 5))
pyplot.plot(range(len(losses)), losses);
run_model(512, num_epochs=10)
from matplotlib import pyplot
pyplot.figure(figsize=(16, 5))
pyplot.plot(range(len(losses)), losses);
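# The later slides ask how good the model really is, but the loop above only tracks the
# training loss. A minimal evaluation sketch (an addition to the original notebook) that
# computes plain accuracy on the held-out test set for the fully connected models above:
def evaluate_accuracy(model, loader, device=torch.device('cpu')):
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in loader:
            images = images.reshape(-1, input_size).to(device)
            labels = labels.to(device)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    return correct / total
# Usage sketch: model, optimizer = create_model(128); train it with training_loop(...);
# then evaluate_accuracy(model, test_loader) returns a fraction between 0 and 1.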
###Output
Epoch 1/ 10, step 100: training loss = 42.237
Epoch 1/ 10, step 200: training loss = 19.806
Epoch 1/ 10, step 300: training loss = 15.659
Epoch 1/ 10, step 400: training loss = 14.877
Epoch 1/ 10, step 500: training loss = 13.241
Epoch 1/ 10, step 600: training loss = 12.211
Epoch 2/ 10, step 100: training loss = 8.249
Epoch 2/ 10, step 200: training loss = 8.177
Epoch 2/ 10, step 300: training loss = 9.035
Epoch 2/ 10, step 400: training loss = 8.776
Epoch 2/ 10, step 500: training loss = 8.792
Epoch 2/ 10, step 600: training loss = 8.950
Epoch 3/ 10, step 100: training loss = 6.073
Epoch 3/ 10, step 200: training loss = 5.804
Epoch 3/ 10, step 300: training loss = 6.615
Epoch 3/ 10, step 400: training loss = 6.450
Epoch 3/ 10, step 500: training loss = 6.391
Epoch 3/ 10, step 600: training loss = 6.566
Epoch 4/ 10, step 100: training loss = 4.324
Epoch 4/ 10, step 200: training loss = 4.841
Epoch 4/ 10, step 300: training loss = 4.418
Epoch 4/ 10, step 400: training loss = 5.358
Epoch 4/ 10, step 500: training loss = 5.300
Epoch 4/ 10, step 600: training loss = 5.713
Epoch 5/ 10, step 100: training loss = 4.644
Epoch 5/ 10, step 200: training loss = 4.748
Epoch 5/ 10, step 300: training loss = 4.143
Epoch 5/ 10, step 400: training loss = 3.296
Epoch 5/ 10, step 500: training loss = 4.797
Epoch 5/ 10, step 600: training loss = 4.573
Epoch 6/ 10, step 100: training loss = 3.675
Epoch 6/ 10, step 200: training loss = 3.511
Epoch 6/ 10, step 300: training loss = 4.159
Epoch 6/ 10, step 400: training loss = 5.091
Epoch 6/ 10, step 500: training loss = 4.039
Epoch 6/ 10, step 600: training loss = 4.672
Epoch 7/ 10, step 100: training loss = 3.545
Epoch 7/ 10, step 200: training loss = 3.363
Epoch 7/ 10, step 300: training loss = 3.600
Epoch 7/ 10, step 400: training loss = 3.299
Epoch 7/ 10, step 500: training loss = 2.549
Epoch 7/ 10, step 600: training loss = 3.242
Epoch 8/ 10, step 100: training loss = 2.706
Epoch 8/ 10, step 200: training loss = 2.870
Epoch 8/ 10, step 300: training loss = 3.798
Epoch 8/ 10, step 400: training loss = 3.322
Epoch 8/ 10, step 500: training loss = 3.273
Epoch 8/ 10, step 600: training loss = 3.210
Epoch 9/ 10, step 100: training loss = 2.431
Epoch 9/ 10, step 200: training loss = 1.921
Epoch 9/ 10, step 300: training loss = 2.705
Epoch 9/ 10, step 400: training loss = 3.024
Epoch 9/ 10, step 500: training loss = 3.426
Epoch 9/ 10, step 600: training loss = 2.790
Epoch 10/ 10, step 100: training loss = 1.923
Epoch 10/ 10, step 200: training loss = 3.124
Epoch 10/ 10, step 300: training loss = 3.156
Epoch 10/ 10, step 400: training loss = 2.681
Epoch 10/ 10, step 500: training loss = 3.041
Epoch 10/ 10, step 600: training loss = 2.968
###Markdown
Models. For neural networks, what can be represented depends on the number of layers, the number of neurons per layer, and the complexity of the connections between the neurons. What can be learned (in theory)? A difficult question, but irrelevant. What can be learned in practice? A great deal, given enough time and data. What can be learned efficiently? A great deal, if you go about it cleverly (and have a problem that many other people are working on). Bias/Variance Tradeoff: models with low expressivity (representational power) can be trained quickly, work with little training data, and are robust against errors in the training data. We are not interested in reproducing our data as exactly as possible; what matters is how well our model generalizes to unseen data. Generalization and noise. Complexity of the decision boundary. Data distribution and quality. Reminder: the training loop. What does a classifier learn? How good are we? How do we know how good our model really is? What can go wrong? Accuracy: how much did we get right? Precision: how good are our positive predictions? Recall: how many positive elements did we miss? Better network architecture. Example: Conv Net
###Code
def create_conv_model():
model = nn.Sequential(
nn.Conv2d(1, 32, 3, 1),
nn.ReLU(),
nn.Conv2d(32, 64, 3, 1),
nn.MaxPool2d(2),
nn.Dropout2d(0.25),
nn.Flatten(1),
nn.Linear(9216, 128),
nn.ReLU(),
nn.Dropout2d(0.5),
nn.Linear(128, 10)
)
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
return model, optimizer
def training_loop(n_epochs, optimizer, model, loss_fn, device, train_loader, val_loader):
model.to(device)  # make sure the model parameters live on the same device as the batches
all_losses = []
for epoch in range(1, n_epochs + 1):
accumulated_loss = 0
for i, (images, labels) in enumerate(train_loader):
images = images.to(device)
labels = labels.to(device)
output = model(images)
loss = loss_fn(output, labels)
with torch.no_grad():
accumulated_loss += loss
all_losses.append(loss)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if (i + 1) % 100 == 0:
print(f"Epoch {epoch:3}/{n_epochs:3}, step {i + 1}: "
f"training loss = {accumulated_loss.item():8.3f}")
accumulated_loss = 0
return all_losses
def run_conv_model(num_epochs=num_epochs):
model, optimizer = create_conv_model()
losses = training_loop(
n_epochs=num_epochs,
optimizer=optimizer,
model=model,
loss_fn=loss_fn,
device=torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu'),
train_loader=train_loader,
val_loader=test_loader
)
run_conv_model(10)
###Output
_____no_output_____ |
Section 6/UnsupervisedLearning.ipynb | ###Markdown
Diving Into Clustering and Unsupervised Learning. *Curtis Miller*. In this notebook I give some functions for computing distances between points. This is to introduce the idea of different distance metrics, an important idea in data science and clustering. Many of these metrics are already supported in relevant packages, but you are welcome to look at functions defining them to understand how they work. Euclidean Distance: this is the "straight line" distance people are most familiar with.
###Code
import numpy as np
def euclidean_distance(v1, v2):
"""Computes the Euclidean distance between two vectors"""
return np.sqrt(np.sum((v1 - v2) ** 2))
vec1 = np.array([1, 2, 3])
vec2 = np.array([1, -1, 0])
euclidean_distance(vec1, vec2)
###Output
_____no_output_____
###Markdown
Manhattan Distance: also commonly known as "taxicab distance", this is the distance between two points when "diagonal" movement is not allowed.
###Code
def manhattan_distance(v1, v2):
"""Computes the Manhattan distance between two vectors"""
return np.sum(np.abs(v1 - v2))
manhattan_distance(vec1, vec2)
###Output
_____no_output_____
###Markdown
Angular Distance: this is the size of the angle between the two vectors.
###Code
from numpy.linalg import norm
def angular_distance(v1, v2):
"""Computes the angular distance between two vectors"""
sim = v1.dot(v2)/(norm(v1) * norm(v2))
return np.arccos(sim)/np.pi
angular_distance(vec1, vec2)
angular_distance(vec1, vec1) # Two identical vectors have an angular distance of 0
angular_distance(vec1, 2 * vec1) # It's insensitive to magnitude (technically it's not a metric as defined by
# mathematicians because of this, except on a unit circle)
###Output
_____no_output_____
###Markdown
Hamming Distance: intended for strings (bitstring or otherwise), the Hamming distance between two strings is the number of symbols that need to change in one string to make it identical to the other. (The following code was shamelessly stolen from [Wikipedia](https://en.wikipedia.org/wiki/Hamming_distance).)
###Code
def hamming_distance(s1, s2):
"""Return the Hamming distance between equal-length sequences"""
if len(s1) != len(s2):
raise ValueError("Undefined for sequences of unequal length")
return sum(el1 != el2 for el1, el2 in zip(s1, s2))
hamming_distance("11101", "11011")
###Output
_____no_output_____
###Markdown
Jaccard Distance: the Jaccard distance, defined for two sets, is the number of elements that the two sets don't have in common divided by the total number of elements the two sets combined have (removing duplicates).
###Code
def jaccard_distance(s1, s2):
"""Computes the Jaccard distance between two sets"""
s1, s2 = set(s1), set(s2)
diff = len(s1.union(s2)) - len(s1.intersection(s2))
return diff / len(s1.union(s2))
jaccard_distance(["cow", "pig", "horse"], ["cow", "donkey", "chicken"])
jaccard_distance("11101", "11011") # Sets formed from the contents of these strings are identical
###Output
_____no_output_____ |
classification/fake_news_detector_using_ML.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
!pip install kaggle
from google.colab import files
files.upload()
!mkdir -p ~/.kaggle
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
!kaggle competitions download -c fake-news
!ls
!unzip \*.zip
rm *.zip
!ls -d $PWD/*
train_df=pd.read_csv('/content/train.csv')
test_df=pd.read_csv('/content/test.csv')
test_label=pd.read_csv('/content/submit.csv')
len(train_df)
train_df.head()
train_df = train_df[['text', 'label']]
train_df.head(3)
train_df.isna().sum()
train_df.dropna(inplace=True)
train_df.isna().sum()
train_data = train_df['text']
train_label = train_df['label']
test_df.head(3)
test_label.head(3)
len(test_df), len(test_label)
test_df['text'].isna().sum()
test_df['label'] = test_label['label']
test_df.head(3)
new_test_df = test_df[['text', 'label']]
new_test_df.head(3)
new_test_df.dropna(inplace=True)
new_test_df.isna().sum()
len(new_test_df)
test_data = new_test_df['text']
test_label =new_test_df['label']
len(train_data), len(train_label), len(test_data), len(test_label)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
def find_best_model(x, y):
pipe_1 = Pipeline([
('vectorizer', CountVectorizer()),
('svc', SVC(gamma='auto', probability= True))
])
pipe_2 = Pipeline([
('vectorizer', CountVectorizer()),
('rf', RandomForestClassifier())
])
pipe_3 = Pipeline([
('vectorizer', CountVectorizer()),
('nb', MultinomialNB())
])
config = {
'support vector machine' : {
'model' : pipe_1,
'params': {
'svc__C': [1, 10, 100, 1000],
'svc__kernel': ['rbf', 'linear']
}
},
'random forest classifier' : {
'model' : pipe_2,
'params': {
'rf__criterion' : ['gini', 'entropy'],
'rf__n_estimators': [1,5,10],
'rf__warm_start' : [True, False]
}
},
'multinomial nb' : {
'model' : pipe_3,
'params': {
}
},
}
scores = []
best_estimator = {}
for model_name, model_params in config.items():
clf = GridSearchCV(model_params['model'], model_params['params'], cv = 5, return_train_score= False)
clf.fit(x,y)
scores.append({
'model' : model_name,
'best_score' : clf.best_score_,
'best_params' : clf.best_params_
})
best_estimator[model_name] = clf.best_estimator_
return best_estimator, pd.DataFrame(scores)
best_estimator, scores_df = find_best_model(train_data, train_label)
scores_df
# pick the estimator with the highest cross-validated score
best_model_name = scores_df.sort_values('best_score', ascending=False).iloc[0]['model']
best_model = best_estimator[best_model_name]
best_model
best_model.score(test_data, test_label)
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(test_label, best_model.predict(test_data))
import seaborn as sn
import matplotlib.pyplot as plt
plt.figure(figsize =(12,5))
sn.heatmap(cm, annot=True)
plt.ylabel('True')
plt.xlabel('predicted')
data = pd.read_csv('small_news_test.csv')
data.head(3)
data = data[['title', 'text', 'label']]
data.head(3)
data.isna().sum()
len(data)
data['label_n'] = data['label'].apply(lambda x: 1 if x == 'REAL' else 0)
data.head(5)
x = data['text']
y = data['label_n']
x.shape, y.shape
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x,y, test_size = 0.2, random_state = 0)
best_estimator, scores_df2 = find_best_model(x_train, y_train)
scores_df2
best_model = best_estimator[scores_df2.sort_values('best_score', ascending=False).iloc[0]['model']]
best_model
best_model.score(x_test, y_test)
cm = confusion_matrix(y_test, best_model.predict(x_test))
import seaborn as sn
plt.figure(figsize =(12,5))
sn.heatmap(cm, annot=True)
plt.ylabel('True')
plt.xlabel('predicted')
###Output
_____no_output_____ |
docs/python/sklearn/Label-Encoding.ipynb | ###Markdown
---
title: "Label Encoding"
author: "Sanjay"
date: 2020-09-04
description: "-"
type: technical_note
draft: false
---
###Code
# Import pandas library
import pandas as pd
# Initialize list of dicts
data = [{'Item': "Onion", 'Price': 85},
{'Item': "Tomato", 'Price': 80},
{'Item': "Egg", 'Price': 5},
{'Item': "Carrot", 'Price': 35},
{'Item': "Cabbage", 'Price': 30},]
# Print list of dicts
print(data)
# Create the pandas DataFrame
data = pd.DataFrame(data)
# Print dataframe
print(data)
# Importing Label Encoder from Sklearn
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
# Applying Label encoding for item column and printing it out
data['Item']= le.fit_transform(data['Item'])
print(data['Item'])
# Printing label encoded dataframe
print(data)
# Decoding the label encoded values
data['Item'] = le.inverse_transform(data['Item'])
print(data['Item'])
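# A small addition for reference: the fitted encoder stores the learned mapping, which
# can be inspected directly (classes_ is sorted alphabetically).
print(le.classes_)
print(dict(zip(le.classes_, le.transform(le.classes_))))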
###Output
0 Onion
1 Tomato
2 Egg
3 Carrot
4 Cabbage
Name: Item, dtype: object
|
Dishify/notebooks/ingredient_populater/4_allrecipes_data_to_csv_and_db_queries.ipynb | ###Markdown
Part 4 of creating the auto-populate feature: save it all to put in a database. Load and Prep the Data
###Code
import csv
import pandas as pd
df = pd.read_csv('allrecipes_recipes_combined.csv')
df.head()
###Output
_____no_output_____
###Markdown
Some of the recipe names contain a '®' character; remove it from the name text.
###Code
df['name'] = df['name'].str.replace('®', '')
###Output
_____no_output_____
###Markdown
The ingredients data has been saved as a string. Convert it back to a list.
###Code
import ast
def string_to_list(x):
return ast.literal_eval(x)
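# Quick illustration with a made-up value (not taken from the dataset): the CSV stores each
# ingredient list as its string repr, and literal_eval turns it back into a Python list.
string_to_list("['1 cup flour', '2 eggs']")  # -> ['1 cup flour', '2 eggs']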
df['ingredients'] = df['ingredients'].apply(string_to_list)
df.head()
###Output
_____no_output_____
###Markdown
Need to change vulgar fractions (single character fractions) into long-form strings. ('½' to '1/2')
###Code
# Dictionary to map unicode fractions to expanded strings.
# These are all of the vulgar fraction options. (Aside from one with a zero numerator.)
fraction_dict = {'½': '1/2',
'⅓': '1/3',
'⅔': '2/3',
'¼': '1/4',
'¾': '3/4',
'⅕': '1/5',
'⅖': '2/5',
'⅗': '3/5',
'⅘': '4/5',
'⅙': '1/6',
'⅚': '5/6',
'⅐': '1/7',
'⅛': '1/8',
'⅜': '3/8',
'⅝': '5/8',
'⅞': '7/8',
'⅑': '1/9',
'⅒': '1/10'}
def fraction_mapper(x):
for key in fraction_dict:
for i in range(len(x)):
if key in x[i]:
x[i] = x[i].replace(key, fraction_dict[key])
return(x)
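# Illustration on a made-up ingredient line (not from the dataset):
fraction_mapper(['½ cup butter'])  # -> ['1/2 cup butter']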
df['ingredients'] = df['ingredients'].apply(fraction_mapper)
df.head()
df['ingredients'][0]
###Output
_____no_output_____
###Markdown
Remove ingredients that only appear once
###Code
from collections import Counter
ingredient_counter = Counter()
# Count each instance of each ingredient
for i in range(len(df)):
for j in range(len(df['ingredients'][i])):
ingredient = df['ingredients'][i][j]
ingredient_counter.update({ingredient: 1})
# Get the ingredients that only appear once
single_ing= []
for ing, num in ingredient_counter.items():
if num == 1:
single_ing.append(ing)
# Number of ingredients that only appear once in the 70k recipes.
# These are likely incredibly specific entries.
len(single_ing)
import datetime
# Get rid of the single-time ingredients counting backwards in each list
# so as to not go out of index range after removing one
for i in range(len(df)):
for j in range(len(df['ingredients'][i])-1, -1, -1):
if df['ingredients'][i][j] in single_ing:
ingredient = df['ingredients'][i][j]
# Remove from the ingredients
df['ingredients'][i].remove(ingredient)
# Remove from list to not slow down loop
single_ing.remove(ingredient)
if i % 2000 == 0:
print(i, datetime.datetime.now())
###Output
0 2020-05-15 13:51:19.720658
2000 2020-05-15 13:52:16.488010
4000 2020-05-15 13:53:15.251607
6000 2020-05-15 13:54:09.084974
8000 2020-05-15 13:55:02.154044
10000 2020-05-15 13:55:52.996995
12000 2020-05-15 13:56:42.864223
14000 2020-05-15 13:57:31.489191
16000 2020-05-15 13:58:21.119231
18000 2020-05-15 13:59:04.871275
20000 2020-05-15 13:59:46.476517
22000 2020-05-15 14:00:26.478474
24000 2020-05-15 14:01:01.240330
26000 2020-05-15 14:01:29.043022
28000 2020-05-15 14:01:53.159571
30000 2020-05-15 14:02:17.789336
32000 2020-05-15 14:02:43.058782
34000 2020-05-15 14:03:09.679638
36000 2020-05-15 14:03:33.603553
38000 2020-05-15 14:03:50.559672
40000 2020-05-15 14:04:06.437071
42000 2020-05-15 14:04:18.752220
44000 2020-05-15 14:04:29.583879
46000 2020-05-15 14:04:39.332232
48000 2020-05-15 14:04:48.053117
50000 2020-05-15 14:04:56.049660
52000 2020-05-15 14:05:05.211611
54000 2020-05-15 14:05:10.672331
56000 2020-05-15 14:05:15.278275
58000 2020-05-15 14:05:19.918911
60000 2020-05-15 14:05:23.544420
62000 2020-05-15 14:05:26.483451
64000 2020-05-15 14:05:28.842391
66000 2020-05-15 14:05:30.723619
68000 2020-05-15 14:05:32.119198
70000 2020-05-15 14:05:33.044977
###Markdown
Some recipes have an unwieldy number of ingredients. I'm limiting recipes to 30 ingredients.
###Code
ingredients_len = []
for i in range(len(df)):
ingredients_len.append(len(df['ingredients'][i]))
max(ingredients_len)
ingredients_len.index(56)
indices = [i for i, x in enumerate(ingredients_len) if x > 30]
indices
for i in indices:
print(df.iloc[i])
print('='*30)
for i in indices:
print(df['ingredients'][i])
indices[::-1]
for i in indices[::-1]:
print(i)
for i in indices[::-1]:
df = df.drop(i, axis=0)
df = df.reset_index(drop=True)
ingredients_len = []
for i in range(len(df)):
ingredients_len.append(len(df['ingredients'][i]))
max(ingredients_len)
indices = [i for i, x in enumerate(ingredients_len) if x > 30]
indices
###Output
_____no_output_____
###Markdown
Put the ingredients into a dictionary that contains values of measurement quantity, measurement unit, and ingredient ('1/4', 'cup', 'butter, softened').
###Code
# These are measurement units from another notebook.
measurement_units = [
'packages', 'package', 'slices', 'sliced', 'slice',
'bags', 'bag', 'bars', 'bar', 'bottles', 'bottle', 'boxes', 'box', 'bulbs', 'bulb', 'bunches', 'bunch',
'cans', 'can', 'containers', 'container', 'cubes', 'cube', 'cups', 'cup',
'dashes', 'dash', 'drops', 'drop',
'envelopes', 'envelope',
'fillets', 'fillet',
'gallons', 'gallon', 'granules', 'granule',
'halves', 'half', 'heads', 'head',
'jars', 'jar',
'layers', 'layer', 'leaf', 'leaves', 'legs', 'leg', 'links', 'link', 'loaf', 'loaves',
'ounces', 'ounce',
'packets', 'packet', 'pieces', 'piece', 'pinches', 'pinch', 'pints', 'pint', 'pounds', 'pound',
'quarts', 'quart',
'sprigs', 'sprig', 'squares', 'square', 'stalks', 'stalk', 'strips', 'strip',
'tablespoons', 'tablespoon','teaspoons', 'teaspoon', 'thighs', 'thigh', 'trays', 'tray']
import re
def ingred_dict(x):
'''
This function is meant to take in a list of ingredients for a recipe.
It then parses out the ingredients and saves the quantity of an ingredient,
the unit of measurement for that ingredient, and the name of the ingredient.
This information is then saved in a dictionary and returned.
'''
my_dict = {} # Dictionary for the current recipe
pattern = re.compile(r'^[\d/\s]+') # Include white space to catch compound fractions
for i in range(len(x)):
matches = pattern.finditer(x[i])
ingredient_test = x[i]
for match in matches:
quantity = match.group(0).strip() # Quantity of measurement set
ingredient_test = ingredient_test.strip(quantity) # Save everything after removing quantity
check = 0
breaker = False
pattern_2 = re.compile(r'^[(\d\s]+') # Check for any numbers in parenthesis
matches_2 = pattern_2.finditer(ingredient_test)
for unit in measurement_units:
if matches_2: # If there's a match for a number in parenthesis
matches_2 = False # Don't check this conditional again
continue # Skip this unit of measurement
elif unit in ingredient_test:
ingredient = ingredient_test.split(unit)[1].strip() # Ingredient set
units = (ingredient_test.split(unit)[0] + unit).strip() # Unit set (including any parenthesis before)
check = 1 # Set check to 1 so the last conditional doesn't execute
breaker = True
if breaker == True:
break
if check == 0: # If no unit measurement is found (like the ingredient is "1 egg")
ingredient = ingredient_test.strip()
units = None
ingred_num = f'ingredient{i+1}'
# Save ingredient information as a list
my_dict[ingred_num] = [quantity, units, ingredient]
return my_dict
df['ingredient_dict'] = df['ingredients'].apply(ingred_dict)
df.head()
df['ingredient_dict'][0]
len(df['ingredient_dict'][0])
ing = 'ingredient1'
df['ingredient_dict'][0][ing]
for j in range(len(df['ingredient_dict'][0])):
print(df['ingredient_dict'][0][f'ingredient{j+1}'])
###Output
['21', None, 'chocolate sandwich cookies, crushed']
['1/4', 'cup', 'butter, softened']
['1', 'cup', 'heavy cream']
['1', '(12 ounce) package', 'semisweet chocolate chips']
['1', 'teaspoon', 'vanilla extract']
['1', 'pinch', 'salt']
['2', 'cups', 'heavy cream']
['1/4', 'cup', 'white sugar']
['1', 'cup', 'heavy cream, chilled']
['1/4', 'cup', 'white sugar']
###Markdown
Prepare data to save to CSV for database
###Code
col_list = ['name']
for i in range(max(ingredients_len)):
col_list.append(f'ingredient{i+1}')
df_csv = pd.DataFrame(columns=col_list)
df_csv.head()
for i in range(len(df)):
new_dict = {'name': df['name'][i]}
for j in range(len(df['ingredient_dict'][i])):
try:
new_dict[f'ingredient{j+1}'] = df['ingredient_dict'][i][f'ingredient{j+1}']
except:
continue
df_csv = df_csv.append(new_dict, ignore_index=True)
if i % 2000 == 0:
print(i, datetime.datetime.now())
df_csv.head()
type(df_csv['name'][0])
###Output
_____no_output_____
###Markdown
Save to CSV
###Code
df_csv.to_csv('recipes_table_v2.csv', index=False, na_rep='')
df_csv.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 70881 entries, 0 to 70880
Data columns (total 31 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 name 70881 non-null object
1 ingredient1 67286 non-null object
2 ingredient2 67835 non-null object
3 ingredient3 65070 non-null object
4 ingredient4 60441 non-null object
5 ingredient5 53979 non-null object
6 ingredient6 46660 non-null object
7 ingredient7 39072 non-null object
8 ingredient8 31828 non-null object
9 ingredient9 24985 non-null object
10 ingredient10 18892 non-null object
11 ingredient11 13849 non-null object
12 ingredient12 9875 non-null object
13 ingredient13 6884 non-null object
14 ingredient14 4750 non-null object
15 ingredient15 3180 non-null object
16 ingredient16 2103 non-null object
17 ingredient17 1354 non-null object
18 ingredient18 857 non-null object
19 ingredient19 524 non-null object
20 ingredient20 343 non-null object
21 ingredient21 215 non-null object
22 ingredient22 137 non-null object
23 ingredient23 74 non-null object
24 ingredient24 50 non-null object
25 ingredient25 27 non-null object
26 ingredient26 17 non-null object
27 ingredient27 14 non-null object
28 ingredient28 5 non-null object
29 ingredient29 3 non-null object
30 ingredient30 1 non-null object
dtypes: object(31)
memory usage: 16.8+ MB
###Markdown
Now I need a new way of retrieving the appropriate ingredients
###Code
df_csv.iloc[0]
pd.notnull(df_csv.iloc[0])
pd.notnull(df_csv.iloc[0])[1]
df_csv.columns[1]
df_csv.iloc[0][1]
# I can make a dictionary for the ingredients, but then what?
ingredient_dict = {}
for i in range(1, len(df_csv.iloc[0])):
if pd.notnull(df_csv.iloc[0])[i]:
ingredient_dict[df_csv.columns[i]] = df_csv.iloc[0][i]
'''
Instead of creating a dictionary here to store the ingredients for each returned recipe
the ingredient counting could happen which would free up some processing and move more
quickly through the whole loop.
'''
ingredient_dict
# Make a df to store results
results_df = pd.DataFrame(columns=['ingredients'])
results_df
# Add the ingredient_dict to this new df
results_df = results_df.append({'ingredients' :ingredient_dict}, ignore_index=True)
results_df
results_df['ingredients'][0]
###Output
_____no_output_____
###Markdown
This appears to work, and much of the code below will be usable with minor modifications. It's only a matter of working out the query for the words entered and matching those with recipe names in the database.
###Code
##################################################################################################################################
# To use for this error: InFailedSqlTransaction: current transaction is aborted, commands ignored until end of transaction block #
##################################################################################################################################
# cursor = conn.cursor()
# cursor.execute("""rollback;
# """)
# cursor.close()
%%capture
pip install psycopg2
# Connect to database.
import os
import psycopg2
conn = psycopg2.connect(database ='postgres', user = 'postgres', password = 'tz6MTgxObUZ62MNv0xgp', host = 'mydishdb-dev.c3und8sjo4p2.us-east-2.rds.amazonaws.com', port = '5432')
# String comes in from frontend. Split the string into words.
string = 'chicken noodle soup'
split_words = string.split()
cursor = conn.cursor()
command = """SELECT name
FROM recipes
;
"""
cursor.execute(command)
name_table = cursor.fetchall()
cursor.close()
name_table[0]
cursor = conn.cursor()
command = """SELECT index, name
FROM recipes
;
"""
cursor.execute(command)
test_table = cursor.fetchall()
cursor.close()
test_table[0]
cursor = conn.cursor()
command = """SELECT index, name
FROM recipes
WHERE index in (0, 2, 8)
;
"""
cursor.execute(command)
test = cursor.fetchall()
cursor.close()
test
test[0][1]
type(test[0][1])
# Then query the recipe database to get recipe names that have matching words.
cursor = conn.cursor()
command = """SELECT name
FROM recipes
WHERE name ILIKE '%chicken%' AND
name ILIKE '%noodle%' AND
name ILIKE '%soup%'
;"""
cursor.execute(command)
table = cursor.fetchall()
cursor.close()
table
cursor = conn.cursor()
command = """SELECT *
FROM recipes
WHERE name ILIKE '%chicken%' AND
name ILIKE '%noodle%' AND
name ILIKE '%soup%'
;"""
cursor.execute(command)
recipe_table = cursor.fetchall()
cursor.close()
recipe_table[0]
type(recipe_table)
type(recipe_table[0])
len(recipe_table[0])
len(recipe_table)
for i in range(len(recipe_table)):
print(i, recipe_table[i][1])
string_to_list(recipe_table[0][2])
string_to_list(recipe_table[0][2])[2]
# Count instances of each ingredient to find most common.
# Initialize a Counter for tabulating how often each ingredient occurs
ingredient_counts = Counter()
# Count each instance of each ingredient
for i in range(len(recipe_table)):
for j in range(2, len(recipe_table[i])):
if recipe_table[i][j]:
ingredient = string_to_list(recipe_table[i][j])[2]
ingredient_counts.update({ingredient: 1})
ingredient_counts
# Loop through most common to save quantity and measurement to get most common of those.
# Get the top 30 ingredients sorted by most common
top_30 = sorted(ingredient_counts.items(), key=lambda x: x[1], reverse=True)[:30]
# Get the ingredients that occured in at least 25% of recipes returned
above_25_percent = [(tup[0], round(100*tup[1]/len(recipe_table), 1)) for tup in top_30 if 100*tup[1]/len(recipe_table) >= 25]
above_25_percent
for item in above_25_percent:
print(item[0])
# for i in range(len(recipe_table)):
# for j in range(2, len(recipe_table[i])):
# if recipe_table[i][j]:
# print(string_to_list(recipe_table[i][j])[2])
# Create dictionary of information. Turn into dictionary (then JSON) and return.
results_list = []
# Get the ingredient information and put it in a dictionary
for item in above_25_percent:
quantity_list = []
unit_list = []
for i in range(len(recipe_table)):
for j in range(2, len(recipe_table[i])):
if recipe_table[i][j]:
if string_to_list(recipe_table[i][j])[2] == item[0]:
#print(recipe_table[i][j])
quantity = string_to_list(recipe_table[i][j])[0]
unit = string_to_list(recipe_table[i][j])[1]
quantity_list.append(quantity)
unit_list.append(unit)
# print(quantity)
# Getting and saving the most common quantity and unit for each ingredient
data = Counter(quantity_list)
quantity = data.most_common(1)
data = Counter(unit_list)
unit = data.most_common(1)
# print(quantity)
ingred_dict = {'quantity': quantity[0][0], 'unit': unit[0][0], 'ingredient': item[0]}
results_list.append(ingred_dict)
results_list
conn.close()
###Output
_____no_output_____
###Markdown
Put It All Together
###Code
%%capture
pip install psycopg2
import ast
import psycopg2
from collections import Counter
def string_to_list(x):
return ast.literal_eval(x)
def ingredient_getter(word):
results_list = []
split_words = word.split()
conn = psycopg2.connect(database ='postgres', user = 'postgres', password = 'tz6MTgxObUZ62MNv0xgp', host = 'mydishdb-dev.c3und8sjo4p2.us-east-2.rds.amazonaws.com', port = '5432')
cursor = conn.cursor()
command = f"SELECT * FROM recipes WHERE name ILIKE '%{split_words[0]}%' "
if len(split_words) > 1:
for i in range(1, len(split_words)):
command += f"AND name ILIKE '%{split_words[i]}%' "
command += ";"
cursor.execute(command)
recipe_table = cursor.fetchall()
cursor.close()
conn.close()
# Initialize a Counter for tabulating how often each ingredient occurs
ingredient_counts = Counter()
# Count each instance of each ingredient
for i in range(len(recipe_table)):
for j in range(2, len(recipe_table[i])):
if recipe_table[i][j]:
ingredient = string_to_list(recipe_table[i][j])[2]
ingredient_counts.update({ingredient: 1})
# Get the top 30 ingredients sorted by most common
top_30 = sorted(ingredient_counts.items(), key=lambda x: x[1], reverse=True)[:30]
# Get the ingredients that occured in at least 25% of recipes returned
above_25_percent = [(tup[0], round(100*tup[1]/len(recipe_table), 1)) for tup in top_30 if 100*tup[1]/len(recipe_table) >= 25]
# Get the ingredient information and put it in a dictionary
for item in above_25_percent:
quantity_list = []
unit_list = []
for i in range(len(recipe_table)):
for j in range(2, len(recipe_table[i])):
if recipe_table[i][j]:
if string_to_list(recipe_table[i][j])[2] == item[0]:
quantity = string_to_list(recipe_table[i][j])[0]
unit = string_to_list(recipe_table[i][j])[1]
quantity_list.append(quantity)
unit_list.append(unit)
# Getting and saving the most common quantity and unit for each ingredient
data = Counter(quantity_list)
quantity = data.most_common(1)
data = Counter(unit_list)
unit = data.most_common(1)
ingred_dict = {'quantity': quantity[0][0], 'unit': unit[0][0], 'ingredient': item[0]}
results_list.append(ingred_dict)
return results_list
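# Note on the string-built SQL above: a search word containing a quote would break the query,
# and interpolating user input into SQL is unsafe in general. A hedged sketch of the same
# query built with psycopg2 parameter binding instead (the helper name is my own):
def build_query_and_params(word):
    split_words = word.split()
    command = "SELECT * FROM recipes WHERE name ILIKE %s"
    command += " AND name ILIKE %s" * (len(split_words) - 1)
    params = [f"%{w}%" for w in split_words]
    return command + ";", params
# usage sketch: cursor.execute(*build_query_and_params('chicken noodle soup'))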
ingredient_getter('brownies')
import time
start_time = time.time()
ingredient_getter('brownies')
print("--- %s seconds ---" % (time.time() - start_time))
###Output
--- 1.2188889980316162 seconds ---
|
01_Understanding and Visualizing Data with Python/Week_2 univariate data/w2_assessment.ipynb | ###Markdown
In this notebook, we'll ask you to find numerical summaries for a certain set of data. You will use the values of what you find in this assignment to answer questions in the quiz that follows (we've noted where specific values will be requested in the quiz, so that you can record them.)We'll also ask you to create some of the plots you have seen in previous lectures.
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as stats
%matplotlib inline
import matplotlib.pyplot as plt
pd.set_option('display.max_columns', 100)
path = "nhanes_2015_2016.csv"
# First, you must import the data from the path given above
df = pd.read_csv(path)  # using pandas, read in the csv data found at the location defined by 'path'
# Next, look at the 'head' of our DataFrame 'df'.
# If you can't remember a function, open a previous notebook or video as a reference
# or use your favorite search engine to look for a solution
df.head()
###Output
_____no_output_____
###Markdown
How many rows can you see when you don't put an argument into the previous method? How many rows can you see if you use an int as an argument? Can you use a float as an argument?
###Code
# Lets only consider the feature (or variable) 'BPXSY2'
bp = df['BPXSY2']
###Output
_____no_output_____
###Markdown
Numerical Summaries Find the mean (note this for the quiz that follows)
###Code
# What is the mean of 'BPXSY2'?
bp_mean = bp.mean()
###Output
_____no_output_____
###Markdown
In the method you used above, how are the rows of missing data treated? Are they excluded entirely? Are they counted as zeros? Something else? If you used a library function, try looking up the documentation using the code: ```help(function_you_used)``` For example: ```help(np.sum)``` .dropna(): to make sure we know that we aren't treating missing data in ways we don't want, let's go ahead and drop all the NaNs from our Series 'bp'.
###Code
bp = bp.dropna()
###Output
_____no_output_____
###Markdown
Find the: median, max, min, standard deviation, and variance. You can implement any of these from base python (that is, without any of the imported packages), but there are simple and intuitively named functions in the numpy library for all of these. You could also use the fact that 'bp' is not just a list, but is a pandas.Series. You can find pandas.Series attributes and methods [here](https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.Series.html). A large part of programming is being able to find the functions you need and to understand the documentation formatting so that you can implement the code yourself, so we highly encourage you to search the internet whenever you are unsure! Example: Find the difference of an element in 'bp' compared with the previous element in 'bp'.
###Code
# Using the fact that 'bp' is a pd.Series object, can use the pd.Series method diff()
# call this method by: pd.Series.diff()
diff_by_series_method = bp.diff()
# note that this returns a pd.Series object, that is, it had an index associated with it
diff_by_series_method.values # only want to see the values, not the index and values
# Now use the numpy library instead to find the same values
# np.diff(array)
diff_by_np_method = np.diff(bp)
diff_by_np_method
# note that this returns an 'numpy.ndarray', which has no index associated with it, and therefore ignores
# the nan we get by the Series method
# We could also implement this ourselves with some looping
diff_by_me = [] # create an empty list
for i in range(len(bp.values)-1): # iterate through the index values of bp
diff = bp.values[i+1] - bp.values[i] # find the difference between an element and the previous element
diff_by_me.append(diff) # append to out list
np.array(diff_by_me) # format as an np.array
###Output
_____no_output_____
###Markdown
Your turn (note these values for the quiz that follows)
###Code
bp_median = bp.median()
bp_median
bp_max = bp.max()
bp_max
bp_min = bp.min()
bp_min
bp_std = bp.std()
bp_std
bp_var = bp.var()
bp_var
###Output
_____no_output_____
###Markdown
How to find the interquartile range (note this value for the quiz that follows). This time we need to use the scipy.stats library that we imported above under the name 'stats'.
###Code
bp_iqr = stats.iqr(bp)
bp_iqr
###Output
_____no_output_____
###Markdown
Visualizing the data. Next we'll use what you have learned from the *Tables, Histograms, Boxplots in Python* video.
###Code
# use the Series.describe() method to see some descriptive statistics of our Series 'bp'
bp_descriptive_stats = bp.describe()
bp_descriptive_stats
# Make a histogram of our 'bp' data using the seaborn library we imported as 'sns'
sns.histplot(bp).set(title='Distribution of BPXSY2', xlabel='BPXSY2 (systolic blood pressure)', ylabel='Count')
# (on older seaborn versions, sns.distplot(bp) draws a similar plot)
###Output
_____no_output_____
###Markdown
Is your histogram labeled and does it have a title? If not, try appending ```.set(title='your_title', xlabel='your_x_label', ylabel='your_y_label')``` or just ```.set(title='your_title')``` to your graphing function.
###Code
# Make a boxplot of our 'bp' data using the seaborn library. Make sure it has a title and labels!
sns.boxplot(x=bp).set(title='Boxplot of BPXSY2', xlabel='BPXSY2 (systolic blood pressure)')
###Output
_____no_output_____ |
nbs/07_vision.core.ipynb | ###Markdown
Core vision > Basic image opening/processing functionality. Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch(as_prop=True)
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#export
@patch(as_prop=True)
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px` > `Image.n_px` (property): number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch(as_prop=True)
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape` > `Image.shape` (property): image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch(as_prop=True)
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect` > `Image.aspect` (property): aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transform that create objects of those types.
###Code
#export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
#export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o.codes=self.codes
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
###Output
_____no_output_____
###Markdown
Test ```get_annotations``` on the coco_tiny dataset against both image filenames and bounding box labels.
###Code
coco = untar_data(URLs.COCO_TINY)
test_images, test_lbl_bbox = get_annotations(coco/'train.json')
annotations = json.load(open(coco/'train.json'))
categories, images, annots = map(lambda x:L(x),annotations.values())
test_eq(test_images, images.attrgot('file_name'))
def bbox_lbls(file_name):
img = images.filter(lambda img:img['file_name']==file_name)[0]
bbs = annots.filter(lambda a:a['image_id'] == img['id'])
i2o = {k['id']:k['name'] for k in categories}
lbls = [i2o[cat] for cat in bbs.attrgot('category_id')]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bbs.attrgot('bbox')]
return [bboxes, lbls]
for idx in random.sample(range(len(images)),5):
test_eq(test_lbl_bbox[idx], bbox_lbls(test_images[idx]))
# export
from matplotlib import patches, patheffects
# export
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x): return getattr(x, 'img_size') if self.sz is None else self.sz
def setups(self, dl):
res = first(dl.do_item(0), risinstance(TensorPoint))
if res is not None: self.c = res.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `sz=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.img_size, (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.img_size, (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.img_size, (128,128))
###Output
_____no_output_____
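###Markdown
As a quick sketch of the note above: when there is no image before the points in the tuple, the size can be attached directly at creation time through the `img_size` argument of `TensorPoint.create`, and it is then available as an attribute on the tensor (this is what `PointScaler._get_sz` falls back to when it has not seen an image).
###Code
pt = TensorPoint.create([[9.,17.]], img_size=(28,35))
test_eq(pt.img_size, (28,35))
###Output
_____no_output_____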
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 01a_losses.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 10b_tutorial.albumentations.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 18b_callback.preds.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted dev-setup.ipynb.
Converted index.ipynb.
Converted quick_start.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch(as_prop=True)
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#export
@patch(as_prop=True)
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
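###Markdown
The `(mean, std)` stats above are per-channel statistics, typically fed to `Normalize.from_stats` elsewhere in the library. A minimal sketch of the arithmetic they imply (just the broadcasting, not the library transform):
###Code
mean,std = imagenet_stats
x = torch.rand(3, 4, 4)                                    # a fake 3-channel image
x_norm = (x - tensor(mean).view(3,1,1)) / tensor(std).view(3,1,1)
test_eq(x_norm.shape, x.shape)                             # normalization is elementwise, shape unchanged
###Output
_____no_output_____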
###Markdown
`Image.n_px` > `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch(as_prop=True)
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch(as_prop=True)
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
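###Markdown
A quick numeric sketch of the `max_px` branch of `resize_max` above: both sides are scaled by `sqrt(max_px/n_px)`, which preserves the aspect ratio while capping the pixel count, and explains the `294` in the test.
###Code
h,w = im.shape                                       # (20, 30) for the test image resized above
scale = math.sqrt(300/im.n_px)                       # sqrt(300/600)
test_eq((round(h*scale), round(w*scale)), (14, 21))  # the shape resize_max(max_px=300) produces
test_eq(14*21, 294)                                  # i.e. resize_max(max_px=300).n_px
###Output
_____no_output_____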
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
#export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
#export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
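###Markdown
`PILBase.create` also accepts raw `bytes`, which pairs with `to_bytes_format` defined earlier. A minimal round-trip sketch (encode to PNG bytes, then reopen):
###Code
img = PILImage.create(TEST_IMAGE)
img2 = PILImage.create(img.to_bytes_format())        # bytes -> io.BytesIO -> load_image
test_eq(type(img2), PILImage)
test_eq(img2.size, img.size)
###Output
_____no_output_____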
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o.codes=self.codes
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
###Output
_____no_output_____
###Markdown
Test ```get_annotations``` on the coco_tiny dataset against both image filenames and bounding box labels.
###Code
coco = untar_data(URLs.COCO_TINY)
test_images, test_lbl_bbox = get_annotations(coco/'train.json')
annotations = json.load(open(coco/'train.json'))
categories, images, annots = map(lambda x:L(x),annotations.values())
test_eq(test_images, images.attrgot('file_name'))
def bbox_lbls(file_name):
img = images.filter(lambda img:img['file_name']==file_name)[0]
bbs = annots.filter(lambda a:a['image_id'] == img['id'])
i2o = {k['id']:k['name'] for k in categories}
lbls = [i2o[cat] for cat in bbs.attrgot('category_id')]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bbs.attrgot('bbox')]
return [bboxes, lbls]
for idx in random.sample(range(len(images)),5):
test_eq(test_lbl_bbox[idx], bbox_lbls(test_images[idx]))
# export
from matplotlib import patches, patheffects
# export
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
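###Markdown
As a quick sanity check of the x1,y1,x2,y2 convention above: COCO annotations store boxes as `[x, y, width, height]`, and `get_annotations` converts them by adding the width/height to the top-left corner. A minimal sketch (the helper name `_coco_to_xyxy` is just for illustration, it is not part of the library):
###Code
def _coco_to_xyxy(bb):
    "Turn a COCO `[x, y, w, h]` box into `[x1, y1, x2, y2]` (top-left, bottom-right)"
    return [bb[0], bb[1], bb[0]+bb[2], bb[1]+bb[3]]
test_eq(_coco_to_xyxy([10, 20, 30, 40]), [10, 20, 40, 60])
###Output
_____no_output_____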
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x): return getattr(x, 'img_size') if self.sz is None else self.sz
def setups(self, dl):
res = first(dl.do_item(None), risinstance(TensorPoint))
if res is not None: self.c = res.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
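###Markdown
A quick numeric sketch of the scaling formula used by `_scale_pnts`/`_unscale_pnts` above: the point (9,17) in a 28x35 image maps to (9/14-1, 17/17.5-1) in the (-1,1) range expected by `grid_sample`, and unscaling round-trips back to pixel coordinates (the same values show up in the batch test further down).
###Code
pt = TensorPoint.create([[9.,17.]])
sc = _scale_pnts(pt, (28,35))
test_close(sc, tensor([[9/14-1, 17/17.5-1]]))
test_close(_unscale_pnts(sc, (28,35)), pt)
###Output
_____no_output_____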
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds them in the points. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.img_size, (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.img_size, (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.img_size, (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 01a_losses.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 10b_tutorial.albumentations.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 18b_callback.preds.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted dev-setup.ipynb.
Converted index.ipynb.
Converted quick_start.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
#export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
#export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
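###Markdown
In this version the image size travels as tensor metadata: whatever is passed as `img_size` at creation (or grabbed from the image by `PointScaler`) is what `_get_sz` reads back through `get_meta`. A minimal sketch:
###Code
pt = TensorPoint.create([[9.,17.]], img_size=(28,35))
test_eq(pt.get_meta('img_size'), (28,35))
###Output
_____no_output_____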
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds them in the points. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted index.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#|export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#|export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch(as_prop=True)
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#|export
@patch(as_prop=True)
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px` > `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#|export
@patch(as_prop=True)
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#|export
@patch(as_prop=True)
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#|export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#|export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#|export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#|export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
#|export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
#|export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#|export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#|export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#|export
class PILImage(PILBase): pass
#|export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#|hide
test_eq(np.array(im), np.array(tpil))
#|export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#|export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
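###Markdown
A quick sketch of the `c*h*w` ordering produced by `image2tensor` above: PIL reports `size` as (width,height), while the tensor comes back as (channels, height, width).
###Code
img_t = image2tensor(PILImage.create(TEST_IMAGE))    # TEST_IMAGE is 1200x803 RGB (see the tests above)
test_eq(img_t.shape, (3, 803, 1200))
###Output
_____no_output_____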
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#|export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o.codes=self.codes
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
#|export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#|export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
#|export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
###Output
_____no_output_____
###Markdown
Test `get_annotations` on the coco_tiny dataset against both image filenames and bounding box labels.
###Code
coco = untar_data(URLs.COCO_TINY)
test_images, test_lbl_bbox = get_annotations(coco/'train.json')
annotations = json.load(open(coco/'train.json'))
categories, images, annots = map(lambda x:L(x),annotations.values())
test_eq(test_images, images.attrgot('file_name'))
def bbox_lbls(file_name):
img = images.filter(lambda img:img['file_name']==file_name)[0]
bbs = annots.filter(lambda a:a['image_id'] == img['id'])
i2o = {k['id']:k['name'] for k in categories}
lbls = [i2o[cat] for cat in bbs.attrgot('category_id')]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bbs.attrgot('bbox')]
return [bboxes, lbls]
for idx in random.sample(range(len(images)),5):
test_eq(test_lbl_bbox[idx], bbox_lbls(test_images[idx]))
#|export
from matplotlib import patches, patheffects
#|export
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
#|export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
#|export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
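# Hedged sketch (illustrative numbers): in the x1,y1,x2,y2 convention, a box covering the
# whole 128x128 COCO_TINY image is simply [0,0,128,128].
full_img_box = TensorBBox.create([[0, 0, 128, 128]], img_size=(128, 128))
test_eq(full_img_box.shape, [1, 4])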
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
#|export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#|export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
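# Hedged sketch: PIL-level work (here a plain resize) must happen before ToTensor, since its
# encodes dispatches on PILBase and afterwards you only have tensors.
small = tfm(PILImageBW(mnist_img.resize((14,14))))
test_eq(small.shape, (1,14,14))
test_eq(type(small), TensorImageBW)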
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#|export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
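# Hedged sketch of the scaling formula: on a 28x35 image, (0,0) maps to (-1,-1) and the
# opposite corner (28,35) maps to (1,1); _unscale_pnts inverts this.
test_close(_scale_pnts(tensor([[0.,0.],[28.,35.]]), (28,35)), tensor([[-1.,-1.],[1.,1.]]))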
#|export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x): return getattr(x, 'img_size') if self.sz is None else self.sz
def setups(self, dl):
res = first(dl.do_item(None), risinstance(TensorPoint))
if res is not None: self.c = res.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#|hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.img_size, (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
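# Hedged sketch: with no image before the points in your tuple, embed the size yourself via
# `img_size=` so PointScaler can still recover it through `_get_sz`.
manual_pt = TensorPoint.create([[9,17]], img_size=(28,35))
test_eq(manual_pt.img_size, (28,35))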
#|export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#|export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#|export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
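# Hedged sketch (size set by hand rather than grabbed from an image): a bounding box is
# scaled exactly like its two corner points, so the full-image box of a 64x64 image maps
# to [-1,-1,1,1].
_ps = PointScaler(); _ps.sz = (64,64)
test_close(_ps(TensorBBox.create([[0,0,64,64]])), tensor([[-1.,-1.,1.,1.]]))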
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#|hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.img_size, (128,128))
coco_tdl.show_batch();
#|hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.img_size, (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#|hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 01a_losses.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 10b_tutorial.albumentations.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 18b_callback.preds.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 20b_tutorial.distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.image_sequence.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 70a_callback.tensorboard.ipynb.
Converted 70b_callback.neptune.ipynb.
Converted 70c_callback.captum.ipynb.
Converted 70d_callback.comet.ipynb.
Converted 74_huggingface.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted dev-setup.ipynb.
Converted app_examples.ipynb.
Converted camvid.ipynb.
Converted distributed_app_examples.ipynb.
Converted migrating_catalyst.ipynb.
Converted migrating_ignite.ipynb.
Converted migrating_lightning.ipynb.
Converted migrating_pytorch.ipynb.
Converted migrating_pytorch_verbose.ipynb.
Converted ulmfit.ipynb.
Converted index.ipynb.
Converted quick_start.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
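# Hedged sketch: round-trip the test image to PNG bytes; PILBase.create (defined further
# down) also accepts `bytes`, so this pairs with loading images straight from memory.
png_bytes = im.to_bytes_format()
test_eq(type(png_bytes), bytes)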
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
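# Hedged sketch: to_thumb works on a copy and keeps the aspect ratio within the bound, so
# the 30x20 test image capped at 15 wide by 10 high becomes 15x10, while `im` is untouched.
test_eq(im.to_thumb(10, 15).shape, (10,15))
test_eq(im.shape, (20,30))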
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transform that create objects of those types.
###Code
#export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
#export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
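# Hedged round-trip sketch: image2tensor gives a c*h*w byte tensor and to_image converts it
# back to PIL; `im` is still the 30x20 RGB test image opened above.
t = image2tensor(im)
test_eq(t.shape, (3,20,30))
test_eq(to_image(t).size, (30,20))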
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
# export
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted index.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_px=500, max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
#TODO function to resize_max all images in a path (optionally recursively) and save them somewhere (same relative dirs if recursive)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transform that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn, **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to height/width, with the following convention: top, left, bottom, right.> Note: We use the same convention as for points with the y axis coming before the x axis.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = DataSource([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setup(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = DataSource([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setup(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.transfer_learning.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.ulmfit-Copy1.ipynb.
Converted 38_tutorial.ulmfit.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.learner.ipynb.
Converted 43_tabular.model.ipynb.
Converted 45_collab.ipynb.
Converted 50_datablock_examples.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 97_test_utils.ipynb.
Converted index.ipynb.
Converted migrating.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_px=500, max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
#TODO function to resize_max all images in a path (optionally recursively) and save them somewhere (same relative dirs if recursive)
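#A minimal sketch for the TODO above (illustrative only, not exported). It assumes `Path` and
#`get_image_files` (with its `recurse` argument) are available from the fastai imports at the
#top of this notebook. It applies `resize_max` to every image under `path` and saves the result
#under `dest`, mirroring the relative directory structure when `recurse=True`.
def resize_images_max(path, dest, max_px=None, max_h=None, max_w=None, recurse=True, resample=0):
    "Hypothetical helper: `resize_max` all images in `path` and save them to `dest`"
    path,dest = Path(path),Path(dest)
    for fn in get_image_files(path, recurse=recurse):
        out_fn = dest/fn.relative_to(path)
        out_fn.parent.mkdir(parents=True, exist_ok=True)
        img = Image.open(fn)
        img.resize_max(resample=resample, max_px=max_px, max_h=max_h, max_w=max_w).save(out_fn)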
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn, **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
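#For illustration: `create` also accepts an ndarray (and a tensor, which is converted via `.numpy()`)
_im_arr = PILImage.create(np.array(im))
test_eq(type(_im_arr), PILImage)
test_eq(_im_arr.size, im.size)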
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
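#COCO stores each box as [x,y,width,height]; the conversion above turns it into the
#[x1,y1,x2,y2] corner format used by `TensorBBox`, e.g. [10,20,30,40] -> [10,20,40,60]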
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
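#Quick check of the dim order described above: the 28x28 B&W image from earlier becomes a 1x28x28 byte tensor
test_eq(image2tensor(mnist_img).shape, (1,28,28))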
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = DataSource([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
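#Illustrative sanity check of the (-1,1) scaling: on a (28,35) image the corners map to +/-1
#and `_unscale_pnts` inverts `_scale_pnts`
_p = tensor([[0.,0.], [28.,35.], [14.,17.5]])
_s = _scale_pnts(_p, (28,35))
test_close(_s, tensor([[-1.,-1.], [1.,1.], [0.,0.]]))
test_close(_unscale_pnts(_s, (28,35)), _p)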
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setup(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x, self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = DataSource([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setup(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
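#Illustrative check (not exported): a box is just two points, so PointScaler scales a TensorBBox
#with the same formula; with a manually set 64x64 size the full-image box maps to (-1,-1,1,1)
_ps = PointScaler()
_ps.sz = (64,64)
test_close(_ps.encodes(TensorBBox.create([0,0,64,64])), tensor([[-1.,-1.,1.,1.]]))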
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y,z = coco_tdl.one_batch()
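#With a 128x128 image, PointScaler maps a coordinate c to c*2/128-1 = c/64-1, hence the /64 below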
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.transfer_learning.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.ulmfit.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.model.ipynb.
Converted 42_tabular.learner.ipynb.
Converted 45_collab.ipynb.
Converted 50_datablock_examples.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 97_test_utils.ipynb.
Converted index.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch(as_prop=True)
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#export
@patch(as_prop=True)
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px` > `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch(as_prop=True)
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch(as_prop=True)
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
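#For illustration: a round trip through bytes gives back an image of the same size
#(`io` is assumed to be imported at the top of this notebook, as `to_bytes_format` itself uses it)
_b = im.to_bytes_format()
test_eq(type(_b), bytes)
test_eq(Image.open(io.BytesIO(_b)).size, im.size)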
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
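#For illustration: the thumbnail fits inside the requested box while `im` itself is untouched
_th = im.to_thumb(10)
assert max(_th.size) <= 10
test_eq(im.size, (30,20))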
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
#export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
#export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
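#Illustrative round trip between `image2tensor` and `to_image`: the pixel data is preserved
_t = image2tensor(im)
test_eq(_t.shape, (3,20,30))
test_eq(np.array(to_image(_t)), np.array(im))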
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o.codes=self.codes
return o
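#Illustrative check: `codes` doubles as the vocab and sets `c`
_amc = AddMaskCodes(codes=['background','building','road'])
test_eq((_amc.vocab, _amc.c), (['background','building','road'], 3))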
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
###Output
_____no_output_____
###Markdown
Test ```get_annotations``` on the coco_tiny dataset against both image filenames and bounding box labels.
###Code
coco = untar_data(URLs.COCO_TINY)
test_images, test_lbl_bbox = get_annotations(coco/'train.json')
annotations = json.load(open(coco/'train.json'))
categories, images, annots = map(lambda x:L(x),annotations.values())
test_eq(test_images, images.attrgot('file_name'))
def bbox_lbls(file_name):
img = images.filter(lambda img:img['file_name']==file_name)[0]
bbs = annots.filter(lambda a:a['image_id'] == img['id'])
i2o = {k['id']:k['name'] for k in categories}
lbls = [i2o[cat] for cat in bbs.attrgot('category_id')]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bbs.attrgot('bbox')]
return [bboxes, lbls]
for idx in random.sample(range(len(images)),5):
test_eq(test_lbl_bbox[idx], bbox_lbls(test_images[idx]))
# export
from matplotlib import patches, patheffects
# export
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x): return getattr(x, 'img_size') if self.sz is None else self.sz
def setups(self, dl):
res = first(dl.do_item(None), risinstance(TensorPoint))
if res is not None: self.c = res.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.img_size, (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
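#With a 128x128 image, PointScaler maps a coordinate c to c*2/128-1 = c/64-1, hence the /64 below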
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.img_size, (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.img_size, (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
_____no_output_____
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = DataSource([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = DataSource([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.transfer_learning.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.ulmfit.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.learner.ipynb.
Converted 43_tabular.model.ipynb.
Converted 45_collab.ipynb.
Converted 50_datablock_examples.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 97_test_utils.ipynb.
Converted index.ipynb.
Converted migrating.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch(as_prop=True)
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#export
@patch(as_prop=True)
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px` > `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch(as_prop=True)
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch(as_prop=True)
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
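#A minimal sketch (not from the original tests): `to_thumb` works on a copy, so the original image
#keeps its size; `_th` is illustrative and fits within a 10x10 box.
_th = im.to_thumb(10)
test_eq(im.shape, (20,30))
assert _th.n_px <= 10*10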
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
#export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
#export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
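#A minimal sketch (not from the original tests): `create` also accepts raw numpy arrays (and bytes),
#not just file paths; `_arr_img` is an illustrative 30x20 black RGB image.
_arr_img = PILImage.create(np.zeros((20,30,3), dtype=np.uint8))
test_eq(type(_arr_img), PILImage)
test_eq(_arr_img.size, (30,20))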
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o.codes=self.codes
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
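#A minimal sketch (not from the original tests): `get_annotations` converts COCO's [x, y, width, height]
#boxes to [x1, y1, x2, y2] corners; `_coco_box` is an illustrative 20x30 box with top-left corner (10, 40).
_coco_box = [10, 40, 20, 30]
test_eq([_coco_box[0], _coco_box[1], _coco_box[0]+_coco_box[2], _coco_box[1]+_coco_box[3]], [10, 40, 30, 70])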
###Output
_____no_output_____
###Markdown
Test `get_annotations` on the coco_tiny dataset against both image filenames and bounding box labels.
###Code
coco = untar_data(URLs.COCO_TINY)
test_images, test_lbl_bbox = get_annotations(coco/'train.json')
annotations = json.load(open(coco/'train.json'))
categories, images, annots = map(lambda x:L(x),annotations.values())
test_eq(test_images, images.attrgot('file_name'))
def bbox_lbls(file_name):
img = images.filter(lambda img:img['file_name']==file_name)[0]
bbs = annots.filter(lambda a:a['image_id'] == img['id'])
i2o = {k['id']:k['name'] for k in categories}
lbls = [i2o[cat] for cat in bbs.attrgot('category_id')]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bbs.attrgot('bbox')]
return [bboxes, lbls]
for idx in random.sample(range(len(images)),5):
test_eq(test_lbl_bbox[idx], bbox_lbls(test_images[idx]))
# export
from matplotlib import patches, patheffects
# export
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
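#A minimal sketch (not from the original tests): LabeledBBox simply pairs the boxes with their labels,
#and `bbox`/`lbl` are positional accessors; `_lbb` is illustrative.
_lbb = LabeledBBox(TensorBBox.create([[0,0,10,20]]), ['cat'])
test_eq(_lbb.lbl, ['cat'])
test_eq(_lbb.bbox.shape, [1,4])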
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.img_size
assert sz is not None or self.sz is not None, "Size could not be inferred, pass to init with `img_size=...`"
return sz if self.sz is None else self.sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
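#A minimal sketch (not from the original tests) of the scaling used above: a point (x,y) in a w*h image
#maps to (2x/w-1, 2y/h-1), so the centre of a 28x35 image lands on (0,0) and the round trip is the identity.
_pt = tensor([[14., 17.5]])
test_close(_scale_pnts(_pt, (28,35)), tensor([[0., 0.]]))
test_close(_unscale_pnts(_scale_pnts(_pt, (28,35)), (28,35)), _pt)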
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to come before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.img_size, (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.img_size, (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.img_size, (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 01a_losses.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 10b_tutorial.albumentations.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 18b_callback.preds.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted dev-setup.ipynb.
Converted index.ipynb.
Converted quick_start.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
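#A minimal sketch (not from the original tests): `image2tensor` puts channels first, so the 30x20 RGB
#test image becomes a (3,20,30) byte tensor.
test_eq(image2tensor(im).shape, (3,20,30))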
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
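#A minimal sketch (not from the original tests): passing codes gives the transform a vocab and a class
#count `c`; the two codes here are illustrative.
test_eq(AddMaskCodes(['background','road']).c, 2)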
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
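#A minimal sketch (not from the original tests): with `y_first=True` incoming (row,col) points are
#flipped to (x,y) before scaling, so both calls below give the same result.
test_close(_scale_pnts(tensor([[17., 9.]]), (28,35), y_first=True),
           _scale_pnts(tensor([[ 9., 17.]]), (28,35)))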
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to come before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted index.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
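#A minimal sketch (not from the original tests): the bytes round-trip, so they can be re-opened as an
#image of the same size.
test_eq(Image.open(io.BytesIO(im.to_bytes_format())).size, im.size)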
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
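Concretely, the split used later in this notebook looks like this (a minimal sketch reusing `mnist_fn` from the Images section; `_toy_pnts` is a hypothetical label function): the lists passed to `Datasets` are item-level transforms, while everything in `after_item` runs at the tuple level, which is what lets `PointScaler` see the image and the points together.

```python
# Item-level tfms build each element of the sample separately; tuple-level tfms
# (after_item) receive the whole (image, points) tuple, so PointScaler can grab
# the image size before scaling the points.
def _toy_pnts(o): return TensorPoint.create([[9, 17]])               # hypothetical label fn
dsets = Datasets([mnist_fn], [[PILImage.create], [_toy_pnts]])       # item level
dls   = TfmdDL(dsets, bs=1, after_item=[PointScaler(), ToTensor()])  # tuple level
```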
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
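For example, any PIL-level operation such as `resize` has to happen while the object is still a PIL image, because after `ToTensor` it is a tensor type (a minimal sketch reusing `mnist_img` from above; the wrapping with `PILImageBW` mirrors the `_pnt_open` helper used later in this notebook):

```python
# PIL ops first, ToTensor last: once the image is a TensorImageBW,
# PIL-level methods such as resize no longer apply.
small = PILImageBW(mnist_img.resize((14, 14)))   # still a PIL image
t = ToTensor()(small)
test_eq(t.shape, (1, 14, 14))
test_eq(type(t), TensorImageBW)
```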
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
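In isolation, the scaling is just a linear map; here is a small numeric sketch matching the `_scale_pnts`/`_unscale_pnts` helpers defined in the next cell:

```python
import torch
sz  = torch.tensor([28., 35.])   # (width, height) of the image
pnt = torch.tensor([[9., 17.]])  # point in pixel coordinates (x, y)
scaled = pnt * 2 / sz - 1        # -> tensor([[-0.3571, -0.0286]]), now in (-1, 1)
back   = (scaled + 1) * sz / 2   # round-trips to the original (9., 17.)
```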
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
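If no image precedes the points in the tuple, you can embed the size yourself when creating the `TensorPoint`; `img_size` is the keyword the `create` classmethod above takes (a minimal sketch):

```python
# Embed the image size directly in the points so PointScaler can still scale
# them when no image comes before them in the tuple.
pnt = TensorPoint.create([[9, 17]], img_size=(28, 35))
test_eq(pnt.get_meta('img_size'), (28, 35))
```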
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.transfer_learning.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.ulmfit.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 45_collab.ipynb.
Converted 50_datablock_examples.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted index.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property): Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property): Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property): Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
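A quick way to keep the two conventions straight (a small self-contained sketch): the point `[9, 17]` means column 9, row 17, so the matching array/tensor index is `[17, 9]`.

```python
import numpy as np
img = np.zeros((35, 28))   # height x width array
x, y = 9, 17               # point in the (x, y) convention used here
img[y, x] = 1              # numpy/PyTorch index as [row, col] = [y, x]
```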
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to height/width, with the following convention: top, left, bottom, right.> Note: We use the same convention as for points with y axis being before x.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.transfer_learning.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.ulmfit.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 45_collab.ipynb.
Converted 50_datablock_examples.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 97_test_utils.ipynb.
Converted index.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#|export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#|export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch(as_prop=True)
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#|export
@patch(as_prop=True)
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px` > `Image.n_px` (property): Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#|export
@patch(as_prop=True)
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property): Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#|export
@patch(as_prop=True)
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property): Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#|export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#|export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#|export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#|export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
#|export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
#|export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#|export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#|export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#|export
class PILImage(PILBase): pass
#|export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#|hide
test_eq(np.array(im), np.array(tpil))
#|export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#|export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#|export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o.codes=self.codes
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
#|export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#|export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
#|export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
###Output
_____no_output_____
###Markdown
Test `get_annotations` on the coco_tiny dataset against both image filenames and bounding box labels.
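The conversion being checked is the same one `get_annotations` performs internally: COCO stores boxes as `[x, y, width, height]` and the returned boxes are corner coordinates (a standalone sketch with made-up numbers):

```python
# COCO box -> corner box, as done inside get_annotations
x, y, w, h = 10, 20, 30, 40      # hypothetical COCO-style bbox
corners = [x, y, x + w, y + h]   # -> [10, 20, 40, 60]
```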
###Code
coco = untar_data(URLs.COCO_TINY)
test_images, test_lbl_bbox = get_annotations(coco/'train.json')
annotations = json.load(open(coco/'train.json'))
categories, images, annots = map(lambda x:L(x),annotations.values())
test_eq(test_images, images.attrgot('file_name'))
def bbox_lbls(file_name):
img = images.filter(lambda img:img['file_name']==file_name)[0]
bbs = annots.filter(lambda a:a['image_id'] == img['id'])
i2o = {k['id']:k['name'] for k in categories}
lbls = [i2o[cat] for cat in bbs.attrgot('category_id')]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bbs.attrgot('bbox')]
return [bboxes, lbls]
for idx in random.sample(range(len(images)),5):
test_eq(test_lbl_bbox[idx], bbox_lbls(test_images[idx]))
#|export
from matplotlib import patches, patheffects
#|export
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
#|export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
#|export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
#|export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#|export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
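If your raw points are stored in (row, column) order, `y_first=True` swaps them before scaling; here is a small numeric sketch of the flip performed by `_scale_pnts` below:

```python
import torch
raw = torch.tensor([[17., 9.]])  # point stored as (y, x)
xy  = raw.flip(1)                # the y_first=True flip -> tensor([[9., 17.]])
```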
###Code
#|export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#|export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x): return getattr(x, 'img_size') if self.sz is None else self.sz
def setups(self, dl):
res = first(dl.do_item(None), risinstance(TensorPoint))
if res is not None: self.c = res.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#|hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.img_size, (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#|export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#|export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#|export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#|hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.img_size, (128,128))
coco_tdl.show_batch();
#|hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.img_size, (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#|hide
from nbdev.export import notebook2script
notebook2script()
###Output
_____no_output_____
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property): Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property): Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property): Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
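#Points are (x,y) = (col,row) pairs in pixel coordinates; (9,17) below is column 9, row 17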
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
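        #COCO stores a box as [x,y,width,height]; convert it to [x1,y1,x2,y2] (top-left, bottom-right corners)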
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
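#The loop below rebuilds the labelled bboxes of a few random images straight from the raw json
#and checks they match the output of `get_annotations`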
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` (or a list of lists with four elements) and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points, with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way, which will work across applications, is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
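#Quick extra check: the pipeline above should yield a (TensorImage, TensorMask) pair
cam_x,cam_y = cam_tds[0]
test_eq(type(cam_x), TensorImage)
test_eq(type(cam_y), TensorMask)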
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
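#e.g. on a 28x35 image, the point (9,17) is scaled to (9*2/28-1, 17*2/35-1), roughly (-0.36, -0.03); _unscale_pnts inverts this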
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds them in those points. For this to work, those images need to come before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
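        #decode receives the whole tuple: reset the state, then whichever of the two decodes
        #below runs last pairs the boxes and labels into a LabeledBBox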
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
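#A TensorBBox is just two points per box, so reuse the point scaling by viewing it as (-1,2)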
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted index.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
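    #return the size wrapped in fastai's tuple type so it supports elementwise ops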
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property) Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property) Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property) Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
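#Points are (x,y) = (col,row) pairs in pixel coordinates; (9,17) below is column 9, row 17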
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
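        #COCO stores a box as [x,y,width,height]; convert it to [x1,y1,x2,y2] (top-left, bottom-right corners)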
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
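#The loop below rebuilds the labelled bboxes of a few random images straight from the raw json
#and checks they match the output of `get_annotations`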
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` (or a list of lists with four elements) and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points, with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way, which will work across applications, is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
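#Quick extra check: the pipeline above should yield a (TensorImage, TensorMask) pair
cam_x,cam_y = cam_tds[0]
test_eq(type(cam_x), TensorImage)
test_eq(type(cam_y), TensorMask)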
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
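#e.g. on a 28x35 image, the point (9,17) is scaled to (9*2/28-1, 17*2/35-1), roughly (-0.36, -0.03); _unscale_pnts inverts this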
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds them in those points. For this to work, those images need to come before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
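        #decode receives the whole tuple: reset the state, then whichever of the two decodes
        #below runs last pairs the boxes and labels into a LabeledBBox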
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
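#A TensorBBox is just two points per box, so reuse the point scaling by viewing it as (-1,2)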
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted index.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
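    #return the size wrapped in fastai's tuple type so it supports elementwise ops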
@patch_property
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property) Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property) Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property) Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
#export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
#export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
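#Points are (x,y) = (col,row) pairs in pixel coordinates; (9,17) below is column 9, row 17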
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
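        #COCO stores a box as [x,y,width,height]; convert it to [x1,y1,x2,y2] (top-left, bottom-right corners)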
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
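#The loop below rebuilds the labelled bboxes of a few random images straight from the raw json
#and checks they match the output of `get_annotations`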
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
# export
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` (or a list of lists with four elements) and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points, with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way, which will work across applications, is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
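#Quick extra check: the pipeline above should yield a (TensorImage, TensorMask) pair
cam_x,cam_y = cam_tds[0]
test_eq(type(cam_x), TensorImage)
test_eq(type(cam_y), TensorMask)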
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
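#e.g. on a 28x35 image, the point (9,17) is scaled to (9*2/28-1, 17*2/35-1), roughly (-0.36, -0.03); _unscale_pnts inverts this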
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds them in those points. For this to work, those images need to come before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
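        #decode receives the whole tuple: reset the state, then whichever of the two decodes
        #below runs last pairs the boxes and labels into a LabeledBBox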
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
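#A TensorBBox is just two points per box, so reuse the point scaling by viewing it as (-1,2)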
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted index.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_px=500, max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
#TODO function to resize_max all images in a path (optionally recursively) and save them somewhere (same relative dirs if recursive)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
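#A minimal sketch of `load_image` usage, assuming `TEST_IMAGE` is the test file used above:
#passing `mode` converts the image on load, e.g. to greyscale.
test_eq(load_image(TEST_IMAGE, mode='L').mode, 'L')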
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn, **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, sz=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), sz=sz)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, sz=None)->None: return cls(tensor(x).view(-1, 4).float(), sz=sz)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to height/width, with the following convention: top, left, bottom, right.> Note: We use the same convention as for points with the y axis being before the x axis.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
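#A minimal sketch of the `c*h*w` ordering, assuming numpy is available as `np` (as used above):
test_eq(image2tensor(np.zeros((4,5,3), dtype=np.uint8)).shape, (3,4,5))
test_eq(image2tensor(np.zeros((4,5), dtype=np.uint8)).shape, (1,4,5)) #a 2d input gets a channel axis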
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = DataSource([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, sz=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, sz=sz)
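#A minimal sketch of the (-1,1) scaling on a (28,35) image: the corners map to (-1,-1) and (1,1),
#the centre to (0,0), and `_unscale_pnts` inverts the transform.
_pts = tensor([[0.,0.], [28.,35.], [14.,17.5]])
test_close(_scale_pnts(_pts, (28,35)), tensor([[-1.,-1.], [1.,1.], [0.,0.]]))
test_close(_unscale_pnts(_scale_pnts(_pts, (28,35)), (28,35)), _pts)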
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = getattr(x, '_meta', {}).get('sz', None)
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `sz=...`"
return self.sz if sz is None else sz
def setup(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x, self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `sz=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = DataSource([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
pnt_tdl.after_item.c
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setup(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(TensorPoint(x.view(-1,2), sz=x._meta.get('sz', None)))
return TensorBBox(pnts.view(-1, 4), sz=x._meta.get('sz', None))
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(TensorPoint(x.view(-1,2), sz=x._meta.get('sz', None)))
return TensorBBox(pnts.view(-1, 4), sz=x._meta.get('sz', None))
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 08_vision.core.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09a_vision.data.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.transfer_learning.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.ulmfit.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.model.ipynb.
Converted 50_datablock_examples.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 97_test_utils.ipynb.
Converted index.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch(as_prop=True)
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#export
@patch(as_prop=True)
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px` > `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch(as_prop=True)
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch(as_prop=True)
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
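#A quick sanity check, assuming the default `format='png'`: the bytes should start with the PNG signature.
test_eq(im.to_bytes_format()[:8], b'\x89PNG\r\n\x1a\n')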
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
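#A minimal sketch: the thumbnail fits inside the requested box and, being a copy, leaves `im` untouched.
_thumb = im.to_thumb(10)
assert _thumb.size[0]<=10 and _thumb.size[1]<=10
test_eq(im.size, (30,20))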
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
#export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
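#A minimal sketch: a float `c*h*w` tensor becomes an RGB PIL image with the matching (width,height).
test_eq(to_image(torch.zeros(3,8,6)).size, (6,8))
test_eq(to_image(torch.zeros(3,8,6)).mode, 'RGB')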
#export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
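#A minimal sketch: `create` also accepts raw bytes, so we can round-trip through `to_bytes_format`.
test_eq(PILImage.create(im.to_bytes_format()).size, im.size)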
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
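#A minimal sketch of the metadata stored at init, with hypothetical codes:
_amc = AddMaskCodes(codes=['background','road','car'])
test_eq(_amc.c, 3)
test_eq(_amc.vocab, ['background','road','car'])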
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
###Output
_____no_output_____
###Markdown
Test ```get_annotations``` on the coco_tiny dataset against both image filenames and bounding box labels.
###Code
coco = untar_data(URLs.COCO_TINY)
test_images, test_lbl_bbox = get_annotations(coco/'train.json')
annotations = json.load(open(coco/'train.json'))
categories, images, annots = map(lambda x:L(x),annotations.values())
test_eq(test_images, images.attrgot('file_name'))
def bbox_lbls(file_name):
img = images.filter(lambda img:img['file_name']==file_name)[0]
bbs = annots.filter(lambda a:a['image_id'] == img['id'])
i2o = {k['id']:k['name'] for k in categories}
lbls = [i2o[cat] for cat in bbs.attrgot('category_id')]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bbs.attrgot('bbox')]
return [bboxes, lbls]
for idx in random.sample(range(len(images)),5):
test_eq(test_lbl_bbox[idx], bbox_lbls(test_images[idx]))
# export
from matplotlib import patches, patheffects
# export
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return sz if self.sz is None else self.sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
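#A minimal sketch of the scaling itself, calling the dispatched encoders directly:
#encoding `pnt_img` (the 28x35 image created above) caches its size, so the centre point maps to (0,0).
_ps = PointScaler()
_ps.encodes(pnt_img)
test_close(_ps.encodes(TensorPoint.create([[14,17.5]])), tensor([[0.,0.]]))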
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 01a_losses.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 10b_tutorial.albumentations.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 18b_callback.preds.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted dev-setup.ipynb.
Converted index.ipynb.
Converted quick_start.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
#export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
#export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
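#A minimal round-trip sketch, assuming torch is imported as in the rest of this notebook:
#`to_image` then `image2tensor` should give back the original `c*h*w` shape.
test_eq(image2tensor(to_image(torch.zeros(3,8,6))).shape, (3,8,6))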
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
# export
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
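###Markdown
A quick worked example of the `max_px` branch in `resize_max` (a minimal sketch, assuming the cells above have been run and `math` is available from the notebook imports, as `resize_max` itself uses it): with `max_px=300`, the 30x20 test image (600 pixels) has both sides multiplied by `sqrt(300/600) ≈ 0.707`, giving a 21x14 image and hence the 294 pixels checked above.
###Code
# illustrative check of the max_px scaling used by resize_max
r = math.sqrt(300/im.n_px)                                            # ≈ 0.707 for the 30x20 test image
test_eq(im.resize_max(max_px=300).shape, (round(20*r), round(30*r)))  # rounds to (14, 21)
test_eq(round(20*r) * round(30*r), 294)                               # the 294 pixels tested above
###Output
_____no_output_____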
###Markdown
Basic types This section regroups the basic types used in vision with the transform that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
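###Markdown
`PILBase.create` dispatches on the type of `fn`, so the same entry point also works for raw bytes. A minimal sketch of that path, assuming the cells above have been run: serialize the test image with `to_bytes_format` and reopen it from the resulting bytes.
###Code
im_rgb = PILImage.create(TEST_IMAGE)
im_from_bytes = PILImage.create(im_rgb.to_bytes_format())   # PNG bytes back to a PILImage
test_eq(type(im_from_bytes), PILImage)
test_eq(im_from_bytes.shape, im_rgb.shape)
###Output
_____no_output_____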
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
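###Markdown
As noted above, `TensorPoint.create` also accepts a plain list of `[x,y]` pairs (a minimal sketch, assuming the cells above have been run); it is reshaped to `(n,2)` and cast to float exactly like the array version.
###Code
tpnts_list = TensorPoint.create([[0,0], [9,17]])   # list of lists instead of an array
test_eq(tpnts_list.shape, [2,2])
test_eq(tpnts_list.dtype, torch.float32)
###Output
_____no_output_____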
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
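###Markdown
To make the structure concrete, here is a minimal sketch (assuming the cells above have been run) that builds a `LabeledBBox` by hand from two hypothetical boxes given as `[x1, y1, x2, y2]` lists plus their labels, instead of reading them from the COCO annotations.
###Code
raw_bbs  = [[10, 20, 60, 80], [0, 0, 30, 30]]   # hypothetical boxes as x1,y1,x2,y2
raw_lbls = ['cat', 'dog']                       # hypothetical labels
manual = LabeledBBox(TensorBBox.create(raw_bbs), raw_lbls)
test_eq(manual.bbox.shape, [2,4])
test_eq(manual.lbl, raw_lbls)
###Output
_____no_output_____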
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
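###Markdown
A quick numeric check of the scaling used by `PointScaler` (a minimal sketch, assuming the cells above have been run): for a hypothetical image of size `(w,h) = (28,35)`, the point `(9,17)` is mapped to `(9*2/28-1, 17*2/35-1)` by `_scale_pnts`, and `_unscale_pnts` inverts it exactly.
###Code
demo_sz = (28, 35)                                    # hypothetical (width, height)
scaled = _scale_pnts(tensor([[9.,17.]]), demo_sz)
test_close(scaled, tensor([[9*2/28-1, 17*2/35-1]]))
test_close(_unscale_pnts(scaled, demo_sz), tensor([[9.,17.]]))
###Output
_____no_output_____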
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `sz=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.transfer_learning.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.ulmfit.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.learner.ipynb.
Converted 43_tabular.model.ipynb.
Converted 45_collab.ipynb.
Converted 50_datablock_examples.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 97_test_utils.ipynb.
Converted index.ipynb.
Converted migrating.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transform that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
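###Markdown
`PILBase.create` also accepts a numpy array directly (a minimal sketch, assuming the cells above have been run and numpy is available as `np` from the notebook imports): a small random `uint8` array becomes a `PILImage` via `Image.fromarray`, without touching the filesystem.
###Code
arr = (np.random.rand(20,30,3)*255).astype(np.uint8)   # hypothetical 30x20 RGB array
im_arr = PILImage.create(arr)
test_eq(type(im_arr), PILImage)
test_eq(im_arr.shape, (20,30))
###Output
_____no_output_____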
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
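###Markdown
`TensorPoint.create` can also record the size of the image the points refer to via `img_size`, which is the metadata `PointScaler` reads back later with `get_meta` (a minimal sketch with a hypothetical point and image size, assuming the cells above have been run).
###Code
tp = TensorPoint.create([[9,17]], img_size=(28,35))   # hypothetical point and image size
test_eq(tp.get_meta('img_size'), (28,35))
###Output
_____no_output_____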
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
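###Markdown
`get_annotations` can also prepend a `prefix` to every filename, which is handy when you want paths relative to an image folder. A minimal sketch, assuming the cells above have been run ('img/' is just an illustrative prefix).
###Code
images_pre, _ = get_annotations(coco/'train.json', prefix='img/')
test_eq(images_pre, ['img/' + fn for fn in images])   # same order, prefixed filenames
###Output
_____no_output_____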
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
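###Markdown
`_draw_rect` can draw either `(x,y,w,h)` boxes with the default `hw=True` or corner boxes with `hw=False` (which is what `TensorBBox.show` uses). A minimal sketch, assuming the cells above have been run, drawing the same hypothetical rectangle both ways on the COCO image.
###Code
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
_draw_rect(ctx, [10, 10, 50, 40], hw=True,  color='yellow', text='x,y,w,h')   # width/height form
_draw_rect(ctx, [10, 10, 60, 50], hw=False, color='red');                     # same box given as corners
###Output
_____no_output_____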
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
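###Markdown
Because the `encodes` added to `ToTensor` above are registered for PIL types only, applying it to something that is already a tensor should leave it alone, which is one way to see why PIL-based augmentation has to happen before this transform (a minimal sketch, assuming the cells above have been run).
###Code
t  = ToTensor()(mnist_img)
t2 = ToTensor()(t)            # no encodes is registered for tensor types, so nothing changes
test_eq(type(t2), TensorImageBW)
test_eq(t2.shape, (1,28,28))
###Output
_____no_output_____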
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = DataSource([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `sz=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = DataSource([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.transfer_learning.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.ulmfit.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.learner.ipynb.
Converted 43_tabular.model.ipynb.
Converted 45_collab.ipynb.
Converted 50_datablock_examples.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 97_test_utils.ipynb.
Converted index.ipynb.
Converted migrating.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transform that create objects of those types.
###Code
#export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
#export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
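###Markdown
`to_image` goes in the opposite direction of `image2tensor`: it takes a `c*h*w` tensor (or array) and rebuilds a PIL image. A minimal sketch, assuming the cells above have been run, round-tripping the `timg` tensor created above.
###Code
rt = to_image(timg)
test_eq(rt.size, (1200,803))
test_eq(np.array(rt), np.array(PILImage.create(TEST_IMAGE)))   # lossless round trip
###Output
_____no_output_____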
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
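###Markdown
`AddMaskCodes` only acts on the decode side: it attaches the list of class codes to a decoded `TensorMask` so downstream code can recover the vocabulary. A minimal sketch with hypothetical codes, assuming the cells above have been run.
###Code
demo_codes = ['background', 'building', 'road']     # hypothetical vocabulary
amc = AddMaskCodes(demo_codes)
test_eq(amc.vocab, demo_codes)
test_eq(amc.c, 3)
decoded = amc.decode(TensorMask(image2tensor(mask)[0]))
test_eq(decoded.get_meta('codes'), demo_codes)
###Output
_____no_output_____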
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
###Output
_____no_output_____
###Markdown
Test `get_annotations` on the coco_tiny dataset against both image filenames and bounding box labels.
###Code
coco = untar_data(URLs.COCO_TINY)
test_images, test_lbl_bbox = get_annotations(coco/'train.json')
annotations = json.load(open(coco/'train.json'))
categories, images, annots = map(lambda x:L(x),annotations.values())
test_eq(test_images, images.attrgot('file_name'))
def bbox_lbls(file_name):
img = images.filter(lambda img:img['file_name']==file_name)[0]
bbs = annots.filter(lambda a:a['image_id'] == img['id'])
i2o = {k['id']:k['name'] for k in categories}
lbls = [i2o[cat] for cat in bbs.attrgot('category_id')]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bbs.attrgot('bbox')]
return [bboxes, lbls]
for idx in random.sample(range(len(images)),5):
test_eq(test_lbl_bbox[idx], bbox_lbls(test_images[idx]))
# export
from matplotlib import patches, patheffects
# export
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return sz if self.sz is None else self.sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
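###Markdown
A quick check of the `y_first` flip in `_scale_pnts` (a minimal sketch, assuming the cells above have been run): the same point given as `(row, col) = (17, 9)` with `y_first=True` scales to the same coordinates as `(x, y) = (9, 17)` without the flip.
###Code
demo_sz = (28, 35)   # hypothetical (width, height)
test_close(_scale_pnts(tensor([[17.,9.]]), demo_sz, y_first=True),
           _scale_pnts(tensor([[9.,17.]]), demo_sz))
###Output
_____no_output_____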
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in an y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `sz=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 01a_losses.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 10b_tutorial.albumentations.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 18b_callback.preds.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted dev-setup.ipynb.
Converted index.ipynb.
Converted quick_start.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
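# Illustrative sketch (not in the original notebook): `create` views any flat sequence of
# x,y values as an (n,2) float tensor, so a flat list works just as well as a list of pairs.
flat_pnts = TensorPoint.create([0,0, 0,35, 9,17])
test_eq(flat_pnts.shape, [3,2])
test_eq(flat_pnts.dtype, torch.float32)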
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
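# Quick check (added for illustration): `add_props` above exposes the two halves of a
# `LabeledBBox` as `.bbox` and `.lbl`, so the labels and the (n,4) box tensor stay paired.
test_eq(tbbox.lbl, bbox[1])
test_eq(tbbox.bbox.shape, (len(bbox[1]), 4))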
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
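# Minimal sketch of the ordering rule above: do PIL-level work first, then ToTensor.
# PIL's resize returns a plain Image, so we re-wrap it (same trick as `_pnt_open` later).
small = PILImageBW(mnist_img.reshape(14,14))
test_eq(type(tfm(small)), TensorImageBW)
test_eq(tfm(small).shape, (1,14,14))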
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they follow our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
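# Numeric sanity check (added for illustration) of the helpers above, using the (9,17)
# point on the 28x35 image from the MNIST example: scaling maps it into (-1,1) coords
# and unscaling inverts it exactly.
_pt = TensorPoint.create([[9,17]])
_scaled = _scale_pnts(_pt, (28,35))
test_close(_scaled, tensor([[9/14-1, 17/17.5-1]]))
test_close(_unscale_pnts(_scaled, (28,35)), tensor([[9.,17.]]))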
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they follow our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the size of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to come before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted index.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
#export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
#export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
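# Side note in code (illustrative): if your raw points were stored row-first (y,x), they are
# simply the flip of the array above; `PointScaler(y_first=True)` (defined later in this
# notebook) applies this flip for you before scaling.
yx_pnts = tensor(pnts).flip(1)
test_eq(yx_pnts[:,0], tensor(pnts)[:,1])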
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
###Output
_____no_output_____
###Markdown
Test ```get_annotations``` on the coco_tiny dataset against both image filenames and bounding box labels.
###Code
coco = untar_data(URLs.COCO_TINY)
test_images, test_lbl_bbox = get_annotations(coco/'train.json')
annotations = json.load(open(coco/'train.json'))
categories, images, annots = map(lambda x:L(x),annotations.values())
test_eq(test_images, images.attrgot('file_name'))
def bbox_lbls(file_name):
img = images.filter(lambda img:img['file_name']==file_name)[0]
bbs = annots.filter(lambda a:a['image_id'] == img['id'])
i2o = {k['id']:k['name'] for k in categories}
lbls = [i2o[cat] for cat in bbs.attrgot('category_id')]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bbs.attrgot('bbox')]
return [bboxes, lbls]
for idx in random.sample(range(len(images)),5):
test_eq(test_lbl_bbox[idx], bbox_lbls(test_images[idx]))
# export
from matplotlib import patches, patheffects
# export
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
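# Illustrative only (the box values here are made up): like `TensorPoint.create`,
# `TensorBBox.create` views its input as (n,4) floats; `PointScaler` later treats each
# row as two corner points via `view(-1,2)`.
_box = TensorBBox.create([10,20,30,60])
test_eq(_box.shape, [1,4])
test_eq(_box.view(-1,2).shape, [2,2])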
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
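# Same pipelining sketch for a colour image (sizes follow the CAMVID_TINY checks above:
# a 128x96 PIL image becomes a 3x96x128 tensor in c*h*w order).
pipe_cam = Pipeline([PILImage.create, ToTensor()])
test_eq(pipe_cam(cam_fn).shape, (3,96,128))
test_eq(type(pipe_cam(cam_fn)), TensorImage)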
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they follow our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return sz if self.sz is None else self.sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
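# Hedged sketch: with `y_first=True` the helper flips (row,col) input to our x,y convention
# before scaling; this is the same (9,17) point as in the MNIST example, given here in
# row-first order.
_rc = TensorPoint.create([[17,9]])
test_close(_scale_pnts(_rc, (28,35), y_first=True), tensor([[9/14-1, 17/17.5-1]]))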
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they follow our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the size of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to come before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 01a_losses.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 10b_tutorial.albumentations.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 18b_callback.preds.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted dev-setup.ipynb.
Converted index.ipynb.
Converted quick_start.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
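#Quick check of the convention described above (illustrative values):
#`TensorBBox.create` gives an (n,4) float tensor in x1,y1,x2,y2 order.
_demo_box = TensorBBox.create([[10, 20, 40, 60]])
test_eq(_demo_box.shape, [1,4])
test_eq(_demo_box.dtype, torch.float32)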
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
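#Worked example of the mapping above (illustrative values): the centre of a
#28x35 image, (14, 17.5), maps to (0, 0) in (-1,1) coordinates, and
#_unscale_pnts inverts it.
test_close(_scale_pnts(tensor([[14., 17.5]]), (28,35)), tensor([[0., 0.]]))
test_close(_unscale_pnts(tensor([[0., 0.]]), (28,35)), tensor([[14., 17.5]]))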
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
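#Sanity check of the idea above (illustrative values): scaling a box is the same
#as scaling its two corner points, since an (n,4) box is viewed as (-1,2) points.
_box = TensorBBox.create([[0., 0., 128., 128.]])
test_close(_scale_pnts(_box.view(-1,2), (128,128)).view(-1,4), tensor([[-1., -1., 1., 1.]]))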
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted index.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
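#Illustrative round-trip (uses `im` defined above; `io` imported here to be
#self-contained): the bytes from `to_bytes_format` can be reopened with PIL.
import io
test_eq(Image.open(io.BytesIO(im.to_bytes_format())).size, im.size)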
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
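#Quick illustration of the `c*h*w` order in the docstring (assumes the test
#image is RGB, as asserted later): the 30x20 `im` from the Helpers section
#becomes a (3,20,30) byte tensor.
test_eq(image2tensor(im).shape, (3,20,30))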
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
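#Illustrative use with hypothetical codes: when `codes` is given, the transform
#also exposes them as `vocab` and their count as `c`.
_amc = AddMaskCodes(codes=['background', 'road', 'car'])
test_eq(_amc.vocab, ['background', 'road', 'car'])
test_eq(_amc.c, 3)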
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
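#Quick check of the `bbox`/`lbl` accessors created with `add_props` above,
#using the `tbbox` just built: one label per bounding box.
test_eq(type(tbbox.bbox), TensorBBox)
test_eq(len(tbbox.bbox), len(tbbox.lbl))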
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
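#ToTensor only has `encodes` registered for PIL types, so other types should
#pass through unchanged (illustrative check of the type-dispatch behaviour).
test_eq(ToTensor()(1), 1)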
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
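#Illustrative check of `y_first` (assuming data given as (row, col)): the flip
#swaps the two columns before scaling, so (17.5, 14) in a 28x35 image lands on
#the centre just like (x=14, y=17.5) would.
test_close(_scale_pnts(tensor([[17.5, 14.]]), (28,35), y_first=True), tensor([[0., 0.]]))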
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted index.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to height/width, with the following convention: top, left, bottom, right.> Note: We use the same convention as for points with y axis being before x.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
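#Illustrative check: a 2-rank (h,w) array first gains a channel axis, ending up
#as a (1,h,w) tensor, which is what single-channel masks rely on.
test_eq(image2tensor(np.zeros((4,5), dtype=np.uint8)).shape, (1,4,5))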
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = DataSource([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
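#With `do_scale=False` the coordinates pass through unchanged (illustrative).
test_eq(_scale_pnts(tensor([[3., 4.]]), (28,35), do_scale=False), tensor([[3., 4.]]))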
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds it in them. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = DataSource([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = DataSource([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.transfer_learning.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.ulmfit.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.learner.ipynb.
Converted 43_tabular.model.ipynb.
Converted 45_collab.ipynb.
Converted 50_datablock_examples.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 97_test_utils.ipynb.
Converted index.ipynb.
Converted migrating.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch(as_prop=True)
def size(x:Image.Image): return fastuple(_old_sz(x))
Image._patched = True
#export
@patch(as_prop=True)
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px` > `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch(as_prop=True)
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch(as_prop=True)
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = fastuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
#export
def to_image(x):
"Convert a tensor or array to a PIL int8 Image"
if isinstance(x,Image.Image): return x
if isinstance(x,Tensor): x = to_np(x.permute((1,2,0)))
if x.dtype==np.float32: x = (x*255).astype(np.uint8)
return Image.fromarray(x, mode=['RGB','CMYK'][x.shape[0]==4])
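# Illustration: a float32 CHW tensor such as to_image(torch.rand(3,16,16)) comes out as a 16x16 RGB
# PIL image, with the [0,1] values rescaled to uint8.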
#export
def load_image(fn, mode=None):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
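# Illustration: on the 30x20 RGB `im` opened above, image2tensor(im) returns a (3,20,30) uint8 tensor,
# i.e. channels first and (height,width) in the same order as numpy arrays and pytorch tensors.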
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn, TensorMask): fn = fn.type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o.codes=self.codes
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
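# Each point above is (x,y), i.e. (column,row): [9,17] is 9 px from the left and 17 px from the top of
# the 28x35 image, and [28,35] is its bottom-right corner.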
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
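# e.g. a COCO bbox [x,y,w,h] = [10, 20, 30, 40] is stored above as [x1,y1,x2,y2] = [10, 20, 40, 60]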
###Output
_____no_output_____
###Markdown
Test ```get_annotations``` on the coco_tiny dataset against both image filenames and bounding box labels.
###Code
coco = untar_data(URLs.COCO_TINY)
test_images, test_lbl_bbox = get_annotations(coco/'train.json')
annotations = json.load(open(coco/'train.json'))
categories, images, annots = map(lambda x:L(x),annotations.values())
test_eq(test_images, images.attrgot('file_name'))
def bbox_lbls(file_name):
img = images.filter(lambda img:img['file_name']==file_name)[0]
bbs = annots.filter(lambda a:a['image_id'] == img['id'])
i2o = {k['id']:k['name'] for k in categories}
lbls = [i2o[cat] for cat in bbs.attrgot('category_id')]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bbs.attrgot('bbox')]
return [bboxes, lbls]
for idx in random.sample(range(len(images)),5):
test_eq(test_lbl_bbox[idx], bbox_lbls(test_images[idx]))
# export
from matplotlib import patches, patheffects
# export
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `PointScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
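# In a dataloader this ordering is expressed through the item transforms, e.g. (a sketch, assuming
# fastai's `Resize` item transform is available here): after_item=[Resize(28), ToTensor()]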
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
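# Worked example of the scaling above: for an image of size (w,h)=(28,35), the point (9,17) maps to
# (9*2/28 - 1, 17*2/35 - 1) ≈ (-0.357, -0.029); _unscale_pnts inverts that mapping exactly.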
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.img_size
assert sz is not None or self.sz is not None, "Size could not be inferred, pass to init with `img_size=...`"
return sz if self.sz is None else self.sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds them in those points. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.img_size, (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
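# A box (x1,y1,x2,y2) is rescaled by viewing it as two (x,y) points and reusing the TensorPoint logic above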
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.img_size, x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
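#The image is 128x128, so mapping to (-1,1) coords is coord*2/128 - 1 = coord/64 - 1, hence the /64 below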
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.img_size, (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.img_size, (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 01a_losses.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 10b_tutorial.albumentations.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 18b_callback.preds.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted dev-setup.ipynb.
Converted index.ipynb.
Converted quick_start.ipynb.
Converted tutorial.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
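# Illustration: load_image(fn, mode='L') returns a fully loaded grayscale copy, so the underlying file
# handle is no longer needed once this call returns.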
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
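#What follows cross-checks get_annotations against a manual parse of the raw json: for 5 random images
#it rebuilds the (bboxes, labels) pair straight from `annots` and compares it to lbl_bbox.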
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to height/width, with the following convention: top, left, bottom, right.> Note: We use the same convention as for points with the y axis coming before the x axis.
###Code
# export
class LabeledBBox(Tuple):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
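        # The img_size stored on the tensor wins; self.sz (grabbed from the last image seen) is only a fallback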
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds them in those points. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.transfer_learning.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.ulmfit.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 45_collab.ipynb.
Converted 50_datablock_examples.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 97_test_utils.ipynb.
Converted index.ipynb.
###Markdown
Core vision> Basic image opening/processing functionality Helpers
###Code
#export
imagenet_stats = ([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
cifar_stats = ([0.491, 0.482, 0.447], [0.247, 0.243, 0.261])
mnist_stats = ([0.131], [0.308])
im = Image.open(TEST_IMAGE).resize((30,20))
#export
if not hasattr(Image,'_patched'):
_old_sz = Image.Image.size.fget
@patch_property
def size(x:Image.Image): return Tuple(_old_sz(x))
Image._patched = True
#export
@patch_property
def n_px(x: Image.Image): return x.size[0] * x.size[1]
###Output
_____no_output_____
###Markdown
`Image.n_px`> `Image.n_px` (property)Number of pixels in image
###Code
test_eq(im.n_px, 30*20)
#export
@patch_property
def shape(x: Image.Image): return x.size[1],x.size[0]
###Output
_____no_output_____
###Markdown
`Image.shape`> `Image.shape` (property)Image (height,width) tuple (NB: opposite order of `Image.size()`, same order as numpy array and pytorch tensor)
###Code
test_eq(im.shape, (20,30))
#export
@patch_property
def aspect(x: Image.Image): return x.size[0]/x.size[1]
###Output
_____no_output_____
###Markdown
`Image.aspect`> `Image.aspect` (property)Aspect ratio of the image, i.e. `width/height`
###Code
test_eq(im.aspect, 30/20)
#export
@patch
def reshape(x: Image.Image, h, w, resample=0):
"`resize` `x` to `(w,h)`"
return x.resize((w,h), resample=resample)
show_doc(Image.Image.reshape)
test_eq(im.reshape(12,10).shape, (12,10))
#export
@patch
def to_bytes_format(im:Image.Image, format='png'):
"Convert to bytes, default to PNG format"
arr = io.BytesIO()
im.save(arr, format=format)
return arr.getvalue()
show_doc(Image.Image.to_bytes_format)
#export
@patch
def to_thumb(self:Image.Image, h, w=None):
"Same as `thumbnail`, but uses a copy"
if w is None: w=h
im = self.copy()
im.thumbnail((w,h))
return im
show_doc(Image.Image.to_thumb)
#export
@patch
def resize_max(x: Image.Image, resample=0, max_px=None, max_h=None, max_w=None):
"`resize` `x` to `max_px`, or `max_h`, or `max_w`"
h,w = x.shape
if max_px and x.n_px>max_px: h,w = Tuple(h,w).mul(math.sqrt(max_px/x.n_px))
if max_h and h>max_h: h,w = (max_h ,max_h*w/h)
if max_w and w>max_w: h,w = (max_w*h/w,max_w )
return x.reshape(round(h), round(w), resample=resample)
test_eq(im.resize_max(max_px=20*30).shape, (20,30))
test_eq(im.resize_max(max_px=300).n_px, 294)
test_eq(im.resize_max(max_px=500, max_h=10, max_w=20).shape, (10,15))
test_eq(im.resize_max(max_h=14, max_w=15).shape, (10,15))
test_eq(im.resize_max(max_px=300, max_h=10, max_w=25).shape, (10,15))
show_doc(Image.Image.resize_max)
###Output
_____no_output_____
###Markdown
Basic types This section regroups the basic types used in vision with the transforms that create objects of those types.
###Code
# TODO: docs
#export
def load_image(fn, mode=None, **kwargs):
"Open and load a `PIL.Image` and convert to `mode`"
im = Image.open(fn, **kwargs)
im.load()
im = im._new(im.im)
return im.convert(mode) if mode else im
# export
def image2tensor(img):
"Transform image to byte tensor in `c*h*w` dim order."
res = tensor(img)
if res.dim()==2: res = res.unsqueeze(-1)
return res.permute(2,0,1)
#export
class PILBase(Image.Image, metaclass=BypassNewMeta):
_bypass_type=Image.Image
_show_args = {'cmap':'viridis'}
_open_args = {'mode': 'RGB'}
@classmethod
def create(cls, fn:(Path,str,Tensor,ndarray,bytes), **kwargs)->None:
"Open an `Image` from path `fn`"
if isinstance(fn,TensorImage): fn = fn.permute(1,2,0).type(torch.uint8)
if isinstance(fn,Tensor): fn = fn.numpy()
if isinstance(fn,ndarray): return cls(Image.fromarray(fn))
if isinstance(fn,bytes): fn = io.BytesIO(fn)
return cls(load_image(fn, **merge(cls._open_args, kwargs)))
def show(self, ctx=None, **kwargs):
"Show image using `merge(self._show_args, kwargs)`"
return show_image(self, ctx=ctx, **merge(self._show_args, kwargs))
def __repr__(self): return f'{self.__class__.__name__} mode={self.mode} size={"x".join([str(d) for d in self.size])}'
#export
class PILImage(PILBase): pass
#export
class PILImageBW(PILImage): _show_args,_open_args = {'cmap':'Greys'},{'mode': 'L'}
im = PILImage.create(TEST_IMAGE)
test_eq(type(im), PILImage)
test_eq(im.mode, 'RGB')
test_eq(str(im), 'PILImage mode=RGB size=1200x803')
im.resize((64,64))
ax = im.show(figsize=(1,1))
test_fig_exists(ax)
timg = TensorImage(image2tensor(im))
tpil = PILImage.create(timg)
tpil.resize((64,64))
#hide
test_eq(np.array(im), np.array(tpil))
#export
class PILMask(PILBase): _open_args,_show_args = {'mode':'L'},{'alpha':0.5, 'cmap':'tab20'}
im = PILMask.create(TEST_IMAGE)
test_eq(type(im), PILMask)
test_eq(im.mode, 'L')
test_eq(str(im), 'PILMask mode=L size=1200x803')
#export
OpenMask = Transform(PILMask.create)
OpenMask.loss_func = CrossEntropyLossFlat(axis=1)
PILMask.create = OpenMask
###Output
_____no_output_____
###Markdown
Images
###Code
mnist = untar_data(URLs.MNIST_TINY)
fns = get_image_files(mnist)
mnist_fn = TEST_IMAGE_BW
timg = Transform(PILImageBW.create)
mnist_img = timg(mnist_fn)
test_eq(mnist_img.size, (28,28))
assert isinstance(mnist_img, PILImageBW)
mnist_img
###Output
_____no_output_____
###Markdown
Segmentation masks
###Code
#export
class AddMaskCodes(Transform):
"Add the code metadata to a `TensorMask`"
def __init__(self, codes=None):
self.codes = codes
if codes is not None: self.vocab,self.c = codes,len(codes)
def decodes(self, o:TensorMask):
if self.codes is not None: o._meta = {'codes': self.codes}
return o
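# Note: this revision attaches the codes through the tensor's `_meta` dict when decoding; the newer
# revision earlier in this document sets `o.codes` directly instead.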
camvid = untar_data(URLs.CAMVID_TINY)
fns = get_image_files(camvid/'images')
cam_fn = fns[0]
mask_fn = camvid/'labels'/f'{cam_fn.stem}_P{cam_fn.suffix}'
cam_img = PILImage.create(cam_fn)
test_eq(cam_img.size, (128,96))
tmask = Transform(PILMask.create)
mask = tmask(mask_fn)
test_eq(type(mask), PILMask)
test_eq(mask.size, (128,96))
_,axs = plt.subplots(1,3, figsize=(12,3))
cam_img.show(ctx=axs[0], title='image')
mask.show(alpha=1, ctx=axs[1], vmin=1, vmax=30, title='mask')
cam_img.show(ctx=axs[2], title='superimposed')
mask.show(ctx=axs[2], vmin=1, vmax=30);
###Output
_____no_output_____
###Markdown
Points
###Code
# export
class TensorPoint(TensorBase):
"Basic type for points in an image"
_show_args = dict(s=10, marker='.', c='r')
@classmethod
def create(cls, t, img_size=None)->None:
"Convert an array or a list of points `t` to a `Tensor`"
return cls(tensor(t).view(-1, 2).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
if 'figsize' in kwargs: del kwargs['figsize']
x = self.view(-1,2)
ctx.scatter(x[:, 0], x[:, 1], **{**self._show_args, **kwargs})
return ctx
#export
TensorPointCreate = Transform(TensorPoint.create)
TensorPointCreate.loss_func = MSELossFlat()
TensorPoint.create = TensorPointCreate
###Output
_____no_output_____
###Markdown
Points are expected to come as an array/tensor of shape `(n,2)` or as a list of lists with two elements. Unless you change the defaults in `PointScaler` (see later on), coordinates should go from 0 to width/height, with the first one being the column index (so from 0 to width) and the second one being the row index (so from 0 to height).> Note: This is different from the usual indexing convention for arrays in numpy or in PyTorch, but it's the way points are expected by matplotlib or the internal functions in PyTorch like `F.grid_sample`.
###Code
pnt_img = TensorImage(mnist_img.resize((28,35)))
pnts = np.array([[0,0], [0,35], [28,0], [28,35], [9, 17]])
tfm = Transform(TensorPoint.create)
tpnts = tfm(pnts)
test_eq(tpnts.shape, [5,2])
test_eq(tpnts.dtype, torch.float32)
ctx = pnt_img.show(figsize=(1,1), cmap='Greys')
tpnts.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Bounding boxes
###Code
# export
def get_annotations(fname, prefix=None):
"Open a COCO style json in `fname` and returns the lists of filenames (with maybe `prefix`) and labelled bboxes."
annot_dict = json.load(open(fname))
id2images, id2bboxes, id2cats = {}, collections.defaultdict(list), collections.defaultdict(list)
classes = {o['id']:o['name'] for o in annot_dict['categories']}
for o in annot_dict['annotations']:
bb = o['bbox']
id2bboxes[o['image_id']].append([bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]])
id2cats[o['image_id']].append(classes[o['category_id']])
id2images = {o['id']:ifnone(prefix, '') + o['file_name'] for o in annot_dict['images'] if o['id'] in id2bboxes}
ids = list(id2images.keys())
return [id2images[k] for k in ids], [(id2bboxes[k], id2cats[k]) for k in ids]
#hide
#TODO explain and/or simplify this
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
annots = json.load(open(coco/'train.json'))
test_eq(images, [k['file_name'] for k in annots['images']])
for _ in range(5):
idx = random.randint(0, len(images)-1)
fn = images[idx]
i = 0
while annots['images'][i]['file_name'] != fn: i+=1
img_id = annots['images'][i]['id']
bbs = [ann for ann in annots['annotations'] if ann['image_id'] == img_id]
i2o = {k['id']:k['name'] for k in annots['categories']}
lbls = [i2o[bb['category_id']] for bb in bbs]
bboxes = [bb['bbox'] for bb in bbs]
bboxes = [[bb[0],bb[1], bb[0]+bb[2], bb[1]+bb[3]] for bb in bboxes]
test_eq(lbl_bbox[idx], [bboxes, lbls])
# export
from matplotlib import patches, patheffects
def _draw_outline(o, lw):
o.set_path_effects([patheffects.Stroke(linewidth=lw, foreground='black'), patheffects.Normal()])
def _draw_rect(ax, b, color='white', text=None, text_size=14, hw=True, rev=False):
lx,ly,w,h = b
if rev: lx,ly,w,h = ly,lx,h,w
if not hw: w,h = w-lx,h-ly
patch = ax.add_patch(patches.Rectangle((lx,ly), w, h, fill=False, edgecolor=color, lw=2))
_draw_outline(patch, 4)
if text is not None:
patch = ax.text(lx,ly, text, verticalalignment='top', color=color, fontsize=text_size, weight='bold')
_draw_outline(patch,1)
# export
class TensorBBox(TensorPoint):
"Basic type for a tensor of bounding boxes in an image"
@classmethod
def create(cls, x, img_size=None)->None: return cls(tensor(x).view(-1, 4).float(), img_size=img_size)
def show(self, ctx=None, **kwargs):
x = self.view(-1,4)
for b in x: _draw_rect(ctx, b, hw=False, **kwargs)
return ctx
###Output
_____no_output_____
###Markdown
Bounding boxes are expected to come as a tuple with an array/tensor of shape `(n,4)` or as a list of lists with four elements and a list of corresponding labels. Unless you change the defaults in `BBoxScaler` (see later on), coordinates for each bounding box should go from 0 to width/height, with the following convention: x1, y1, x2, y2 where (x1,y1) is your top-left corner and (x2,y2) is your bottom-right corner.> Note: We use the same convention as for points with x going from 0 to width and y going from 0 to height.
###Code
# export
class LabeledBBox(L):
"Basic type for a list of bounding boxes in an image"
def show(self, ctx=None, **kwargs):
for b,l in zip(self.bbox, self.lbl):
if l != '#na#': ctx = retain_type(b, self.bbox).show(ctx=ctx, text=l)
return ctx
bbox,lbl = add_props(lambda i,self: self[i])
coco = untar_data(URLs.COCO_TINY)
images, lbl_bbox = get_annotations(coco/'train.json')
idx=2
coco_fn,bbox = coco/'train'/images[idx],lbl_bbox[idx]
coco_img = timg(coco_fn)
tbbox = LabeledBBox(TensorBBox(bbox[0]), bbox[1])
ctx = coco_img.show(figsize=(3,3), cmap='Greys')
tbbox.show(ctx=ctx);
###Output
_____no_output_____
###Markdown
Basic Transforms Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way that will work across applications is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, when you get to `PointScaler` or `BBoxScaler` (which are tuple transforms) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
###Markdown
Any data augmentation transform that runs on PIL Images must be run before this transform.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return self.sz if sz is None else sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
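# Note: _grab_sz records (width,height), i.e. (shape[-1], shape[-2]) for tensors, matching PIL's `.size` ordering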
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 bottom or right), which will be done unless you pass `do_scale=False`. We also need to make sure they are following our convention of points being x,y coordinates, so pass along `y_first=True` if you have your data in a y,x format to add a flip.> Note: This transform automatically grabs the sizes of the images it sees before a TensorPoint object and embeds them in those points. For this to work, those images need to be before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a TensorPoint by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def before_call(self): self.bbox,self.lbls = None,None
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
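###Markdown
If your tuples don't contain an image before the points, the note above says to embed the image size yourself when creating the `TensorPoint`; a minimal sketch of that, using the constructor the same way `_scale_pnts` does:
###Code
sketch_pnt = TensorPoint(tensor([[9., 17.]]), img_size=(28, 35))
sketch_pnt.get_meta('img_size')
###Output
_____no_output_____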
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 13a_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.transfer_learning.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.ulmfit.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 45_collab.ipynb.
Converted 50_datablock_examples.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted index.ipynb.
###Markdown
Basic Transforms
Unless specifically mentioned, all the following transforms can be used as single-item transforms (in one of the lists in the `tfms` you pass to a `TfmdDS` or a `Datasource`) or as tuple transforms (in the `tuple_tfms` you pass to a `TfmdDS` or a `Datasource`). The safest way, which will work across applications, is to always use them as `tuple_tfms`. For instance, if you have points or bounding boxes as targets and use `Resize` as a single-item transform, then when you get to `PointScaler` (which is a tuple transform) you won't have the correct size of the image to properly scale your points.
###Code
# export
PILImage ._tensor_cls = TensorImage
PILImageBW._tensor_cls = TensorImageBW
PILMask ._tensor_cls = TensorMask
#export
@ToTensor
def encodes(self, o:PILBase): return o._tensor_cls(image2tensor(o))
@ToTensor
def encodes(self, o:PILMask): return o._tensor_cls(image2tensor(o)[0])
###Output
_____no_output_____
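###Markdown
As mentioned at the start of this section, the safest option is to use these as tuple transforms; concretely, that means passing them to `after_item` of the dataloader so they receive the whole (image, points) tuple. A minimal sketch, reusing `mnist_fn`, `_pnt_open` and `_pnt_lbl` defined earlier in this notebook:
###Code
sketch_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
sketch_tdl = TfmdDL(sketch_tds, bs=1, after_item=[PointScaler(), ToTensor()])
xb, yb = sketch_tdl.one_batch()
yb.get_meta('img_size')   # the points carry the size of the image they came with
###Output
_____no_output_____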
###Markdown
Any data augmentation transform that runs on PIL Images must be run before `ToTensor`.
###Code
tfm = ToTensor()
print(tfm)
print(type(mnist_img))
print(type(tfm(mnist_img)))
tfm = ToTensor()
test_eq(tfm(mnist_img).shape, (1,28,28))
test_eq(type(tfm(mnist_img)), TensorImageBW)
test_eq(tfm(mask).shape, (96,128))
test_eq(type(tfm(mask)), TensorMask)
###Output
_____no_output_____
###Markdown
Let's confirm we can pipeline this with `PILImage.create`.
###Code
pipe_img = Pipeline([PILImageBW.create, ToTensor()])
img = pipe_img(mnist_fn)
test_eq(type(img), TensorImageBW)
pipe_img.show(img, figsize=(1,1));
def _cam_lbl(x): return mask_fn
cam_tds = Datasets([cam_fn], [[PILImage.create, ToTensor()], [_cam_lbl, PILMask.create, ToTensor()]])
show_at(cam_tds, 0);
###Output
_____no_output_____
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 being bottom or right), which is done unless you pass `do_scale=False`. We also need to make sure they follow our convention of points being x,y coordinates, so pass `y_first=True` if your data is in y,x format to add a flip.
> Warning: This transform needs to run on the tuple level, before any transform that changes the image size.
###Code
#export
def _scale_pnts(y, sz, do_scale=True, y_first=False):
if y_first: y = y.flip(1)
res = y * 2/tensor(sz).float() - 1 if do_scale else y
return TensorPoint(res, img_size=sz)
def _unscale_pnts(y, sz): return TensorPoint((y+1) * tensor(sz).float()/2, img_size=sz)
#export
class PointScaler(Transform):
"Scale a tensor representing points"
order = 1
def __init__(self, do_scale=True, y_first=False): self.do_scale,self.y_first = do_scale,y_first
def _grab_sz(self, x):
self.sz = [x.shape[-1], x.shape[-2]] if isinstance(x, Tensor) else x.size
return x
def _get_sz(self, x):
sz = x.get_meta('img_size')
assert sz is not None or self.sz is not None, "Size could not be inferred, pass it in the init of your TensorPoint with `img_size=...`"
return sz if self.sz is None else self.sz
def setups(self, dl):
its = dl.do_item(0)
for t in its:
if isinstance(t, TensorPoint): self.c = t.numel()
def encodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def decodes(self, x:(PILBase,TensorImageBase)): return self._grab_sz(x)
def encodes(self, x:TensorPoint): return _scale_pnts(x, self._get_sz(x), self.do_scale, self.y_first)
def decodes(self, x:TensorPoint): return _unscale_pnts(x.view(-1, 2), self._get_sz(x))
###Output
_____no_output_____
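###Markdown
For instance, here is a minimal sketch of what `y_first=True` does before the scaling above: the coordinates are simply flipped from (y,x) to (x,y) order:
###Code
import torch

sz = torch.tensor([28, 35]).float()   # hypothetical image size (w, h)
pnt_yx = torch.tensor([[17., 9.]])    # the same point, but given as (y, x)

pnt_xy = pnt_yx.flip(1)               # what y_first=True does in _scale_pnts
scaled = pnt_xy * 2 / sz - 1          # then the usual scaling to [-1, 1]
pnt_xy, scaled
###Output
_____no_output_____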
###Markdown
To work with data augmentation, and in particular the `grid_sample` method, points need to be represented with coordinates going from -1 to 1 (-1 being top or left, 1 being bottom or right), which is done unless you pass `do_scale=False`. We also need to make sure they follow our convention of points being x,y coordinates, so pass `y_first=True` if your data is in y,x format to add a flip.
> Note: This transform automatically grabs the sizes of the images it sees before a `TensorPoint` object and embeds them in the points. For this to work, those images need to come before any points in the order of your final tuple. If you don't have such images, you need to embed the size of the corresponding image when creating a `TensorPoint` by passing it with `img_size=...`.
###Code
def _pnt_lbl(x): return TensorPoint.create(pnts)
def _pnt_open(fn): return PILImage(PILImage.create(fn).resize((28,35)))
pnt_tds = Datasets([mnist_fn], [_pnt_open, [_pnt_lbl]])
pnt_tdl = TfmdDL(pnt_tds, bs=1, after_item=[PointScaler(), ToTensor()])
test_eq(pnt_tdl.after_item.c, 10)
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y = tfm(pnt_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
x,y = pnt_tdl.one_batch()
#Scaling and flipping properly done
#NB: we added a point earlier at (9,17); formula below scales to (-1,1) coords
test_close(y[0], tensor([[-1., -1.], [-1., 1.], [1., -1.], [1., 1.], [9/14-1, 17/17.5-1]]))
a,b = pnt_tdl.decode_batch((x,y))[0]
test_eq(b, tensor(pnts).float())
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorPoint)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorPoint)
test_eq(b.get_meta('img_size'), (28,35)) #Automatically picked the size of the input
pnt_tdl.show_batch(figsize=(2,2), cmap='Greys');
#export
class BBoxLabeler(Transform):
def setups(self, dl): self.vocab = dl.vocab
def decode (self, x, **kwargs):
self.bbox,self.lbls = None,None
return self._call('decodes', x, **kwargs)
def decodes(self, x:TensorMultiCategory):
self.lbls = [self.vocab[a] for a in x]
return x if self.bbox is None else LabeledBBox(self.bbox, self.lbls)
def decodes(self, x:TensorBBox):
self.bbox = x
return self.bbox if self.lbls is None else LabeledBBox(self.bbox, self.lbls)
#export
#LabeledBBox can be sent in a tl with MultiCategorize (depending on the order of the tls) but it is already decoded.
@MultiCategorize
def decodes(self, x:LabeledBBox): return x
#export
@PointScaler
def encodes(self, x:TensorBBox):
pnts = self.encodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
@PointScaler
def decodes(self, x:TensorBBox):
pnts = self.decodes(cast(x.view(-1,2), TensorPoint))
return cast(pnts.view(-1, 4), TensorBBox)
def _coco_bb(x): return TensorBBox.create(bbox[0])
def _coco_lbl(x): return bbox[1]
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_bb], [_coco_lbl, MultiCategorize(add_na=True)]], n_inp=1)
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
#hide
#Check the size was grabbed by PointScaler and added to y
tfm = PointScaler()
tfm.as_item=False
x,y,z = tfm(coco_tds[0])
test_eq(tfm.sz, x.size)
test_eq(y.get_meta('img_size'), x.size)
Categorize(add_na=True)
coco_tds.tfms
x,y,z
x,y,z = coco_tdl.one_batch()
test_close(y[0], -1+tensor(bbox[0])/64)
test_eq(z[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_close(b, tensor(bbox[0]).float())
test_eq(c.bbox, b)
test_eq(c.lbl, bbox[1])
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorBBox)
test_eq(type(z), TensorMultiCategory)
test_eq(type(a), TensorImage)
test_eq(type(b), TensorBBox)
test_eq(type(c), LabeledBBox)
test_eq(y.get_meta('img_size'), (128,128))
coco_tdl.show_batch();
#hide
#test other direction works too
coco_tds = Datasets([coco_fn], [PILImage.create, [_coco_lbl, MultiCategorize(add_na=True)], [_coco_bb]])
coco_tdl = TfmdDL(coco_tds, bs=1, after_item=[BBoxLabeler(), PointScaler(), ToTensor()])
x,y,z = coco_tdl.one_batch()
test_close(z[0], -1+tensor(bbox[0])/64)
test_eq(y[0], tensor([1,1,1]))
a,b,c = coco_tdl.decode_batch((x,y,z))[0]
test_eq(b, bbox[1])
test_close(c.bbox, tensor(bbox[0]).float())
test_eq(c.lbl, b)
#Check types
test_eq(type(x), TensorImage)
test_eq(type(y), TensorMultiCategory)
test_eq(type(z), TensorBBox)
test_eq(type(a), TensorImage)
test_eq(type(b), MultiCategory)
test_eq(type(c), LabeledBBox)
test_eq(z.get_meta('img_size'), (128,128))
###Output
_____no_output_____
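###Markdown
The `@PointScaler` patches above handle a `TensorBBox` by viewing each (xmin, ymin, xmax, ymax) box as two (x,y) points, scaling those, then viewing the result back as a box; here is a minimal sketch of that reshaping on a hypothetical box:
###Code
import torch

box = torch.tensor([[2., 3., 10., 20.]])   # hypothetical (xmin, ymin, xmax, ymax)
corners = box.view(-1, 2)                  # the two corner points PointScaler scales
back = corners.view(-1, 4)                 # and back to bbox form afterwards
corners, back
###Output
_____no_output_____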
###Markdown
Export -
###Code
#hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 00_torch_core.ipynb.
Converted 01_layers.ipynb.
Converted 02_data.load.ipynb.
Converted 03_data.core.ipynb.
Converted 04_data.external.ipynb.
Converted 05_data.transforms.ipynb.
Converted 06_data.block.ipynb.
Converted 07_vision.core.ipynb.
Converted 08_vision.data.ipynb.
Converted 09_vision.augment.ipynb.
Converted 09b_vision.utils.ipynb.
Converted 09c_vision.widgets.ipynb.
Converted 10_tutorial.pets.ipynb.
Converted 11_vision.models.xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_callback.core.ipynb.
Converted 13a_learner.ipynb.
Converted 13b_metrics.ipynb.
Converted 14_callback.schedule.ipynb.
Converted 14a_callback.data.ipynb.
Converted 15_callback.hook.ipynb.
Converted 15a_vision.models.unet.ipynb.
Converted 16_callback.progress.ipynb.
Converted 17_callback.tracker.ipynb.
Converted 18_callback.fp16.ipynb.
Converted 18a_callback.training.ipynb.
Converted 19_callback.mixup.ipynb.
Converted 20_interpret.ipynb.
Converted 20a_distributed.ipynb.
Converted 21_vision.learner.ipynb.
Converted 22_tutorial.imagenette.ipynb.
Converted 23_tutorial.vision.ipynb.
Converted 24_tutorial.siamese.ipynb.
Converted 24_vision.gan.ipynb.
Converted 30_text.core.ipynb.
Converted 31_text.data.ipynb.
Converted 32_text.models.awdlstm.ipynb.
Converted 33_text.models.core.ipynb.
Converted 34_callback.rnn.ipynb.
Converted 35_tutorial.wikitext.ipynb.
Converted 36_text.models.qrnn.ipynb.
Converted 37_text.learner.ipynb.
Converted 38_tutorial.text.ipynb.
Converted 39_tutorial.transformers.ipynb.
Converted 40_tabular.core.ipynb.
Converted 41_tabular.data.ipynb.
Converted 42_tabular.model.ipynb.
Converted 43_tabular.learner.ipynb.
Converted 44_tutorial.tabular.ipynb.
Converted 45_collab.ipynb.
Converted 46_tutorial.collab.ipynb.
Converted 50_tutorial.datablock.ipynb.
Converted 60_medical.imaging.ipynb.
Converted 61_tutorial.medical_imaging.ipynb.
Converted 65_medical.text.ipynb.
Converted 70_callback.wandb.ipynb.
Converted 71_callback.tensorboard.ipynb.
Converted 72_callback.neptune.ipynb.
Converted 73_callback.captum.ipynb.
Converted 74_callback.cutmix.ipynb.
Converted 97_test_utils.ipynb.
Converted 99_pytorch_doc.ipynb.
Converted index.ipynb.
Converted tutorial.ipynb.
|
sgan-dataset/Result Analyses.ipynb | ###Markdown
Log-Likelihood Analyses
###Code
lls_df = pd.concat([pd.read_csv(f) for f in glob.glob('plots/data/*_lls.csv')], ignore_index=True)
lls_df['NLL'] = -lls_df['log-likelihood']
lls_df.head()
lls_df[(lls_df['method'] == 'sgan') & (lls_df['dataset'] == 'eth') & (lls_df['data_precondition'] == 'curr')]['log-likelihood'].mean()
specific_df = lls_df[lls_df['data_precondition'] == 'curr']
fig, ax = plt.subplots(figsize=(5, 3), dpi=300)
sns.pointplot(y='NLL', x='timestep', data=specific_df,
hue='method', ax=ax, dodge=0.2,
palette=sns.color_palette(['#3498db','#70B832','#EC8F31']),
scale=0.5, errwidth=1.5)
sns.despine()
ax.set_ylabel('Negative Log-Likelihood')
ax.set_xlabel('Prediction Timestep')
handles, labels = ax.get_legend_handles_labels()
labels = ['Social GAN', 'Our Method (Full)', r'Our Method ($z_{best}$)']
ax.legend(handles, labels, loc='best');
plt.savefig('plots/paper_figures/nll_vs_time.pdf', dpi=300, bbox_inches='tight')
sns.catplot(y='NLL', x='timestep', data=specific_df,
hue='method', dodge=0.2, kind='point',
hue_order=['sgan', 'our_most_likely', 'our_full'],
palette=sns.color_palette(['#3498db','#EC8F31','#70B832']),
scale=0.5, errwidth=1.5, col='dataset')
sns.despine()
# plt.savefig('plots/paper_figures/nll_vs_time.pdf', dpi=300, bbox_inches='tight')
# data_precondition dataset method run timestep node log-likelihood NLL
barplot_df = lls_df[lls_df['data_precondition'] == 'curr'].groupby(['dataset', 'method', 'run', 'node']).mean().reset_index()
del barplot_df['log-likelihood']
barplot_copied_df = barplot_df.copy()
barplot_copied_df['dataset'] = 'Average'
barplot_df = pd.concat([barplot_df, barplot_copied_df], ignore_index=True)
barplot_df.tail()
fig, ax = plt.subplots(figsize=(8, 4), dpi=300)
sns.barplot(y='NLL', x='dataset',
data=barplot_df,
hue_order=['sgan', 'our_full', 'our_most_likely'],
palette=sns.color_palette(['#a6cee3','#b2df8a','#F7BF48']),
hue='method', dodge=0.2, order=['eth', 'hotel', 'univ', 'zara1', 'zara2', 'Average'])
sns.despine()
ax.set_ylabel('Negative Log-Likelihood')
ax.set_xlabel('')
ax.set_xticklabels([pretty_dataset_name(label.get_text()) for label in ax.get_xticklabels()])
handles, labels = ax.get_legend_handles_labels()
labels = ['Social GAN', 'Our Method (Full)', r'Our Method ($z_{best}$)']
ax.legend(handles, labels, loc='best');
plt.savefig('plots/paper_figures/nll_vs_dataset.pdf', dpi=300, bbox_inches='tight')
from statsmodels.stats.weightstats import ttest_ind, DescrStatsW
sgan_df = lls_df[(lls_df['data_precondition'] == 'curr') & (lls_df['method'] == 'sgan')]
our_ml_df = lls_df[(lls_df['data_precondition'] == 'curr') & (lls_df['method'] == 'our_most_likely')]
our_full_df = lls_df[(lls_df['data_precondition'] == 'curr') & (lls_df['method'] == 'our_full')]
dataset_names = ['eth', 'hotel', 'univ', 'zara1', 'zara2', 'Average']
ll_dict = {'dataset': list(), 'method': list(), 'mean_ll': list(),
'conf_int_low': list(), 'conf_int_high': list(),
'p_value': list()}
for dataset_name in dataset_names:
if dataset_name != 'Average':
curr_sgan_df = sgan_df[sgan_df['dataset'] == dataset_name]
curr_our_ml_df = our_ml_df[our_ml_df['dataset'] == dataset_name]
curr_our_full_df = our_full_df[our_full_df['dataset'] == dataset_name]
sgan_nlls = curr_sgan_df.groupby(['run', 'node'])['NLL'].mean().reset_index()['NLL']
our_ml_nlls = curr_our_ml_df.groupby(['run', 'node'])['NLL'].mean().reset_index()['NLL']
our_full_nlls = curr_our_full_df.groupby(['run', 'node'])['NLL'].mean().reset_index()['NLL']
sgan_stats = DescrStatsW(sgan_nlls)
our_ml_stats = DescrStatsW(our_ml_nlls)
our_full_stats = DescrStatsW(our_full_nlls)
low, high = sgan_stats.tconfint_mean()
_, p_value, _ = ttest_ind(sgan_nlls, sgan_nlls)
ll_dict['dataset'].append(dataset_name)
ll_dict['method'].append('Social GAN')
ll_dict['mean_ll'].append(sgan_stats.mean)
ll_dict['conf_int_low'].append(low)
ll_dict['conf_int_high'].append(high)
ll_dict['p_value'].append(p_value)
low, high = our_ml_stats.tconfint_mean()
_, p_value, _ = ttest_ind(sgan_nlls, our_ml_nlls)
ll_dict['dataset'].append(dataset_name)
ll_dict['method'].append('Our Method (z_best)')
ll_dict['mean_ll'].append(our_ml_stats.mean)
ll_dict['conf_int_low'].append(low)
ll_dict['conf_int_high'].append(high)
ll_dict['p_value'].append(p_value)
low, high = our_full_stats.tconfint_mean()
_, p_value, _ = ttest_ind(sgan_nlls, our_full_nlls)
ll_dict['dataset'].append(dataset_name)
ll_dict['method'].append('Our Method (Full)')
ll_dict['mean_ll'].append(our_full_stats.mean)
ll_dict['conf_int_low'].append(low)
ll_dict['conf_int_high'].append(high)
ll_dict['p_value'].append(p_value)
else:
sgan_nlls = sgan_df.groupby(['run', 'node'])['NLL'].mean().reset_index()['NLL']
our_ml_nlls = our_ml_df.groupby(['run', 'node'])['NLL'].mean().reset_index()['NLL']
our_full_nlls = our_full_df.groupby(['run', 'node'])['NLL'].mean().reset_index()['NLL']
sgan_stats = DescrStatsW(sgan_nlls)
our_ml_stats = DescrStatsW(our_ml_nlls)
our_full_stats = DescrStatsW(our_full_nlls)
low, high = sgan_stats.tconfint_mean()
_, p_value, _ = ttest_ind(sgan_nlls, sgan_nlls)
ll_dict['dataset'].append(dataset_name)
ll_dict['method'].append('Social GAN')
ll_dict['mean_ll'].append(sgan_stats.mean)
ll_dict['conf_int_low'].append(low)
ll_dict['conf_int_high'].append(high)
ll_dict['p_value'].append(p_value)
low, high = our_ml_stats.tconfint_mean()
_, p_value, _ = ttest_ind(sgan_nlls, our_ml_nlls)
ll_dict['dataset'].append(dataset_name)
ll_dict['method'].append('Our Method (z_best)')
ll_dict['mean_ll'].append(our_ml_stats.mean)
ll_dict['conf_int_low'].append(low)
ll_dict['conf_int_high'].append(high)
ll_dict['p_value'].append(p_value)
low, high = our_full_stats.tconfint_mean()
_, p_value, _ = ttest_ind(sgan_nlls, our_full_nlls)
ll_dict['dataset'].append(dataset_name)
ll_dict['method'].append('Our Method (Full)')
ll_dict['mean_ll'].append(our_full_stats.mean)
ll_dict['conf_int_low'].append(low)
ll_dict['conf_int_high'].append(high)
ll_dict['p_value'].append(p_value)
ll_tabular_df = pd.DataFrame.from_dict(ll_dict)
ll_tabular_df
###Output
_____no_output_____
###Markdown
Displacement Error Analyses
###Code
# These are for a prediction horizon of 12 timesteps.
prior_work_mse_results = {
'ETH - Univ': OrderedDict([('Linear', 1.33), ('Vanilla LSTM', 1.09), ('Social LSTM', 1.09), ('Social Attention', 0.39)]),
'ETH - Hotel': OrderedDict([('Linear', 0.39), ('Vanilla LSTM', 0.86), ('Social LSTM', 0.79), ('Social Attention', 0.29)]),
'UCY - Univ': OrderedDict([('Linear', 0.82), ('Vanilla LSTM', 0.61), ('Social LSTM', 0.67), ('Social Attention', 0.20)]),
'UCY - Zara 1': OrderedDict([('Linear', 0.62), ('Vanilla LSTM', 0.41), ('Social LSTM', 0.47), ('Social Attention', 0.30)]),
'UCY - Zara 2': OrderedDict([('Linear', 0.77), ('Vanilla LSTM', 0.52), ('Social LSTM', 0.56), ('Social Attention', 0.33)]),
'Average': OrderedDict([('Linear', 0.79), ('Vanilla LSTM', 0.70), ('Social LSTM', 0.72), ('Social Attention', 0.30)])
}
prior_work_fse_results = {
'ETH - Univ': OrderedDict([('Linear', 2.94), ('Vanilla LSTM', 2.41), ('Social LSTM', 2.35), ('Social Attention', 3.74)]),
'ETH - Hotel': OrderedDict([('Linear', 0.72), ('Vanilla LSTM', 1.91), ('Social LSTM', 1.76), ('Social Attention', 2.64)]),
'UCY - Univ': OrderedDict([('Linear', 1.59), ('Vanilla LSTM', 1.31), ('Social LSTM', 1.40), ('Social Attention', 0.52)]),
'UCY - Zara 1': OrderedDict([('Linear', 1.21), ('Vanilla LSTM', 0.88), ('Social LSTM', 1.00), ('Social Attention', 2.13)]),
'UCY - Zara 2': OrderedDict([('Linear', 1.48), ('Vanilla LSTM', 1.11), ('Social LSTM', 1.17), ('Social Attention', 3.92)]),
'Average': OrderedDict([('Linear', 1.59), ('Vanilla LSTM', 1.52), ('Social LSTM', 1.54), ('Social Attention', 2.59)])
}
linestyles = ['--', '-.', '-', ':']
errors_df = pd.concat([pd.read_csv(f) for f in glob.glob('plots/data/*_errors.csv')], ignore_index=True)
errors_df.head()
dataset_names = ['eth', 'hotel', 'univ', 'zara1', 'zara2', 'Average']
sgan_err_df = errors_df[(errors_df['data_precondition'] == 'curr') & (errors_df['method'] == 'sgan')]
our_ml_err_df = errors_df[(errors_df['data_precondition'] == 'curr') & (errors_df['method'] == 'our_most_likely')]
our_full_err_df = errors_df[(errors_df['data_precondition'] == 'curr') & (errors_df['method'] == 'our_full')]
for dataset_name in dataset_names:
if dataset_name != 'Average':
curr_sgan_df = sgan_err_df[sgan_err_df['dataset'] == dataset_name]
curr_our_ml_df = our_ml_err_df[our_ml_err_df['dataset'] == dataset_name]
curr_our_full_df = our_full_err_df[our_full_err_df['dataset'] == dataset_name]
sgan_errs = curr_sgan_df.groupby(['run', 'node', 'error_type'])['error_value'].mean().reset_index()
our_ml_errs = curr_our_ml_df.groupby(['run', 'node', 'error_type'])['error_value'].mean().reset_index()
our_full_errs = curr_our_full_df.groupby(['run', 'node', 'error_type'])['error_value'].mean().reset_index()
sgan_mse_errs = sgan_errs[sgan_errs['error_type'] == 'mse']['error_value']
our_ml_mse_errs = our_ml_errs[our_ml_errs['error_type'] == 'mse']['error_value']
our_full_mse_errs = our_full_errs[our_full_errs['error_type'] == 'mse']['error_value']
sgan_fse_errs = sgan_errs[sgan_errs['error_type'] == 'fse']['error_value']
our_ml_fse_errs = our_ml_errs[our_ml_errs['error_type'] == 'fse']['error_value']
our_full_fse_errs = our_full_errs[our_full_errs['error_type'] == 'fse']['error_value']
sgan_mse_stats = DescrStatsW(sgan_mse_errs)
our_ml_mse_stats = DescrStatsW(our_ml_mse_errs)
our_full_mse_stats = DescrStatsW(our_full_mse_errs)
sgan_fse_stats = DescrStatsW(sgan_fse_errs)
our_ml_fse_stats = DescrStatsW(our_ml_fse_errs)
our_full_fse_stats = DescrStatsW(our_full_fse_errs)
print('\nMSE', dataset_name)
print('sgan', sgan_mse_stats.mean, sgan_mse_stats.tconfint_mean())
print('our_ml', our_ml_mse_stats.mean, our_ml_mse_stats.tconfint_mean(), ttest_ind(sgan_mse_errs, our_ml_mse_errs))
print('our_full', our_full_mse_stats.mean, our_full_mse_stats.tconfint_mean(), ttest_ind(sgan_mse_errs, our_full_mse_errs))
print('FSE', dataset_name)
print('sgan', sgan_fse_stats.mean, sgan_fse_stats.tconfint_mean())
print('our_ml', our_ml_fse_stats.mean, our_ml_fse_stats.tconfint_mean(), ttest_ind(sgan_fse_errs, our_ml_fse_errs))
print('our_full', our_full_fse_stats.mean, our_full_fse_stats.tconfint_mean(), ttest_ind(sgan_fse_errs, our_full_fse_errs))
else:
sgan_errs = sgan_err_df.groupby(['run', 'node', 'error_type'])['error_value'].mean().reset_index()
our_ml_errs = our_ml_err_df.groupby(['run', 'node', 'error_type'])['error_value'].mean().reset_index()
our_full_errs = our_full_err_df.groupby(['run', 'node', 'error_type'])['error_value'].mean().reset_index()
sgan_mse_errs = sgan_errs[sgan_errs['error_type'] == 'mse']['error_value']
our_ml_mse_errs = our_ml_errs[our_ml_errs['error_type'] == 'mse']['error_value']
our_full_mse_errs = our_full_errs[our_full_errs['error_type'] == 'mse']['error_value']
sgan_fse_errs = sgan_errs[sgan_errs['error_type'] == 'fse']['error_value']
our_ml_fse_errs = our_ml_errs[our_ml_errs['error_type'] == 'fse']['error_value']
our_full_fse_errs = our_full_errs[our_full_errs['error_type'] == 'fse']['error_value']
sgan_mse_stats = DescrStatsW(sgan_mse_errs)
our_ml_mse_stats = DescrStatsW(our_ml_mse_errs)
our_full_mse_stats = DescrStatsW(our_full_mse_errs)
sgan_fse_stats = DescrStatsW(sgan_fse_errs)
our_ml_fse_stats = DescrStatsW(our_ml_fse_errs)
our_full_fse_stats = DescrStatsW(our_full_fse_errs)
print('\nMSE', dataset_name)
print('sgan', sgan_mse_stats.mean, sgan_mse_stats.tconfint_mean())
print('our_ml', our_ml_mse_stats.mean, our_ml_mse_stats.tconfint_mean(), ttest_ind(sgan_mse_errs, our_ml_mse_errs))
print('our_full', our_full_mse_stats.mean, our_full_mse_stats.tconfint_mean(), ttest_ind(sgan_mse_errs, our_full_mse_errs))
print('FSE', dataset_name)
print('sgan', sgan_fse_stats.mean, sgan_fse_stats.tconfint_mean())
print('our_ml', our_ml_fse_stats.mean, our_ml_fse_stats.tconfint_mean(), ttest_ind(sgan_fse_errs, our_ml_fse_errs))
print('our_full', our_full_fse_stats.mean, our_full_fse_stats.tconfint_mean(), ttest_ind(sgan_fse_errs, our_full_fse_errs))
perf_df = errors_df[(errors_df['data_precondition'] == 'curr')]
mean_markers = 'X'
marker_size = 7
line_colors = ['#1f78b4','#33a02c','#fb9a99','#e31a1c']
area_colors = ['#a6cee3','#b2df8a','#F7BF48']
area_rgbs = list()
for c in area_colors:
area_rgbs.append([int(c[i:i+2], 16) for i in (1, 3, 5)])
with sns.color_palette("muted"):
fig_mse, ax_mses = plt.subplots(nrows=1, ncols=6, figsize=(8, 4), dpi=300, sharey=True)
for idx, ax_mse in enumerate(ax_mses):
dataset_name = dataset_names[idx]
if dataset_name != 'Average':
specific_df = perf_df[(perf_df['dataset'] == dataset_name) & (perf_df['error_type'] == 'mse')]
specific_df['dataset'] = pretty_dataset_name(dataset_name)
else:
specific_df = perf_df[(perf_df['error_type'] == 'mse')].copy()
specific_df['dataset'] = 'Average'
sns.boxplot(x='dataset', y='error_value', hue='method',
data=specific_df, ax=ax_mse, showfliers=False,
palette=area_colors, hue_order=['sgan', 'our_full', 'our_most_likely'])
for baseline_idx, (baseline, mse_val) in enumerate(prior_work_mse_results[pretty_dataset_name(dataset_name)].items()):
ax_mse.axhline(y=mse_val, label=baseline, color=line_colors[baseline_idx], linestyle=linestyles[baseline_idx])
ax_mse.get_legend().remove()
ax_mse.set_xlabel('')
ax_mse.set_ylabel('' if idx > 0 else 'Average Displacement Error (m)')
if idx == 0:
handles, labels = ax_mse.get_legend_handles_labels()
handles = [handles[0], handles[4], handles[1], handles[5], handles[2], handles[6], handles[3]]
labels = [labels[0], 'Social GAN', labels[1], 'Our Method (Full)', labels[2], r'Our Method ($z_{best}$)', labels[3]]
ax_mse.legend(handles, labels,
loc='lower center', bbox_to_anchor=(0.5, 0.9),
ncol=4, borderaxespad=0, frameon=False,
bbox_transform=fig_mse.transFigure)
ax_mse.scatter([-0.2675, 0, 0.2675],
[np.mean(specific_df[specific_df['method'] == 'sgan']['error_value']),
np.mean(specific_df[specific_df['method'] == 'our_full']['error_value']),
np.mean(specific_df[specific_df['method'] == 'our_most_likely']['error_value'])],
s=marker_size*marker_size, c=np.asarray(area_rgbs)/255.0, marker=mean_markers,
edgecolors='#545454', zorder=10)
# fig_mse.text(0.51, 0.03, 'Dataset', ha='center')
plt.savefig('plots/paper_figures/mse_boxplots.pdf', dpi=300, bbox_inches='tight')
with sns.color_palette("muted"):
fig_fse, ax_fses = plt.subplots(nrows=1, ncols=6, figsize=(8, 4), dpi=300, sharey=True)
for idx, ax_fse in enumerate(ax_fses):
dataset_name = dataset_names[idx]
if dataset_name != 'Average':
specific_df = perf_df[(perf_df['dataset'] == dataset_name) & (perf_df['error_type'] == 'fse')]
specific_df['dataset'] = pretty_dataset_name(dataset_name)
else:
specific_df = perf_df[(perf_df['error_type'] == 'fse')].copy()
specific_df['dataset'] = 'Average'
sns.boxplot(x='dataset', y='error_value', hue='method',
data=specific_df, ax=ax_fse, showfliers=False,
palette=area_colors, hue_order=['sgan', 'our_full', 'our_most_likely'])
for baseline_idx, (baseline, fse_val) in enumerate(prior_work_fse_results[pretty_dataset_name(dataset_name)].items()):
ax_fse.axhline(y=fse_val, label=baseline, color=line_colors[baseline_idx], linestyle=linestyles[baseline_idx])
ax_fse.get_legend().remove()
ax_fse.set_xlabel('')
ax_fse.set_ylabel('' if idx > 0 else 'Final Displacement Error (m)')
if idx == 0:
handles, labels = ax_fse.get_legend_handles_labels()
handles = [handles[0], handles[4], handles[1], handles[5], handles[2], handles[6], handles[3]]
labels = [labels[0], 'Social GAN', labels[1], 'Our Method (Full)', labels[2], r'Our Method ($z_{best}$)', labels[3]]
ax_fse.legend(handles, labels,
loc='lower center', bbox_to_anchor=(0.5, 0.9),
ncol=4, borderaxespad=0, frameon=False,
bbox_transform=fig_fse.transFigure)
ax_fse.scatter([-0.2675, 0, 0.2675],
[np.mean(specific_df[specific_df['method'] == 'sgan']['error_value']),
np.mean(specific_df[specific_df['method'] == 'our_full']['error_value']),
np.mean(specific_df[specific_df['method'] == 'our_most_likely']['error_value'])],
s=marker_size*marker_size, c=np.asarray(area_rgbs)/255.0, marker=mean_markers,
edgecolors='#545454', zorder=10)
# fig_fse.text(0.51, 0.03, 'Dataset', ha='center')
plt.savefig('plots/paper_figures/fse_boxplots.pdf', dpi=300, bbox_inches='tight')
###Output
/home/borisi/anaconda3/envs/dynstg/lib/python3.6/site-packages/ipykernel_launcher.py:7: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
import sys
###Markdown
Log-Likelihood Analyses
###Code
lls_df = pd.concat([pd.read_csv(f) for f in glob.glob('plots/data/*_lls.csv')], ignore_index=True)
lls_df['NLL'] = -lls_df['log-likelihood']
lls_df.head()
lls_df[(lls_df['method'] == 'sgan') & (lls_df['dataset'] == 'eth') & (lls_df['data_precondition'] == 'curr')]['log-likelihood'].mean()
specific_df = lls_df[lls_df['data_precondition'] == 'curr']
fig, ax = plt.subplots(figsize=(5, 3), dpi=300)
sns.pointplot(y='NLL', x='timestep', data=specific_df,
hue='method', ax=ax, dodge=0.2,
palette=sns.color_palette(['#3498db','#70B832','#EC8F31']),
scale=0.5, errwidth=1.5)
sns.despine()
ax.set_ylabel('Negative Log-Likelihood')
ax.set_xlabel('Prediction Timestep')
handles, labels = ax.get_legend_handles_labels()
labels = ['Social GAN', 'Our Method (Full)', r'Our Method ($z_{best}$)']
ax.legend(handles, labels, loc='best');
plt.savefig('plots/paper_figures/nll_vs_time.pdf', dpi=300, bbox_inches='tight')
sns.catplot(y='NLL', x='timestep', data=specific_df,
hue='method', dodge=0.2, kind='point',
hue_order=['sgan', 'our_most_likely', 'our_full'],
palette=sns.color_palette(['#3498db','#EC8F31','#70B832']),
scale=0.5, errwidth=1.5, col='dataset')
sns.despine()
# plt.savefig('plots/paper_figures/nll_vs_time.pdf', dpi=300, bbox_inches='tight')
# data_precondition dataset method run timestep node log-likelihood NLL
barplot_df = lls_df[lls_df['data_precondition'] == 'curr'].groupby(['dataset', 'method', 'run', 'node']).mean().reset_index()
del barplot_df['log-likelihood']
barplot_copied_df = barplot_df.copy()
barplot_copied_df['dataset'] = 'Average'
barplot_df = pd.concat([barplot_df, barplot_copied_df], ignore_index=True)
barplot_df.tail()
fig, ax = plt.subplots(figsize=(8, 4), dpi=300)
sns.barplot(y='NLL', x='dataset',
data=barplot_df,
hue_order=['sgan', 'our_full', 'our_most_likely'],
palette=sns.color_palette(['#a6cee3','#b2df8a','#F7BF48']),
hue='method', dodge=0.2, order=['eth', 'hotel', 'univ', 'zara1', 'zara2', 'Average'])
sns.despine()
ax.set_ylabel('Negative Log-Likelihood')
ax.set_xlabel('')
ax.set_xticklabels([pretty_dataset_name(label.get_text()) for label in ax.get_xticklabels()])
handles, labels = ax.get_legend_handles_labels()
labels = ['Social GAN', 'Our Method (Full)', r'Our Method ($z_{best}$)']
ax.legend(handles, labels, loc='best');
plt.savefig('plots/paper_figures/nll_vs_dataset.pdf', dpi=300, bbox_inches='tight')
from statsmodels.stats.weightstats import ttest_ind, DescrStatsW
sgan_df = lls_df[(lls_df['data_precondition'] == 'curr') & (lls_df['method'] == 'sgan')]
our_ml_df = lls_df[(lls_df['data_precondition'] == 'curr') & (lls_df['method'] == 'our_most_likely')]
our_full_df = lls_df[(lls_df['data_precondition'] == 'curr') & (lls_df['method'] == 'our_full')]
dataset_names = ['eth', 'hotel', 'univ', 'zara1', 'zara2', 'Average']
ll_dict = {'dataset': list(), 'method': list(), 'mean_ll': list(),
'conf_int_low': list(), 'conf_int_high': list(),
'p_value': list()}
for dataset_name in dataset_names:
if dataset_name != 'Average':
curr_sgan_df = sgan_df[sgan_df['dataset'] == dataset_name]
curr_our_ml_df = our_ml_df[our_ml_df['dataset'] == dataset_name]
curr_our_full_df = our_full_df[our_full_df['dataset'] == dataset_name]
sgan_nlls = curr_sgan_df.groupby(['run', 'node'])['NLL'].mean().reset_index()['NLL']
our_ml_nlls = curr_our_ml_df.groupby(['run', 'node'])['NLL'].mean().reset_index()['NLL']
our_full_nlls = curr_our_full_df.groupby(['run', 'node'])['NLL'].mean().reset_index()['NLL']
sgan_stats = DescrStatsW(sgan_nlls)
our_ml_stats = DescrStatsW(our_ml_nlls)
our_full_stats = DescrStatsW(our_full_nlls)
low, high = sgan_stats.tconfint_mean()
_, p_value, _ = ttest_ind(sgan_nlls, sgan_nlls)
ll_dict['dataset'].append(dataset_name)
ll_dict['method'].append('Social GAN')
ll_dict['mean_ll'].append(sgan_stats.mean)
ll_dict['conf_int_low'].append(low)
ll_dict['conf_int_high'].append(high)
ll_dict['p_value'].append(p_value)
low, high = our_ml_stats.tconfint_mean()
_, p_value, _ = ttest_ind(sgan_nlls, our_ml_nlls)
ll_dict['dataset'].append(dataset_name)
ll_dict['method'].append('Our Method (z_best)')
ll_dict['mean_ll'].append(our_ml_stats.mean)
ll_dict['conf_int_low'].append(low)
ll_dict['conf_int_high'].append(high)
ll_dict['p_value'].append(p_value)
low, high = our_full_stats.tconfint_mean()
_, p_value, _ = ttest_ind(sgan_nlls, our_full_nlls)
ll_dict['dataset'].append(dataset_name)
ll_dict['method'].append('Our Method (Full)')
ll_dict['mean_ll'].append(our_full_stats.mean)
ll_dict['conf_int_low'].append(low)
ll_dict['conf_int_high'].append(high)
ll_dict['p_value'].append(p_value)
else:
sgan_nlls = sgan_df.groupby(['run', 'node'])['NLL'].mean().reset_index()['NLL']
our_ml_nlls = our_ml_df.groupby(['run', 'node'])['NLL'].mean().reset_index()['NLL']
our_full_nlls = our_full_df.groupby(['run', 'node'])['NLL'].mean().reset_index()['NLL']
sgan_stats = DescrStatsW(sgan_nlls)
our_ml_stats = DescrStatsW(our_ml_nlls)
our_full_stats = DescrStatsW(our_full_nlls)
low, high = sgan_stats.tconfint_mean()
_, p_value, _ = ttest_ind(sgan_nlls, sgan_nlls)
ll_dict['dataset'].append(dataset_name)
ll_dict['method'].append('Social GAN')
ll_dict['mean_ll'].append(sgan_stats.mean)
ll_dict['conf_int_low'].append(low)
ll_dict['conf_int_high'].append(high)
ll_dict['p_value'].append(p_value)
low, high = our_ml_stats.tconfint_mean()
_, p_value, _ = ttest_ind(sgan_nlls, our_ml_nlls)
ll_dict['dataset'].append(dataset_name)
ll_dict['method'].append('Our Method (z_best)')
ll_dict['mean_ll'].append(our_ml_stats.mean)
ll_dict['conf_int_low'].append(low)
ll_dict['conf_int_high'].append(high)
ll_dict['p_value'].append(p_value)
low, high = our_full_stats.tconfint_mean()
_, p_value, _ = ttest_ind(sgan_nlls, our_full_nlls)
ll_dict['dataset'].append(dataset_name)
ll_dict['method'].append('Our Method (Full)')
ll_dict['mean_ll'].append(our_full_stats.mean)
ll_dict['conf_int_low'].append(low)
ll_dict['conf_int_high'].append(high)
ll_dict['p_value'].append(p_value)
ll_tabular_df = pd.DataFrame.from_dict(ll_dict)
ll_tabular_df
###Output
_____no_output_____
###Markdown
Displacement Error Analyses
###Code
# These are for a prediction horizon of 12 timesteps.
prior_work_mse_results = {
'ETH - Univ': OrderedDict([('Linear', 1.33), ('Vanilla LSTM', 1.09), ('Social LSTM', 1.09), ('Social Attention', 0.39)]),
'ETH - Hotel': OrderedDict([('Linear', 0.39), ('Vanilla LSTM', 0.86), ('Social LSTM', 0.79), ('Social Attention', 0.29)]),
'UCY - Univ': OrderedDict([('Linear', 0.82), ('Vanilla LSTM', 0.61), ('Social LSTM', 0.67), ('Social Attention', 0.20)]),
'UCY - Zara 1': OrderedDict([('Linear', 0.62), ('Vanilla LSTM', 0.41), ('Social LSTM', 0.47), ('Social Attention', 0.30)]),
'UCY - Zara 2': OrderedDict([('Linear', 0.77), ('Vanilla LSTM', 0.52), ('Social LSTM', 0.56), ('Social Attention', 0.33)]),
'Average': OrderedDict([('Linear', 0.79), ('Vanilla LSTM', 0.70), ('Social LSTM', 0.72), ('Social Attention', 0.30)])
}
prior_work_fse_results = {
'ETH - Univ': OrderedDict([('Linear', 2.94), ('Vanilla LSTM', 2.41), ('Social LSTM', 2.35), ('Social Attention', 3.74)]),
'ETH - Hotel': OrderedDict([('Linear', 0.72), ('Vanilla LSTM', 1.91), ('Social LSTM', 1.76), ('Social Attention', 2.64)]),
'UCY - Univ': OrderedDict([('Linear', 1.59), ('Vanilla LSTM', 1.31), ('Social LSTM', 1.40), ('Social Attention', 0.52)]),
'UCY - Zara 1': OrderedDict([('Linear', 1.21), ('Vanilla LSTM', 0.88), ('Social LSTM', 1.00), ('Social Attention', 2.13)]),
'UCY - Zara 2': OrderedDict([('Linear', 1.48), ('Vanilla LSTM', 1.11), ('Social LSTM', 1.17), ('Social Attention', 3.92)]),
'Average': OrderedDict([('Linear', 1.59), ('Vanilla LSTM', 1.52), ('Social LSTM', 1.54), ('Social Attention', 2.59)])
}
linestyles = ['--', '-.', '-', ':']
errors_df = pd.concat([pd.read_csv(f) for f in glob.glob('plots/data/*_errors.csv')], ignore_index=True)
errors_df.head()
dataset_names = ['eth', 'hotel', 'univ', 'zara1', 'zara2', 'Average']
sgan_err_df = errors_df[(errors_df['data_precondition'] == 'curr') & (errors_df['method'] == 'sgan')]
our_ml_err_df = errors_df[(errors_df['data_precondition'] == 'curr') & (errors_df['method'] == 'our_most_likely')]
our_full_err_df = errors_df[(errors_df['data_precondition'] == 'curr') & (errors_df['method'] == 'our_full')]
for dataset_name in dataset_names:
if dataset_name != 'Average':
curr_sgan_df = sgan_err_df[sgan_err_df['dataset'] == dataset_name]
curr_our_ml_df = our_ml_err_df[our_ml_err_df['dataset'] == dataset_name]
curr_our_full_df = our_full_err_df[our_full_err_df['dataset'] == dataset_name]
sgan_errs = curr_sgan_df.groupby(['run', 'node', 'error_type'])['error_value'].mean().reset_index()
our_ml_errs = curr_our_ml_df.groupby(['run', 'node', 'error_type'])['error_value'].mean().reset_index()
our_full_errs = curr_our_full_df.groupby(['run', 'node', 'error_type'])['error_value'].mean().reset_index()
sgan_mse_errs = sgan_errs[sgan_errs['error_type'] == 'mse']['error_value']
our_ml_mse_errs = our_ml_errs[our_ml_errs['error_type'] == 'mse']['error_value']
our_full_mse_errs = our_full_errs[our_full_errs['error_type'] == 'mse']['error_value']
sgan_fse_errs = sgan_errs[sgan_errs['error_type'] == 'fse']['error_value']
our_ml_fse_errs = our_ml_errs[our_ml_errs['error_type'] == 'fse']['error_value']
our_full_fse_errs = our_full_errs[our_full_errs['error_type'] == 'fse']['error_value']
sgan_mse_stats = DescrStatsW(sgan_mse_errs)
our_ml_mse_stats = DescrStatsW(our_ml_mse_errs)
our_full_mse_stats = DescrStatsW(our_full_mse_errs)
sgan_fse_stats = DescrStatsW(sgan_fse_errs)
our_ml_fse_stats = DescrStatsW(our_ml_fse_errs)
our_full_fse_stats = DescrStatsW(our_full_fse_errs)
print('\nMSE', dataset_name)
print('sgan', sgan_mse_stats.mean, sgan_mse_stats.tconfint_mean())
print('our_ml', our_ml_mse_stats.mean, our_ml_mse_stats.tconfint_mean(), ttest_ind(sgan_mse_errs, our_ml_mse_errs))
print('our_full', our_full_mse_stats.mean, our_full_mse_stats.tconfint_mean(), ttest_ind(sgan_mse_errs, our_full_mse_errs))
print('FSE', dataset_name)
print('sgan', sgan_fse_stats.mean, sgan_fse_stats.tconfint_mean())
print('our_ml', our_ml_fse_stats.mean, our_ml_fse_stats.tconfint_mean(), ttest_ind(sgan_fse_errs, our_ml_fse_errs))
print('our_full', our_full_fse_stats.mean, our_full_fse_stats.tconfint_mean(), ttest_ind(sgan_fse_errs, our_full_fse_errs))
else:
sgan_errs = sgan_err_df.groupby(['run', 'node', 'error_type'])['error_value'].mean().reset_index()
our_ml_errs = our_ml_err_df.groupby(['run', 'node', 'error_type'])['error_value'].mean().reset_index()
our_full_errs = our_full_err_df.groupby(['run', 'node', 'error_type'])['error_value'].mean().reset_index()
sgan_mse_errs = sgan_errs[sgan_errs['error_type'] == 'mse']['error_value']
our_ml_mse_errs = our_ml_errs[our_ml_errs['error_type'] == 'mse']['error_value']
our_full_mse_errs = our_full_errs[our_full_errs['error_type'] == 'mse']['error_value']
sgan_fse_errs = sgan_errs[sgan_errs['error_type'] == 'fse']['error_value']
our_ml_fse_errs = our_ml_errs[our_ml_errs['error_type'] == 'fse']['error_value']
our_full_fse_errs = our_full_errs[our_full_errs['error_type'] == 'fse']['error_value']
sgan_mse_stats = DescrStatsW(sgan_mse_errs)
our_ml_mse_stats = DescrStatsW(our_ml_mse_errs)
our_full_mse_stats = DescrStatsW(our_full_mse_errs)
sgan_fse_stats = DescrStatsW(sgan_fse_errs)
our_ml_fse_stats = DescrStatsW(our_ml_fse_errs)
our_full_fse_stats = DescrStatsW(our_full_fse_errs)
print('\nMSE', dataset_name)
print('sgan', sgan_mse_stats.mean, sgan_mse_stats.tconfint_mean())
print('our_ml', our_ml_mse_stats.mean, our_ml_mse_stats.tconfint_mean(), ttest_ind(sgan_mse_errs, our_ml_mse_errs))
print('our_full', our_full_mse_stats.mean, our_full_mse_stats.tconfint_mean(), ttest_ind(sgan_mse_errs, our_full_mse_errs))
print('FSE', dataset_name)
print('sgan', sgan_fse_stats.mean, sgan_fse_stats.tconfint_mean())
print('our_ml', our_ml_fse_stats.mean, our_ml_fse_stats.tconfint_mean(), ttest_ind(sgan_fse_errs, our_ml_fse_errs))
print('our_full', our_full_fse_stats.mean, our_full_fse_stats.tconfint_mean(), ttest_ind(sgan_fse_errs, our_full_fse_errs))
perf_df = errors_df[(errors_df['data_precondition'] == 'curr')]
mean_markers = 'X'
marker_size = 7
line_colors = ['#1f78b4','#33a02c','#fb9a99','#e31a1c']
area_colors = ['#a6cee3','#b2df8a','#F7BF48']
area_rgbs = list()
for c in area_colors:
area_rgbs.append([int(c[i:i+2], 16) for i in (1, 3, 5)])
with sns.color_palette("muted"):
fig_mse, ax_mses = plt.subplots(nrows=1, ncols=6, figsize=(8, 4), dpi=300, sharey=True)
for idx, ax_mse in enumerate(ax_mses):
dataset_name = dataset_names[idx]
if dataset_name != 'Average':
specific_df = perf_df[(perf_df['dataset'] == dataset_name) & (perf_df['error_type'] == 'mse')]
specific_df['dataset'] = pretty_dataset_name(dataset_name)
else:
specific_df = perf_df[(perf_df['error_type'] == 'mse')].copy()
specific_df['dataset'] = 'Average'
sns.boxplot(x='dataset', y='error_value', hue='method',
data=specific_df, ax=ax_mse, showfliers=False,
palette=area_colors, hue_order=['sgan', 'our_full', 'our_most_likely'])
for baseline_idx, (baseline, mse_val) in enumerate(prior_work_mse_results[pretty_dataset_name(dataset_name)].items()):
ax_mse.axhline(y=mse_val, label=baseline, color=line_colors[baseline_idx], linestyle=linestyles[baseline_idx])
ax_mse.get_legend().remove()
ax_mse.set_xlabel('')
ax_mse.set_ylabel('' if idx > 0 else 'Average Displacement Error (m)')
if idx == 0:
handles, labels = ax_mse.get_legend_handles_labels()
handles = [handles[0], handles[4], handles[1], handles[5], handles[2], handles[6], handles[3]]
labels = [labels[0], 'Social GAN', labels[1], 'Our Method (Full)', labels[2], r'Our Method ($z_{best}$)', labels[3]]
ax_mse.legend(handles, labels,
loc='lower center', bbox_to_anchor=(0.5, 0.9),
ncol=4, borderaxespad=0, frameon=False,
bbox_transform=fig_mse.transFigure)
ax_mse.scatter([-0.2675, 0, 0.2675],
[np.mean(specific_df[specific_df['method'] == 'sgan']['error_value']),
np.mean(specific_df[specific_df['method'] == 'our_full']['error_value']),
np.mean(specific_df[specific_df['method'] == 'our_most_likely']['error_value'])],
s=marker_size*marker_size, c=np.asarray(area_rgbs)/255.0, marker=mean_markers,
edgecolors='#545454', zorder=10)
# fig_mse.text(0.51, 0.03, 'Dataset', ha='center')
plt.savefig('plots/paper_figures/mse_boxplots.pdf', dpi=300, bbox_inches='tight')
with sns.color_palette("muted"):
fig_fse, ax_fses = plt.subplots(nrows=1, ncols=6, figsize=(8, 4), dpi=300, sharey=True)
for idx, ax_fse in enumerate(ax_fses):
dataset_name = dataset_names[idx]
if dataset_name != 'Average':
specific_df = perf_df[(perf_df['dataset'] == dataset_name) & (perf_df['error_type'] == 'fse')]
specific_df['dataset'] = pretty_dataset_name(dataset_name)
else:
specific_df = perf_df[(perf_df['error_type'] == 'fse')].copy()
specific_df['dataset'] = 'Average'
sns.boxplot(x='dataset', y='error_value', hue='method',
data=specific_df, ax=ax_fse, showfliers=False,
palette=area_colors, hue_order=['sgan', 'our_full', 'our_most_likely'])
for baseline_idx, (baseline, fse_val) in enumerate(prior_work_fse_results[pretty_dataset_name(dataset_name)].items()):
ax_fse.axhline(y=fse_val, label=baseline, color=line_colors[baseline_idx], linestyle=linestyles[baseline_idx])
ax_fse.get_legend().remove()
ax_fse.set_xlabel('')
ax_fse.set_ylabel('' if idx > 0 else 'Final Displacement Error (m)')
if idx == 0:
handles, labels = ax_fse.get_legend_handles_labels()
handles = [handles[0], handles[4], handles[1], handles[5], handles[2], handles[6], handles[3]]
labels = [labels[0], 'Social GAN', labels[1], 'Our Method (Full)', labels[2], r'Our Method ($z_{best}$)', labels[3]]
ax_fse.legend(handles, labels,
loc='lower center', bbox_to_anchor=(0.5, 0.9),
ncol=4, borderaxespad=0, frameon=False,
bbox_transform=fig_fse.transFigure)
ax_fse.scatter([-0.2675, 0, 0.2675],
[np.mean(specific_df[specific_df['method'] == 'sgan']['error_value']),
np.mean(specific_df[specific_df['method'] == 'our_full']['error_value']),
np.mean(specific_df[specific_df['method'] == 'our_most_likely']['error_value'])],
s=marker_size*marker_size, c=np.asarray(area_rgbs)/255.0, marker=mean_markers,
edgecolors='#545454', zorder=10)
# fig_fse.text(0.51, 0.03, 'Dataset', ha='center')
plt.savefig('plots/paper_figures/fse_boxplots.pdf', dpi=300, bbox_inches='tight')
###Output
/home/borisi/anaconda3/envs/dynstg/lib/python3.6/site-packages/ipykernel_launcher.py:7: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
import sys
|
S10/EVA4S10.ipynb | ###Markdown
Installing Packages
###Code
!pip install --no-cache-dir torch-tensornet==0.0.7 torchsummary==1.5.1
###Output
_____no_output_____
###Markdown
Imports
Importing necessary packages and modules
###Code
%matplotlib inline
import matplotlib.pyplot as plt
from torchsummary import summary
from tensornet import train, evaluate
from tensornet.data import CIFAR10
from tensornet.model import ResNet18
from tensornet.model.utils import LRFinder
from tensornet.model.utils.loss import cross_entropy_loss
from tensornet.model.utils.optimizers import sgd
from tensornet.model.utils.callbacks import reduce_lr_on_plateau
from tensornet.gradcam import GradCAMView
from tensornet.utils import initialize_cuda, plot_metric, class_level_accuracy
###Output
_____no_output_____
###Markdown
Configuration
Set various parameters and hyperparameters
###Code
class Args:
# Data Loading
# ============
train_batch_size = 64
val_batch_size = 64
num_workers = 4
# Augmentation
# ============
horizontal_flip_prob = 0.2
rotate_degree = 20
cutout = 0.3
# Training
# ========
random_seed = 1
epochs = 50
momentum = 0.9
start_lr = 1e-7
end_lr = 5
num_iter = 400
min_lr = 1e-4
lr_decay_factor = 0.1
lr_decay_patience = 2
# Evaluation
# ==========
sample_count = 25
###Output
_____no_output_____
###Markdown
Set Seed and Get GPU Availability
###Code
# Initialize CUDA and set random seed
cuda, device = initialize_cuda(Args.random_seed)
###Output
GPU Available? True
###Markdown
Download Dataset
Importing the CIFAR-10 class to download the dataset and create the data loaders
###Code
dataset = CIFAR10(
train_batch_size=Args.train_batch_size,
val_batch_size=Args.val_batch_size,
cuda=cuda,
num_workers=Args.num_workers,
horizontal_flip_prob=Args.horizontal_flip_prob,
rotate_degree=Args.rotate_degree,
cutout=Args.cutout
)
###Output
Files already downloaded and verified
Files already downloaded and verified
Files already downloaded and verified
###Markdown
Training and Validation Dataloaders
This is the final step in data preparation: it sets the dataloader arguments and then creates the dataloaders.
###Code
# Create train data loader
train_loader = dataset.loader(train=True)
# Create val data loader
val_loader = dataset.loader(train=False)
###Output
_____no_output_____
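###Markdown
Before moving on, a quick sanity check of the loaders (a sketch, assuming they yield standard (images, labels) batches like a regular PyTorch DataLoader):
###Code
# Peek at one training batch to confirm shapes
images, labels = next(iter(train_loader))
print(images.shape)   # expected: [train_batch_size, 3, 32, 32] for CIFAR-10
print(labels.shape)   # expected: [train_batch_size]
###Output
_____no_output_____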
###Markdown
Model Architecture and Summary
###Code
model = ResNet18().to(device) # Create model
summary(model, dataset.image_size) # Display model summary
###Output
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 64, 32, 32] 1,728
BatchNorm2d-2 [-1, 64, 32, 32] 128
Conv2d-3 [-1, 64, 32, 32] 36,864
BatchNorm2d-4 [-1, 64, 32, 32] 128
Conv2d-5 [-1, 64, 32, 32] 36,864
BatchNorm2d-6 [-1, 64, 32, 32] 128
BasicBlock-7 [-1, 64, 32, 32] 0
Conv2d-8 [-1, 64, 32, 32] 36,864
BatchNorm2d-9 [-1, 64, 32, 32] 128
Conv2d-10 [-1, 64, 32, 32] 36,864
BatchNorm2d-11 [-1, 64, 32, 32] 128
BasicBlock-12 [-1, 64, 32, 32] 0
Conv2d-13 [-1, 128, 16, 16] 73,728
BatchNorm2d-14 [-1, 128, 16, 16] 256
Conv2d-15 [-1, 128, 16, 16] 147,456
BatchNorm2d-16 [-1, 128, 16, 16] 256
Conv2d-17 [-1, 128, 16, 16] 8,192
BatchNorm2d-18 [-1, 128, 16, 16] 256
BasicBlock-19 [-1, 128, 16, 16] 0
Conv2d-20 [-1, 128, 16, 16] 147,456
BatchNorm2d-21 [-1, 128, 16, 16] 256
Conv2d-22 [-1, 128, 16, 16] 147,456
BatchNorm2d-23 [-1, 128, 16, 16] 256
BasicBlock-24 [-1, 128, 16, 16] 0
Conv2d-25 [-1, 256, 8, 8] 294,912
BatchNorm2d-26 [-1, 256, 8, 8] 512
Conv2d-27 [-1, 256, 8, 8] 589,824
BatchNorm2d-28 [-1, 256, 8, 8] 512
Conv2d-29 [-1, 256, 8, 8] 32,768
BatchNorm2d-30 [-1, 256, 8, 8] 512
BasicBlock-31 [-1, 256, 8, 8] 0
Conv2d-32 [-1, 256, 8, 8] 589,824
BatchNorm2d-33 [-1, 256, 8, 8] 512
Conv2d-34 [-1, 256, 8, 8] 589,824
BatchNorm2d-35 [-1, 256, 8, 8] 512
BasicBlock-36 [-1, 256, 8, 8] 0
Conv2d-37 [-1, 512, 4, 4] 1,179,648
BatchNorm2d-38 [-1, 512, 4, 4] 1,024
Conv2d-39 [-1, 512, 4, 4] 2,359,296
BatchNorm2d-40 [-1, 512, 4, 4] 1,024
Conv2d-41 [-1, 512, 4, 4] 131,072
BatchNorm2d-42 [-1, 512, 4, 4] 1,024
BasicBlock-43 [-1, 512, 4, 4] 0
Conv2d-44 [-1, 512, 4, 4] 2,359,296
BatchNorm2d-45 [-1, 512, 4, 4] 1,024
Conv2d-46 [-1, 512, 4, 4] 2,359,296
BatchNorm2d-47 [-1, 512, 4, 4] 1,024
BasicBlock-48 [-1, 512, 4, 4] 0
Linear-49 [-1, 10] 5,130
================================================================
Total params: 11,173,962
Trainable params: 11,173,962
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 11.25
Params size (MB): 42.63
Estimated Total Size (MB): 53.89
----------------------------------------------------------------
###Markdown
Find Initial Learning Rate
###Code
model = ResNet18().to(device) # Create model
optimizer = sgd(model, Args.start_lr, Args.momentum) # Create optimizer
criterion = cross_entropy_loss() # Create loss function
# Find learning rate
lr_finder = LRFinder(model, optimizer, criterion, device=device)
lr_finder.range_test(train_loader, end_lr=Args.end_lr, num_iter=Args.num_iter, step_mode='exp')
# Get best initial learning rate
initial_lr = lr_finder.best_lr
# Print learning rate and loss
print('Learning Rate:', initial_lr)
print('Loss:', lr_finder.best_loss)
# Plot learning rate vs loss
lr_finder.plot()
# Reset graph
lr_finder.reset()
###Output
Learning Rate: 0.012059247341566626
Loss: 1.8264832157316113
###Markdown
Model Training and Validation
###Code
train_accuracies = []
val_losses = []
val_accuracies = []
incorrect_samples = []
criterion = cross_entropy_loss() # Create loss function
optimizer = sgd(model, initial_lr, Args.momentum) # Create optimizer
scheduler = reduce_lr_on_plateau( # Define Reduce LR on plateau
optimizer, factor=Args.lr_decay_factor,
patience=Args.lr_decay_patience, verbose=True,
min_lr=Args.min_lr
)
last_epoch = False
for epoch in range(1, Args.epochs + 1):
print(f'Epoch {epoch}:')
if epoch == Args.epochs:
last_epoch = True
train(model, train_loader, device, optimizer, criterion, accuracies=train_accuracies)
evaluate(
model, val_loader, device, criterion, losses=val_losses,
accuracies=val_accuracies, incorrect_samples=incorrect_samples,
sample_count=Args.sample_count, last_epoch=last_epoch
)
scheduler.step(val_losses[-1])
###Output
0%| | 0/782 [00:00<?, ?it/s]
###Markdown
Plotting Results Plot changes in training and validation accuracy
###Code
plot_metric(
{'Training': train_accuracies, 'Validation': val_accuracies}, 'Accuracy'
)
###Output
_____no_output_____
###Markdown
GradCAM Let's display GradCAM of any 25 misclassified images
###Code
layers = ['layer4']
grad_cam = GradCAMView(
model, layers,
device, dataset.mean, dataset.std
)
gradcam_views = grad_cam([x['image'] for x in incorrect_samples])
def plot_gradcam(cam_data, pred_data, classes, plot_name):
# Initialize plot
fig, axs = plt.subplots(len(cam_data), 2, figsize=(4, 60))
for idx in range(len(cam_data)):
label = classes[pred_data[idx]['label']]
prediction = classes[pred_data[idx]['prediction']]
axs[idx][0].axis('off')
axs[idx][0].set_title(f'Image: {idx + 1}\nLabel: {label}')
axs[idx][0].imshow(cam_data[idx]['image'])
axs[idx][1].axis('off')
axs[idx][1].set_title(f'GradCAM: {idx + 1}\nPrediction: {prediction}')
axs[idx][1].imshow(cam_data[idx]['result']['layer4'])
# Set spacing
fig.tight_layout()
fig.subplots_adjust(top=1.1)
# Save image
fig.savefig(plot_name, bbox_inches='tight')
plot_gradcam(gradcam_views, incorrect_samples, dataset.classes, 'pred_gradcam.png')
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
###Markdown
Result Analysis
Displaying accuracy for each class in the entire validation dataset
###Code
class_level_accuracy(model, val_loader, device, dataset.classes)
###Output
_____no_output_____ |
2018/rate.am.ipynb | ###Markdown
Scraping Rate.am
This notebook provides code for scraping rates from rate.am. The rates are provided inside an HTML table, thus the **pandas.read_html()** function is probably the most user-friendly method of extracting information from rate.am. However, as one may be interested in extracting information from similar websites with interactive components driven by JavaScript, we use Selenium here first to perform some actions and get the page source, and then use pandas for the scraping and manipulation.
Selenium functions and methods will be additionally posted in a separate document.
Key points:
- browser.page_source - provides the HTML source of the page loaded by Selenium,
- browser.current_url - provides the URL of the page where Selenium has navigated (may be different from the base URL, as the programmer may ask Selenium to click buttons or follow links),
- find_element_by_xpath() - Selenium method for finding HTML elements using the XPath approach,
- send_keys(Keys.PAGE_DOWN) - tells Selenium to "press" the Page Down key on the keyboard,
- browser.implicitly_wait(30) - tells Selenium to wait up to 30 seconds for some action to be completed.
###Code
import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
browser = webdriver.Chrome()
url = "http://rate.am/en/armenian-dram-exchange-rates/banks/cash"
browser.get(url) #will wait until page is fully loaded
browser.find_element_by_xpath("//label[contains(text(),'Non-cash')]").click()
#browser.current_url
page = browser.page_source
browser.close()
all_tables = pd.read_html(page)
all_tables[2]
cols = [i for i in range(5,13)]
cols.append(1)
all_tables[2].iloc[2:19,cols]
###Output
_____no_output_____
###Markdown
Starting from here we introduce several Selenium tricks for manipulating the page (such as clicking the Page Down key on the keyboard).
###Code
browser = webdriver.Chrome()
browser.get(url)
button = browser.find_element_by_tag_name("html")
button.send_keys(Keys.PAGE_DOWN)
###Output
_____no_output_____
###Markdown
```
old = ""
new = " "
while new > old:
    old = browser.page_source
    button.send_keys(Keys.END)
    new = browser.page_source
```
###Code
browser.get("https://www.bloomberg.com/")
browser.implicitly_wait(30)
browser.find_element_by_partial_link_text("S&P")
# EC.presence_of_element_located() could be used with an explicit wait instead; a sketch follows below
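# --- Added sketch (not part of the original notebook): the explicit-wait pattern ---
# The stub comment above hints at expected_conditions; the lines below show how such
# an explicit wait could look. They assume the same "S&P" partial link text used
# above is present on the loaded page.
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
WebDriverWait(browser, 30).until(
    EC.presence_of_element_located((By.PARTIAL_LINK_TEXT, "S&P")))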
###Output
_____no_output_____ |
LinkedIn/LinkedIn_Send_likes_from_post_in_gsheet.ipynb | ###Markdown
LinkedIn - Send likes from post in gsheet In this template, you will extract the likes from a post and divide them into 2 categories: - People in your network - People not in your network Then, data will be sent to 3 sheets to trigger specific actions: - POST_LIKES: total likes from the post - MY_NETWORK: people in your network - NOT_MY_NETWORK: people not in your network Check the other templates to create a full workflow. Input
###Code
from naas_drivers import linkedin, gsheet
import random
import time
import pandas as pd
from datetime import datetime
###Output
_____no_output_____
###Markdown
Variables LinkedIn Get your cookies How to get your cookies?
###Code
LI_AT = 'YOUR_COOKIE_LI_AT' # EXAMPLE AQFAzQN_PLPR4wAAAXc-FCKmgiMit5FLdY1af3-2
JSESSIONID = 'YOUR_COOKIE_JSESSIONID' # EXAMPLE ajax:8379907400220387585
###Output
_____no_output_____
###Markdown
Enter your post URL
###Code
POST_URL = "POST_URL"
###Output
_____no_output_____
###Markdown
Gsheet Enter your gsheet info - Your spreadsheet id is located in your gsheet url after "https://docs.google.com/spreadsheets/d/" and before "/edit"- Remember that you must share your gsheet with our service account to connect : [email protected] You must create your sheet before sending data into it
###Code
# Spreadsheet id
SPREADSHEET_ID = "SPREADSHEET_ID"
# Sheet names
SHEET_POST_LIKES = "POST_LIKES"
SHEET_MY_NETWORK = "MY_NETWORK"
SHEET_NOT_MY_NETWORK = "NOT_MY_NETWORK"
###Output
_____no_output_____
###Markdown
Constant
###Code
DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S"
###Output
_____no_output_____
###Markdown
Get likes from post
###Code
df_posts = linkedin.connect(LI_AT, JSESSIONID).post.get_likes(POST_URL)
df_posts["DATE_EXTRACT"] = datetime.now().strftime(DATETIME_FORMAT)
###Output
_____no_output_____
###Markdown
Model Get network for profiles
###Code
df_network = pd.DataFrame()
for _, row in df_posts.iterrows():
profile_id = row.PROFILE_ID
    # Get network information to know the distance between you and the people who liked the post
tmp_network = linkedin.connect(LI_AT, JSESSIONID).profile.get_network(profile_id)
# Concat dataframe
df_network = pd.concat([df_network, tmp_network], axis=0)
    # time.sleep is used to mimic human behavior; here it waits randomly between 2 and 5 seconds
time.sleep(random.randint(2, 5))
df_network.head(5)
###Output
_____no_output_____
###Markdown
Merge posts likes and network data
###Code
df_all = pd.merge(df_posts, df_network, on=["PROFILE_URN", "PROFILE_ID"], how="left")
df_all = df_all.sort_values(by=["FOLLOWERS_COUNT"], ascending=False)
df_all = df_all[df_all["DISTANCE"] != "SELF"].reset_index(drop=True)
df_all.head(5)
###Output
_____no_output_____
###Markdown
Split my network or not
###Code
# My network
my_network = df_all[df_all["DISTANCE"] == "DISTANCE_1"].reset_index(drop=True)
my_network["DATE_EXTRACT"] = datetime.now().strftime(DATETIME_FORMAT)
my_network.head(5)
# Not in my network
not_my_network = df_all[df_all["DISTANCE"] != "DISTANCE_1"].reset_index(drop=True)
not_my_network["DATE_EXTRACT"] = datetime.now().strftime(DATETIME_FORMAT)
not_my_network.head(5)
###Output
_____no_output_____
###Markdown
Output Save post likes in gsheet
###Code
gsheet.connect(SPREADSHEET_ID).send(df_posts, sheet_name=SHEET_POST_LIKES, append=False)
###Output
_____no_output_____
###Markdown
Save people from my network in gsheet
###Code
gsheet.connect(SPREADSHEET_ID).send(my_network, sheet_name=SHEET_MY_NETWORK, append=False)
###Output
_____no_output_____
###Markdown
Save people not in my network in gsheet
###Code
gsheet.connect(SPREADSHEET_ID).send(not_my_network, sheet_name=SHEET_NOT_MY_NETWORK, append=False)
###Output
_____no_output_____
###Markdown
LinkedIn - Send likes from post in gsheet **Tags:** linkedin post likes gsheet naas_drivers In this template, you will extract the likes from a post and divide them into 2 categories: - People in your network - People not in your network Then, data will be sent to 3 sheets to trigger specific actions: - POST_LIKES: total likes from the post - MY_NETWORK: people in your network - NOT_MY_NETWORK: people not in your network Check the other templates to create a full workflow. Input Import libraries
###Code
from naas_drivers import linkedin, gsheet
import random
import time
import pandas as pd
from datetime import datetime
###Output
_____no_output_____
###Markdown
Variables LinkedIn Get your cookies How to get your cookies?
###Code
LI_AT = 'YOUR_COOKIE_LI_AT' # EXAMPLE AQFAzQN_PLPR4wAAAXc-FCKmgiMit5FLdY1af3-2
JSESSIONID = 'YOUR_COOKIE_JSESSIONID' # EXAMPLE ajax:8379907400220387585
###Output
_____no_output_____
###Markdown
Enter your post URL
###Code
POST_URL = "POST_URL"
###Output
_____no_output_____
###Markdown
Gsheet Enter your gsheet info - Your spreadsheet id is located in your gsheet url after "https://docs.google.com/spreadsheets/d/" and before "/edit"- Remember that you must share your gsheet with our service account to connect : [email protected] You must create your sheet before sending data into it
###Code
# Spreadsheet id
SPREADSHEET_ID = "SPREADSHEET_ID"
# Sheet names
SHEET_POST_LIKES = "POST_LIKES"
SHEET_MY_NETWORK = "MY_NETWORK"
SHEET_NOT_MY_NETWORK = "NOT_MY_NETWORK"
###Output
_____no_output_____
###Markdown
Constant
###Code
DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S"
###Output
_____no_output_____
###Markdown
Get likes from post
###Code
df_posts = linkedin.connect(LI_AT, JSESSIONID).post.get_likes(POST_URL)
df_posts["DATE_EXTRACT"] = datetime.now().strftime(DATETIME_FORMAT)
###Output
_____no_output_____
###Markdown
Model Get network for profiles
###Code
df_network = pd.DataFrame()
for _, row in df_posts.iterrows():
profile_id = row.PROFILE_ID
    # Get network information to know the distance between you and the people who liked the post
tmp_network = linkedin.connect(LI_AT, JSESSIONID).profile.get_network(profile_id)
# Concat dataframe
df_network = pd.concat([df_network, tmp_network], axis=0)
    # time.sleep is used to mimic human behavior; here it waits randomly between 2 and 5 seconds
time.sleep(random.randint(2, 5))
df_network.head(5)
###Output
_____no_output_____
###Markdown
Merge posts likes and network data
###Code
df_all = pd.merge(df_posts, df_network, on=["PROFILE_URN", "PROFILE_ID"], how="left")
df_all = df_all.sort_values(by=["FOLLOWERS_COUNT"], ascending=False)
df_all = df_all[df_all["DISTANCE"] != "SELF"].reset_index(drop=True)
df_all.head(5)
###Output
_____no_output_____
###Markdown
Split my network or not
###Code
# My network
my_network = df_all[df_all["DISTANCE"] == "DISTANCE_1"].reset_index(drop=True)
my_network["DATE_EXTRACT"] = datetime.now().strftime(DATETIME_FORMAT)
my_network.head(5)
# Not in my network
not_my_network = df_all[df_all["DISTANCE"] != "DISTANCE_1"].reset_index(drop=True)
not_my_network["DATE_EXTRACT"] = datetime.now().strftime(DATETIME_FORMAT)
not_my_network.head(5)
###Output
_____no_output_____
###Markdown
Output Save post likes in gsheet
###Code
gsheet.connect(SPREADSHEET_ID).send(df_posts, sheet_name=SHEET_POST_LIKES, append=False)
###Output
_____no_output_____
###Markdown
Save people from my network in gsheet
###Code
gsheet.connect(SPREADSHEET_ID).send(my_network, sheet_name=SHEET_MY_NETWORK, append=False)
###Output
_____no_output_____
###Markdown
Save people not in my network in gsheet
###Code
gsheet.connect(SPREADSHEET_ID).send(not_my_network, sheet_name=SHEET_NOT_MY_NETWORK, append=False)
###Output
_____no_output_____
###Markdown
LinkedIn - Send likes from post in gsheet **Tags:** linkedin post likes gsheet naas_drivers **Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel/) In this template, you will extract the likes from a post and divide them into 2 categories: - People in your network - People not in your network Then, data will be sent to 3 sheets to trigger specific actions: - POST_LIKES: total likes from the post - MY_NETWORK: people in your network - NOT_MY_NETWORK: people not in your network Check the other templates to create a full workflow. Input Import libraries
###Code
from naas_drivers import linkedin, gsheet
import random
import time
import pandas as pd
from datetime import datetime
###Output
_____no_output_____
###Markdown
Variables LinkedIn Get your cookies How to get your cookies?
###Code
LI_AT = 'YOUR_COOKIE_LI_AT' # EXAMPLE AQFAzQN_PLPR4wAAAXc-FCKmgiMit5FLdY1af3-2
JSESSIONID = 'YOUR_COOKIE_JSESSIONID' # EXAMPLE ajax:8379907400220387585
###Output
_____no_output_____
###Markdown
Enter your post URL
###Code
POST_URL = "POST_URL"
###Output
_____no_output_____
###Markdown
Gsheet Enter your gsheet info - Your spreadsheet id is located in your gsheet url after "https://docs.google.com/spreadsheets/d/" and before "/edit"- Remember that you must share your gsheet with our service account to connect : [email protected] You must create your sheet before sending data into it
###Code
# Spreadsheet id
SPREADSHEET_ID = "SPREADSHEET_ID"
# Sheet names
SHEET_POST_LIKES = "POST_LIKES"
SHEET_MY_NETWORK = "MY_NETWORK"
SHEET_NOT_MY_NETWORK = "NOT_MY_NETWORK"
###Output
_____no_output_____
###Markdown
Constant
###Code
DATETIME_FORMAT = "%Y-%m-%d %H:%M:%S"
###Output
_____no_output_____
###Markdown
Get likes from post
###Code
df_posts = linkedin.connect(LI_AT, JSESSIONID).post.get_likes(POST_URL)
df_posts["DATE_EXTRACT"] = datetime.now().strftime(DATETIME_FORMAT)
###Output
_____no_output_____
###Markdown
Model Get network for profiles
###Code
df_network = pd.DataFrame()
for _, row in df_posts.iterrows():
profile_id = row.PROFILE_ID
    # Get network information to know the distance between you and the people who liked the post
tmp_network = linkedin.connect(LI_AT, JSESSIONID).profile.get_network(profile_id)
# Concat dataframe
df_network = pd.concat([df_network, tmp_network], axis=0)
    # time.sleep is used to mimic human behavior; here it waits randomly between 2 and 5 seconds
time.sleep(random.randint(2, 5))
df_network.head(5)
###Output
_____no_output_____
###Markdown
Merge posts likes and network data
###Code
df_all = pd.merge(df_posts, df_network, on=["PROFILE_URN", "PROFILE_ID"], how="left")
df_all = df_all.sort_values(by=["FOLLOWERS_COUNT"], ascending=False)
df_all = df_all[df_all["DISTANCE"] != "SELF"].reset_index(drop=True)
df_all.head(5)
###Output
_____no_output_____
###Markdown
Split my network or not
###Code
# My network
my_network = df_all[df_all["DISTANCE"] == "DISTANCE_1"].reset_index(drop=True)
my_network["DATE_EXTRACT"] = datetime.now().strftime(DATETIME_FORMAT)
my_network.head(5)
# Not in my network
not_my_network = df_all[df_all["DISTANCE"] != "DISTANCE_1"].reset_index(drop=True)
not_my_network["DATE_EXTRACT"] = datetime.now().strftime(DATETIME_FORMAT)
not_my_network.head(5)
###Output
_____no_output_____
###Markdown
Output Save post likes in gsheet
###Code
gsheet.connect(SPREADSHEET_ID).send(df_posts, sheet_name=SHEET_POST_LIKES, append=False)
###Output
_____no_output_____
###Markdown
Save people from my network in gsheet
###Code
gsheet.connect(SPREADSHEET_ID).send(my_network, sheet_name=SHEET_MY_NETWORK, append=False)
###Output
_____no_output_____
###Markdown
Save people not in my network in gsheet
###Code
gsheet.connect(SPREADSHEET_ID).send(not_my_network, sheet_name=SHEET_NOT_MY_NETWORK, append=False)
###Output
_____no_output_____ |
site/ko/tutorials/generative/cyclegan.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
CycleGAN View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook This notebook demonstrates unpaired image-to-image translation using conditional GANs, as described in [Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks](https://arxiv.org/abs/1703.10593), also known as CycleGAN. The paper proposes a method that can capture the characteristics of one image domain and figure out how these characteristics could be translated into another image domain, all in the absence of any paired training examples. This notebook assumes you are familiar with Pix2Pix, which you can learn about in the [Pix2Pix tutorial](https://www.tensorflow.org/tutorials/generative/pix2pix). The code for CycleGAN is similar; the main difference is an additional loss function and the use of unpaired training data. CycleGAN uses a cycle consistency loss to enable training without the need for paired data. In other words, it can translate from one domain to another without a one-to-one mapping between the source and target domains. This opens up the possibility to do many interesting tasks like photo enhancement, image colorization, style transfer, etc. All you need is the source and the target dataset (which is simply a directory of images). Set up the input pipeline Install the [tensorflow_examples](https://github.com/tensorflow/examples) package that enables importing of the generator and the discriminator.
###Code
!pip install git+https://github.com/tensorflow/examples.git
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow_examples.models.pix2pix import pix2pix
import os
import time
import matplotlib.pyplot as plt
from IPython.display import clear_output
tfds.disable_progress_bar()
AUTOTUNE = tf.data.experimental.AUTOTUNE
###Output
_____no_output_____
###Markdown
Input Pipeline This tutorial trains a model to translate from images of horses to images of zebras. You can find this dataset and similar ones [here](https://www.tensorflow.org/datasets/datasetscycle_gan). As mentioned in the [paper](https://arxiv.org/abs/1703.10593), apply random jittering and mirroring to the training dataset. These are some of the image augmentation techniques that avoid overfitting. This is similar to what was done in [pix2pix](https://www.tensorflow.org/tutorials/generative/pix2pixload_the_dataset). - In random jittering, the image is resized to `286 x 286` and then randomly cropped to `256 x 256`. - In random mirroring, the image is randomly flipped horizontally (left to right).
###Code
dataset, metadata = tfds.load('cycle_gan/horse2zebra',
with_info=True, as_supervised=True)
train_horses, train_zebras = dataset['trainA'], dataset['trainB']
test_horses, test_zebras = dataset['testA'], dataset['testB']
BUFFER_SIZE = 1000
BATCH_SIZE = 1
IMG_WIDTH = 256
IMG_HEIGHT = 256
def random_crop(image):
cropped_image = tf.image.random_crop(
image, size=[IMG_HEIGHT, IMG_WIDTH, 3])
return cropped_image
# normalizing the images to [-1, 1]
def normalize(image):
image = tf.cast(image, tf.float32)
image = (image / 127.5) - 1
return image
def random_jitter(image):
# resizing to 286 x 286 x 3
image = tf.image.resize(image, [286, 286],
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)
# randomly cropping to 256 x 256 x 3
image = random_crop(image)
# random mirroring
image = tf.image.random_flip_left_right(image)
return image
def preprocess_image_train(image, label):
image = random_jitter(image)
image = normalize(image)
return image
def preprocess_image_test(image, label):
image = normalize(image)
return image
train_horses = train_horses.map(
preprocess_image_train, num_parallel_calls=AUTOTUNE).cache().shuffle(
BUFFER_SIZE).batch(1)
train_zebras = train_zebras.map(
preprocess_image_train, num_parallel_calls=AUTOTUNE).cache().shuffle(
BUFFER_SIZE).batch(1)
test_horses = test_horses.map(
preprocess_image_test, num_parallel_calls=AUTOTUNE).cache().shuffle(
BUFFER_SIZE).batch(1)
test_zebras = test_zebras.map(
preprocess_image_test, num_parallel_calls=AUTOTUNE).cache().shuffle(
BUFFER_SIZE).batch(1)
sample_horse = next(iter(train_horses))
sample_zebra = next(iter(train_zebras))
plt.subplot(121)
plt.title('Horse')
plt.imshow(sample_horse[0] * 0.5 + 0.5)
plt.subplot(122)
plt.title('Horse with random jitter')
plt.imshow(random_jitter(sample_horse[0]) * 0.5 + 0.5)
plt.subplot(121)
plt.title('Zebra')
plt.imshow(sample_zebra[0] * 0.5 + 0.5)
plt.subplot(122)
plt.title('Zebra with random jitter')
plt.imshow(random_jitter(sample_zebra[0]) * 0.5 + 0.5)
###Output
_____no_output_____
###Markdown
Import and reuse the Pix2Pix models Import the generator and the discriminator used in [Pix2Pix](https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/pix2pix/pix2pix.py) via the installed [tensorflow_examples](https://github.com/tensorflow/examples) package. The model architecture used in this tutorial is very similar to the one used in [pix2pix](https://github.com/tensorflow/examples/blob/master/tensorflow_examples/models/pix2pix/pix2pix.py). Some of the differences are: - CycleGAN uses [instance normalization](https://arxiv.org/abs/1607.08022) instead of [batch normalization](https://arxiv.org/abs/1502.03167). - The [CycleGAN paper](https://arxiv.org/abs/1703.10593) uses a modified `resnet`-based generator; this tutorial uses a modified `unet` generator for simplicity. Two generators (G and F) and two discriminators (X and Y) are trained here. - Generator `G` learns to transform image `X` to image `Y`. $(G: X -> Y)$ - Generator `F` learns to transform image `Y` to image `X`. $(F: Y -> X)$ - Discriminator `D_X` learns to differentiate between image `X` and generated image `X` (`F(Y)`). - Discriminator `D_Y` learns to differentiate between image `Y` and generated image `Y` (`G(X)`).
###Code
OUTPUT_CHANNELS = 3
generator_g = pix2pix.unet_generator(OUTPUT_CHANNELS, norm_type='instancenorm')
generator_f = pix2pix.unet_generator(OUTPUT_CHANNELS, norm_type='instancenorm')
discriminator_x = pix2pix.discriminator(norm_type='instancenorm', target=False)
discriminator_y = pix2pix.discriminator(norm_type='instancenorm', target=False)
to_zebra = generator_g(sample_horse)
to_horse = generator_f(sample_zebra)
plt.figure(figsize=(8, 8))
contrast = 8
imgs = [sample_horse, to_zebra, sample_zebra, to_horse]
title = ['Horse', 'To Zebra', 'Zebra', 'To Horse']
for i in range(len(imgs)):
plt.subplot(2, 2, i+1)
plt.title(title[i])
if i % 2 == 0:
plt.imshow(imgs[i][0] * 0.5 + 0.5)
else:
plt.imshow(imgs[i][0] * 0.5 * contrast + 0.5)
plt.show()
plt.figure(figsize=(8, 8))
plt.subplot(121)
plt.title('Is a real zebra?')
plt.imshow(discriminator_y(sample_zebra)[0, ..., -1], cmap='RdBu_r')
plt.subplot(122)
plt.title('Is a real horse?')
plt.imshow(discriminator_x(sample_horse)[0, ..., -1], cmap='RdBu_r')
plt.show()
###Output
_____no_output_____
###Markdown
Loss functions In CycleGAN, there is no paired data to train on, so there is no guarantee that the input `x` and the target `y` pair are meaningful during training. Thus, in order to enforce that the network learns the correct mapping, the authors propose the cycle consistency loss. The discriminator loss and the generator loss are similar to the ones used in [pix2pix](https://www.tensorflow.org/tutorials/generative/pix2pixdefine_the_loss_functions_and_the_optimizer).
###Code
LAMBDA = 10
loss_obj = tf.keras.losses.BinaryCrossentropy(from_logits=True)
def discriminator_loss(real, generated):
real_loss = loss_obj(tf.ones_like(real), real)
generated_loss = loss_obj(tf.zeros_like(generated), generated)
total_disc_loss = real_loss + generated_loss
return total_disc_loss * 0.5
def generator_loss(generated):
return loss_obj(tf.ones_like(generated), generated)
###Output
_____no_output_____
###Markdown
Cycle consistency means the result should be close to the original input. For example, if one translates a sentence from English to French and then translates it back from French to English, the resulting sentence should be the same as the original sentence. In cycle consistency loss, - Image $X$ is passed via generator $G$, which yields generated image $\hat{Y}$. - Generated image $\hat{Y}$ is passed via generator $F$, which yields cycled image $\hat{X}$. - The mean absolute error is calculated between $X$ and $\hat{X}$. $$forward\ cycle\ consistency\ loss: X -> G(X) -> F(G(X)) \sim \hat{X}$$$$backward\ cycle\ consistency\ loss: Y -> F(Y) -> G(F(Y)) \sim \hat{Y}$$
###Code
def calc_cycle_loss(real_image, cycled_image):
loss1 = tf.reduce_mean(tf.abs(real_image - cycled_image))
return LAMBDA * loss1
###Output
_____no_output_____
###Markdown
As shown above, generator $G$ is responsible for translating image $X$ to image $Y$. Identity loss says that, if you fed image $Y$ to generator $G$, it should yield the real image $Y$ or something close to image $Y$. $$Identity\ loss = |G(Y) - Y| + |F(X) - X|$$
###Code
def identity_loss(real_image, same_image):
loss = tf.reduce_mean(tf.abs(real_image - same_image))
return LAMBDA * 0.5 * loss
###Output
_____no_output_____
###Markdown
Initialize the optimizers for all of the generators and the discriminators.
###Code
generator_g_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
generator_f_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
discriminator_x_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
discriminator_y_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)
###Output
_____no_output_____
###Markdown
Checkpoints
###Code
checkpoint_path = "./checkpoints/train"
ckpt = tf.train.Checkpoint(generator_g=generator_g,
generator_f=generator_f,
discriminator_x=discriminator_x,
discriminator_y=discriminator_y,
generator_g_optimizer=generator_g_optimizer,
generator_f_optimizer=generator_f_optimizer,
discriminator_x_optimizer=discriminator_x_optimizer,
discriminator_y_optimizer=discriminator_y_optimizer)
ckpt_manager = tf.train.CheckpointManager(ckpt, checkpoint_path, max_to_keep=5)
# if a checkpoint exists, restore the latest checkpoint.
if ckpt_manager.latest_checkpoint:
ckpt.restore(ckpt_manager.latest_checkpoint)
print ('Latest checkpoint restored!!')
###Output
_____no_output_____
###Markdown
Training Note: This example model is trained for fewer epochs (40) than the paper (200) to keep the training time reasonable for this tutorial. As a result, predictions may be less accurate.
###Code
EPOCHS = 40
def generate_images(model, test_input):
prediction = model(test_input)
plt.figure(figsize=(12, 12))
display_list = [test_input[0], prediction[0]]
title = ['Input Image', 'Predicted Image']
for i in range(2):
plt.subplot(1, 2, i+1)
plt.title(title[i])
# getting the pixel values between [0, 1] to plot it.
plt.imshow(display_list[i] * 0.5 + 0.5)
plt.axis('off')
plt.show()
###Output
_____no_output_____
###Markdown
Even though the training loop looks complicated, it consists of four basic steps: - Get the predictions. - Calculate the loss. - Calculate the gradients using backpropagation. - Apply the gradients to the optimizer.
###Code
@tf.function
def train_step(real_x, real_y):
# persistent is set to True because the tape is used more than
# once to calculate the gradients.
with tf.GradientTape(persistent=True) as tape:
# Generator G translates X -> Y
# Generator F translates Y -> X.
fake_y = generator_g(real_x, training=True)
cycled_x = generator_f(fake_y, training=True)
fake_x = generator_f(real_y, training=True)
cycled_y = generator_g(fake_x, training=True)
# same_x and same_y are used for identity loss.
same_x = generator_f(real_x, training=True)
same_y = generator_g(real_y, training=True)
disc_real_x = discriminator_x(real_x, training=True)
disc_real_y = discriminator_y(real_y, training=True)
disc_fake_x = discriminator_x(fake_x, training=True)
disc_fake_y = discriminator_y(fake_y, training=True)
# calculate the loss
gen_g_loss = generator_loss(disc_fake_y)
gen_f_loss = generator_loss(disc_fake_x)
total_cycle_loss = calc_cycle_loss(real_x, cycled_x) + calc_cycle_loss(real_y, cycled_y)
# Total generator loss = adversarial loss + cycle loss
total_gen_g_loss = gen_g_loss + total_cycle_loss + identity_loss(real_y, same_y)
total_gen_f_loss = gen_f_loss + total_cycle_loss + identity_loss(real_x, same_x)
disc_x_loss = discriminator_loss(disc_real_x, disc_fake_x)
disc_y_loss = discriminator_loss(disc_real_y, disc_fake_y)
# Calculate the gradients for generator and discriminator
generator_g_gradients = tape.gradient(total_gen_g_loss,
generator_g.trainable_variables)
generator_f_gradients = tape.gradient(total_gen_f_loss,
generator_f.trainable_variables)
discriminator_x_gradients = tape.gradient(disc_x_loss,
discriminator_x.trainable_variables)
discriminator_y_gradients = tape.gradient(disc_y_loss,
discriminator_y.trainable_variables)
# Apply the gradients to the optimizer
generator_g_optimizer.apply_gradients(zip(generator_g_gradients,
generator_g.trainable_variables))
generator_f_optimizer.apply_gradients(zip(generator_f_gradients,
generator_f.trainable_variables))
discriminator_x_optimizer.apply_gradients(zip(discriminator_x_gradients,
discriminator_x.trainable_variables))
discriminator_y_optimizer.apply_gradients(zip(discriminator_y_gradients,
discriminator_y.trainable_variables))
for epoch in range(EPOCHS):
start = time.time()
n = 0
for image_x, image_y in tf.data.Dataset.zip((train_horses, train_zebras)):
train_step(image_x, image_y)
if n % 10 == 0:
print ('.', end='')
n+=1
clear_output(wait=True)
# Using a consistent image (sample_horse) so that the progress of the model
# is clearly visible.
generate_images(generator_g, sample_horse)
if (epoch + 1) % 5 == 0:
ckpt_save_path = ckpt_manager.save()
print ('Saving checkpoint for epoch {} at {}'.format(epoch+1,
ckpt_save_path))
print ('Time taken for epoch {} is {} sec\n'.format(epoch + 1,
time.time()-start))
###Output
_____no_output_____
###Markdown
Generate using the test dataset
###Code
# Run the trained model on the test dataset
for inp in test_horses.take(5):
generate_images(generator_g, inp)
###Output
_____no_output_____ |
docs/source/warp_perspective.ipynb | ###Markdown
Warp image using perspective transform
###Code
import torch
import torchgeometry as tgm
import cv2
# read the image with OpenCV
image = cv2.imread('./data/bruce.png')[..., (2,1,0)]
print(image.shape)
img = tgm.image_to_tensor(image)
img = torch.unsqueeze(img.float(), dim=0) # BxCxHxW
# the source points are the region to crop corners
points_src = torch.FloatTensor([[
[125, 150], [562, 40], [562, 282], [54, 328],
]])
# the destination points are the image vertexes
h, w = 64, 128 # destination size
points_dst = torch.FloatTensor([[
[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1],
]])
# compute perspective transform
M = tgm.get_perspective_transform(points_src, points_dst)
# warp the original image by the found transform
img_warp = tgm.warp_perspective(img, M, dsize=(h, w))
# convert back to numpy
image_warp = tgm.tensor_to_image(img_warp.byte())
# draw points into original image
for i in range(4):
center = tuple(points_src[0, i].long().numpy())
image = cv2.circle(image.copy(), center, 5, (0, 255, 0), -1)
import matplotlib.pyplot as plt
%matplotlib inline
# create the plot
fig, axs = plt.subplots(1, 2, figsize=(16, 10))
axs = axs.ravel()
axs[0].axis('off')
axs[0].set_title('image source')
axs[0].imshow(image)
axs[1].axis('off')
axs[1].set_title('image destination')
axs[1].imshow(image_warp)
###Output
_____no_output_____
###Markdown
Warp image using perspective transform
###Code
import torch
import kornia
import cv2
# read the image with OpenCV
image = cv2.imread('./data/bruce.png')[..., (2,1,0)]
print(image.shape)
img = kornia.image_to_tensor(image)
img = torch.unsqueeze(img.float(), dim=0) # BxCxHxW
# the source points are the region to crop corners
points_src = torch.FloatTensor([[
[125, 150], [562, 40], [562, 282], [54, 328],
]])
# the destination points are the image vertexes
h, w = 64, 128 # destination size
points_dst = torch.FloatTensor([[
[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1],
]])
# compute perspective transform
M = kornia.get_perspective_transform(points_src, points_dst)
# warp the original image by the found transform
img_warp = kornia.warp_perspective(img, M, dsize=(h, w))
# convert back to numpy
image_warp = kornia.tensor_to_image(img_warp.byte()[0])
# draw points into original image
for i in range(4):
center = tuple(points_src[0, i].long().numpy())
image = cv2.circle(image.copy(), center, 5, (0, 255, 0), -1)
import matplotlib.pyplot as plt
%matplotlib inline
# create the plot
fig, axs = plt.subplots(1, 2, figsize=(16, 10))
axs = axs.ravel()
axs[0].axis('off')
axs[0].set_title('image source')
axs[0].imshow(image)
axs[1].axis('off')
axs[1].set_title('image destination')
axs[1].imshow(image_warp)
###Output
_____no_output_____ |
Chapter01/omd_imputing_missing_values.ipynb | ###Markdown
Imputing missing values sources: * [scikit-learn.org](https://scikit-learn.org/stable/modules/impute.html)* [scikit-learn.org - example](https://scikit-learn.org/stable/auto_examples/impute/plot_iterative_imputer_variants_comparison.htmlsphx-glr-auto-examples-impute-plot-iterative-imputer-variants-comparison-py)* [analyticsvidhya.com](https://www.analyticsvidhya.com/blog/2021/05/dealing-with-missing-values-in-python-a-complete-guide/) Imputing missing values with variants of IterativeImputer - scikit learn The IterativeImputer class is very flexible - it can be used with a `variety of estimators` to do round-robin regression, treating every variable as an output in turn.
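As a minimal sketch of the basic `IterativeImputer` API (separate from the benchmark below; the tiny array and the choice of `ExtraTreesRegressor` here are just for illustration):
```
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa
from sklearn.impute import IterativeImputer
from sklearn.ensemble import ExtraTreesRegressor

# a tiny toy matrix with two missing entries
X = np.array([[1.0, 2.0], [3.0, 6.0], [4.0, 8.0], [np.nan, 3.0], [7.0, np.nan]])

# each feature with missing values is modeled as a function of the other features,
# here with a tree-based estimator instead of the default BayesianRidge
imp = IterativeImputer(estimator=ExtraTreesRegressor(n_estimators=10, random_state=0),
                       random_state=0)
print(imp.fit_transform(X))
```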
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# To use this experimental feature, we need to explicitly ask for it:
from sklearn.experimental import enable_iterative_imputer # noqa
from sklearn.datasets import fetch_california_housing
from sklearn.impute import SimpleImputer
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
N_SPLITS = 5
rng = np.random.RandomState(0)
X_full, y_full = fetch_california_housing(return_X_y=True)
# In the original scikit-learn example, ~2k samples is enough for the purpose of the example.
# Uncomment the following two lines for a faster run (on that subsample) with different error bars.
# X_full = X_full[::10]
# y_full = y_full[::10]
n_samples, n_features = X_full.shape
# Estimate the score on the entire dataset, with no missing values
br_estimator = BayesianRidge()
score_full_data = pd.DataFrame(
cross_val_score(
br_estimator, X_full, y_full, scoring="neg_mean_squared_error", cv=N_SPLITS
),
columns=["Full Data"],
)
# Add a single missing value to each row
X_missing = X_full.copy()
y_missing = y_full
missing_samples = np.arange(n_samples)
missing_features = rng.choice(n_features, n_samples, replace=True)
X_missing[missing_samples, missing_features] = np.nan
# Estimate the score after imputation (mean and median strategies)
score_simple_imputer = pd.DataFrame()
for strategy in ("mean", "median"):
estimator = make_pipeline(
SimpleImputer(missing_values=np.nan, strategy=strategy), br_estimator
)
score_simple_imputer[strategy] = cross_val_score(
estimator, X_missing, y_missing, scoring="neg_mean_squared_error", cv=N_SPLITS
)
# Estimate the score after iterative imputation of the missing values
# with different estimators
estimators = [
BayesianRidge(),
DecisionTreeRegressor(max_features="sqrt", random_state=0),
ExtraTreesRegressor(n_estimators=10, random_state=0),
KNeighborsRegressor(n_neighbors=15),
]
score_iterative_imputer = pd.DataFrame()
for impute_estimator in estimators:
estimator = make_pipeline(
IterativeImputer(random_state=0, estimator=impute_estimator), br_estimator
)
score_iterative_imputer[impute_estimator.__class__.__name__] = cross_val_score(
estimator, X_missing, y_missing, scoring="neg_mean_squared_error", cv=N_SPLITS
)
scores = pd.concat(
[score_full_data, score_simple_imputer, score_iterative_imputer],
keys=["Original", "SimpleImputer", "IterativeImputer"],
axis=1,
)
# plot california housing results
fig, ax = plt.subplots(figsize=(13, 6))
means = -scores.mean()
errors = scores.std()
means.plot.barh(xerr=errors, ax=ax)
ax.set_title("California Housing Regression with Different Imputation Methods")
ax.set_xlabel("MSE (smaller is better)")
ax.set_yticks(np.arange(means.shape[0]))
ax.set_yticklabels([" w/ ".join(label) for label in means.index.tolist()])
plt.tight_layout(pad=1)
plt.show()
scores
fig, ax = plt.subplots(figsize=(20, 7))
x_vals = []
for i in range(score_full_data.shape[0]):
x_vals.append('Score '+ str(i+1))
x = x_vals
y = list(score_full_data.values[:,0]* (-1))
ax.bar(x, y, width=0.4, zorder=10)
# ax.set_xlabel('Scores')
ax.set_ylabel('MSE')
ax.set_ylim(0, 1.1)
ax.set_title('MSE \n\nOriginal with Full Data (Algorithm: Bayesian Ridge)', fontsize=14)
ax.axhline(abs(np.mean(score_full_data)).values[0], color='black', ls='--', zorder=2)
for index, value in enumerate(y):
if value >= 0:
plt.text(x=index, y=value + 0.03, s=str(round(value,4)), ha='center')
else:
plt.text(x=index, y=value - 0.06, s=str(round(value,4)), ha='center')
plt.tight_layout()
score_simple_imputer
###Output
_____no_output_____
###Markdown
Build a DataFrame object
###Code
cal_housing = fetch_california_housing(as_frame=True)
housing = cal_housing.data
housing[cal_housing.target_names[0]] = cal_housing.target
housing
print(cal_housing.DESCR)
print('Median House Value: ${0:,.0f}'.format(housing.MedHouseVal.mean() * 100000))
###Output
Median House Value: $206,856
|
tutorials/W1D2_ModelingPractice/student/W1D2_Tutorial2.ipynb | ###Markdown
Neuromatch Academy: Week 1, Day 2, Tutorial 2 Modeling Practice: Model implementation and evaluation__Content creators:__ Marius 't Hart, Paul Schrater, Gunnar Blohm__Content reviewers:__ Norma Kuhn, Saeed Salehi, Madineh Sarvestani, Spiros Chavlis, Michael Waskom --- Tutorial objectivesWe are investigating a simple phenomena, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks: **Framing the question**1. finding a phenomenon and a question to ask about it2. understanding the state of the art3. determining the basic ingredients4. formulating specific, mathematically defined hypotheses**Implementing the model**5. selecting the toolkit6. planning the model7. implementing the model**Model testing**8. completing the model9. testing and evaluating the model**Publishing**10. publishing modelsWe did steps 1-5 in Tutorial 1 and will cover steps 6-10 in Tutorial 2 (this notebook). Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.stats import gamma
from IPython.display import YouTubeVideo
# @title Figure settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def my_moving_window(x, window=3, FUN=np.mean):
"""
Calculates a moving estimate for a signal
Args:
x (numpy.ndarray): a vector array of size N
window (int): size of the window, must be a positive integer
FUN (function): the function to apply to the samples in the window
Returns:
(numpy.ndarray): a vector array of size N, containing the moving
average of x, calculated with a window of size window
There are smarter and faster solutions (e.g. using convolution) but this
function shows what the output really means. This function skips NaNs, and
should not be susceptible to edge effects: it will simply use
all the available samples, which means that close to the edges of the
signal or close to NaNs, the output will just be based on fewer samples. By
default, this function will apply a mean to the samples in the window, but
this can be changed to be a max/min/median or other function that returns a
single numeric value based on a sequence of values.
"""
# if data is a matrix, apply filter to each row:
if len(x.shape) == 2:
output = np.zeros(x.shape)
for rown in range(x.shape[0]):
output[rown, :] = my_moving_window(x[rown, :],
window=window, FUN=FUN)
return output
# make output array of the same size as x:
output = np.zeros(x.size)
# loop through the signal in x
for samp_i in range(x.size):
values = []
# loop through the window:
for wind_i in range(int(1 - window), 1):
if ((samp_i + wind_i) < 0) or (samp_i + wind_i) > (x.size - 1):
# out of range
continue
# sample is in range and not nan, use it:
if not(np.isnan(x[samp_i + wind_i])):
values += [x[samp_i + wind_i]]
# calculate the mean in the window for this point in the output:
output[samp_i] = FUN(values)
return output
def my_plot_percepts(datasets=None, plotconditions=False):
if isinstance(datasets, dict):
# try to plot the datasets
# they should be named...
# 'expectations', 'judgments', 'predictions'
plt.figure(figsize=(8, 8)) # set aspect ratio = 1? not really
plt.ylabel('perceived self motion [m/s]')
plt.xlabel('perceived world motion [m/s]')
plt.title('perceived velocities')
# loop through the entries in datasets
# plot them in the appropriate way
for k in datasets.keys():
if k == 'expectations':
expect = datasets[k]
plt.scatter(expect['world'], expect['self'], marker='*',
color='xkcd:green', label='my expectations')
elif k == 'judgments':
judgments = datasets[k]
for condition in np.unique(judgments[:, 0]):
c_idx = np.where(judgments[:, 0] == condition)[0]
cond_self_motion = judgments[c_idx[0], 1]
cond_world_motion = judgments[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'world-motion condition judgments'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'self-motion condition judgments'
else:
c_label = f"condition [{condition:d}] judgments"
plt.scatter(judgments[c_idx, 3], judgments[c_idx, 4],
label=c_label, alpha=0.2)
elif k == 'predictions':
predictions = datasets[k]
for condition in np.unique(predictions[:, 0]):
c_idx = np.where(predictions[:, 0] == condition)[0]
cond_self_motion = predictions[c_idx[0], 1]
cond_world_motion = predictions[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'predicted world-motion condition'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'predicted self-motion condition'
else:
c_label = f"condition [{condition:d}] prediction"
plt.scatter(predictions[c_idx, 4], predictions[c_idx, 3],
marker='x', label=c_label)
else:
print("datasets keys should be 'hypothesis', \
'judgments' and 'predictions'")
if plotconditions:
# this code is simplified but only works for the dataset we have:
plt.scatter([1], [0], marker='<', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='world-motion stimulus', s=80)
plt.scatter([0], [1], marker='>', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='self-motion stimulus', s=80)
plt.legend(facecolor='xkcd:white')
plt.show()
else:
if datasets is not None:
print('datasets argument should be a dict')
raise TypeError
def my_plot_stimuli(t, a, v):
plt.figure(figsize=(10, 6))
plt.plot(t, a, label='acceleration [$m/s^2$]')
plt.plot(t, v, label='velocity [$m/s$]')
plt.xlabel('time [s]')
plt.ylabel('[motion]')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_motion_signals():
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = np.cumsum(a * dt)
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col',
sharey='row', figsize=(14, 6))
fig.suptitle('Sensory ground truth')
ax1.set_title('world-motion condition')
ax1.plot(t, -v, label='visual [$m/s$]')
ax1.plot(t, np.zeros(a.size), label='vestibular [$m/s^2$]')
ax1.set_xlabel('time [s]')
ax1.set_ylabel('motion')
ax1.legend(facecolor='xkcd:white')
ax2.set_title('self-motion condition')
ax2.plot(t, -v, label='visual [$m/s$]')
ax2.plot(t, a, label='vestibular [$m/s^2$]')
ax2.set_xlabel('time [s]')
ax2.set_ylabel('motion')
ax2.legend(facecolor='xkcd:white')
plt.show()
def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False,
addaverages=False, integrateVestibular=False,
addGroundTruth=False):
if addGroundTruth:
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = a
wm_idx = np.where(judgments[:, 0] == 0)
sm_idx = np.where(judgments[:, 0] == 1)
opticflow = opticflow.transpose()
wm_opticflow = np.squeeze(opticflow[:, wm_idx])
sm_opticflow = np.squeeze(opticflow[:, sm_idx])
if integrateVestibular:
vestibular = np.cumsum(vestibular * .1, axis=1)
if addGroundTruth:
v = np.cumsum(a * dt)
vestibular = vestibular.transpose()
wm_vestibular = np.squeeze(vestibular[:, wm_idx])
sm_vestibular = np.squeeze(vestibular[:, sm_idx])
X = np.arange(0, 10, .1)
fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col',
sharey='row', figsize=(15, 10))
fig.suptitle('Sensory signals')
my_axes[0][0].plot(X, wm_opticflow, color='xkcd:light red', alpha=0.1)
my_axes[0][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][0].plot(t, -v, color='xkcd:red')
if addaverages:
my_axes[0][0].plot(X, np.average(wm_opticflow, axis=1),
color='xkcd:red', alpha=1)
my_axes[0][0].set_title('optic-flow in world-motion condition')
my_axes[0][0].set_ylabel('velocity signal [$m/s$]')
my_axes[0][1].plot(X, sm_opticflow, color='xkcd:azure', alpha=0.1)
my_axes[0][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][1].plot(t, -v, color='xkcd:blue')
if addaverages:
my_axes[0][1].plot(X, np.average(sm_opticflow, axis=1),
color='xkcd:blue', alpha=1)
my_axes[0][1].set_title('optic-flow in self-motion condition')
my_axes[1][0].plot(X, wm_vestibular, color='xkcd:light red', alpha=0.1)
my_axes[1][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addaverages:
my_axes[1][0].plot(X, np.average(wm_vestibular, axis=1),
color='xkcd:red', alpha=1)
my_axes[1][0].set_title('vestibular signal in world-motion condition')
if addGroundTruth:
my_axes[1][0].plot(t, np.zeros(100), color='xkcd:red')
my_axes[1][0].set_xlabel('time [s]')
if integrateVestibular:
my_axes[1][0].set_ylabel('velocity signal [$m/s$]')
else:
my_axes[1][0].set_ylabel('acceleration signal [$m/s^2$]')
my_axes[1][1].plot(X, sm_vestibular, color='xkcd:azure', alpha=0.1)
my_axes[1][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[1][1].plot(t, v, color='xkcd:blue')
if addaverages:
my_axes[1][1].plot(X, np.average(sm_vestibular, axis=1),
color='xkcd:blue', alpha=1)
my_axes[1][1].set_title('vestibular signal in self-motion condition')
my_axes[1][1].set_xlabel('time [s]')
if returnaxes:
return my_axes
else:
plt.show()
def my_threshold_solution(selfmotion_vel_est, threshold):
is_move = (selfmotion_vel_est > threshold)
return is_move
def my_moving_threshold(selfmotion_vel_est, thresholds):
pselfmove_nomove = np.empty(thresholds.shape)
pselfmove_move = np.empty(thresholds.shape)
prop_correct = np.empty(thresholds.shape)
pselfmove_nomove[:] = np.NaN
pselfmove_move[:] = np.NaN
prop_correct[:] = np.NaN
for thr_i, threshold in enumerate(thresholds):
# run my_threshold that the students will write:
try:
is_move = my_threshold(selfmotion_vel_est, threshold)
except Exception:
is_move = my_threshold_solution(selfmotion_vel_est, threshold)
# store results:
pselfmove_nomove[thr_i] = np.mean(is_move[0:100])
pselfmove_move[thr_i] = np.mean(is_move[100:200])
# calculate the proportion classified correctly:
# (1-pselfmove_nomove) + ()
# Correct rejections:
p_CR = (1 - pselfmove_nomove[thr_i])
# correct detections:
p_D = pselfmove_move[thr_i]
# this is corrected for proportion of trials in each condition:
prop_correct[thr_i] = (p_CR + p_D) / 2
return [pselfmove_nomove, pselfmove_move, prop_correct]
def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):
plt.figure(figsize=(12, 8))
plt.title('threshold effects')
plt.plot([min(thresholds), max(thresholds)], [0, 0], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [0.5, 0.5], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [1, 1], ':',
color='xkcd:black')
plt.plot(thresholds, world_prop, label='world motion condition')
plt.plot(thresholds, self_prop, label='self motion condition')
plt.plot(thresholds, prop_correct, color='xkcd:purple',
label='correct classification')
plt.xlabel('threshold')
plt.ylabel('proportion correct or classified as self motion')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_predictions_data(judgments, predictions):
# conditions = np.concatenate((np.abs(judgments[:, 1]),
# np.abs(judgments[:, 2])))
# veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
# velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
# self:
# conditions_self = np.abs(judgments[:, 1])
veljudgmnt_self = judgments[:, 3]
velpredict_self = predictions[:, 3]
# world:
# conditions_world = np.abs(judgments[:, 2])
veljudgmnt_world = judgments[:, 4]
velpredict_world = predictions[:, 4]
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row',
figsize=(12, 5))
ax1.scatter(veljudgmnt_self, velpredict_self, alpha=0.2)
ax1.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax1.set_title('self-motion judgments')
ax1.set_xlabel('observed')
ax1.set_ylabel('predicted')
ax2.scatter(veljudgmnt_world, velpredict_world, alpha=0.2)
ax2.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax2.set_title('world-motion judgments')
ax2.set_xlabel('observed')
ax2.set_ylabel('predicted')
plt.show()
# @title Data retrieval
import os
import wget
fname="W1D2_data.npz"
if not os.path.exists(fname):
#!wget https://osf.io/c5xyf/download -O $fname
fname = wget.download('https://osf.io/c5xyf/download')
filez = np.load(file=fname, allow_pickle=True)
judgments = filez['judgments']
opticflow = filez['opticflow']
vestibular = filez['vestibular']
###Output
_____no_output_____
###Markdown
--- Section 6: Model planning
###Code
# @title Video 6: Planning
video = YouTubeVideo(id='dRTOFFigxa0', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal:** Identify the key components of the model and how they work together. Our goal all along has been to model our perceptual estimates of sensory data. Now that we have some idea of what we want to do, we need to line up the components of the model: what are the input and output? Which computations are done and in what order? Our model will have:* **inputs**: the values the system has available - this can be broken down into _data:_ the sensory signals, and _parameters:_ the threshold and the window sizes for filtering* **outputs**: these are the predictions our model will make - for this tutorial these are the perceptual judgments on each trial in m/s, just like the judgments participants made.* **model functions**: A set of functions that perform the hypothesized computations. We will define a set of functions that take our data and some parameters as input, can run our model, and output a prediction for the judgment data. **Recap of what we've accomplished so far:** To model perceptual estimates from our sensory data, we need to 1. _integrate:_ to ensure sensory information is in appropriate units 2. _filter:_ to reduce noise and set the timescale 3. _threshold:_ to model detection This will be done with these operations: 1. _integrate:_ `np.cumsum()` 2. _filter:_ `my_moving_window()` 3. _threshold:_ `if` with a comparison (`>` or `<`) and `else` **_Planning our model:_** We will now start putting all the pieces together. Normally you would sketch this yourself, but here is an overview of how the functions comprising the model are going to work: Below is the main function with a detailed explanation of what the function is supposed to do, exactly what input is expected, and what output will be generated. The model is not complete, so it only returns nans (**n**ot-**a**-**n**umber) for now. However, this outlines how most model code works: it gets some measured data (the sensory signals) and a set of parameters as input, and as output returns a prediction on other measured data (the velocity judgments). The goal of this function is to define the top level of a simulation model which:* receives all input* loops through the cases* calls functions that compute predicted values for each case* outputs the predictions **Main model function**
###Code
def my_train_illusion_model(sensorydata, params):
"""
Generate output predictions of perceived self-motion and perceived
world-motion velocity based on input visual and vestibular signals.
Args:
sensorydata: (dict) dictionary with two named entries:
opticflow: (numpy.ndarray of float) NxM array with N trials on rows
and M visual signal samples in columns
vestibular: (numpy.ndarray of float) NxM array with N trials on rows
and M vestibular signal samples in columns
params: (dict) dictionary with named entries:
threshold: (float) vestibular threshold for credit assignment
filterwindows: (list of int) determines the strength of filtering for
the vestibular and visual signals, respectively
integrate (bool): whether to integrate the vestibular signals, will
be set to True if absent
FUN (function): function used in the filter, will be set to
np.mean if absent
samplingrate (float): the number of samples per second in the
sensory data, will be set to 10 if absent
Returns:
dict with two entries:
selfmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived self motion
worldmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived world motion
"""
# sanitize input a little
if not ('FUN' in params.keys()):
params['FUN'] = np.mean
if not ('integrate' in params.keys()):
params['integrate'] = True
if not ('samplingrate' in params.keys()):
params['samplingrate'] = 10
# number of trials:
ntrials = sensorydata['opticflow'].shape[0]
# set up variables to collect output
selfmotion = np.empty(ntrials)
worldmotion = np.empty(ntrials)
# loop through trials?
for trialN in range(ntrials):
# these are our sensory variables (inputs)
vis = sensorydata['opticflow'][trialN, :]
ves = sensorydata['vestibular'][trialN, :]
# generate output predicted perception:
selfmotion[trialN],\
worldmotion[trialN] = my_perceived_motion(vis=vis, ves=ves,
params=params)
return {'selfmotion': selfmotion, 'worldmotion': worldmotion}
# here is a mock version of my_perceived motion.
# so you can test my_train_illusion_model()
def my_perceived_motion(*args, **kwargs):
return [np.nan, np.nan]
# let's look at the preditions we generated for two sample trials (0,100)
# we should get a 1x2 vector of self-motion prediction and another
# for world-motion
sensorydata = {
'opticflow': opticflow[[0, 100], :0],
'vestibular': vestibular[[0, 100], :0]
}
params = {'threshold': 0.33, 'filterwindows': [100, 50]}
my_train_illusion_model(sensorydata=sensorydata, params=params)
###Output
_____no_output_____
###Markdown
We've also completed the `my_perceived_motion()` function for you below. Follow this example to complete the template for `my_selfmotion()` and `my_worldmotion()`. Write out the inputs and outputs, and the steps required to calculate the outputs from the inputs.**Perceived motion function**
###Code
# Full perceived motion function
def my_perceived_motion(vis, ves, params):
"""
Takes sensory data and parameters and returns predicted percepts
Args:
vis (numpy.ndarray) : 1xM array of optic flow velocity data
ves (numpy.ndarray) : 1xM array of vestibular acceleration data
params : (dict) dictionary with named entries:
see my_train_illusion_model() for details
Returns:
[list of floats] : prediction for perceived self-motion based on
vestibular data, and prediction for perceived
world-motion based on perceived self-motion and
visual data
"""
# estimate self motion based on only the vestibular data
# pass on the parameters
selfmotion = my_selfmotion(ves=ves, params=params)
# estimate the world motion, based on the selfmotion and visual data
# pass on the parameters as well
worldmotion = my_worldmotion(vis=vis, selfmotion=selfmotion, params=params)
return [selfmotion, worldmotion]
###Output
_____no_output_____
###Markdown
TD 6.1: Formulate purpose of the self motion functionNow we plan out the purpose of one of the remaining functions. **Only name input arguments, write help text and comments, _no code_.** The goal of this exercise is to make writing the code (in Micro-tutorial 7) much easier. Based on our work before the break, you should now be able to answer these questions for each function:* what (sensory) data is necessary? * what parameters does the function need, if any?* which operations will be performed on the input?* what is the output?The number of arguments is correct. **Template calculate self motion**Name the _input arguments_, complete the _help text_, and add _comments_ in the function below to describe the inputs, the outputs, and operations using elements from the recap at the top of this notebook (or from micro-tutorials 3 and 4 in part 1), in order to plan out the function. Do not write any code.
###Code
def my_selfmotion(ves, params):
"""
    Estimates perceived self motion from the vestibular signal of a single trial.
Parameters
----------
ves : numpy.ndarray
1xM array of vestibular acceleration data (reflecting a single
trial)
params : dict
dictionary with named entries:see my_train_illusion_model()
for details
Returns:
self_motion : float
prediction of perceived self-motion based on vestibular data m/s
"""
##################################################
# what operations do we perform on the input?
# use the elements from micro-tutorials 3, 4, and 5
# 1. cumsum to integrate
# 2. uniform_filter1d to normalize
# 3. take final
# 4. compare to threshold
# if > threshold, return value
# if < threshold, return 0
##################################################
return output
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_06ea80b7.py) **Template calculate world motion**We have drafted the help text and written comments in the function below that describe the inputs, the outputs, and operations we use to estimate world motion, based on the recap above.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
  Estimates world motion based on the visual signal, the estimate of
  self motion, and the model parameters.
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
##################################################
# 1. running window function
# 2. take final value
# 3. subtract selfmotion from value
# return final value
##################################################
return output
###Output
_____no_output_____
###Markdown
--- Section 7: Model implementation
###Code
# @title Video 7: Implementation
video = YouTubeVideo(id='DMSIt7t-LO8', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal:** We write the components of the model in actual code. For the operations we picked, there are functions ready to use:* integration: `np.cumsum(data, axis=1)` (axis=1: per trial and over samples)* filtering: `my_moving_window(data, window)` (window: int, default 3)* take last `selfmotion` value as our estimate* threshold: if (value > thr): else: TD 7.1: Write code to estimate self motionUse the operations to finish writing the function that will calculate an estimate of self motion. Fill in the descriptive list of items with actual operations. Use the function for estimating world-motion below, which we've filled in for you! Exercise 1: finish self motion function
###Code
from scipy.ndimage import uniform_filter1d
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict) : dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float) : an estimate of self motion in m/s
"""
# 1. integrate vestibular signal:
ves = np.cumsum(ves * (1 / params['samplingrate']))
# 2. running window function to accumulate evidence:
selfmotion = uniform_filter1d(ves,
size=params['filterwindows'][0],
mode='nearest')
# 3. take final value of self-motion vector as our estimate
selfmotion = selfmotion[-1]
  # 4. compare to threshold. Hint: the threshold is stored in
# params['threshold']
# if selfmotion is higher than threshold: return value
# if it's lower than threshold: return 0
if selfmotion < params['threshold']:
selfmotion = 0
return selfmotion
###Output
_____no_output_____
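###Markdown
A quick, optional sanity check (not part of the tutorial): the sketch below runs `my_selfmotion()` on a made-up noisy acceleration trace. The trace shape, noise level and parameter values are illustrative assumptions only.
###Code
# Sanity-check sketch: run my_selfmotion() (defined in the cell above) on a
# synthetic vestibular trace; all numbers here are made up for illustration.
np.random.seed(0)
dt = 0.1                                                  # 10 Hz, matching samplingrate below
t = np.arange(0, 10, dt)
true_acc = t * np.exp(-t)                                 # brief, smooth acceleration bump
noisy_ves = true_acc + np.random.normal(0, 0.5, t.size)   # add sensor-like noise
params = {'samplingrate': 10, 'filterwindows': [100, 50],
          'threshold': 0.33, 'FUN': np.mean}
print(my_selfmotion(noisy_ves, params))                   # above threshold -> estimate, else 0
###Output
_____no_output_____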
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_4c0b8958.py) Interactive Demo: Unit testingTesting whether the functions you wrote do what they are supposed to do is important, and is known as 'unit testing'. Here we will simplify this for the `my_selfmotion()` function by letting you vary the threshold and window size with sliders and seeing what the distribution of self-motion estimates looks like.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
def refresh(threshold=0, windowsize=100):
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
selfmotion_estimates = np.empty(200)
# get the estimates for each trial:
for trial_number in range(200):
ves = vestibular[trial_number, :]
selfmotion_estimates[trial_number] = my_selfmotion(ves, params)
plt.figure()
plt.hist(selfmotion_estimates, bins=20)
plt.xlabel('self-motion estimate')
plt.ylabel('frequency')
plt.show()
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
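###Markdown
Beyond eyeballing the histogram above, unit tests are usually written as small scripted checks. Below is a minimal, hypothetical pair of assertions for `my_selfmotion()` (the test signals and parameter values are arbitrary): a flat vestibular signal should give an estimate of exactly 0, and a strong constant signal should come out above threshold.
###Code
# Minimal unit-test sketch; assumes my_selfmotion() defined above. The test
# signals and parameter values are made up for illustration.
params = {'samplingrate': 10, 'filterwindows': [100, 50],
          'threshold': 0.33, 'FUN': np.mean}
assert my_selfmotion(np.zeros(100), params) == 0, 'flat signal should give 0'
assert my_selfmotion(np.ones(100), params) > params['threshold'], \
    'strong signal should exceed the threshold'
print('my_selfmotion passed both checks')
###Output
_____no_output_____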
###Markdown
**Estimate world motion**We have completed the `my_worldmotion()` function for you below.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
  Estimates world motion from the optic-flow signal and the self-motion estimate.
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
# running average to smooth/accumulate sensory evidence
visualmotion = my_moving_window(vis, window=params['filterwindows'][1],
FUN=np.mean)
# take final value
visualmotion = visualmotion[-1]
  # "subtract" the self-motion component: the optic-flow signal is negative,
  # so adding selfmotion removes it (the plotting code flips the sign later)
  worldmotion = visualmotion + selfmotion
# return final value
return worldmotion
###Output
_____no_output_____
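###Markdown
The addition in the last step of `my_worldmotion()` can look surprising. A tiny numeric check (illustrative numbers only) shows why it acts as a subtraction: the optic-flow traces are negative for forward motion, so adding the self-motion estimate removes the self-motion component, and the plotting cells later multiply by -1 to report positive world-motion speeds.
###Code
# Sign-convention sketch with made-up numbers: an optic flow of -1 m/s means
# the whole scene appears to stream past at 1 m/s.
visualmotion = -1.0
print(visualmotion + 0.0)  # no self motion: all flow attributed to the world (1 m/s after the sign flip)
print(visualmotion + 1.0)  # self motion of 1 m/s: no world motion left (0)
###Output
_____no_output_____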
###Markdown
--- Section 8: Model completion
###Code
# @title Video 8: Completion
video = YouTubeVideo(id='EM-G8YYdrDg', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal:** Make sure the model can speak to the hypothesis. Eliminate all the parameters that do not speak to the hypothesis.Now that we have a working model, we can keep improving it, but at some point we need to decide that it is finished. Once we have a model that displays the properties of a system we are interested in, it should be possible to say something about our hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question. Here that means that our model should have illusory perception, and perhaps make similar judgments to those of the participants, but not much more.To test this, we will run the model, store the output and plot the models' perceived self motion over perceived world motion, like we did with the actual perceptual judgments (it even uses the same plotting function). TD 8.1: See if the model produces illusions
###Code
# @markdown Run to plot model predictions of motion estimates
# prepare to run the model again:
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.6, 'filterwindows': [100, 50], 'FUN': np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
my_plot_percepts(datasets={'predictions': predictions}, plotconditions=True)
###Output
_____no_output_____
###Markdown
**Questions:*** How does the distribution of data points compare to the plot in TD 1.2 or in TD 7.1?* Did you expect to see this?* Where do the model's predicted judgments for each of the two conditions fall?* How does this compare to the behavioral data?However, the main observation should be that **there are illusions**: the blue and red data points are mixed in each of the two clusters of data points. This means the model can help us understand the phenomenon. --- Section 9: Model evaluation
###Code
# @title Video 9: Evaluation
video = YouTubeVideo(id='bWLFyobm4Rk', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal:** Once we have finished the model, we need a description of how good it is. The question and goals we set in micro-tutorial 1 and 4 help here. There are multiple ways to evaluate a model. Aside from the obvious fact that we want to get insight into the phenomenon that is not directly accessible without the model, we always want to quantify how well the model agrees with the data.**Quantify model quality with $R^2$**Let's look at how well our model matches the actual judgment data.
###Code
# @markdown Run to plot predictions over data
my_plot_predictions_data(judgments, predictions)
###Output
_____no_output_____
###Markdown
When model predictions are correct, the red points in the figure above should lie along the identity line (a dotted black line here). Points off the identity line represent model prediction errors. While in each plot we see two clusters of dots that are fairly close to the identity line, there are also two clusters that are not. For the trials that those points represent, the model has an illusion while the participants don't or vice versa.We will use a straightforward, quantitative measure of how good the model is: $R^2$ (pronounced: "R-squared"), which can take values between 0 and 1, and expresses how much variance is explained by the relationship between two variables (here the model's predictions and the actual judgments). It is also called [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), and is calculated here as the square of the correlation coefficient (r or $\rho$). Just run the chunk below:
###Code
# @markdown Run to calculate R^2
conditions = np.concatenate((np.abs(judgments[:, 1]), np.abs(judgments[:, 2])))
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(conditions, veljudgmnt)
print(f"conditions -> judgments R^2: {r_value ** 2:0.3f}")
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
###Output
conditions -> judgments R^2: 0.032
predictions -> judgments R^2: 0.246
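###Markdown
As an optional cross-check, the same $R^2$ can be computed directly as the squared Pearson correlation, e.g. with `np.corrcoef`; it should reproduce the "predictions -> judgments" value printed above.
###Code
# Cross-check sketch: R^2 as the squared correlation coefficient
# (assumes veljudgmnt and velpredict from the cell above).
r = np.corrcoef(veljudgmnt, velpredict)[0, 1]
print(f"squared correlation: {r ** 2:0.3f}")
###Output
_____no_output_____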
###Markdown
These $R^2$s express how well the experimental conditions explain the participants' judgments and how well the model's predicted judgments explain the participants' judgments.You will learn much more about model fitting, quantitative model evaluation and model comparison tomorrow!Perhaps the $R^2$ values don't seem very impressive, but the judgments produced by the participants are explained by the model's predictions better than by the actual conditions. In other words: in a certain percentage of cases the model tends to have the same illusions as the participants. TD 9.1: Varying the threshold parameter to improve the modelIn the code below, see if you can find a better value for the threshold parameter, to reduce errors in the model's predictions.**Testing thresholds** Interactive Demo: optimizing the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
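###Markdown
If you prefer scripting the search over moving the slider, here is a minimal grid-search sketch; the threshold grid and the fixed window size are arbitrary choices for illustration, not values from the tutorial. It simply re-runs the model for a range of thresholds and keeps the one with the highest $R^2$.
###Code
# Grid-search sketch (assumes my_train_illusion_model, judgments, opticflow
# and vestibular from the cells above; the grid values are arbitrary).
search_data = {'opticflow': opticflow, 'vestibular': vestibular}
judged = np.concatenate((judgments[:, 3], judgments[:, 4]))
best_thr, best_r2 = None, -1.0
for thr in np.arange(0.1, 1.01, 0.05):
  search_params = {'threshold': thr, 'filterwindows': [100, 50], 'FUN': np.mean}
  preds = my_train_illusion_model(sensorydata=search_data, params=search_params)
  predicted = np.concatenate((preds['selfmotion'], preds['worldmotion'] * -1))
  r2 = stats.linregress(judged, predicted)[2] ** 2
  if r2 > best_r2:
    best_thr, best_r2 = thr, r2
print(f"best threshold: {best_thr:0.2f} (R^2 = {best_r2:0.3f})")
###Output
_____no_output_____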
###Markdown
Varying the parameters this way allows you to increase the model's performance in predicting the actual data, as measured by $R^2$. This is called model fitting, and will be done better in the coming weeks. TD 9.2: Credit assignment of self motionWhen we look at the figure in **TD 8.1**, we can see that one cluster seems very close to (1,0), just like in the actual data. The cluster of points at (1,0) is from the case where we conclude there is no self motion, and then set the self motion to 0. That value of 0 removes a lot of noise from the world-motion estimates, and all noise from the self-motion estimate. In the other case, where there is self motion, we still have a lot of noise (see also micro-tutorial 4).Let's change our `my_selfmotion()` function to return a self motion of 1 when the vestibular signal indicates we are above threshold, and 0 when we are below threshold. Edit the function here. Exercise 2: function for credit assignment of self motion
###Code
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
"""
# integrate signal:
ves = np.cumsum(ves * (1 / params['samplingrate']))
# use running window to accumulate evidence:
selfmotion = my_moving_window(ves, window=params['filterwindows'][0],
FUN=params['FUN'])
# take the final value as our estimate:
selfmotion = selfmotion[-1]
# compare to threshold, set to 0 if lower
if selfmotion < params['threshold']:
selfmotion = 0
else:
selfmotion = 1
return selfmotion
# Use the updated function to run the model and plot the data
# Uncomment below to test your function
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.33, 'filterwindows': [100, 50], 'FUN': np.mean}
# modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# my_plot_percepts(datasets={'predictions': predictions}, plotconditions=False)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_97a9e346.py)*Example output:* That looks much better, and closer to the actual data. Let's see if the $R^2$ values have improved. Use the optimal values for the threshold and window size that you found previously. Interactive Demo: evaluating the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
While the model still predicts velocity judgments better than the conditions (i.e. the model predicts illusions in somewhat similar cases), the $R^2$ values are a little worse than those of the simpler model. What's really going on is that the same set of points that were model prediction errors in the previous model are also errors here. All we have done is reduce the spread. **Interpret the model's meaning**Here's what you should have learned from modeling the train illusion: 1. A noisy vestibular acceleration signal can give rise to illusory motion.2. However, disambiguating the optic flow by adding the vestibular signal simply adds a lot of noise. This is not a plausible thing for the brain to do.3. Our other hypothesis - credit assignment - is more qualitatively correct, but our simulations were not able to match the frequency of the illusion on a trial-by-trial basis.We decided that for now we have learned enough, so it's time to write it up. --- Section 10: Model publication!
###Code
# @title Video 10: Publication
video = YouTubeVideo(id='zm8x7oegN6Q', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 1, Day 2, Tutorial 2 Modeling Practice: Model implementation and evaluation__Content creators:__ Marius 't Hart, Paul Schrater, Gunnar Blohm__Content reviewers:__ Norma Kuhn, Saeed Salehi, Madineh Sarvestani, Spiros Chavlis, Michael Waskom --- Tutorial objectivesWe are investigating a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks: **Framing the question**1. finding a phenomenon and a question to ask about it2. understanding the state of the art3. determining the basic ingredients4. formulating specific, mathematically defined hypotheses**Implementing the model**5. selecting the toolkit6. planning the model7. implementing the model**Model testing**8. completing the model9. testing and evaluating the model**Publishing**10. publishing modelsWe did steps 1-5 in Tutorial 1 and will cover steps 6-10 in Tutorial 2 (this notebook). Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.stats import gamma
from IPython.display import YouTubeVideo
# @title Figure settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def my_moving_window(x, window=3, FUN=np.mean):
"""
Calculates a moving estimate for a signal
Args:
x (numpy.ndarray): a vector array of size N
window (int): size of the window, must be a positive integer
FUN (function): the function to apply to the samples in the window
Returns:
(numpy.ndarray): a vector array of size N, containing the moving
average of x, calculated with a window of size window
There are smarter and faster solutions (e.g. using convolution) but this
function shows what the output really means. This function skips NaNs, and
should not be susceptible to edge effects: it will simply use
all the available samples, which means that close to the edges of the
signal or close to NaNs, the output will just be based on fewer samples. By
default, this function will apply a mean to the samples in the window, but
this can be changed to be a max/min/median or other function that returns a
single numeric value based on a sequence of values.
"""
# if data is a matrix, apply filter to each row:
if len(x.shape) == 2:
output = np.zeros(x.shape)
for rown in range(x.shape[0]):
output[rown, :] = my_moving_window(x[rown, :],
window=window, FUN=FUN)
return output
# make output array of the same size as x:
output = np.zeros(x.size)
# loop through the signal in x
for samp_i in range(x.size):
values = []
# loop through the window:
for wind_i in range(int(1 - window), 1):
if ((samp_i + wind_i) < 0) or (samp_i + wind_i) > (x.size - 1):
# out of range
continue
# sample is in range and not nan, use it:
if not(np.isnan(x[samp_i + wind_i])):
values += [x[samp_i + wind_i]]
# calculate the mean in the window for this point in the output:
output[samp_i] = FUN(values)
return output
def my_plot_percepts(datasets=None, plotconditions=False):
if isinstance(datasets, dict):
# try to plot the datasets
# they should be named...
# 'expectations', 'judgments', 'predictions'
plt.figure(figsize=(8, 8)) # set aspect ratio = 1? not really
plt.ylabel('perceived self motion [m/s]')
plt.xlabel('perceived world motion [m/s]')
plt.title('perceived velocities')
# loop through the entries in datasets
# plot them in the appropriate way
for k in datasets.keys():
if k == 'expectations':
expect = datasets[k]
plt.scatter(expect['world'], expect['self'], marker='*',
color='xkcd:green', label='my expectations')
elif k == 'judgments':
judgments = datasets[k]
for condition in np.unique(judgments[:, 0]):
c_idx = np.where(judgments[:, 0] == condition)[0]
cond_self_motion = judgments[c_idx[0], 1]
cond_world_motion = judgments[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'world-motion condition judgments'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'self-motion condition judgments'
else:
c_label = f"condition [{condition:d}] judgments"
plt.scatter(judgments[c_idx, 3], judgments[c_idx, 4],
label=c_label, alpha=0.2)
elif k == 'predictions':
predictions = datasets[k]
for condition in np.unique(predictions[:, 0]):
c_idx = np.where(predictions[:, 0] == condition)[0]
cond_self_motion = predictions[c_idx[0], 1]
cond_world_motion = predictions[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'predicted world-motion condition'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'predicted self-motion condition'
else:
c_label = f"condition [{condition:d}] prediction"
plt.scatter(predictions[c_idx, 4], predictions[c_idx, 3],
marker='x', label=c_label)
else:
        print("datasets keys should be 'expectations', \
'judgments' and 'predictions'")
if plotconditions:
# this code is simplified but only works for the dataset we have:
plt.scatter([1], [0], marker='<', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='world-motion stimulus', s=80)
plt.scatter([0], [1], marker='>', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='self-motion stimulus', s=80)
plt.legend(facecolor='xkcd:white')
plt.show()
else:
if datasets is not None:
print('datasets argument should be a dict')
raise TypeError
def my_plot_stimuli(t, a, v):
plt.figure(figsize=(10, 6))
plt.plot(t, a, label='acceleration [$m/s^2$]')
plt.plot(t, v, label='velocity [$m/s$]')
plt.xlabel('time [s]')
plt.ylabel('[motion]')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_motion_signals():
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = np.cumsum(a * dt)
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col',
sharey='row', figsize=(14, 6))
fig.suptitle('Sensory ground truth')
ax1.set_title('world-motion condition')
ax1.plot(t, -v, label='visual [$m/s$]')
ax1.plot(t, np.zeros(a.size), label='vestibular [$m/s^2$]')
ax1.set_xlabel('time [s]')
ax1.set_ylabel('motion')
ax1.legend(facecolor='xkcd:white')
ax2.set_title('self-motion condition')
ax2.plot(t, -v, label='visual [$m/s$]')
ax2.plot(t, a, label='vestibular [$m/s^2$]')
ax2.set_xlabel('time [s]')
ax2.set_ylabel('motion')
ax2.legend(facecolor='xkcd:white')
plt.show()
def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False,
addaverages=False, integrateVestibular=False,
addGroundTruth=False):
if addGroundTruth:
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = a
wm_idx = np.where(judgments[:, 0] == 0)
sm_idx = np.where(judgments[:, 0] == 1)
opticflow = opticflow.transpose()
wm_opticflow = np.squeeze(opticflow[:, wm_idx])
sm_opticflow = np.squeeze(opticflow[:, sm_idx])
if integrateVestibular:
vestibular = np.cumsum(vestibular * .1, axis=1)
if addGroundTruth:
v = np.cumsum(a * dt)
vestibular = vestibular.transpose()
wm_vestibular = np.squeeze(vestibular[:, wm_idx])
sm_vestibular = np.squeeze(vestibular[:, sm_idx])
X = np.arange(0, 10, .1)
fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col',
sharey='row', figsize=(15, 10))
fig.suptitle('Sensory signals')
my_axes[0][0].plot(X, wm_opticflow, color='xkcd:light red', alpha=0.1)
my_axes[0][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][0].plot(t, -v, color='xkcd:red')
if addaverages:
my_axes[0][0].plot(X, np.average(wm_opticflow, axis=1),
color='xkcd:red', alpha=1)
my_axes[0][0].set_title('optic-flow in world-motion condition')
my_axes[0][0].set_ylabel('velocity signal [$m/s$]')
my_axes[0][1].plot(X, sm_opticflow, color='xkcd:azure', alpha=0.1)
my_axes[0][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][1].plot(t, -v, color='xkcd:blue')
if addaverages:
my_axes[0][1].plot(X, np.average(sm_opticflow, axis=1),
color='xkcd:blue', alpha=1)
my_axes[0][1].set_title('optic-flow in self-motion condition')
my_axes[1][0].plot(X, wm_vestibular, color='xkcd:light red', alpha=0.1)
my_axes[1][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addaverages:
my_axes[1][0].plot(X, np.average(wm_vestibular, axis=1),
color='xkcd:red', alpha=1)
my_axes[1][0].set_title('vestibular signal in world-motion condition')
if addGroundTruth:
my_axes[1][0].plot(t, np.zeros(100), color='xkcd:red')
my_axes[1][0].set_xlabel('time [s]')
if integrateVestibular:
my_axes[1][0].set_ylabel('velocity signal [$m/s$]')
else:
my_axes[1][0].set_ylabel('acceleration signal [$m/s^2$]')
my_axes[1][1].plot(X, sm_vestibular, color='xkcd:azure', alpha=0.1)
my_axes[1][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[1][1].plot(t, v, color='xkcd:blue')
if addaverages:
my_axes[1][1].plot(X, np.average(sm_vestibular, axis=1),
color='xkcd:blue', alpha=1)
my_axes[1][1].set_title('vestibular signal in self-motion condition')
my_axes[1][1].set_xlabel('time [s]')
if returnaxes:
return my_axes
else:
plt.show()
def my_threshold_solution(selfmotion_vel_est, threshold):
is_move = (selfmotion_vel_est > threshold)
return is_move
def my_moving_threshold(selfmotion_vel_est, thresholds):
pselfmove_nomove = np.empty(thresholds.shape)
pselfmove_move = np.empty(thresholds.shape)
prop_correct = np.empty(thresholds.shape)
pselfmove_nomove[:] = np.NaN
pselfmove_move[:] = np.NaN
prop_correct[:] = np.NaN
for thr_i, threshold in enumerate(thresholds):
# run my_threshold that the students will write:
try:
is_move = my_threshold(selfmotion_vel_est, threshold)
except Exception:
is_move = my_threshold_solution(selfmotion_vel_est, threshold)
# store results:
pselfmove_nomove[thr_i] = np.mean(is_move[0:100])
pselfmove_move[thr_i] = np.mean(is_move[100:200])
# calculate the proportion classified correctly:
# (1-pselfmove_nomove) + ()
# Correct rejections:
p_CR = (1 - pselfmove_nomove[thr_i])
# correct detections:
p_D = pselfmove_move[thr_i]
# this is corrected for proportion of trials in each condition:
prop_correct[thr_i] = (p_CR + p_D) / 2
return [pselfmove_nomove, pselfmove_move, prop_correct]
def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):
plt.figure(figsize=(12, 8))
plt.title('threshold effects')
plt.plot([min(thresholds), max(thresholds)], [0, 0], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [0.5, 0.5], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [1, 1], ':',
color='xkcd:black')
plt.plot(thresholds, world_prop, label='world motion condition')
plt.plot(thresholds, self_prop, label='self motion condition')
plt.plot(thresholds, prop_correct, color='xkcd:purple',
label='correct classification')
plt.xlabel('threshold')
plt.ylabel('proportion correct or classified as self motion')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_predictions_data(judgments, predictions):
# conditions = np.concatenate((np.abs(judgments[:, 1]),
# np.abs(judgments[:, 2])))
# veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
# velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
# self:
# conditions_self = np.abs(judgments[:, 1])
veljudgmnt_self = judgments[:, 3]
velpredict_self = predictions[:, 3]
# world:
# conditions_world = np.abs(judgments[:, 2])
veljudgmnt_world = judgments[:, 4]
velpredict_world = predictions[:, 4]
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row',
figsize=(12, 5))
ax1.scatter(veljudgmnt_self, velpredict_self, alpha=0.2)
ax1.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax1.set_title('self-motion judgments')
ax1.set_xlabel('observed')
ax1.set_ylabel('predicted')
ax2.scatter(veljudgmnt_world, velpredict_world, alpha=0.2)
ax2.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax2.set_title('world-motion judgments')
ax2.set_xlabel('observed')
ax2.set_ylabel('predicted')
plt.show()
# @title Data retrieval
import os
fname="W1D2_data.npz"
if not os.path.exists(fname):
!wget https://osf.io/c5xyf/download -O $fname
filez = np.load(file=fname, allow_pickle=True)
judgments = filez['judgments']
opticflow = filez['opticflow']
vestibular = filez['vestibular']
###Output
_____no_output_____
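###Markdown
A quick illustration (optional) of the `my_moving_window()` helper defined above, on a tiny made-up array: NaNs are skipped, the edges simply use fewer samples, and any reducing function can be passed via `FUN`.
###Code
# Helper demo sketch: made-up numbers, just to show the behaviour described
# in the docstring of my_moving_window() above.
x = np.array([1.0, 2.0, np.nan, 4.0, 5.0])
print(my_moving_window(x, window=3))              # running mean, NaN skipped
print(my_moving_window(x, window=3, FUN=np.max))  # other reducing functions work too
###Output
_____no_output_____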
###Markdown
--- Section 6: Model planning
###Code
# @title Video 6: Planning
video = YouTubeVideo(id='dRTOFFigxa0', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
Video available at https://youtube.com/watch?v=dRTOFFigxa0
###Markdown
**Goal:** Identify the key components of the model and how they work together.Our goal all along has been to model our perceptual estimates of sensory data.Now that we have some idea of what we want to do, we need to line up the components of the model: what are the input and output? Which computations are done and in what order? Our model will have:* **inputs**: the values the system has available - this can be broken down in _data:_ the sensory signals, _parameters:_ the threshold and the window sizes for filtering* **outputs**: these are the predictions our model will make - for this tutorial these are the perceptual judgments on each trial in m/s, just like the judgments participants made.* **model functions**: A set of functions that perform the hypothesized computations.We will define a set of functions that take our data and some parameters as input, can run our model, and output a prediction for the judgment data.**Recap of what we've accomplished so far:**To model perceptual estimates from our sensory data, we need to 1. _integrate:_ to ensure sensory information are in appropriate units2. _filter:_ to reduce noise and set timescale3. _threshold:_ to model detectionThis will be done with these operations:1. _integrate:_ `np.cumsum()`2. _filter:_ `my_moving_window()`3. _threshold:_ `if` with a comparison (`>` or `<`) and `else`**_Planning our model:_**We will now start putting all the pieces together. Normally you would sketch this yourself, but here is an overview of how the functions comprising the model are going to work:Below is the main function with a detailed explanation of what the function is supposed to do, exactly what input is expected, and what output will be generated. The model is not complete, so it only returns nans (**n**ot-**a**-**n**umber) for now. However, this outlines how most model code works: it gets some measured data (the sensory signals) and a set of parameters as input, and as output returns a prediction on other measured data (the velocity judgments). The goal of this function is to define the top level of a simulation model which:* receives all input* loops through the cases* calls functions that computes predicted values for each case* outputs the predictions **Main model function**
###Code
def my_train_illusion_model(sensorydata, params):
"""
Generate output predictions of perceived self-motion and perceived
world-motion velocity based on input visual and vestibular signals.
Args:
sensorydata: (dict) dictionary with two named entries:
opticflow: (numpy.ndarray of float) NxM array with N trials on rows
and M visual signal samples in columns
vestibular: (numpy.ndarray of float) NxM array with N trials on rows
and M vestibular signal samples in columns
params: (dict) dictionary with named entries:
threshold: (float) vestibular threshold for credit assignment
filterwindow: (list of int) determines the strength of filtering for
the visual and vestibular signals, respectively
integrate (bool): whether to integrate the vestibular signals, will
be set to True if absent
FUN (function): function used in the filter, will be set to
np.mean if absent
samplingrate (float): the number of samples per second in the
sensory data, will be set to 10 if absent
Returns:
dict with two entries:
selfmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived self motion
worldmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived world motion
"""
# sanitize input a little
if not('FUN' in params.keys()):
params['FUN'] = np.mean
if not('integrate' in params.keys()):
params['integrate'] = True
if not('samplingrate' in params.keys()):
params['samplingrate'] = 10
# number of trials:
ntrials = sensorydata['opticflow'].shape[0]
# set up variables to collect output
selfmotion = np.empty(ntrials)
worldmotion = np.empty(ntrials)
# loop through trials?
for trialN in range(ntrials):
# these are our sensory variables (inputs)
vis = sensorydata['opticflow'][trialN, :]
ves = sensorydata['vestibular'][trialN, :]
# generate output predicted perception:
selfmotion[trialN],\
worldmotion[trialN] = my_perceived_motion(vis=vis, ves=ves,
params=params)
return {'selfmotion': selfmotion, 'worldmotion': worldmotion}
# here is a mock version of my_perceived motion.
# so you can test my_train_illusion_model()
def my_perceived_motion(*args, **kwargs):
return [np.nan, np.nan]
# let's look at the predictions we generated for two sample trials (0,100)
# we should get a 1x2 vector of self-motion prediction and another
# for world-motion
sensorydata={'opticflow': opticflow[[0, 100], :0],
'vestibular': vestibular[[0, 100], :0]}
params={'threshold': 0.33, 'filterwindows': [100, 50]}
my_train_illusion_model(sensorydata=sensorydata, params=params)
###Output
_____no_output_____
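###Markdown
The recap above notes that integration is what puts the sensory signals in appropriate units. A toy check of that point (made-up numbers, not the tutorial data): 10 s of constant 1 m/s² acceleration sampled at 10 Hz should integrate to a final velocity of about 10 m/s, and the 1/samplingrate factor is what makes that work.
###Code
# Integration sketch with made-up numbers: np.cumsum() just sums samples, so
# each sample has to be scaled by the time step (1 / samplingrate) to get m/s.
samplingrate = 10
acc = np.ones(10 * samplingrate)            # 1 m/s^2 for 10 seconds
vel = np.cumsum(acc * (1 / samplingrate))   # proper integration, dt = 0.1 s
print(vel[-1])                              # ~10 m/s
print(np.cumsum(acc)[-1])                   # 100.0 -- forgetting dt gives the wrong units
###Output
_____no_output_____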
###Markdown
We've also completed the `my_perceived_motion()` function for you below. Follow this example to complete the template for `my_selfmotion()` and `my_worldmotion()`. Write out the inputs and outputs, and the steps required to calculate the outputs from the inputs.**Perceived motion function**
###Code
# Full perceived motion function
def my_perceived_motion(vis, ves, params):
"""
Takes sensory data and parameters and returns predicted percepts
Args:
vis (numpy.ndarray) : 1xM array of optic flow velocity data
ves (numpy.ndarray) : 1xM array of vestibular acceleration data
params : (dict) dictionary with named entries:
see my_train_illusion_model() for details
Returns:
[list of floats] : prediction for perceived self-motion based on
vestibular data, and prediction for perceived
world-motion based on perceived self-motion and
visual data
"""
# estimate self motion based on only the vestibular data
# pass on the parameters
selfmotion = my_selfmotion(ves=ves, params=params)
# estimate the world motion, based on the selfmotion and visual data
# pass on the parameters as well
worldmotion = my_worldmotion(vis=vis, selfmotion=selfmotion, params=params)
return [selfmotion, worldmotion]
###Output
_____no_output_____
###Markdown
TD 6.1: Formulate purpose of the self motion functionNow we plan out the purpose of one of the remaining functions. **Only name input arguments, write help text and comments, _no code_.** The goal of this exercise is to make writing the code (in Micro-tutorial 7) much easier. Based on our work before the break, you should now be able to answer these questions for each function:* what (sensory) data is necessary? * what parameters does the function need, if any?* which operations will be performed on the input?* what is the output?The number of arguments is correct. **Template calculate self motion**Name the _input arguments_, complete the _help text_, and add _comments_ in the function below to describe the inputs, the outputs, and operations using elements from the recap at the top of this notebook (or from micro-tutorials 3 and 4 in part 1), in order to plan out the function. Do not write any code.
###Code
def my_selfmotion(arg1, arg2):
"""
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
Returns:
what output does the function generate?
Any further description?
"""
# what operations do we perform on the input?
# use the elements from micro-tutorials 3, 4, and 5
# 1.
# 2.
# 3.
# 4.
# what output should this function produce?
return output
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_90e4d753.py) **Template calculate world motion**We have drafted the help text and written comments in the function below that describe the inputs, the outputs, and operations we use to estimate world motion, based on the recap above.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
  Estimates world motion based on the visual signal, the estimate of
  self motion, and the model parameters.
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
# 1. running window function
# 2. take final value
# 3. subtract selfmotion from value
# return final value
return output
###Output
_____no_output_____
###Markdown
--- Section 7: Model implementation
###Code
# @title Video 7: Implementation
video = YouTubeVideo(id='DMSIt7t-LO8', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
Video available at https://youtube.com/watch?v=DMSIt7t-LO8
###Markdown
**Goal:** We write the components of the model in actual code. For the operations we picked, there are functions ready to use:* integration: `np.cumsum(data, axis=1)` (axis=1: per trial and over samples)* filtering: `my_moving_window(data, window)` (window: int, default 3)* take last `selfmotion` value as our estimate* threshold: if (value > thr): else: TD 7.1: Write code to estimate self motionUse the operations to finish writing the function that will calculate an estimate of self motion. Fill in the descriptive list of items with actual operations. Use the function for estimating world-motion below, which we've filled in for you! Exercise 1: finish self motion function
###Code
# Self motion function
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict) : dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float) : an estimate of self motion in m/s
"""
# uncomment the code below and fill in with your code
# 1. integrate vestibular signal
# ves = np.cumsum(ves * (1 / params['samplingrate']))
# 2. running window function to accumulate evidence:
# selfmotion = ... YOUR CODE HERE
# 3. take final value of self-motion vector as our estimate
# selfmotion = ... YOUR CODE HERE
  # 4. compare to threshold. Hint: the threshold is stored in
# params['threshold']
# if selfmotion is higher than threshold: return value
# if it's lower than threshold: return 0
# if YOURCODEHERE
# selfmotion = YOURCODHERE
# Comment this line when your function is ready
  raise NotImplementedError("Student exercise: estimate my_selfmotion")
return output
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_53312239.py) Interactive Demo: Unit testingTesting whether the functions you wrote do what they are supposed to do is important, and is known as 'unit testing'. Here we will simplify this for the `my_selfmotion()` function by letting you vary the threshold and window size with sliders and seeing what the distribution of self-motion estimates looks like.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
def refresh(threshold=0, windowsize=100):
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
selfmotion_estimates = np.empty(200)
# get the estimates for each trial:
for trial_number in range(200):
ves = vestibular[trial_number, :]
selfmotion_estimates[trial_number] = my_selfmotion(ves, params)
plt.figure()
plt.hist(selfmotion_estimates, bins=20)
plt.xlabel('self-motion estimate')
plt.ylabel('frequency')
plt.show()
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
**Estimate world motion**We have completed the `my_worldmotion()` function for you below.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
  Estimates world motion from the optic-flow signal and the self-motion estimate.
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
# running average to smooth/accumulate sensory evidence
visualmotion = my_moving_window(vis, window=params['filterwindows'][1],
FUN=np.mean)
# take final value
visualmotion = visualmotion[-1]
  # "subtract" the self-motion component: the optic-flow signal is negative,
  # so adding selfmotion removes it (the plotting code flips the sign later)
worldmotion = visualmotion + selfmotion
# return final value
return worldmotion
###Output
_____no_output_____
###Markdown
--- Section 8: Model completion
###Code
# @title Video 8: Completion
video = YouTubeVideo(id='EM-G8YYdrDg', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
Video available at https://youtube.com/watch?v=EM-G8YYdrDg
###Markdown
**Goal:** Make sure the model can speak to the hypothesis. Eliminate all the parameters that do not speak to the hypothesis.Now that we have a working model, we can keep improving it, but at some point we need to decide that it is finished. Once we have a model that displays the properties of a system we are interested in, it should be possible to say something about our hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question. Here that means that our model should have illusory perception, and perhaps make similar judgments to those of the participants, but not much more.To test this, we will run the model, store the output and plot the models' perceived self motion over perceived world motion, like we did with the actual perceptual judgments (it even uses the same plotting function). TD 8.1: See if the model produces illusions
###Code
# @markdown Run to plot model predictions of motion estimates
# prepare to run the model again:
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.6, 'filterwindows': [100, 50], 'FUN': np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
my_plot_percepts(datasets={'predictions': predictions}, plotconditions=True)
###Output
_____no_output_____
###Markdown
**Questions:*** How does the distribution of data points compare to the plot in TD 1.2 or in TD 7.1?* Did you expect to see this?* Where do the model's predicted judgments for each of the two conditions fall?* How does this compare to the behavioral data?However, the main observation should be that **there are illusions**: the blue and red data points are mixed in each of the two clusters of data points. This means the model can help us understand the phenomenon. --- Section 9: Model evaluation
###Code
# @title Video 9: Evaluation
video = YouTubeVideo(id='bWLFyobm4Rk', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
Video available at https://youtube.com/watch?v=bWLFyobm4Rk
###Markdown
**Goal:** Once we have finished the model, we need a description of how good it is. The question and goals we set in micro-tutorial 1 and 4 help here. There are multiple ways to evaluate a model. Aside from the obvious fact that we want to get insight into the phenomenon that is not directly accessible without the model, we always want to quantify how well the model agrees with the data.**Quantify model quality with $R^2$**Let's look at how well our model matches the actual judgment data.
###Code
# @markdown Run to plot predictions over data
my_plot_predictions_data(judgments, predictions)
###Output
_____no_output_____
###Markdown
When model predictions are correct, the red points in the figure above should lie along the identity line (a dotted black line here). Points off the identity line represent model prediction errors. While in each plot we see two clusters of dots that are fairly close to the identity line, there are also two clusters that are not. For the trials that those points represent, the model has an illusion while the participants don't or vice versa.We will use a straightforward, quantitative measure of how good the model is: $R^2$ (pronounced: "R-squared"), which can take values between 0 and 1, and expresses how much variance is explained by the relationship between two variables (here the model's predictions and the actual judgments). It is also called [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), and is calculated here as the square of the correlation coefficient (r or $\rho$). Just run the chunk below:
###Code
# @markdown Run to calculate R^2
conditions = np.concatenate((np.abs(judgments[:, 1]), np.abs(judgments[:, 2])))
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(conditions, veljudgmnt)
print(f"conditions -> judgments R^2: {r_value ** 2:0.3f}")
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
###Output
conditions -> judgments R^2: 0.032
predictions -> judgments R^2: 0.256
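###Markdown
For reference, the $R^2$ reported here is simply the squared Pearson correlation between the two variables, $R^2 = \rho^2$ with $\rho = \frac{\mathrm{cov}(x, y)}{\sigma_x \, \sigma_y}$, so any correlation routine (e.g. `np.corrcoef(veljudgmnt, velpredict)[0, 1] ** 2`) should give the same numbers.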
###Markdown
These $R^2$s express how well the experimental conditions explain the participants' judgments and how well the model's predicted judgments explain the participants' judgments.You will learn much more about model fitting, quantitative model evaluation and model comparison tomorrow!Perhaps the $R^2$ values don't seem very impressive, but the judgments produced by the participants are explained by the model's predictions better than by the actual conditions. In other words: in a certain percentage of cases the model tends to have the same illusions as the participants. TD 9.1: Varying the threshold parameter to improve the modelIn the code below, see if you can find a better value for the threshold parameter, to reduce errors in the model's predictions.**Testing thresholds** Interactive Demo: optimizing the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
Varying the parameters this way allows you to increase the model's performance in predicting the actual data, as measured by $R^2$. This is called model fitting, and will be done better in the coming weeks. TD 9.2: Credit assignment of self motionWhen we look at the figure in **TD 8.1**, we can see that one cluster seems very close to (1,0), just like in the actual data. The cluster of points at (1,0) is from the case where we conclude there is no self motion, and then set the self motion to 0. That value of 0 removes a lot of noise from the world-motion estimates, and all noise from the self-motion estimate. In the other case, where there is self motion, we still have a lot of noise (see also micro-tutorial 4).Let's change our `my_selfmotion()` function to return a self motion of 1 when the vestibular signal indicates we are above threshold, and 0 when we are below threshold. Edit the function here. Exercise 2: function for credit assignment of self motion
###Code
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
"""
# integrate signal:
ves = np.cumsum(ves * (1 / params['samplingrate']))
# use running window to accumulate evidence:
selfmotion = my_moving_window(ves, window=params['filterwindows'][0],
FUN=params['FUN'])
# take the final value as our estimate:
selfmotion = selfmotion[-1]
# compare to threshold, set to 0 if lower and else...
if selfmotion < params['threshold']:
selfmotion = 0
###########################################################################
# Exercise: Complete credit assignment. Remove the next line to test your function
else:
selfmotion = ... #YOUR CODE HERE
raise NotImplementedError("Modify with credit assignment")
###########################################################################
return selfmotion
# Use the updated function to run the model and plot the data
# Uncomment below to test your function
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.33, 'filterwindows': [100, 50], 'FUN': np.mean}
#modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
#my_plot_percepts(datasets={'predictions': predictions}, plotconditions=False)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_51dce10c.py)*Example output:* That looks much better, and closer to the actual data. Let's see if the $R^2$ values have improved. Use the optimal values for the threshold and window size that you found previously. Interactive Demo: evaluating the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
While the model still predicts velocity judgments better than the conditions (i.e. the model predicts illusions in somewhat similar cases), the $R^2$ values are a little worse than those of the simpler model. What's really going on is that the same set of points that were model prediction errors in the previous model are also errors here. All we have done is reduce the spread. **Interpret the model's meaning**Here's what you should have learned from modeling the train illusion: 1. A noisy vestibular acceleration signal can give rise to illusory motion.2. However, disambiguating the optic flow by adding the vestibular signal simply adds a lot of noise. This is not a plausible thing for the brain to do.3. Our other hypothesis - credit assignment - is more qualitatively correct, but our simulations were not able to match the frequency of the illusion on a trial-by-trial basis.We decided that for now we have learned enough, so it's time to write it up. --- Section 10: Model publication!
###Code
# @title Video 10: Publication
video = YouTubeVideo(id='zm8x7oegN6Q', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
Video available at https://youtube.com/watch?v=zm8x7oegN6Q
###Markdown
Neuromatch Academy: Week 1, Day 2, Tutorial 2 Modeling Practice: Model implementation and evaluation__Content creators:__ Marius 't Hart, Paul Schrater, Gunnar Blohm__Content reviewers:__ Norma Kuhn, Saeed Salehi, Madineh Sarvestani, Spiros Chavlis, Michael Waskom --- Tutorial objectivesWe are investigating a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks: **Framing the question**1. finding a phenomenon and a question to ask about it2. understanding the state of the art3. determining the basic ingredients4. formulating specific, mathematically defined hypotheses**Implementing the model**5. selecting the toolkit6. planning the model7. implementing the model**Model testing**8. completing the model9. testing and evaluating the model**Publishing**10. publishing modelsWe did steps 1-5 in Tutorial 1 and will cover steps 6-10 in Tutorial 2 (this notebook). Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.stats import gamma
from IPython.display import YouTubeVideo
# @title Figure settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def my_moving_window(x, window=3, FUN=np.mean):
"""
Calculates a moving estimate for a signal
Args:
x (numpy.ndarray): a vector array of size N
window (int): size of the window, must be a positive integer
FUN (function): the function to apply to the samples in the window
Returns:
(numpy.ndarray): a vector array of size N, containing the moving
average of x, calculated with a window of size window
There are smarter and faster solutions (e.g. using convolution) but this
function shows what the output really means. This function skips NaNs, and
should not be susceptible to edge effects: it will simply use
all the available samples, which means that close to the edges of the
signal or close to NaNs, the output will just be based on fewer samples. By
default, this function will apply a mean to the samples in the window, but
this can be changed to be a max/min/median or other function that returns a
single numeric value based on a sequence of values.
"""
# if data is a matrix, apply filter to each row:
if len(x.shape) == 2:
output = np.zeros(x.shape)
for rown in range(x.shape[0]):
output[rown, :] = my_moving_window(x[rown, :],
window=window, FUN=FUN)
return output
# make output array of the same size as x:
output = np.zeros(x.size)
# loop through the signal in x
for samp_i in range(x.size):
values = []
# loop through the window:
for wind_i in range(int(1 - window), 1):
if ((samp_i + wind_i) < 0) or (samp_i + wind_i) > (x.size - 1):
# out of range
continue
# sample is in range and not nan, use it:
if not(np.isnan(x[samp_i + wind_i])):
values += [x[samp_i + wind_i]]
# calculate the mean in the window for this point in the output:
output[samp_i] = FUN(values)
return output
def my_plot_percepts(datasets=None, plotconditions=False):
if isinstance(datasets, dict):
# try to plot the datasets
# they should be named...
# 'expectations', 'judgments', 'predictions'
plt.figure(figsize=(8, 8)) # set aspect ratio = 1? not really
plt.ylabel('perceived self motion [m/s]')
plt.xlabel('perceived world motion [m/s]')
plt.title('perceived velocities')
# loop through the entries in datasets
# plot them in the appropriate way
for k in datasets.keys():
if k == 'expectations':
expect = datasets[k]
plt.scatter(expect['world'], expect['self'], marker='*',
color='xkcd:green', label='my expectations')
elif k == 'judgments':
judgments = datasets[k]
for condition in np.unique(judgments[:, 0]):
c_idx = np.where(judgments[:, 0] == condition)[0]
cond_self_motion = judgments[c_idx[0], 1]
cond_world_motion = judgments[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'world-motion condition judgments'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'self-motion condition judgments'
else:
c_label = f"condition [{condition:d}] judgments"
plt.scatter(judgments[c_idx, 3], judgments[c_idx, 4],
label=c_label, alpha=0.2)
elif k == 'predictions':
predictions = datasets[k]
for condition in np.unique(predictions[:, 0]):
c_idx = np.where(predictions[:, 0] == condition)[0]
cond_self_motion = predictions[c_idx[0], 1]
cond_world_motion = predictions[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'predicted world-motion condition'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'predicted self-motion condition'
else:
c_label = f"condition [{condition:d}] prediction"
plt.scatter(predictions[c_idx, 4], predictions[c_idx, 3],
marker='x', label=c_label)
else:
print("datasets keys should be 'hypothesis', \
'judgments' and 'predictions'")
if plotconditions:
# this code is simplified but only works for the dataset we have:
plt.scatter([1], [0], marker='<', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='world-motion stimulus', s=80)
plt.scatter([0], [1], marker='>', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='self-motion stimulus', s=80)
plt.legend(facecolor='xkcd:white')
plt.show()
else:
if datasets is not None:
print('datasets argument should be a dict')
raise TypeError
def my_plot_stimuli(t, a, v):
plt.figure(figsize=(10, 6))
plt.plot(t, a, label='acceleration [$m/s^2$]')
plt.plot(t, v, label='velocity [$m/s$]')
plt.xlabel('time [s]')
plt.ylabel('[motion]')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_motion_signals():
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = np.cumsum(a * dt)
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col',
sharey='row', figsize=(14, 6))
fig.suptitle('Sensory ground truth')
ax1.set_title('world-motion condition')
ax1.plot(t, -v, label='visual [$m/s$]')
ax1.plot(t, np.zeros(a.size), label='vestibular [$m/s^2$]')
ax1.set_xlabel('time [s]')
ax1.set_ylabel('motion')
ax1.legend(facecolor='xkcd:white')
ax2.set_title('self-motion condition')
ax2.plot(t, -v, label='visual [$m/s$]')
ax2.plot(t, a, label='vestibular [$m/s^2$]')
ax2.set_xlabel('time [s]')
ax2.set_ylabel('motion')
ax2.legend(facecolor='xkcd:white')
plt.show()
def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False,
addaverages=False, integrateVestibular=False,
addGroundTruth=False):
if addGroundTruth:
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = a
wm_idx = np.where(judgments[:, 0] == 0)
sm_idx = np.where(judgments[:, 0] == 1)
opticflow = opticflow.transpose()
wm_opticflow = np.squeeze(opticflow[:, wm_idx])
sm_opticflow = np.squeeze(opticflow[:, sm_idx])
if integrateVestibular:
vestibular = np.cumsum(vestibular * .1, axis=1)
if addGroundTruth:
v = np.cumsum(a * dt)
vestibular = vestibular.transpose()
wm_vestibular = np.squeeze(vestibular[:, wm_idx])
sm_vestibular = np.squeeze(vestibular[:, sm_idx])
X = np.arange(0, 10, .1)
fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col',
sharey='row', figsize=(15, 10))
fig.suptitle('Sensory signals')
my_axes[0][0].plot(X, wm_opticflow, color='xkcd:light red', alpha=0.1)
my_axes[0][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][0].plot(t, -v, color='xkcd:red')
if addaverages:
my_axes[0][0].plot(X, np.average(wm_opticflow, axis=1),
color='xkcd:red', alpha=1)
my_axes[0][0].set_title('optic-flow in world-motion condition')
my_axes[0][0].set_ylabel('velocity signal [$m/s$]')
my_axes[0][1].plot(X, sm_opticflow, color='xkcd:azure', alpha=0.1)
my_axes[0][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][1].plot(t, -v, color='xkcd:blue')
if addaverages:
my_axes[0][1].plot(X, np.average(sm_opticflow, axis=1),
color='xkcd:blue', alpha=1)
my_axes[0][1].set_title('optic-flow in self-motion condition')
my_axes[1][0].plot(X, wm_vestibular, color='xkcd:light red', alpha=0.1)
my_axes[1][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addaverages:
my_axes[1][0].plot(X, np.average(wm_vestibular, axis=1),
color='xkcd:red', alpha=1)
my_axes[1][0].set_title('vestibular signal in world-motion condition')
if addGroundTruth:
my_axes[1][0].plot(t, np.zeros(100), color='xkcd:red')
my_axes[1][0].set_xlabel('time [s]')
if integrateVestibular:
my_axes[1][0].set_ylabel('velocity signal [$m/s$]')
else:
my_axes[1][0].set_ylabel('acceleration signal [$m/s^2$]')
my_axes[1][1].plot(X, sm_vestibular, color='xkcd:azure', alpha=0.1)
my_axes[1][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[1][1].plot(t, v, color='xkcd:blue')
if addaverages:
my_axes[1][1].plot(X, np.average(sm_vestibular, axis=1),
color='xkcd:blue', alpha=1)
my_axes[1][1].set_title('vestibular signal in self-motion condition')
my_axes[1][1].set_xlabel('time [s]')
if returnaxes:
return my_axes
else:
plt.show()
def my_threshold_solution(selfmotion_vel_est, threshold):
is_move = (selfmotion_vel_est > threshold)
return is_move
def my_moving_threshold(selfmotion_vel_est, thresholds):
pselfmove_nomove = np.empty(thresholds.shape)
pselfmove_move = np.empty(thresholds.shape)
prop_correct = np.empty(thresholds.shape)
pselfmove_nomove[:] = np.NaN
pselfmove_move[:] = np.NaN
prop_correct[:] = np.NaN
for thr_i, threshold in enumerate(thresholds):
# run my_threshold that the students will write:
try:
is_move = my_threshold(selfmotion_vel_est, threshold)
except Exception:
is_move = my_threshold_solution(selfmotion_vel_est, threshold)
# store results:
pselfmove_nomove[thr_i] = np.mean(is_move[0:100])
pselfmove_move[thr_i] = np.mean(is_move[100:200])
# calculate the proportion classified correctly:
# (1-pselfmove_nomove) + ()
# Correct rejections:
p_CR = (1 - pselfmove_nomove[thr_i])
# correct detections:
p_D = pselfmove_move[thr_i]
# this is corrected for proportion of trials in each condition:
prop_correct[thr_i] = (p_CR + p_D) / 2
return [pselfmove_nomove, pselfmove_move, prop_correct]
def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):
plt.figure(figsize=(12, 8))
plt.title('threshold effects')
plt.plot([min(thresholds), max(thresholds)], [0, 0], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [0.5, 0.5], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [1, 1], ':',
color='xkcd:black')
plt.plot(thresholds, world_prop, label='world motion condition')
plt.plot(thresholds, self_prop, label='self motion condition')
plt.plot(thresholds, prop_correct, color='xkcd:purple',
label='correct classification')
plt.xlabel('threshold')
plt.ylabel('proportion correct or classified as self motion')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_predictions_data(judgments, predictions):
# conditions = np.concatenate((np.abs(judgments[:, 1]),
# np.abs(judgments[:, 2])))
# veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
# velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
# self:
# conditions_self = np.abs(judgments[:, 1])
veljudgmnt_self = judgments[:, 3]
velpredict_self = predictions[:, 3]
# world:
# conditions_world = np.abs(judgments[:, 2])
veljudgmnt_world = judgments[:, 4]
velpredict_world = predictions[:, 4]
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row',
figsize=(12, 5))
ax1.scatter(veljudgmnt_self, velpredict_self, alpha=0.2)
ax1.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax1.set_title('self-motion judgments')
ax1.set_xlabel('observed')
ax1.set_ylabel('predicted')
ax2.scatter(veljudgmnt_world, velpredict_world, alpha=0.2)
ax2.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax2.set_title('world-motion judgments')
ax2.set_xlabel('observed')
ax2.set_ylabel('predicted')
plt.show()
# @title Data retrieval
import os
fname="W1D2_data.npz"
if not os.path.exists(fname):
!wget https://osf.io/c5xyf/download -O $fname
filez = np.load(file=fname, allow_pickle=True)
judgments = filez['judgments']
opticflow = filez['opticflow']
vestibular = filez['vestibular']
###Output
_____no_output_____
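###Markdown
As a quick sanity check of what was just loaded, the sketch below only prints the array shapes and dtypes. (The exact sizes are not guaranteed here; they are whatever the downloaded `W1D2_data.npz` contains.)
###Code
# Sanity check: report the shape and dtype of each loaded array.
for name, arr in [('judgments', judgments),
                  ('opticflow', opticflow),
                  ('vestibular', vestibular)]:
  print(f"{name}: shape {arr.shape}, dtype {arr.dtype}")
###Output
_____no_output_____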
###Markdown
--- Section 6: Model planning
###Code
# @title Video 6: Planning
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1nC4y1h7yL', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1nC4y1h7yL
###Markdown
**Goal:** Identify the key components of the model and how they work together.Our goal all along has been to model our perceptual estimates of sensory data.Now that we have some idea of what we want to do, we need to line up the components of the model: what are the input and output? Which computations are done and in what order? Our model will have:* **inputs**: the values the system has available - this can be broken down into _data:_ the sensory signals, _parameters:_ the threshold and the window sizes for filtering* **outputs**: these are the predictions our model will make - for this tutorial these are the perceptual judgments on each trial in m/s, just like the judgments participants made.* **model functions**: A set of functions that perform the hypothesized computations.We will define a set of functions that take our data and some parameters as input, can run our model, and output a prediction for the judgment data.**Recap of what we've accomplished so far:**To model perceptual estimates from our sensory data, we need to 1. _integrate:_ to ensure sensory information is in appropriate units2. _filter:_ to reduce noise and set timescale3. _threshold:_ to model detectionThis will be done with these operations:1. _integrate:_ `np.cumsum()`2. _filter:_ `my_moving_window()`3. _threshold:_ `if` with a comparison (`>` or `<`) and `else`**_Planning our model:_**We will now start putting all the pieces together. Normally you would sketch this yourself, but here is an overview of how the functions comprising the model are going to work:Below is the main function with a detailed explanation of what the function is supposed to do, exactly what input is expected, and what output will be generated. The model is not complete, so it only returns nans (**n**ot-**a**-**n**umber) for now. However, this outlines how most model code works: it gets some measured data (the sensory signals) and a set of parameters as input, and as output returns a prediction on other measured data (the velocity judgments). The goal of this function is to define the top level of a simulation model which:* receives all input* loops through the cases* calls functions that compute predicted values for each case* outputs the predictions **Main model function**
###Code
def my_train_illusion_model(sensorydata, params):
"""
Generate output predictions of perceived self-motion and perceived
world-motion velocity based on input visual and vestibular signals.
Args:
sensorydata: (dict) dictionary with two named entries:
opticflow: (numpy.ndarray of float) NxM array with N trials on rows
and M visual signal samples in columns
vestibular: (numpy.ndarray of float) NxM array with N trials on rows
and M vestibular signal samples in columns
params: (dict) dictionary with named entries:
threshold: (float) vestibular threshold for credit assignment
filterwindows: (list of int) determines the strength of filtering for
the visual and vestibular signals, respectively
integrate (bool): whether to integrate the vestibular signals, will
be set to True if absent
FUN (function): function used in the filter, will be set to
np.mean if absent
samplingrate (float): the number of samples per second in the
sensory data, will be set to 10 if absent
Returns:
dict with two entries:
selfmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived self motion
worldmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived world motion
"""
# sanitize input a little
if not('FUN' in params.keys()):
params['FUN'] = np.mean
if not('integrate' in params.keys()):
params['integrate'] = True
if not('samplingrate' in params.keys()):
params['samplingrate'] = 10
# number of trials:
ntrials = sensorydata['opticflow'].shape[0]
# set up variables to collect output
selfmotion = np.empty(ntrials)
worldmotion = np.empty(ntrials)
# loop through trials?
for trialN in range(ntrials):
# these are our sensory variables (inputs)
vis = sensorydata['opticflow'][trialN, :]
ves = sensorydata['vestibular'][trialN, :]
# generate output predicted perception:
selfmotion[trialN],\
worldmotion[trialN] = my_perceived_motion(vis=vis, ves=ves,
params=params)
return {'selfmotion': selfmotion, 'worldmotion': worldmotion}
# here is a mock version of my_perceived_motion().
# so you can test my_train_illusion_model()
def my_perceived_motion(*args, **kwargs):
return [np.nan, np.nan]
# let's look at the predictions we generated for two sample trials (0, 100)
# we should get a 1x2 vector of self-motion prediction and another
# for world-motion
sensorydata = {'opticflow': opticflow[[0, 100], :],
               'vestibular': vestibular[[0, 100], :]}
params={'threshold': 0.33, 'filterwindows': [100, 50]}
my_train_illusion_model(sensorydata=sensorydata, params=params)
###Output
_____no_output_____
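###Markdown
Before filling in the real sub-functions, here is a minimal, self-contained sketch of the three planned operations (integrate, filter, threshold) applied to a made-up noisy acceleration signal. The signal, window size and threshold below are arbitrary choices for illustration only, not part of the model.
###Code
# Toy demonstration of the three planned operations on a synthetic signal.
toy_rng = np.random.RandomState(0)
toy_acc = gamma.pdf(np.arange(0, 10, 0.1), 2.5, 0) + toy_rng.randn(100) * 0.5
# 1. integrate: acceleration -> velocity (dt = 0.1 s)
toy_vel = np.cumsum(toy_acc * 0.1)
# 2. filter: running window to reduce noise
toy_smooth = my_moving_window(toy_vel, window=25, FUN=np.mean)
# 3. threshold: keep the final value only if it exceeds a criterion
toy_estimate = toy_smooth[-1] if toy_smooth[-1] > 0.33 else 0
print(f"final smoothed velocity: {toy_smooth[-1]:.2f} -> estimate: {toy_estimate:.2f}")
###Output
_____no_output_____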
###Markdown
We've also completed the `my_perceived_motion()` function for you below. Follow this example to complete the template for `my_selfmotion()` and `my_worldmotion()`. Write out the inputs and outputs, and the steps required to calculate the outputs from the inputs.**Perceived motion function**
###Code
# Full perceived motion function
def my_perceived_motion(vis, ves, params):
"""
Takes sensory data and parameters and returns predicted percepts
Args:
vis (numpy.ndarray) : 1xM array of optic flow velocity data
ves (numpy.ndarray) : 1xM array of vestibular acceleration data
params : (dict) dictionary with named entries:
see my_train_illusion_model() for details
Returns:
[list of floats] : prediction for perceived self-motion based on
vestibular data, and prediction for perceived
world-motion based on perceived self-motion and
visual data
"""
# estimate self motion based on only the vestibular data
# pass on the parameters
selfmotion = my_selfmotion(ves=ves, params=params)
# estimate the world motion, based on the selfmotion and visual data
# pass on the parameters as well
worldmotion = my_worldmotion(vis=vis, selfmotion=selfmotion, params=params)
return [selfmotion, worldmotion]
###Output
_____no_output_____
###Markdown
TD 6.1: Formulate purpose of the self motion functionNow we plan out the purpose of one of the remaining functions. **Only name input arguments, write help text and comments, _no code_.** The goal of this exercise is to make writing the code (in Micro-tutorial 7) much easier. Based on our work before the break, you should now be able to answer these questions for each function:* what (sensory) data is necessary? * what parameters does the function need, if any?* which operations will be performed on the input?* what is the output?The number of arguments is correct. **Template calculate self motion**Name the _input arguments_, complete the _help text_, and add _comments_ in the function below to describe the inputs, the outputs, and operations using elements from the recap at the top of this notebook (or from micro-tutorials 3 and 4 in part 1), in order to plan out the function. Do not write any code.
###Code
def my_selfmotion(arg1, arg2):
"""
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
Returns:
what output does the function generate?
Any further description?
"""
# what operations do we perform on the input?
# use the elements from micro-tutorials 3, 4, and 5
# 1.
# 2.
# 3.
# 4.
# what output should this function produce?
return output
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/erlichlab/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_90e4d753.py) **Template calculate world motion**We have drafted the help text and written comments in the function below that describe the inputs, the outputs, and operations we use to estimate world motion, based on the recap above.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
Estimates world motion based on the visual signal, the estimate of self motion, and the parameters
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
# 1. running window function
# 2. take final value
# 3. subtract selfmotion from value
# return final value
return output
###Output
_____no_output_____
###Markdown
--- Section 7: Model implementation
###Code
# @title Video 7: Implementation
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV18Z4y1u7yB', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV18Z4y1u7yB
###Markdown
**Goal:** We write the components of the model in actual code.For the operations we picked, there are functions ready to use:* integration: `np.cumsum(data, axis=1)` (axis=1: per trial and over samples)* filtering: `my_moving_window(data, window)` (window: int, default 3)* take last `selfmotion` value as our estimate* threshold: if (value > thr): else: TD 7.1: Write code to estimate self motionUse the operations to finish writing the function that will calculate an estimate of self motion. Fill in the descriptive list of items with actual operations. Use the function for estimating world-motion below, which we've filled for you! Exercise 1: finish self motion function
###Code
# Self motion function
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict) : dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float) : an estimate of self motion in m/s
"""
# uncomment the code below and fill in with your code
# 1. integrate vestibular signal
# ves = np.cumsum(ves * (1 / params['samplingrate']))
# 2. running window function to accumulate evidence:
# selfmotion = ... YOUR CODE HERE
# 3. take final value of self-motion vector as our estimate
# selfmotion = ... YOUR CODE HERE
# 4. compare to threshold. Hint: the threshold is stored in
# params['threshold']
# if selfmotion is higher than threshold: return value
# if it's lower than threshold: return 0
# if YOURCODEHERE
# selfmotion = YOURCODEHERE
# Comment this line when your function is ready
raise NotImplementedError("Student excercise: estimate my_selfmotion")
return output
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/erlichlab/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_53312239.py) Interactive Demo: Unit testingTesting whether the functions you wrote do what they are supposed to do is important, and is known as 'unit testing'. Here we will simplify this for the `my_selfmotion()` function by letting you vary the threshold and window size with sliders and see what the distribution of self-motion estimates looks like.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
def refresh(threshold=0, windowsize=100):
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
selfmotion_estimates = np.empty(200)
# get the estimates for each trial:
for trial_number in range(200):
ves = vestibular[trial_number, :]
selfmotion_estimates[trial_number] = my_selfmotion(ves, params)
plt.figure()
plt.hist(selfmotion_estimates, bins=20)
plt.xlabel('self-motion estimate')
plt.ylabel('frequency')
plt.show()
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
**Estimate world motion**We have completed the `my_worldmotion()` function for you below.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
Estimates world motion based on the visual signal, the self-motion estimate, and the parameters
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
# running average to smooth/accumulate sensory evidence
visualmotion = my_moving_window(vis, window=params['filterwindows'][1],
FUN=np.mean)
# take final value
visualmotion = visualmotion[-1]
# remove the self-motion component; with the sign convention of the optic flow signal this works out to an addition
worldmotion = visualmotion + selfmotion
# return final value
return worldmotion
###Output
_____no_output_____
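###Markdown
As a quick usage check of the function above, the sketch below calls `my_worldmotion()` on the first trial with a placeholder self-motion estimate of 0 (a made-up value, for illustration only), using the same filter-window settings as elsewhere in this tutorial.
###Code
# Illustrative call of my_worldmotion() on a single trial.
example_params = {'filterwindows': [100, 50], 'FUN': np.mean, 'samplingrate': 10}
example_world = my_worldmotion(vis=opticflow[0, :], selfmotion=0.0,
                               params=example_params)
print(f"world-motion estimate for trial 0 (self motion assumed 0): {example_world:.2f} m/s")
###Output
_____no_output_____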
###Markdown
--- Section 8: Model completion
###Code
# @title Video 8: Completion
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1YK411H7oW', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1YK411H7oW
###Markdown
**Goal:** Make sure the model can speak to the hypothesis. Eliminate all the parameters that do not speak to the hypothesis.Now that we have a working model, we can keep improving it, but at some point we need to decide that it is finished. Once we have a model that displays the properties of a system we are interested in, it should be possible to say something about our hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question. Here that means that our model should have illusory perception, and perhaps make similar judgments to those of the participants, but not much more.To test this, we will run the model, store the output and plot the model's perceived self motion over perceived world motion, like we did with the actual perceptual judgments (it even uses the same plotting function). TD 8.1: See if the model produces illusions
###Code
# @markdown Run to plot model predictions of motion estimates
# prepare to run the model again:
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.6, 'filterwindows': [100, 50], 'FUN': np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
my_plot_percepts(datasets={'predictions': predictions}, plotconditions=True)
###Output
_____no_output_____
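###Markdown
To put a rough number on "there are illusions", the sketch below counts how often the model attributes motion to the self in each condition. It assumes the same column layout as above (column 0: condition, column 3: perceived self motion) and that the model functions have been completed so that `predictions` exists.
###Code
# Rough quantification of illusory percepts in the model predictions.
wm_trials = predictions[predictions[:, 0] == 0]  # world-motion condition
sm_trials = predictions[predictions[:, 0] == 1]  # self-motion condition
print(f"world-motion trials with predicted self motion: {np.mean(wm_trials[:, 3] > 0):.2f}")
print(f"self-motion trials with no predicted self motion: {np.mean(sm_trials[:, 3] == 0):.2f}")
###Output
_____no_output_____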
###Markdown
**Questions:*** How does the distribution of data points compare to the plot in TD 1.2 or in TD 7.1?* Did you expect to see this?* Where do the model's predicted judgments for each of the two conditions fall?* How does this compare to the behavioral data?However, the main observation should be that **there are illusions**: the blue and red data points are mixed in each of the two clusters of data points. This means the model can help us understand the phenomenon. --- Section 9: Model evaluation
###Code
# @title Video 9: Evaluation
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1uK411H7EK', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1uK411H7EK
###Markdown
**Goal:** Once we have finished the model, we need a description of how good it is. The question and goals we set in micro-tutorials 1 and 4 help here. There are multiple ways to evaluate a model. Aside from the obvious fact that we want to get insight into the phenomenon that is not directly accessible without the model, we always want to quantify how well the model agrees with the data.**Quantify model quality with $R^2$**Let's look at how well our model matches the actual judgment data.
###Code
# @markdown Run to plot predictions over data
my_plot_predictions_data(judgments, predictions)
###Output
_____no_output_____
###Markdown
When model predictions are correct, the red points in the figure above should lie along the identity line (a dotted black line here). Points off the identity line represent model prediction errors. While in each plot we see two clusters of dots that are fairly close to the identity line, there are also two clusters that are not. For the trials that those points represent, the model has an illusion while the participants don't or vice versa.We will use a straightforward, quantitative measure of how good the model is: $R^2$ (pronounced: "R-squared"), which can take values between 0 and 1, and expresses how much variance is explained by the relationship between two variables (here the model's predictions and the actual judgments). It is also called [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), and is calculated here as the square of the correlation coefficient (r or $\rho$). Just run the chunk below:
###Code
# @markdown Run to calculate R^2
conditions = np.concatenate((np.abs(judgments[:, 1]), np.abs(judgments[:, 2])))
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(conditions, veljudgmnt)
print(f"conditions -> judgments R^2: {r_value ** 2:0.3f}")
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
###Output
conditions -> judgments R^2: 0.032
predictions -> judgments R^2: 0.256
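###Markdown
Since $R^2$ is computed here as the squared correlation coefficient, the same number can be obtained directly with `np.corrcoef` — a small cross-check sketch (it reuses the `veljudgmnt` and `velpredict` arrays defined in the cell above):
###Code
# Cross-check: R^2 as the squared Pearson correlation coefficient.
r = np.corrcoef(veljudgmnt, velpredict)[0, 1]
print(f"np.corrcoef cross-check, predictions -> judgments R^2: {r ** 2:0.3f}")
###Output
_____no_output_____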
###Markdown
These $R^2$s express how well the experimental conditions explain the participants' judgments and how well the model's predicted judgments explain the participants' judgments.You will learn much more about model fitting, quantitative model evaluation and model comparison tomorrow!Perhaps the $R^2$ values don't seem very impressive, but the judgments produced by the participants are explained by the model's predictions better than by the actual conditions. In other words: in a certain percentage of cases the model tends to have the same illusions as the participants. TD 9.1 Varying the threshold parameter to improve the modelIn the code below, see if you can find a better value for the threshold parameter, to reduce errors in the model's predictions.**Testing thresholds** Interactive Demo: optimizing the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
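###Markdown
Instead of moving the sliders by hand, a coarse grid search over the threshold can pick the best-scoring value automatically — a minimal sketch of the model fitting mentioned below, assuming `my_selfmotion()` and `my_worldmotion()` have been completed (the window size is fixed at 100 here, an arbitrary choice):
###Code
# Coarse grid search over the threshold parameter, scored by R^2.
fit_judgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
best_r2, best_threshold = -1.0, None
for fit_threshold in np.arange(0.0, 1.01, 0.05):
  fit_params = {'threshold': fit_threshold, 'filterwindows': [100, 50], 'FUN': np.mean}
  fit_output = my_train_illusion_model(sensorydata=data, params=fit_params)
  fit_predict = np.concatenate((fit_output['selfmotion'],
                                fit_output['worldmotion'] * -1))
  fit_r2 = np.corrcoef(fit_judgmnt, fit_predict)[0, 1] ** 2
  if fit_r2 > best_r2:
    best_r2, best_threshold = fit_r2, fit_threshold
print(f"best threshold: {best_threshold:.2f} (R^2 = {best_r2:.3f})")
###Output
_____no_output_____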
###Markdown
Varying the parameters this way allows you to increase the model's performance in predicting the actual data as measured by $R^2$. This is called model fitting, and will be done better in the coming weeks. TD 9.2: Credit assignment of self motionWhen we look at the figure in **TD 8.1**, we can see that one cluster does seem very close to (1,0), just like in the actual data. The cluster of points at (1,0) is from the case where we conclude there is no self motion, and then set the self motion to 0. That value of 0 removes a lot of noise from the world-motion estimates, and all noise from the self-motion estimate. In the other case, where there is self motion, we still have a lot of noise (see also micro-tutorial 4).Let's change our `my_selfmotion()` function to return a self motion of 1 when the vestibular signal indicates we are above threshold, and 0 when we are below threshold. Edit the function here. Exercise 2: function for credit assignment of self motion
###Code
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
"""
# integrate signal:
ves = np.cumsum(ves * (1 / params['samplingrate']))
# use running window to accumulate evidence:
selfmotion = my_moving_window(ves, window=params['filterwindows'][0],
FUN=params['FUN'])
# take the final value as our estimate:
selfmotion = selfmotion[-1]
# compare to threshold, set to 0 if lower and else...
if selfmotion < params['threshold']:
selfmotion = 0
###########################################################################
# Exercise: Complete credit assignment. Remove the next line to test your function
else:
selfmotion = ... #YOUR CODE HERE
raise NotImplementedError("Modify with credit assignment")
###########################################################################
return selfmotion
# Use the updated function to run the model and plot the data
# Uncomment below to test your function
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.33, 'filterwindows': [100, 50], 'FUN': np.mean}
#modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
#my_plot_percepts(datasets={'predictions': predictions}, plotconditions=False)
###Output
_____no_output_____
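###Markdown
For reference alongside the official solution linked below, here is one possible sketch of the credit-assignment variant described above, written as a separate function (`my_selfmotion_credit` is a made-up name, used so the exercise function above is not overwritten). It expects the same `params` dictionary as `my_train_illusion_model()`, including 'samplingrate' and 'FUN'.
###Code
# Sketch of the credit-assignment variant: self motion is either 0 or 1 m/s.
def my_selfmotion_credit(ves, params):
  """Binary (credit-assignment) self-motion estimate for one vestibular signal."""
  ves = np.cumsum(ves * (1 / params['samplingrate']))
  selfmotion = my_moving_window(ves, window=params['filterwindows'][0],
                                FUN=params['FUN'])[-1]
  # above threshold: attribute the motion to the self (1 m/s), otherwise 0
  return 1 if selfmotion > params['threshold'] else 0
###Output
_____no_output_____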
###Markdown
[*Click for solution*](https://github.com/erlichlab/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_51dce10c.py)*Example output:* That looks much better, and closer to the actual data. Let's see if the $R^2$ values have improved. Use the optimal values for the threshold and window size that you found previously. Interactive Demo: evaluating the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
While the model still predicts velocity judgments better than the conditions (i.e. the model predicts illusions in somewhat similar cases), the $R^2$ values are a little worse than those of the simpler model. What's really going on is that the same set of points that were model prediction errors in the previous model are also errors here. All we have done is reduce the spread. **Interpret the model's meaning**Here's what you should have learned from modeling the train illusion: 1. A noisy vestibular acceleration signal can give rise to illusory motion.2. However, disambiguating the optic flow by adding the vestibular signal simply adds a lot of noise. This is not a plausible thing for the brain to do.3. Our other hypothesis - credit assignment - is more qualitatively correct, but our simulations were not able to match the frequency of the illusion on a trial-by-trial basis.We decided that for now we have learned enough, so it's time to write it up. --- Section 10: Model publication!
###Code
# @title Video 10: Publication
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1M5411e7AG', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1M5411e7AG
###Markdown
###Code
# Mount Google Drive
from google.colab import drive # import drive from google colab
ROOT = "/content/drive" # default location for the drive
print(ROOT) # print the drive mount point (optional)
drive.mount(ROOT, force_remount=True)
###Output
/content/drive
Mounted at /content/drive
###Markdown
Neuromatch Academy: Week 1, Day 2, Tutorial 2 Modeling Practice: Model implementation and evaluation__Content creators:__ Marius 't Hart, Paul Schrater, Gunnar Blohm__Content reviewers:__ Norma Kuhn, Saeed Salehi, Madineh Sarvestani, Spiros Chavlis, Michael Waskom --- Tutorial objectivesWe are investigating a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks: **Framing the question**1. finding a phenomenon and a question to ask about it2. understanding the state of the art3. determining the basic ingredients4. formulating specific, mathematically defined hypotheses**Implementing the model**5. selecting the toolkit6. planning the model7. implementing the model**Model testing**8. completing the model9. testing and evaluating the model**Publishing**10. publishing modelsWe did steps 1-5 in Tutorial 1 and will cover steps 6-10 in Tutorial 2 (this notebook). Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.stats import gamma
from IPython.display import YouTubeVideo
# @title Figure settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def my_moving_window(x, window=3, FUN=np.mean):
"""
Calculates a moving estimate for a signal
Args:
x (numpy.ndarray): a vector array of size N
window (int): size of the window, must be a positive integer
FUN (function): the function to apply to the samples in the window
Returns:
(numpy.ndarray): a vector array of size N, containing the moving
average of x, calculated with a window of size window
There are smarter and faster solutions (e.g. using convolution) but this
function shows what the output really means. This function skips NaNs, and
should not be susceptible to edge effects: it will simply use
all the available samples, which means that close to the edges of the
signal or close to NaNs, the output will just be based on fewer samples. By
default, this function will apply a mean to the samples in the window, but
this can be changed to be a max/min/median or other function that returns a
single numeric value based on a sequence of values.
"""
# if data is a matrix, apply filter to each row:
if len(x.shape) == 2:
output = np.zeros(x.shape)
for rown in range(x.shape[0]):
output[rown, :] = my_moving_window(x[rown, :],
window=window, FUN=FUN)
return output
# make output array of the same size as x:
output = np.zeros(x.size)
# loop through the signal in x
for samp_i in range(x.size):
values = []
# loop through the window:
for wind_i in range(int(1 - window), 1):
if ((samp_i + wind_i) < 0) or (samp_i + wind_i) > (x.size - 1):
# out of range
continue
# sample is in range and not nan, use it:
if not(np.isnan(x[samp_i + wind_i])):
values += [x[samp_i + wind_i]]
# calculate the mean in the window for this point in the output:
output[samp_i] = FUN(values)
return output
def my_plot_percepts(datasets=None, plotconditions=False):
if isinstance(datasets, dict):
# try to plot the datasets
# they should be named...
# 'expectations', 'judgments', 'predictions'
plt.figure(figsize=(8, 8)) # set aspect ratio = 1? not really
plt.ylabel('perceived self motion [m/s]')
plt.xlabel('perceived world motion [m/s]')
plt.title('perceived velocities')
# loop through the entries in datasets
# plot them in the appropriate way
for k in datasets.keys():
if k == 'expectations':
expect = datasets[k]
plt.scatter(expect['world'], expect['self'], marker='*',
color='xkcd:green', label='my expectations')
elif k == 'judgments':
judgments = datasets[k]
for condition in np.unique(judgments[:, 0]):
c_idx = np.where(judgments[:, 0] == condition)[0]
cond_self_motion = judgments[c_idx[0], 1]
cond_world_motion = judgments[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'world-motion condition judgments'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'self-motion condition judgments'
else:
c_label = f"condition [{condition:d}] judgments"
plt.scatter(judgments[c_idx, 3], judgments[c_idx, 4],
label=c_label, alpha=0.2)
elif k == 'predictions':
predictions = datasets[k]
for condition in np.unique(predictions[:, 0]):
c_idx = np.where(predictions[:, 0] == condition)[0]
cond_self_motion = predictions[c_idx[0], 1]
cond_world_motion = predictions[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'predicted world-motion condition'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'predicted self-motion condition'
else:
c_label = f"condition [{condition:d}] prediction"
plt.scatter(predictions[c_idx, 4], predictions[c_idx, 3],
marker='x', label=c_label)
else:
print("datasets keys should be 'hypothesis', \
'judgments' and 'predictions'")
if plotconditions:
# this code is simplified but only works for the dataset we have:
plt.scatter([1], [0], marker='<', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='world-motion stimulus', s=80)
plt.scatter([0], [1], marker='>', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='self-motion stimulus', s=80)
plt.legend(facecolor='xkcd:white')
plt.show()
else:
if datasets is not None:
print('datasets argument should be a dict')
raise TypeError
def my_plot_stimuli(t, a, v):
plt.figure(figsize=(10, 6))
plt.plot(t, a, label='acceleration [$m/s^2$]')
plt.plot(t, v, label='velocity [$m/s$]')
plt.xlabel('time [s]')
plt.ylabel('[motion]')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_motion_signals():
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = np.cumsum(a * dt)
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col',
sharey='row', figsize=(14, 6))
fig.suptitle('Sensory ground truth')
ax1.set_title('world-motion condition')
ax1.plot(t, -v, label='visual [$m/s$]')
ax1.plot(t, np.zeros(a.size), label='vestibular [$m/s^2$]')
ax1.set_xlabel('time [s]')
ax1.set_ylabel('motion')
ax1.legend(facecolor='xkcd:white')
ax2.set_title('self-motion condition')
ax2.plot(t, -v, label='visual [$m/s$]')
ax2.plot(t, a, label='vestibular [$m/s^2$]')
ax2.set_xlabel('time [s]')
ax2.set_ylabel('motion')
ax2.legend(facecolor='xkcd:white')
plt.show()
def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False,
addaverages=False, integrateVestibular=False,
addGroundTruth=False):
if addGroundTruth:
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = a
wm_idx = np.where(judgments[:, 0] == 0)
sm_idx = np.where(judgments[:, 0] == 1)
opticflow = opticflow.transpose()
wm_opticflow = np.squeeze(opticflow[:, wm_idx])
sm_opticflow = np.squeeze(opticflow[:, sm_idx])
if integrateVestibular:
vestibular = np.cumsum(vestibular * .1, axis=1)
if addGroundTruth:
v = np.cumsum(a * dt)
vestibular = vestibular.transpose()
wm_vestibular = np.squeeze(vestibular[:, wm_idx])
sm_vestibular = np.squeeze(vestibular[:, sm_idx])
X = np.arange(0, 10, .1)
fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col',
sharey='row', figsize=(15, 10))
fig.suptitle('Sensory signals')
my_axes[0][0].plot(X, wm_opticflow, color='xkcd:light red', alpha=0.1)
my_axes[0][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][0].plot(t, -v, color='xkcd:red')
if addaverages:
my_axes[0][0].plot(X, np.average(wm_opticflow, axis=1),
color='xkcd:red', alpha=1)
my_axes[0][0].set_title('optic-flow in world-motion condition')
my_axes[0][0].set_ylabel('velocity signal [$m/s$]')
my_axes[0][1].plot(X, sm_opticflow, color='xkcd:azure', alpha=0.1)
my_axes[0][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][1].plot(t, -v, color='xkcd:blue')
if addaverages:
my_axes[0][1].plot(X, np.average(sm_opticflow, axis=1),
color='xkcd:blue', alpha=1)
my_axes[0][1].set_title('optic-flow in self-motion condition')
my_axes[1][0].plot(X, wm_vestibular, color='xkcd:light red', alpha=0.1)
my_axes[1][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addaverages:
my_axes[1][0].plot(X, np.average(wm_vestibular, axis=1),
color='xkcd:red', alpha=1)
my_axes[1][0].set_title('vestibular signal in world-motion condition')
if addGroundTruth:
my_axes[1][0].plot(t, np.zeros(100), color='xkcd:red')
my_axes[1][0].set_xlabel('time [s]')
if integrateVestibular:
my_axes[1][0].set_ylabel('velocity signal [$m/s$]')
else:
my_axes[1][0].set_ylabel('acceleration signal [$m/s^2$]')
my_axes[1][1].plot(X, sm_vestibular, color='xkcd:azure', alpha=0.1)
my_axes[1][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[1][1].plot(t, v, color='xkcd:blue')
if addaverages:
my_axes[1][1].plot(X, np.average(sm_vestibular, axis=1),
color='xkcd:blue', alpha=1)
my_axes[1][1].set_title('vestibular signal in self-motion condition')
my_axes[1][1].set_xlabel('time [s]')
if returnaxes:
return my_axes
else:
plt.show()
def my_threshold_solution(selfmotion_vel_est, threshold):
is_move = (selfmotion_vel_est > threshold)
return is_move
def my_moving_threshold(selfmotion_vel_est, thresholds):
pselfmove_nomove = np.empty(thresholds.shape)
pselfmove_move = np.empty(thresholds.shape)
prop_correct = np.empty(thresholds.shape)
pselfmove_nomove[:] = np.NaN
pselfmove_move[:] = np.NaN
prop_correct[:] = np.NaN
for thr_i, threshold in enumerate(thresholds):
# run my_threshold that the students will write:
try:
is_move = my_threshold(selfmotion_vel_est, threshold)
except Exception:
is_move = my_threshold_solution(selfmotion_vel_est, threshold)
# store results:
pselfmove_nomove[thr_i] = np.mean(is_move[0:100])
pselfmove_move[thr_i] = np.mean(is_move[100:200])
# calculate the proportion classified correctly:
# (1-pselfmove_nomove) + ()
# Correct rejections:
p_CR = (1 - pselfmove_nomove[thr_i])
# correct detections:
p_D = pselfmove_move[thr_i]
# this is corrected for proportion of trials in each condition:
prop_correct[thr_i] = (p_CR + p_D) / 2
return [pselfmove_nomove, pselfmove_move, prop_correct]
def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):
plt.figure(figsize=(12, 8))
plt.title('threshold effects')
plt.plot([min(thresholds), max(thresholds)], [0, 0], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [0.5, 0.5], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [1, 1], ':',
color='xkcd:black')
plt.plot(thresholds, world_prop, label='world motion condition')
plt.plot(thresholds, self_prop, label='self motion condition')
plt.plot(thresholds, prop_correct, color='xkcd:purple',
label='correct classification')
plt.xlabel('threshold')
plt.ylabel('proportion correct or classified as self motion')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_predictions_data(judgments, predictions):
# conditions = np.concatenate((np.abs(judgments[:, 1]),
# np.abs(judgments[:, 2])))
# veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
# velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
# self:
# conditions_self = np.abs(judgments[:, 1])
veljudgmnt_self = judgments[:, 3]
velpredict_self = predictions[:, 3]
# world:
# conditions_world = np.abs(judgments[:, 2])
veljudgmnt_world = judgments[:, 4]
velpredict_world = predictions[:, 4]
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row',
figsize=(12, 5))
ax1.scatter(veljudgmnt_self, velpredict_self, alpha=0.2)
ax1.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax1.set_title('self-motion judgments')
ax1.set_xlabel('observed')
ax1.set_ylabel('predicted')
ax2.scatter(veljudgmnt_world, velpredict_world, alpha=0.2)
ax2.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax2.set_title('world-motion judgments')
ax2.set_xlabel('observed')
ax2.set_ylabel('predicted')
plt.show()
# @title Data retrieval
import os
fname="W1D2_data.npz"
if not os.path.exists(fname):
!wget https://osf.io/c5xyf/download -O $fname
filez = np.load(file=fname, allow_pickle=True)
judgments = filez['judgments']
opticflow = filez['opticflow']
vestibular = filez['vestibular']
###Output
_____no_output_____
###Markdown
--- Section 6: Model planning
###Code
# @title Video 6: Planning
video = YouTubeVideo(id='dRTOFFigxa0', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
Video available at https://youtube.com/watch?v=dRTOFFigxa0
###Markdown
**Goal:** Identify the key components of the model and how they work together.Our goal all along has been to model our perceptual estimates of sensory data.Now that we have some idea of what we want to do, we need to line up the components of the model: what are the input and output? Which computations are done and in what order? Our model will have:* **inputs**: the values the system has available - this can be broken down into _data:_ the sensory signals, _parameters:_ the threshold and the window sizes for filtering* **outputs**: these are the predictions our model will make - for this tutorial these are the perceptual judgments on each trial in m/s, just like the judgments participants made.* **model functions**: A set of functions that perform the hypothesized computations.We will define a set of functions that take our data and some parameters as input, can run our model, and output a prediction for the judgment data.**Recap of what we've accomplished so far:**To model perceptual estimates from our sensory data, we need to 1. _integrate:_ to ensure sensory information is in appropriate units2. _filter:_ to reduce noise and set timescale3. _threshold:_ to model detectionThis will be done with these operations:1. _integrate:_ `np.cumsum()`2. _filter:_ `my_moving_window()`3. _threshold:_ `if` with a comparison (`>` or `<`) and `else`**_Planning our model:_**We will now start putting all the pieces together. Normally you would sketch this yourself, but here is an overview of how the functions comprising the model are going to work:Below is the main function with a detailed explanation of what the function is supposed to do, exactly what input is expected, and what output will be generated. The model is not complete, so it only returns nans (**n**ot-**a**-**n**umber) for now. However, this outlines how most model code works: it gets some measured data (the sensory signals) and a set of parameters as input, and as output returns a prediction on other measured data (the velocity judgments). The goal of this function is to define the top level of a simulation model which:* receives all input* loops through the cases* calls functions that compute predicted values for each case* outputs the predictions Would this be a How model? **Main model function**
###Code
def my_train_illusion_model(sensorydata, params):
"""
Generate output predictions of perceived self-motion and perceived
world-motion velocity based on input visual and vestibular signals.
Args:
sensorydata: (dict) dictionary with two named entries:
opticflow: (numpy.ndarray of float) NxM array with N trials on rows
and M visual signal samples in columns
vestibular: (numpy.ndarray of float) NxM array with N trials on rows
and M vestibular signal samples in columns
params: (dict) dictionary with named entries:
threshold: (float) vestibular threshold for credit assignment
filterwindows: (list of int) determines the strength of filtering for
the visual and vestibular signals, respectively
integrate (bool): whether to integrate the vestibular signals, will
be set to True if absent
FUN (function): function used in the filter, will be set to
np.mean if absent
samplingrate (float): the number of samples per second in the
sensory data, will be set to 10 if absent
Returns:
dict with two entries:
selfmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived self motion
worldmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived world motion
"""
# sanitize input a little
if not('FUN' in params.keys()):
params['FUN'] = np.mean
if not('integrate' in params.keys()):
params['integrate'] = True
if not('samplingrate' in params.keys()):
params['samplingrate'] = 10
# number of trials:
ntrials = sensorydata['opticflow'].shape[0]
# set up variables to collect output
selfmotion = np.empty(ntrials)
worldmotion = np.empty(ntrials)
# loop through trials?
for trialN in range(ntrials):
# these are our sensory variables (inputs)
vis = sensorydata['opticflow'][trialN, :]
ves = sensorydata['vestibular'][trialN, :]
# generate output predicted perception:
selfmotion[trialN],\
worldmotion[trialN] = my_perceived_motion(vis=vis, ves=ves,
params=params)
return {'selfmotion': selfmotion, 'worldmotion': worldmotion}
# here is a mock version of my_perceived_motion().
# so you can test my_train_illusion_model()
def my_perceived_motion(*args, **kwargs):
return [np.nan, np.nan]
# let's look at the predictions we generated for two sample trials (0, 100)
# we should get a 1x2 vector of self-motion prediction and another
# for world-motion
sensorydata = {'opticflow': opticflow[[0, 100], :],
               'vestibular': vestibular[[0, 100], :]}
params={'threshold': 0.33, 'filterwindows': [100, 50]}
my_train_illusion_model(sensorydata=sensorydata, params=params)
###Output
_____no_output_____
###Markdown
We've also completed the `my_perceived_motion()` function for you below. Follow this example to complete the template for `my_selfmotion()` and `my_worldmotion()`. Write out the inputs and outputs, and the steps required to calculate the outputs from the inputs.**Perceived motion function**
###Code
# Full perceived motion function
def my_perceived_motion(vis, ves, params):
"""
Takes sensory data and parameters and returns predicted percepts
Args:
vis (numpy.ndarray) : 1xM array of optic flow velocity data
ves (numpy.ndarray) : 1xM array of vestibular acceleration data
params : (dict) dictionary with named entries:
see my_train_illusion_model() for details
Returns:
[list of floats] : prediction for perceived self-motion based on
vestibular data, and prediction for perceived
world-motion based on perceived self-motion and
visual data
"""
# estimate self motion based on only the vestibular data
# pass on the parameters
selfmotion = my_selfmotion(ves=ves, params=params)
# estimate the world motion, based on the selfmotion and visual data
# pass on the parameters as well
worldmotion = my_worldmotion(vis=vis, selfmotion=selfmotion, params=params)
return [selfmotion, worldmotion]
###Output
_____no_output_____
###Markdown
TD 6.1: Formulate purpose of the self motion functionNow we plan out the purpose of one of the remaining functions. **Only name input arguments, write help text and comments, _no code_.** The goal of this exercise is to make writing the code (in Micro-tutorial 7) much easier. Based on our work before the break, you should now be able to answer these questions for each function:* what (sensory) data is necessary? * what parameters does the function need, if any?* which operations will be performed on the input?* what is the output?The number of arguments is correct. **Template calculate self motion**Name the _input arguments_, complete the _help text_, and add _comments_ in the function below to describe the inputs, the outputs, and operations using elements from the recap at the top of this notebook (or from micro-tutorials 3 and 4 in part 1), in order to plan out the function. Do not write any code.
###Code
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray) : 1xM array of vestibular acceleration data
params: (dict) dictionary with named entries:
threshold: (float) vestibular threshold for credit assignment
filterwindows: (list of int) determines the strength of filtering for
the visual and vestibular signals, respectively
integrate (bool): whether to integrate the vestibular signals, will
be set to True if absent
FUN (function): function used in the filter, will be set to
np.mean if absent
samplingrate (float): the number of samples per second in the
sensory data, will be set to 10 if absent
Returns:
(float): zero if below threshold, otherwise the integrated and filtered self-motion estimate in m/s
"""
# what operations do we perform on the input?
# use the elements from micro-tutorials 3, 4, and 5
# 1. Integrate
# 2. Filter
# 3. Pick last value
# 4. Threshold
# what output should this function produce?
return output
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_90e4d753.py) **Template calculate world motion**We have drafted the help text and written comments in the function below that describe the inputs, the outputs, and operations we use to estimate world motion, based on the recap above.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
Estimates world motion based on the visual signal, the estimate of self motion, and the parameters
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
# 1. running window function -> Denoise
# 2. take final value
# 3. subtract selfmotion from value
# return final value
return output
###Output
_____no_output_____
###Markdown
--- Section 7: Model implementation
###Code
# @title Video 7: Implementation
video = YouTubeVideo(id='DMSIt7t-LO8', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
Video available at https://youtube.com/watch?v=DMSIt7t-LO8
###Markdown
**Goal:** We write the components of the model in actual code.For the operations we picked, there are functions ready to use:* integration: `np.cumsum(data, axis=1)` (axis=1: per trial and over samples)* filtering: `my_moving_window(data, window)` (window: int, default 3)* take last `selfmotion` value as our estimate* threshold: if (value > thr): else: TD 7.1: Write code to estimate self motionUse the operations to finish writing the function that will calculate an estimate of self motion. Fill in the descriptive list of items with actual operations. Use the function for estimating world-motion below, which we've filled for you! Exercise 1: finish self motion function
###Code
# Self motion function
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict) : dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float) : an estimate of self motion in m/s
"""
# uncomment the code below and fill in with your code
# 1. integrate vestibular signal
ves = np.cumsum(ves * (1 / params['samplingrate']))
# 2. running window function to accumulate evidence:
selfmotion = my_moving_window(ves,params["filterwindows"][0])
# 3. take final value of self-motion vector as our estimate
selfmotion = selfmotion[-1]
    # 4. compare to threshold. Hint: the threshold is stored in
    # params['threshold']
# if selfmotion is higher than threshold: return value
# if it's lower than threshold: return 0
if selfmotion > params["threshold"]:
output = selfmotion
else:
output = 0
# Comment this line when your function is ready
    # raise NotImplementedError("Student exercise: estimate my_selfmotion")
return output
###Output
_____no_output_____
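###Markdown
As a quick sanity check of the finished function, the sketch below runs `my_selfmotion()` on one trial from each condition. This assumes, as in the dataset loaded above, that trial 0 is a world-motion trial and trial 100 a self-motion trial; with the example threshold of 0.33 the world-motion trial will typically return 0 (no perceived self motion) while the self-motion trial will typically exceed the threshold and return a positive estimate.
###Code
# Minimal sanity check of my_selfmotion() on two sample trials
# (assumption: trial 0 = world-motion condition, trial 100 = self-motion condition)
params_check = {'samplingrate': 10, 'FUN': np.mean,
                'filterwindows': [100, 50], 'threshold': 0.33}
for trial in [0, 100]:
    estimate = my_selfmotion(vestibular[trial, :], params_check)
    print(f"trial {trial}: self-motion estimate = {estimate:0.3f}")
###Output
_____no_output_____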
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_53312239.py) Interactive Demo: Unit testing. Testing whether the functions you wrote do what they are supposed to do is important, and is known as 'unit testing'. Here we simplify this for the `my_selfmotion()` function: the sliders below let you vary the threshold and window size, so you can see what the distribution of self-motion estimates looks like.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
def refresh(threshold=0, windowsize=100):
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
selfmotion_estimates = np.empty(200)
# get the estimates for each trial:
for trial_number in range(200):
ves = vestibular[trial_number, :]
selfmotion_estimates[trial_number] = my_selfmotion(ves, params)
plt.figure()
plt.hist(selfmotion_estimates, bins=20)
plt.xlabel('self-motion estimate')
plt.ylabel('frequency')
plt.show()
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
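###Markdown
The widget above is an informal check; a more typical unit test is a small piece of code with a known answer. A minimal sketch, assuming the `my_selfmotion()` version defined above: a vestibular signal of all zeros integrates and filters to zero, so it can never cross a positive threshold and the estimate must be exactly 0.
###Code
# A minimal, non-interactive unit test for my_selfmotion():
# a zero vestibular signal stays at zero after integration and filtering,
# which is below any positive threshold, so the function must return 0.
params_test = {'samplingrate': 10, 'FUN': np.mean,
               'filterwindows': [100, 50], 'threshold': 0.5}
assert my_selfmotion(np.zeros(100), params_test) == 0
print("my_selfmotion() zero-signal test passed")
###Output
_____no_output_____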
###Markdown
**Estimate world motion** We have completed the `my_worldmotion()` function for you below.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
    Estimates world motion based on the visual signal, the estimate of
    self motion, and the parameters
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
# running average to smooth/accumulate sensory evidence
visualmotion = my_moving_window(vis, window=params['filterwindows'][1],
FUN=np.mean)
# take final value
visualmotion = visualmotion[-1]
    # 'subtract' self motion: the optic flow has the opposite sign, so adding
    # the self-motion estimate removes it from the visual estimate
worldmotion = visualmotion + selfmotion
# return final value
return worldmotion
###Output
_____no_output_____
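###Markdown
To see what the completed function returns, here is a small check on a made-up, noiseless optic-flow trace of -1 m/s (hypothetical input; as in the ground-truth plots, optic flow carries the opposite sign). With a self-motion estimate of 0 the world-motion estimate comes out at about -1 m/s, which is flipped with `* -1` for plotting in the next section.
###Code
# Quick check of my_worldmotion() on a made-up, noiseless optic-flow trace
# (hypothetical input: constant -1 m/s optic flow, self-motion estimate of 0).
vis_example = -np.ones(100)
params_example = {'filterwindows': [100, 50]}
estimate = my_worldmotion(vis_example, selfmotion=0.0, params=params_example)
print(f"world-motion estimate: {estimate:0.2f} m/s")  # about -1, flipped later with * -1
###Output
_____no_output_____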
###Markdown
--- Section 8: Model completion
###Code
# @title Video 8: Completion
video = YouTubeVideo(id='EM-G8YYdrDg', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
Video available at https://youtube.com/watch?v=EM-G8YYdrDg
###Markdown
**Goal:** Make sure the model can speak to the hypothesis. Eliminate all the parameters that do not speak to the hypothesis. Now that we have a working model, we can keep improving it, but at some point we need to decide that it is finished. Once we have a model that displays the properties of a system we are interested in, it should be possible to say something about our hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question. Here that means that our model should have illusory perception, and perhaps make similar judgments to those of the participants, but not much more. To test this, we will run the model, store the output and plot the model's perceived self motion over perceived world motion, like we did with the actual perceptual judgments (it even uses the same plotting function). TD 8.1: See if the model produces illusions
###Code
# @markdown Run to plot model predictions of motion estimates
# prepare to run the model again:
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.6, 'filterwindows': [100, 50], 'FUN': np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
my_plot_percepts(datasets={'predictions': predictions}, plotconditions=True)
###Output
_____no_output_____
###Markdown
**Questions:** * How does the distribution of data points compare to the plot in TD 1.2 or in TD 7.1? * Did you expect to see this? * Where do the model's predicted judgments for each of the two conditions fall? * How does this compare to the behavioral data? However, the main observation should be that **there are illusions**: the blue and red data points are mixed in each of the two clusters of data points. This means the model can help us understand the phenomenon. --- Section 9: Model evaluation
###Code
# @title Video 9: Evaluation
video = YouTubeVideo(id='bWLFyobm4Rk', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
Video available at https://youtube.com/watch?v=bWLFyobm4Rk
###Markdown
**Goal:** Once we have finished the model, we need a description of how good it is. The question and goals we set in micro-tutorials 1 and 4 help here. There are multiple ways to evaluate a model. Aside from the obvious fact that we want to get insight into the phenomenon that is not directly accessible without the model, we always want to quantify how well the model agrees with the data. **Quantify model quality with $R^2$** Let's look at how well our model matches the actual judgment data.
###Code
# @markdown Run to plot predictions over data
my_plot_predictions_data(judgments, predictions)
###Output
_____no_output_____
###Markdown
When model predictions are correct, the red points in the figure above should lie along the identity line (a dotted black line here). Points off the identity line represent model prediction errors. While in each plot we see two clusters of dots that are fairly close to the identity line, there are also two clusters that are not. For the trials that those points represent, the model has an illusion while the participants don't, or vice versa. We will use a straightforward, quantitative measure of how good the model is: $R^2$ (pronounced: "R-squared"), which can take values between 0 and 1, and expresses how much variance is explained by the relationship between two variables (here the model's predictions and the actual judgments). It is also called the [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), and is calculated here as the square of the correlation coefficient (r or $\rho$). Just run the chunk below:
###Code
# @markdown Run to calculate R^2
conditions = np.concatenate((np.abs(judgments[:, 1]), np.abs(judgments[:, 2])))
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(conditions, veljudgmnt)
print(f"conditions -> judgments R^2: {r_value ** 2:0.3f}")
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
###Output
conditions -> judgments R^2: 0.032
predictions -> judgments R^2: 0.256
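###Markdown
Since $R^2$ is computed here as the square of the correlation coefficient, the same number can be obtained directly with `np.corrcoef`. A quick check of that equivalence, reusing the arrays defined in the cell above:
###Code
# R^2 as used above is just the squared Pearson correlation coefficient,
# so np.corrcoef gives the same value as stats.linregress.
r = np.corrcoef(veljudgmnt, velpredict)[0, 1]
print(f"squared correlation (predictions vs judgments): {r ** 2:0.3f}")
###Output
_____no_output_____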
###Markdown
These $R^2$s express how well the experimental conditions explain the participants' judgments and how well the model's predicted judgments explain the participants' judgments. You will learn much more about model fitting, quantitative model evaluation and model comparison tomorrow! Perhaps the $R^2$ values don't seem very impressive, but the judgments produced by the participants are explained better by the model's predictions than by the actual conditions. In other words: in a certain percentage of cases the model tends to have the same illusions as the participants. TD 9.1: Varying the threshold parameter to improve the model. In the code below, see if you can find a better value for the threshold parameter, to reduce errors in the model's predictions. **Testing thresholds** Interactive Demo: optimizing the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
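###Markdown
Instead of moving the sliders by hand, the same search can be done programmatically. Below is a minimal sketch, assuming a fixed window size of 100 and a coarse, hypothetical grid of thresholds, that reports the prediction $R^2$ for each value:
###Code
# Coarse, programmatic scan over the threshold parameter
# (assumption: window size fixed at 100, thresholds on a hypothetical grid).
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
for thr in np.arange(0.2, 1.01, 0.2):
    params = {'samplingrate': 10, 'FUN': np.mean,
              'filterwindows': [100, 50], 'threshold': thr}
    preds = my_train_illusion_model(sensorydata=data, params=params)
    velpredict = np.concatenate((preds['selfmotion'], preds['worldmotion'] * -1))
    r = np.corrcoef(veljudgmnt, velpredict)[0, 1]
    print(f"threshold {thr:0.1f}: predictions -> judgments R^2 = {r ** 2:0.3f}")
###Output
_____no_output_____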
###Markdown
Varying the parameters this way allows you to increase the model's performance in predicting the actual data, as measured by $R^2$. This is called model fitting, and you will do it more rigorously in the coming weeks. TD 9.2: Credit assignment of self motion. When we look at the figure in **TD 8.1**, we can see that one cluster sits very close to (1,0), just like in the actual data. The cluster of points at (1,0) comes from the case where we conclude there is no self motion, and then set the self motion to 0. That value of 0 removes a lot of noise from the world-motion estimates, and all noise from the self-motion estimate. In the other case, where there is self motion, we still have a lot of noise (see also micro-tutorial 4). Let's change our `my_selfmotion()` function to return a self motion of 1 when the vestibular signal indicates we are above threshold, and 0 when we are below threshold. Edit the function here. Exercise 2: function for credit assignment of self motion
###Code
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
"""
# integrate signal:
ves = np.cumsum(ves * (1 / params['samplingrate']))
# use running window to accumulate evidence:
selfmotion = my_moving_window(ves, window=params['filterwindows'][0],
FUN=params['FUN'])
# take the final value as our estimate:
selfmotion = selfmotion[-1]
# compare to threshold, set to 0 if lower and else...
if selfmotion < params['threshold']:
selfmotion = 0
###########################################################################
    # Exercise: complete credit assignment (the NotImplementedError below is
    # already commented out because the solution is filled in)
else:
selfmotion = 1
#raise NotImplementedError("Modify with credit assignment")
###########################################################################
return selfmotion
# Use the updated function to run the model and plot the data
# Uncomment below to test your function
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.33, 'filterwindows': [100, 50], 'FUN': np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
my_plot_percepts(datasets={'predictions': predictions}, plotconditions=False)
###Output
_____no_output_____
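###Markdown
Credit assignment, as implemented above, collapses the continuous self-motion estimate to a binary decision. The one-liner below sketches that idea on a few made-up estimate values (hypothetical numbers, with the 0.33 threshold used above):
###Code
# Credit assignment in one line: estimates above threshold become 1, the rest 0
# (made-up example values, for illustration only).
raw_estimates = np.array([0.05, 0.25, 0.45, 0.90])
print(np.where(raw_estimates > 0.33, 1.0, 0.0))  # -> [0. 0. 1. 1.]
###Output
_____no_output_____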
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_51dce10c.py) *Example output:* That looks much better, and closer to the actual data. Let's see if the $R^2$ values have improved. Use the optimal values for the threshold and window size that you found previously. Interactive Demo: evaluating the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
While the model still predicts velocity judgments better than the conditions do (i.e. the model predicts illusions in somewhat similar cases), the $R^2$ values are a little worse than those of the simpler model. What's really going on is that the same set of points that were model prediction errors in the previous model are also errors here. All we have done is reduce the spread. **Interpret the model's meaning** Here's what you should have learned from modeling the train illusion: 1. A noisy vestibular acceleration signal can give rise to illusory motion. 2. However, disambiguating the optic flow by adding the vestibular signal simply adds a lot of noise. This is not a plausible thing for the brain to do. 3. Our other hypothesis - credit assignment - is more qualitatively correct, but our simulations were not able to match the frequency of the illusion on a trial-by-trial basis. We decided that for now we have learned enough, so it's time to write it up. --- Section 10: Model publication!
###Code
# @title Video 10: Publication
video = YouTubeVideo(id='zm8x7oegN6Q', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
Video available at https://youtube.com/watch?v=zm8x7oegN6Q
###Markdown
Neuromatch Academy: Week 1, Day 2, Tutorial 2 Modeling Practice: Model implementation and evaluation __Content creators:__ Marius 't Hart, Paul Schrater, Gunnar Blohm __Content reviewers:__ Norma Kuhn, Saeed Salehi, Madineh Sarvestani, Spiros Chavlis, Michael Waskom --- Tutorial objectives. We are investigating a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks: **Framing the question** 1. finding a phenomenon and a question to ask about it 2. understanding the state of the art 3. determining the basic ingredients 4. formulating specific, mathematically defined hypotheses **Implementing the model** 5. selecting the toolkit 6. planning the model 7. implementing the model **Model testing** 8. completing the model 9. testing and evaluating the model **Publishing** 10. publishing models We did steps 1-5 in Tutorial 1 and will cover steps 6-10 in Tutorial 2 (this notebook). Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.stats import gamma
from IPython.display import YouTubeVideo
# @title Figure settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def my_moving_window(x, window=3, FUN=np.mean):
"""
Calculates a moving estimate for a signal
Args:
x (numpy.ndarray): a vector array of size N
window (int): size of the window, must be a positive integer
FUN (function): the function to apply to the samples in the window
Returns:
(numpy.ndarray): a vector array of size N, containing the moving
average of x, calculated with a window of size window
There are smarter and faster solutions (e.g. using convolution) but this
function shows what the output really means. This function skips NaNs, and
should not be susceptible to edge effects: it will simply use
all the available samples, which means that close to the edges of the
signal or close to NaNs, the output will just be based on fewer samples. By
default, this function will apply a mean to the samples in the window, but
this can be changed to be a max/min/median or other function that returns a
single numeric value based on a sequence of values.
"""
# if data is a matrix, apply filter to each row:
if len(x.shape) == 2:
output = np.zeros(x.shape)
for rown in range(x.shape[0]):
output[rown, :] = my_moving_window(x[rown, :],
window=window, FUN=FUN)
return output
# make output array of the same size as x:
output = np.zeros(x.size)
# loop through the signal in x
for samp_i in range(x.size):
values = []
# loop through the window:
for wind_i in range(int(1 - window), 1):
if ((samp_i + wind_i) < 0) or (samp_i + wind_i) > (x.size - 1):
# out of range
continue
# sample is in range and not nan, use it:
if not(np.isnan(x[samp_i + wind_i])):
values += [x[samp_i + wind_i]]
# calculate the mean in the window for this point in the output:
output[samp_i] = FUN(values)
return output
def my_plot_percepts(datasets=None, plotconditions=False):
if isinstance(datasets, dict):
# try to plot the datasets
# they should be named...
# 'expectations', 'judgments', 'predictions'
plt.figure(figsize=(8, 8)) # set aspect ratio = 1? not really
plt.ylabel('perceived self motion [m/s]')
plt.xlabel('perceived world motion [m/s]')
plt.title('perceived velocities')
# loop through the entries in datasets
# plot them in the appropriate way
for k in datasets.keys():
if k == 'expectations':
expect = datasets[k]
plt.scatter(expect['world'], expect['self'], marker='*',
color='xkcd:green', label='my expectations')
elif k == 'judgments':
judgments = datasets[k]
for condition in np.unique(judgments[:, 0]):
c_idx = np.where(judgments[:, 0] == condition)[0]
cond_self_motion = judgments[c_idx[0], 1]
cond_world_motion = judgments[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'world-motion condition judgments'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'self-motion condition judgments'
else:
c_label = f"condition [{condition:d}] judgments"
plt.scatter(judgments[c_idx, 3], judgments[c_idx, 4],
label=c_label, alpha=0.2)
elif k == 'predictions':
predictions = datasets[k]
for condition in np.unique(predictions[:, 0]):
c_idx = np.where(predictions[:, 0] == condition)[0]
cond_self_motion = predictions[c_idx[0], 1]
cond_world_motion = predictions[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'predicted world-motion condition'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'predicted self-motion condition'
else:
c_label = f"condition [{condition:d}] prediction"
plt.scatter(predictions[c_idx, 4], predictions[c_idx, 3],
marker='x', label=c_label)
else:
print("datasets keys should be 'hypothesis', \
'judgments' and 'predictions'")
if plotconditions:
# this code is simplified but only works for the dataset we have:
plt.scatter([1], [0], marker='<', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='world-motion stimulus', s=80)
plt.scatter([0], [1], marker='>', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='self-motion stimulus', s=80)
plt.legend(facecolor='xkcd:white')
plt.show()
else:
if datasets is not None:
print('datasets argument should be a dict')
raise TypeError
def my_plot_stimuli(t, a, v):
plt.figure(figsize=(10, 6))
plt.plot(t, a, label='acceleration [$m/s^2$]')
plt.plot(t, v, label='velocity [$m/s$]')
plt.xlabel('time [s]')
plt.ylabel('[motion]')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_motion_signals():
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = np.cumsum(a * dt)
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col',
sharey='row', figsize=(14, 6))
fig.suptitle('Sensory ground truth')
ax1.set_title('world-motion condition')
ax1.plot(t, -v, label='visual [$m/s$]')
ax1.plot(t, np.zeros(a.size), label='vestibular [$m/s^2$]')
ax1.set_xlabel('time [s]')
ax1.set_ylabel('motion')
ax1.legend(facecolor='xkcd:white')
ax2.set_title('self-motion condition')
ax2.plot(t, -v, label='visual [$m/s$]')
ax2.plot(t, a, label='vestibular [$m/s^2$]')
ax2.set_xlabel('time [s]')
ax2.set_ylabel('motion')
ax2.legend(facecolor='xkcd:white')
plt.show()
def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False,
addaverages=False, integrateVestibular=False,
addGroundTruth=False):
if addGroundTruth:
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = a
wm_idx = np.where(judgments[:, 0] == 0)
sm_idx = np.where(judgments[:, 0] == 1)
opticflow = opticflow.transpose()
wm_opticflow = np.squeeze(opticflow[:, wm_idx])
sm_opticflow = np.squeeze(opticflow[:, sm_idx])
if integrateVestibular:
vestibular = np.cumsum(vestibular * .1, axis=1)
if addGroundTruth:
v = np.cumsum(a * dt)
vestibular = vestibular.transpose()
wm_vestibular = np.squeeze(vestibular[:, wm_idx])
sm_vestibular = np.squeeze(vestibular[:, sm_idx])
X = np.arange(0, 10, .1)
fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col',
sharey='row', figsize=(15, 10))
fig.suptitle('Sensory signals')
my_axes[0][0].plot(X, wm_opticflow, color='xkcd:light red', alpha=0.1)
my_axes[0][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][0].plot(t, -v, color='xkcd:red')
if addaverages:
my_axes[0][0].plot(X, np.average(wm_opticflow, axis=1),
color='xkcd:red', alpha=1)
my_axes[0][0].set_title('optic-flow in world-motion condition')
my_axes[0][0].set_ylabel('velocity signal [$m/s$]')
my_axes[0][1].plot(X, sm_opticflow, color='xkcd:azure', alpha=0.1)
my_axes[0][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][1].plot(t, -v, color='xkcd:blue')
if addaverages:
my_axes[0][1].plot(X, np.average(sm_opticflow, axis=1),
color='xkcd:blue', alpha=1)
my_axes[0][1].set_title('optic-flow in self-motion condition')
my_axes[1][0].plot(X, wm_vestibular, color='xkcd:light red', alpha=0.1)
my_axes[1][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addaverages:
my_axes[1][0].plot(X, np.average(wm_vestibular, axis=1),
color='xkcd:red', alpha=1)
my_axes[1][0].set_title('vestibular signal in world-motion condition')
if addGroundTruth:
my_axes[1][0].plot(t, np.zeros(100), color='xkcd:red')
my_axes[1][0].set_xlabel('time [s]')
if integrateVestibular:
my_axes[1][0].set_ylabel('velocity signal [$m/s$]')
else:
my_axes[1][0].set_ylabel('acceleration signal [$m/s^2$]')
my_axes[1][1].plot(X, sm_vestibular, color='xkcd:azure', alpha=0.1)
my_axes[1][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[1][1].plot(t, v, color='xkcd:blue')
if addaverages:
my_axes[1][1].plot(X, np.average(sm_vestibular, axis=1),
color='xkcd:blue', alpha=1)
my_axes[1][1].set_title('vestibular signal in self-motion condition')
my_axes[1][1].set_xlabel('time [s]')
if returnaxes:
return my_axes
else:
plt.show()
def my_threshold_solution(selfmotion_vel_est, threshold):
is_move = (selfmotion_vel_est > threshold)
return is_move
def my_moving_threshold(selfmotion_vel_est, thresholds):
pselfmove_nomove = np.empty(thresholds.shape)
pselfmove_move = np.empty(thresholds.shape)
prop_correct = np.empty(thresholds.shape)
pselfmove_nomove[:] = np.NaN
pselfmove_move[:] = np.NaN
prop_correct[:] = np.NaN
for thr_i, threshold in enumerate(thresholds):
# run my_threshold that the students will write:
try:
is_move = my_threshold(selfmotion_vel_est, threshold)
except Exception:
is_move = my_threshold_solution(selfmotion_vel_est, threshold)
# store results:
pselfmove_nomove[thr_i] = np.mean(is_move[0:100])
pselfmove_move[thr_i] = np.mean(is_move[100:200])
# calculate the proportion classified correctly:
# (1-pselfmove_nomove) + ()
# Correct rejections:
p_CR = (1 - pselfmove_nomove[thr_i])
# correct detections:
p_D = pselfmove_move[thr_i]
# this is corrected for proportion of trials in each condition:
prop_correct[thr_i] = (p_CR + p_D) / 2
return [pselfmove_nomove, pselfmove_move, prop_correct]
def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):
plt.figure(figsize=(12, 8))
plt.title('threshold effects')
plt.plot([min(thresholds), max(thresholds)], [0, 0], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [0.5, 0.5], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [1, 1], ':',
color='xkcd:black')
plt.plot(thresholds, world_prop, label='world motion condition')
plt.plot(thresholds, self_prop, label='self motion condition')
plt.plot(thresholds, prop_correct, color='xkcd:purple',
label='correct classification')
plt.xlabel('threshold')
plt.ylabel('proportion correct or classified as self motion')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_predictions_data(judgments, predictions):
# conditions = np.concatenate((np.abs(judgments[:, 1]),
# np.abs(judgments[:, 2])))
# veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
# velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
# self:
# conditions_self = np.abs(judgments[:, 1])
veljudgmnt_self = judgments[:, 3]
velpredict_self = predictions[:, 3]
# world:
# conditions_world = np.abs(judgments[:, 2])
veljudgmnt_world = judgments[:, 4]
velpredict_world = predictions[:, 4]
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row',
figsize=(12, 5))
ax1.scatter(veljudgmnt_self, velpredict_self, alpha=0.2)
ax1.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax1.set_title('self-motion judgments')
ax1.set_xlabel('observed')
ax1.set_ylabel('predicted')
ax2.scatter(veljudgmnt_world, velpredict_world, alpha=0.2)
ax2.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax2.set_title('world-motion judgments')
ax2.set_xlabel('observed')
ax2.set_ylabel('predicted')
plt.show()
# @title Data retrieval
import os
fname="W1D2_data.npz"
if not os.path.exists(fname):
!wget https://osf.io/c5xyf/download -O $fname
filez = np.load(file=fname, allow_pickle=True)
judgments = filez['judgments']
opticflow = filez['opticflow']
vestibular = filez['vestibular']
###Output
_____no_output_____
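###Markdown
Before we start planning the model, it can help to see what the `my_moving_window()` helper defined above actually returns. A tiny example on a made-up vector (window of 3, default mean): each output sample is the mean of up to the three most recent input samples, including the current one.
###Code
# Small demonstration of the my_moving_window() helper defined above:
# with window=3 each output sample averages up to the 3 most recent inputs.
example = np.array([0., 1., 2., 3., 4.])
print(my_moving_window(example, window=3))  # [0., 0.5, 1., 2., 3.]
###Output
_____no_output_____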
###Markdown
--- Section 6: Model planning
###Code
# @title Video 6: Planning
video = YouTubeVideo(id='dRTOFFigxa0', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal:** Identify the key components of the model and how they work together. Our goal all along has been to model our perceptual estimates of sensory data. Now that we have some idea of what we want to do, we need to line up the components of the model: what are the input and output? Which computations are done, and in what order? Our model will have: * **inputs**: the values the system has available - these can be broken down into _data:_ the sensory signals, and _parameters:_ the threshold and the window sizes for filtering * **outputs**: these are the predictions our model will make - for this tutorial these are the perceptual judgments on each trial in m/s, just like the judgments participants made. * **model functions**: a set of functions that perform the hypothesized computations. We will define a set of functions that take our data and some parameters as input, can run our model, and output a prediction for the judgment data. **Recap of what we've accomplished so far:** To model perceptual estimates from our sensory data, we need to 1. _integrate:_ to ensure sensory information is in appropriate units 2. _filter:_ to reduce noise and set the timescale 3. _threshold:_ to model detection. This will be done with these operations: 1. _integrate:_ `np.cumsum()` 2. _filter:_ `my_moving_window()` 3. _threshold:_ `if` with a comparison (`>` or `<`) and `else` **_Planning our model:_** We will now start putting all the pieces together. Normally you would sketch this yourself, but here is an overview of how the functions comprising the model are going to work. Below is the main function with a detailed explanation of what the function is supposed to do, exactly what input is expected, and what output will be generated. The model is not complete, so it only returns nans (**n**ot-**a**-**n**umber) for now. However, this outlines how most model code works: it gets some measured data (the sensory signals) and a set of parameters as input, and as output returns a prediction on other measured data (the velocity judgments). The goal of this function is to define the top level of a simulation model which: * receives all input * loops through the cases * calls functions that compute predicted values for each case * outputs the predictions **Main model function**
###Code
def my_train_illusion_model(sensorydata, params):
"""
Generate output predictions of perceived self-motion and perceived
world-motion velocity based on input visual and vestibular signals.
Args:
sensorydata: (dict) dictionary with two named entries:
opticflow: (numpy.ndarray of float) NxM array with N trials on rows
and M visual signal samples in columns
vestibular: (numpy.ndarray of float) NxM array with N trials on rows
and M vestibular signal samples in columns
params: (dict) dictionary with named entries:
threshold: (float) vestibular threshold for credit assignment
filterwindows: (list of int) determines the strength of filtering for
the vestibular and visual signals, respectively
integrate (bool): whether to integrate the vestibular signals, will
be set to True if absent
FUN (function): function used in the filter, will be set to
np.mean if absent
samplingrate (float): the number of samples per second in the
sensory data, will be set to 10 if absent
Returns:
dict with two entries:
selfmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived self motion
worldmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived world motion
"""
# sanitize input a little
if not('FUN' in params.keys()):
params['FUN'] = np.mean
if not('integrate' in params.keys()):
params['integrate'] = True
if not('samplingrate' in params.keys()):
params['samplingrate'] = 10
# number of trials:
ntrials = sensorydata['opticflow'].shape[0]
# set up variables to collect output
selfmotion = np.empty(ntrials)
worldmotion = np.empty(ntrials)
# loop through trials?
for trialN in range(ntrials):
# these are our sensory variables (inputs)
vis = sensorydata['opticflow'][trialN, :]
ves = sensorydata['vestibular'][trialN, :]
# generate output predicted perception:
selfmotion[trialN],\
worldmotion[trialN] = my_perceived_motion(vis=vis, ves=ves,
params=params)
return {'selfmotion': selfmotion, 'worldmotion': worldmotion}
# here is a mock version of my_perceived_motion(),
# so you can test my_train_illusion_model()
def my_perceived_motion(*args, **kwargs):
return [np.nan, np.nan]
# let's look at the predictions we generated for two sample trials (0, 100)
# we should get a 1x2 vector of self-motion prediction and another
# for world-motion
sensorydata = {'opticflow': opticflow[[0, 100], :],
               'vestibular': vestibular[[0, 100], :]}
params={'threshold': 0.33, 'filterwindows': [100, 50]}
my_train_illusion_model(sensorydata=sensorydata, params=params)
###Output
_____no_output_____
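###Markdown
Before filling in the perception functions, here is a minimal sketch of the three recap operations (integrate, filter, threshold) applied to a single synthetic acceleration trace, built with the same gamma profile used for the ground-truth plots; the 0.33 threshold is the example value from the call above.
###Code
# Sketch of the recap operations on one synthetic acceleration trace
# (assumption: gamma-shaped acceleration as in the ground-truth plots, 10 Hz).
dt = 1 / 10
acc = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)      # synthetic acceleration [m/s^2]
vel = np.cumsum(acc * dt)                          # 1. integrate -> velocity [m/s]
vel_filtered = my_moving_window(vel, window=100)   # 2. filter with a running mean
estimate = vel_filtered[-1]                        # 3. take the last value
print(estimate if estimate > 0.33 else 0)          # 4. threshold (example value 0.33)
###Output
_____no_output_____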
###Markdown
We've also completed the `my_perceived_motion()` function for you below. Follow this example to complete the template for `my_selfmotion()` and `my_worldmotion()`. Write out the inputs and outputs, and the steps required to calculate the outputs from the inputs. **Perceived motion function**
###Code
# Full perceived motion function
def my_perceived_motion(vis, ves, params):
"""
Takes sensory data and parameters and returns predicted percepts
Args:
vis (numpy.ndarray) : 1xM array of optic flow velocity data
ves (numpy.ndarray) : 1xM array of vestibular acceleration data
params : (dict) dictionary with named entries:
see my_train_illusion_model() for details
Returns:
[list of floats] : prediction for perceived self-motion based on
vestibular data, and prediction for perceived
world-motion based on perceived self-motion and
visual data
"""
# estimate self motion based on only the vestibular data
# pass on the parameters
selfmotion = my_selfmotion(ves=ves, params=params)
# estimate the world motion, based on the selfmotion and visual data
# pass on the parameters as well
worldmotion = my_worldmotion(vis=vis, selfmotion=selfmotion, params=params)
return [selfmotion, worldmotion]
###Output
_____no_output_____
###Markdown
TD 6.1: Formulate purpose of the self motion function. Now we plan out the purpose of one of the remaining functions. **Only name input arguments, write help text and comments, _no code_.** The goal of this exercise is to make writing the code (in Micro-tutorial 7) much easier. Based on our work before the break, you should now be able to answer these questions for each function: * what (sensory) data is necessary? * what parameters does the function need, if any? * which operations will be performed on the input? * what is the output? The number of arguments is correct. **Template calculate self motion** Name the _input arguments_, complete the _help text_, and add _comments_ in the function below to describe the inputs, the outputs, and operations using elements from the recap at the top of this notebook (or from micro-tutorials 3 and 4 in part 1), in order to plan out the function. Do not write any code.
###Code
def my_selfmotion(arg1, arg2):
"""
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
Returns:
what output does the function generate?
Any further description?
"""
##################################################
# what operations do we perform on the input?
# use the elements from micro-tutorials 3, 4, and 5
# 1.
# 2.
# 3.
# 4.
# what output should this function produce?
##################################################
return output
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_06ea80b7.py) **Template calculate world motion** We have drafted the help text and written comments in the function below that describe the inputs, the outputs, and operations we use to estimate world motion, based on the recap above.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
    Estimates world motion based on the visual signal, the estimate of
    self motion, and the parameters
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
##################################################
# 1. running window function
# 2. take final value
# 3. subtract selfmotion from value
# return final value
##################################################
return output
###Output
_____no_output_____
###Markdown
--- Section 7: Model implementation
###Code
# @title Video 7: Implementation
video = YouTubeVideo(id='DMSIt7t-LO8', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal:** We write the components of the model in actual code. For the operations we picked, there are functions ready to use: * integration: `np.cumsum(data, axis=1)` (axis=1: per trial and over samples) * filtering: `my_moving_window(data, window)` (window: int, default 3) * take the last `selfmotion` value as our estimate * threshold: an `if`/`else` comparison with the threshold (`if value > threshold: ... else: ...`) TD 7.1: Write code to estimate self motion. Use the operations to finish writing the function that will calculate an estimate of self motion. Fill in the descriptive list of items with actual operations. Use the function for estimating world motion below, which we've filled in for you! Exercise 1: finish self motion function
###Code
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict) : dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float) : an estimate of self motion in m/s
"""
##################################################
## TODO for students: fill in ... in code below
# Fill out function and remove
raise NotImplementedError("Student exercise: estimate my_selfmotion")
##################################################
# 1. integrate vestibular signal:
ves = np.cumsum(ves * (1 / params['samplingrate']))
# 2. running window function to accumulate evidence:
selfmotion = ...
# 3. take final value of self-motion vector as our estimate
selfmotion = ...
    # 4. compare to threshold. Hint: the threshold is stored in
    # params['threshold']
# if selfmotion is higher than threshold: return value
# if it's lower than threshold: return 0
if ...:
selfmotion = ...
return selfmotion
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_4c0b8958.py) Interactive Demo: Unit testing. Testing whether the functions you wrote do what they are supposed to do is important, and is known as 'unit testing'. Here we simplify this for the `my_selfmotion()` function: the sliders below let you vary the threshold and window size, so you can see what the distribution of self-motion estimates looks like.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
def refresh(threshold=0, windowsize=100):
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
selfmotion_estimates = np.empty(200)
# get the estimates for each trial:
for trial_number in range(200):
ves = vestibular[trial_number, :]
selfmotion_estimates[trial_number] = my_selfmotion(ves, params)
plt.figure()
plt.hist(selfmotion_estimates, bins=20)
plt.xlabel('self-motion estimate')
plt.ylabel('frequency')
plt.show()
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
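###Markdown
Unit tests do not have to be interactive: any function with a known answer can be tested with a simple assert. As an example that does not depend on your exercise code, here is a minimal test of the `my_moving_window()` helper: a constant signal must stay constant under a running mean.
###Code
# A simple assert-style unit test for the my_moving_window() helper:
# averaging a constant signal should return the same constant signal.
constant_signal = np.ones(10)
assert np.allclose(my_moving_window(constant_signal, window=3), constant_signal)
print("my_moving_window() constant-signal test passed")
###Output
_____no_output_____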
###Markdown
**Estimate world motion** We have completed the `my_worldmotion()` function for you below.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
    Estimates world motion based on the visual signal, the estimate of
    self motion, and the parameters
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
# running average to smooth/accumulate sensory evidence
visualmotion = my_moving_window(vis, window=params['filterwindows'][1],
FUN=np.mean)
# take final value
visualmotion = visualmotion[-1]
    # 'subtract' self motion: the optic flow has the opposite sign, so adding
    # the self-motion estimate removes it from the visual estimate
worldmotion = visualmotion + selfmotion
# return final value
return worldmotion
###Output
_____no_output_____
###Markdown
--- Section 8: Model completion
###Code
# @title Video 8: Completion
video = YouTubeVideo(id='EM-G8YYdrDg', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal:** Make sure the model can speak to the hypothesis. Eliminate all the parameters that do not speak to the hypothesis. Now that we have a working model, we can keep improving it, but at some point we need to decide that it is finished. Once we have a model that displays the properties of a system we are interested in, it should be possible to say something about our hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question. Here that means that our model should have illusory perception, and perhaps make similar judgments to those of the participants, but not much more. To test this, we will run the model, store the output and plot the model's perceived self motion over perceived world motion, like we did with the actual perceptual judgments (it even uses the same plotting function). TD 8.1: See if the model produces illusions
###Code
# @markdown Run to plot model predictions of motion estimates
# prepare to run the model again:
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.6, 'filterwindows': [100, 50], 'FUN': np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
my_plot_percepts(datasets={'predictions': predictions}, plotconditions=True)
###Output
_____no_output_____
###Markdown
**Questions:** * How does the distribution of data points compare to the plot in TD 1.2 or in TD 7.1? * Did you expect to see this? * Where do the model's predicted judgments for each of the two conditions fall? * How does this compare to the behavioral data? However, the main observation should be that **there are illusions**: the blue and red data points are mixed in each of the two clusters of data points. This means the model can help us understand the phenomenon. --- Section 9: Model evaluation
###Code
# @title Video 9: Evaluation
video = YouTubeVideo(id='bWLFyobm4Rk', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal:** Once we have finished the model, we need a description of how good it is. The question and goals we set in micro-tutorials 1 and 4 help here. There are multiple ways to evaluate a model. Aside from the obvious fact that we want to get insight into the phenomenon that is not directly accessible without the model, we always want to quantify how well the model agrees with the data. **Quantify model quality with $R^2$** Let's look at how well our model matches the actual judgment data.
###Code
# @markdown Run to plot predictions over data
my_plot_predictions_data(judgments, predictions)
###Output
_____no_output_____
###Markdown
When model predictions are correct, the red points in the figure above should lie along the identity line (a dotted black line here). Points off the identity line represent model prediction errors. While in each plot we see two clusters of dots that are fairly close to the identity line, there are also two clusters that are not. For the trials that those points represent, the model has an illusion while the participants don't, or vice versa. We will use a straightforward, quantitative measure of how good the model is: $R^2$ (pronounced: "R-squared"), which can take values between 0 and 1, and expresses how much variance is explained by the relationship between two variables (here the model's predictions and the actual judgments). It is also called the [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), and is calculated here as the square of the correlation coefficient (r or $\rho$). Just run the chunk below:
###Code
# @markdown Run to calculate R^2
conditions = np.concatenate((np.abs(judgments[:, 1]), np.abs(judgments[:, 2])))
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(conditions, veljudgmnt)
print(f"conditions -> judgments R^2: {r_value ** 2:0.3f}")
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
###Output
_____no_output_____
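###Markdown
The phrase "how much variance is explained" can be made concrete: for a simple linear regression with an intercept, $1 - SS_{res}/SS_{tot}$ equals the squared correlation reported above. A short sketch of that check, reusing the arrays defined in the cell above:
###Code
# Check that 1 - SS_res / SS_tot (variance explained by the regression line)
# matches the squared correlation coefficient from linregress.
slope, intercept, r_value, p_value, std_err = stats.linregress(veljudgmnt, velpredict)
residuals = velpredict - (intercept + slope * veljudgmnt)
ss_res = np.sum(residuals ** 2)
ss_tot = np.sum((velpredict - np.mean(velpredict)) ** 2)
print(f"1 - SS_res/SS_tot : {1 - ss_res / ss_tot:0.3f}")
print(f"r_value squared   : {r_value ** 2:0.3f}")
###Output
_____no_output_____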
###Markdown
These $R^2$s express how well the experimental conditions explain the participants' judgments and how well the model's predicted judgments explain the participants' judgments. You will learn much more about model fitting, quantitative model evaluation and model comparison tomorrow! Perhaps the $R^2$ values don't seem very impressive, but the judgments produced by the participants are explained better by the model's predictions than by the actual conditions. In other words: in a certain percentage of cases the model tends to have the same illusions as the participants. TD 9.1: Varying the threshold parameter to improve the model. In the code below, see if you can find a better value for the threshold parameter, to reduce errors in the model's predictions. **Testing thresholds** Interactive Demo: optimizing the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
Varying the parameters this way allows you to increase the model's performance in predicting the actual data, as measured by $R^2$. This is called model fitting, and you will do it more rigorously in the coming weeks. TD 9.2: Credit assignment of self motion. When we look at the figure in **TD 8.1**, we can see that one cluster sits very close to (1,0), just like in the actual data. The cluster of points at (1,0) comes from the case where we conclude there is no self motion, and then set the self motion to 0. That value of 0 removes a lot of noise from the world-motion estimates, and all noise from the self-motion estimate. In the other case, where there is self motion, we still have a lot of noise (see also micro-tutorial 4). Let's change our `my_selfmotion()` function to return a self motion of 1 when the vestibular signal indicates we are above threshold, and 0 when we are below threshold. Edit the function here. Exercise 2: function for credit assignment of self motion
###Code
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
"""
# integrate signal:
ves = np.cumsum(ves * (1 / params['samplingrate']))
# use running window to accumulate evidence:
selfmotion = my_moving_window(ves, window=params['filterwindows'][0],
FUN=params['FUN'])
# take the final value as our estimate:
selfmotion = selfmotion[-1]
###########################################################################
# Exercise: Complete credit assignment. Remove the next line to test your function
raise NotImplementedError("Modify with credit assignment")
###########################################################################
# compare to threshold, set to 0 if lower
if selfmotion < params['threshold']:
selfmotion = 0
else:
selfmotion = ...
return selfmotion
# Use the updated function to run the model and plot the data
# Uncomment below to test your function
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.33, 'filterwindows': [100, 50], 'FUN': np.mean}
# modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# my_plot_percepts(datasets={'predictions': predictions}, plotconditions=False)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_97a9e346.py) *Example output:* That looks much better, and closer to the actual data. Let's see if the $R^2$ values have improved. Use the optimal values for the threshold and window size that you found previously. Interactive Demo: evaluating the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
While the model still predicts velocity judgments better than the conditions do (i.e. the model predicts illusions in somewhat similar cases), the $R^2$ values are a little worse than those of the simpler model. What's really going on is that the same set of points that were model prediction errors in the previous model are also errors here. All we have done is reduce the spread. **Interpret the model's meaning** Here's what you should have learned from modeling the train illusion: 1. A noisy vestibular acceleration signal can give rise to illusory motion. 2. However, disambiguating the optic flow by adding the vestibular signal simply adds a lot of noise. This is not a plausible thing for the brain to do. 3. Our other hypothesis - credit assignment - is more qualitatively correct, but our simulations were not able to match the frequency of the illusion on a trial-by-trial basis. We decided that for now we have learned enough, so it's time to write it up. --- Section 10: Model publication!
###Code
# @title Video 10: Publication
video = YouTubeVideo(id='zm8x7oegN6Q', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 1, Day 2, Tutorial 2 Tutorial objectives. We are investigating a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks: **Framing the question** 1. finding a phenomenon and a question to ask about it 2. understanding the state of the art 3. determining the basic ingredients 4. formulating specific, mathematically defined hypotheses **Implementing the model** 5. selecting the toolkit 6. planning the model 7. implementing the model **Model testing** 8. completing the model 9. testing and evaluating the model **Publishing** 10. publishing models We did steps 1-5 in Tutorial 1 and will cover steps 6-10 in Tutorial 2 (this notebook). Utilities Setup and Convenience Functions. Please run the following **3** chunks to have functions and data available.
###Code
#@title Utilities and setup
# set up the environment for this tutorial
import time # import time
import numpy as np # import numpy
import scipy as sp # import scipy
from scipy.stats import gamma # import gamma distribution
import math # import basic math functions
import random # import basic random number generator functions
import matplotlib.pyplot as plt # import matplotlib
from IPython import display
fig_w, fig_h = (12, 8)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
plt.style.use('ggplot')
%matplotlib inline
#%config InlineBackend.figure_format = 'retina'
from scipy.signal import medfilt
#@title Convenience functions: Plotting and Filtering
# define some convenience functions to be used later
def my_moving_window(x, window=3, FUN=np.mean):
'''
Calculates a moving estimate for a signal
Args:
x (numpy.ndarray): a vector array of size N
window (int): size of the window, must be a positive integer
FUN (function): the function to apply to the samples in the window
Returns:
(numpy.ndarray): a vector array of size N, containing the moving average
of x, calculated with a window of size window
There are smarter and faster solutions (e.g. using convolution) but this
function shows what the output really means. This function skips NaNs, and
should not be susceptible to edge effects: it will simply use
all the available samples, which means that close to the edges of the
signal or close to NaNs, the output will just be based on fewer samples. By
default, this function will apply a mean to the samples in the window, but
this can be changed to be a max/min/median or other function that returns a
single numeric value based on a sequence of values.
'''
# if data is a matrix, apply filter to each row:
if len(x.shape) == 2:
output = np.zeros(x.shape)
for rown in range(x.shape[0]):
output[rown,:] = my_moving_window(x[rown,:],window=window,FUN=FUN)
return output
# make output array of the same size as x:
output = np.zeros(x.size)
# loop through the signal in x
for samp_i in range(x.size):
values = []
# loop through the window:
for wind_i in range(int(-window), 1):
if ((samp_i+wind_i) < 0) or (samp_i+wind_i) > (x.size - 1):
# out of range
continue
# sample is in range and not nan, use it:
if not(np.isnan(x[samp_i+wind_i])):
values += [x[samp_i+wind_i]]
# calculate the mean in the window for this point in the output:
output[samp_i] = FUN(values)
return output
def my_plot_percepts(datasets=None, plotconditions=False):
if isinstance(datasets,dict):
# try to plot the datasets
# they should be named...
# 'expectations', 'judgments', 'predictions'
fig = plt.figure(figsize=(8, 8)) # set aspect ratio = 1? not really
plt.ylabel('perceived self motion [m/s]')
plt.xlabel('perceived world motion [m/s]')
plt.title('perceived velocities')
# loop through the entries in datasets
# plot them in the appropriate way
for k in datasets.keys():
if k == 'expectations':
expect = datasets[k]
plt.scatter(expect['world'],expect['self'],marker='*',color='xkcd:green',label='my expectations')
elif k == 'judgments':
judgments = datasets[k]
for condition in np.unique(judgments[:,0]):
c_idx = np.where(judgments[:,0] == condition)[0]
cond_self_motion = judgments[c_idx[0],1]
cond_world_motion = judgments[c_idx[0],2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'world-motion condition judgments'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'self-motion condition judgments'
else:
c_label = 'condition [%d] judgments'%condition
plt.scatter(judgments[c_idx,3],judgments[c_idx,4], label=c_label, alpha=0.2)
elif k == 'predictions':
predictions = datasets[k]
for condition in np.unique(predictions[:,0]):
c_idx = np.where(predictions[:,0] == condition)[0]
cond_self_motion = predictions[c_idx[0],1]
cond_world_motion = predictions[c_idx[0],2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'predicted world-motion condition'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'predicted self-motion condition'
else:
c_label = 'condition [%d] prediction'%condition
plt.scatter(predictions[c_idx,4],predictions[c_idx,3], marker='x', label=c_label)
else:
print("datasets keys should be 'hypothesis', 'judgments' and 'predictions'")
if plotconditions:
# this code is simplified but only works for the dataset we have:
plt.scatter([1],[0],marker='<',facecolor='none',edgecolor='xkcd:black',linewidths=2,label='world-motion stimulus',s=80)
plt.scatter([0],[1],marker='>',facecolor='none',edgecolor='xkcd:black',linewidths=2,label='self-motion stimulus',s=80)
plt.legend(facecolor='xkcd:white')
plt.show()
else:
if datasets is not None:
print('datasets argument should be a dict')
raise TypeError
def my_plot_motion_signals():
dt = 1/10
a = gamma.pdf( np.arange(0,10,dt), 2.5, 0 )
t = np.arange(0,10,dt)
v = np.cumsum(a*dt)
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col', sharey='row', figsize=(14,6))
fig.suptitle('Sensory ground truth')
ax1.set_title('world-motion condition')
ax1.plot(t,-v,label='visual [$m/s$]')
ax1.plot(t,np.zeros(a.size),label='vestibular [$m/s^2$]')
ax1.set_xlabel('time [s]')
ax1.set_ylabel('motion')
ax1.legend(facecolor='xkcd:white')
ax2.set_title('self-motion condition')
ax2.plot(t,-v,label='visual [$m/s$]')
ax2.plot(t,a,label='vestibular [$m/s^2$]')
ax2.set_xlabel('time [s]')
ax2.set_ylabel('motion')
ax2.legend(facecolor='xkcd:white')
plt.show()
def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False, addaverages=False):
wm_idx = np.where(judgments[:,0] == 0)
sm_idx = np.where(judgments[:,0] == 1)
opticflow = opticflow.transpose()
wm_opticflow = np.squeeze(opticflow[:,wm_idx])
sm_opticflow = np.squeeze(opticflow[:,sm_idx])
vestibular = vestibular.transpose()
wm_vestibular = np.squeeze(vestibular[:,wm_idx])
sm_vestibular = np.squeeze(vestibular[:,sm_idx])
X = np.arange(0,10,.1)
fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row', figsize=(15,10))
fig.suptitle('Sensory signals')
my_axes[0][0].plot(X,wm_opticflow, color='xkcd:light red', alpha=0.1)
my_axes[0][0].plot([0,10], [0,0], ':', color='xkcd:black')
if addaverages:
my_axes[0][0].plot(X,np.average(wm_opticflow, axis=1), color='xkcd:red', alpha=1)
my_axes[0][0].set_title('world-motion optic flow')
my_axes[0][0].set_ylabel('[motion]')
my_axes[0][1].plot(X,sm_opticflow, color='xkcd:azure', alpha=0.1)
my_axes[0][1].plot([0,10], [0,0], ':', color='xkcd:black')
if addaverages:
my_axes[0][1].plot(X,np.average(sm_opticflow, axis=1), color='xkcd:blue', alpha=1)
my_axes[0][1].set_title('self-motion optic flow')
my_axes[1][0].plot(X,wm_vestibular, color='xkcd:light red', alpha=0.1)
my_axes[1][0].plot([0,10], [0,0], ':', color='xkcd:black')
if addaverages:
my_axes[1][0].plot(X,np.average(wm_vestibular, axis=1), color='xkcd:red', alpha=1)
my_axes[1][0].set_title('world-motion vestibular signal')
my_axes[1][0].set_xlabel('time [s]')
my_axes[1][0].set_ylabel('[motion]')
my_axes[1][1].plot(X,sm_vestibular, color='xkcd:azure', alpha=0.1)
my_axes[1][1].plot([0,10], [0,0], ':', color='xkcd:black')
if addaverages:
my_axes[1][1].plot(X,np.average(sm_vestibular, axis=1), color='xkcd:blue', alpha=1)
my_axes[1][1].set_title('self-motion vestibular signal')
my_axes[1][1].set_xlabel('time [s]')
if returnaxes:
return my_axes
else:
plt.show()
def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):
plt.figure(figsize=(12,8))
plt.title('threshold effects')
plt.plot([min(thresholds),max(thresholds)],[0,0],':',color='xkcd:black')
plt.plot([min(thresholds),max(thresholds)],[0.5,0.5],':',color='xkcd:black')
plt.plot([min(thresholds),max(thresholds)],[1,1],':',color='xkcd:black')
plt.plot(thresholds, world_prop, label='world motion')
plt.plot(thresholds, self_prop, label='self motion')
plt.plot(thresholds, prop_correct, color='xkcd:purple', label='correct classification')
plt.xlabel('threshold')
plt.ylabel('proportion correct or classified as self motion')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_predictions_data(judgments, predictions):
conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2])))
veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4]))
velpredict = np.concatenate((predictions[:,3],predictions[:,4]))
# self:
conditions_self = np.abs(judgments[:,1])
veljudgmnt_self = judgments[:,3]
velpredict_self = predictions[:,3]
# world:
conditions_world = np.abs(judgments[:,2])
veljudgmnt_world = judgments[:,4]
velpredict_world = predictions[:,4]
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row', figsize=(12,5))
ax1.scatter(veljudgmnt_self,velpredict_self, alpha=0.2)
ax1.plot([0,1],[0,1],':',color='xkcd:black')
ax1.set_title('self-motion judgments')
ax1.set_xlabel('observed')
ax1.set_ylabel('predicted')
ax2.scatter(veljudgmnt_world,velpredict_world, alpha=0.2)
ax2.plot([0,1],[0,1],':',color='xkcd:black')
ax2.set_title('world-motion judgments')
ax2.set_xlabel('observed')
ax2.set_ylabel('predicted')
plt.show()
#@title Data generation code (needs to go on OSF and deleted here)
def my_simulate_data(repetitions=100, conditions=[(0,-1),(+1,0)] ):
"""
Generate simulated data for this tutorial. You do not need to run this
yourself.
Args:
    repetitions: (int) number of repetitions of each condition (default: 100)
    conditions: list of 2-tuples of floats, indicating the self velocity and
                world velocity in each condition (default: [(0,-1),(+1,0)],
                which returns data that is good for exploration, but can be
                flexibly extended)
The total number of trials used (ntrials) is equal to:
repetitions * len(conditions)
Returns:
dict with three entries:
'judgments': ntrials * 5 matrix
'opticflow': ntrials * 100 matrix
'vestibular': ntrials * 100 matrix
  The default settings result in data where the first 100 trials reflect a
  situation where the world (the other train) moves in one direction, supposedly
  at 1 m/s (perhaps to the left: -1) while the participant does not move at
  all (0), and 100 trials from a second condition, where the world does not
  move, while the participant moves at 1 m/s in the opposite direction from
  where the world is moving in the first condition (+1,0). The optic flow
  should be the same, but the vestibular input is not.
"""
# reproducible output
np.random.seed(1937)
# set up some variables:
ntrials = repetitions * len(conditions)
# the following arrays will contain the simulated data:
judgments = np.empty(shape=(ntrials,5))
opticflow = np.empty(shape=(ntrials,100))
vestibular = np.empty(shape=(ntrials,100))
# acceleration:
a = gamma.pdf(np.arange(0,10,.1), 2.5, 0 )
# divide by 10 so that velocity scales from 0 to 1 (m/s)
# max acceleration ~ .308 m/s^2
# not realistic! should be about 1/10 of that
# velocity:
v = np.cumsum(a*.1)
# position: (not necessary)
#x = np.cumsum(v)
#################################
# REMOVE ARBITRARY SCALING & CORRECT NOISE PARAMETERS
vest_amp = 1
optf_amp = 1
# we start at the first trial:
trialN = 0
# we start with only a single velocity, but it should be possible to extend this
for conditionno in range(len(conditions)):
condition = conditions[conditionno]
for repetition in range(repetitions):
#
# generate optic flow signal
OF = v * np.diff(condition) # optic flow: difference between self & world motion
OF = (OF * optf_amp) # fairly large spike range
OF = OF + (np.random.randn(len(OF)) * .1) # adding noise
# generate vestibular signal
VS = a * condition[0] # vestibular signal: only self motion
VS = (VS * vest_amp) # less range
VS = VS + (np.random.randn(len(VS)) * 1.) # acceleration is a smaller signal, what is a good noise level?
# store in matrices, corrected for sign
#opticflow[trialN,:] = OF * -1 if (np.sign(np.diff(condition)) < 0) else OF
#vestibular[trialN,:] = VS * -1 if (np.sign(condition[1]) < 0) else VS
opticflow[trialN,:], vestibular[trialN,:] = OF, VS
#########################################################
# store conditions in judgments matrix:
judgments[trialN,0:3] = [ conditionno, condition[0], condition[1] ]
# vestibular SD: 1.0916052957046194 and 0.9112684509277528
# visual SD: 0.10228834313079663 and 0.10975472557444346
# generate judgments:
if (abs(np.average(np.cumsum(medfilt(VS/vest_amp,5)*.1)[70:90])) < 1):
###########################
# NO self motion detected
###########################
selfmotion_weights = np.array([.01,.01]) # there should be low/no self motion
worldmotion_weights = np.array([.01,.99]) # world motion is dictated by optic flow
else:
########################
# self motion DETECTED
########################
#if (abs(np.average(np.cumsum(medfilt(VS/vest_amp,15)*.1)[70:90]) - np.average(medfilt(OF,15)[70:90])) < 5):
if True:
####################
# explain all self motion by optic flow
selfmotion_weights = np.array([.01,.99]) # there should be lots of self motion, but determined by optic flow
worldmotion_weights = np.array([.01,.01]) # very low world motion?
else:
# we use both optic flow and vestibular info to explain both
selfmotion_weights = np.array([ 1, 0]) # motion, but determined by vestibular signal
worldmotion_weights = np.array([ 1, 1]) # very low world motion?
#
integrated_signals = np.array([
np.average( np.cumsum(medfilt(VS/vest_amp,15))[90:100]*.1 ),
np.average((medfilt(OF/optf_amp,15))[90:100])
])
selfmotion = np.sum(integrated_signals * selfmotion_weights)
worldmotion = np.sum(integrated_signals * worldmotion_weights)
#print(worldmotion,selfmotion)
judgments[trialN,3] = abs(selfmotion)
judgments[trialN,4] = abs(worldmotion)
# this ends the trial loop, so we increment the counter:
trialN += 1
return {'judgments':judgments,
'opticflow':opticflow,
'vestibular':vestibular}
simulated_data = my_simulate_data()
judgments = simulated_data['judgments']
opticflow = simulated_data['opticflow']
vestibular = simulated_data['vestibular']
###Output
_____no_output_____
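###Markdown
Before moving on, it can help to confirm what the simulated data look like. The cell below is an optional sanity check (not part of the original exercises): it only prints the array shapes documented in `my_simulate_data()`.
###Code
# optional sanity check on the simulated data (assumes the cell above has run)
print('judgments: ', judgments.shape)    # expected: (ntrials, 5)
print('opticflow: ', opticflow.shape)    # expected: (ntrials, 100)
print('vestibular:', vestibular.shape)   # expected: (ntrials, 100)
###Output
_____no_output_____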
###Markdown
Micro-tutorial 6 - planning the model
###Code
#@title Video: Planning the model
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='daEtkVporBE', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=daEtkVporBE
###Markdown
**Goal:** Identify the key components of the model and how they work together.
Our goal all along has been to model our perceptual estimates of sensory data. Now that we have some idea of what we want to do, we need to line up the components of the model: what are the input and output? Which computations are done, and in what order? The figure below shows a generic model we will use to guide our code construction. Our model will have:
* **inputs**: the values the system has available - for this tutorial the sensory information in a trial. We want to gather these together and plan how to process them.
* **parameters**: unless we are lucky, our functions will have unknown parameters - we want to identify these and plan for them.
* **outputs**: these are the predictions our model will make - for this tutorial these are the perceptual judgments on each trial. Ideally these are directly comparable to our data.
* **Model functions**: A set of functions that perform the hypothesized computations.

>Using Python (with Numpy and Scipy) we will define a set of functions that take our data and some parameters as input, can run our model, and output a prediction for the judgment data.

Recap of what we've accomplished so far: to model perceptual estimates from our sensory data, we need to
1. _integrate_ to ensure sensory information is in appropriate units
2. _reduce noise and set timescale_ by filtering
3. _threshold_ to model detection

Remember the kind of operations we identified:
* integration: `np.cumsum()`
* filtering: `my_moving_window()`
* threshold: `if` with a comparison (`>` or `<`) and `else`

We will collect all the components we've developed and design the code by:
1. **identifying the key functions** we need
2. **sketching the operations** needed in each

**_Planning our model:_**
We know what we want the model to do, but we need to plan and organize the model into functions and operations. We're providing a draft of the first function. For each of the two other code chunks, write mostly comments and help text first. This should put into words what role each of the functions plays in the overall model, implementing one of the steps decided above.
_______
Below is the main function with a detailed explanation of what the function is supposed to do: what input is expected, and what output will be generated. The code is not complete, and only returns nans for now. However, this outlines how most model code works: it gets some measured data (the sensory signals) and a set of parameters as input, and as output returns a prediction on other measured data (the velocity judgments). The goal of this function is to define the top level of a simulation model which:
* receives all input
* loops through the cases
* calls functions that compute predicted values for each case
* outputs the predictions

**TD 6.1**: Complete main model function
The function `my_train_illusion_model()` below should call one other function: `my_perceived_motion()`. What input do you think this function should get?
**Complete main model function**
###Code
def my_train_illusion_model(sensorydata, params):
'''
Generate output predictions of perceived self-motion and perceived world-motion velocity
based on input visual and vestibular signals.
Args (Input variables passed into function):
sensorydata: (dict) dictionary with two named entries:
opticflow: (numpy.ndarray of float) NxM array with N trials on rows
and M visual signal samples in columns
vestibular: (numpy.ndarray of float) NxM array with N trials on rows
and M vestibular signal samples in columns
params: (dict) dictionary with named entries:
threshold: (float) vestibular threshold for credit assignment
      filterwindows: (list of int) determines the strength of filtering for
        the vestibular and visual signals, respectively
integrate (bool): whether to integrate the vestibular signals, will
be set to True if absent
FUN (function): function used in the filter, will be set to
np.mean if absent
samplingrate (float): the number of samples per second in the
sensory data, will be set to 10 if absent
Returns:
dict with two entries:
selfmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived self motion
worldmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived world motion
'''
# sanitize input a little
if not('FUN' in params.keys()):
params['FUN'] = np.mean
if not('integrate' in params.keys()):
params['integrate'] = True
if not('samplingrate' in params.keys()):
params['samplingrate'] = 10
# number of trials:
ntrials = sensorydata['opticflow'].shape[0]
# set up variables to collect output
selfmotion = np.empty(ntrials)
worldmotion = np.empty(ntrials)
# loop through trials?
for trialN in range(ntrials):
#these are our sensory variables (inputs)
vis = sensorydata['opticflow'][trialN,:]
ves = sensorydata['vestibular'][trialN,:]
########################################################
# generate output predicted perception:
########################################################
    #our inputs are vis, ves, and params
selfmotion[trialN], worldmotion[trialN] = [np.nan, np.nan]
########################################################
# replace above with
# selfmotion[trialN], worldmotion[trialN] = my_perceived_motion( ???, ???, params=params)
# and fill in question marks
########################################################
    # comment this out when you've filled in the code above
raise NotImplementedError("Student excercise: generate predictions")
return {'selfmotion':selfmotion, 'worldmotion':worldmotion}
# uncomment the following lines to run the main model function:
## here is a mock version of my_perceived motion.
## so you can test my_train_illusion_model()
#def my_perceived_motion(*args, **kwargs):
#return np.random.rand(2)
##let's look at the predictions we generated for two sample trials (0,100)
##we should get a 1x2 vector of self-motion prediction and another for world-motion
#sensorydata={'opticflow':opticflow[[0,100],:0], 'vestibular':vestibular[[0,100],:0]}
#params={'threshold':0.33, 'filterwindows':[100,50]}
#my_train_illusion_model(sensorydata=sensorydata, params=params)
###Output
_____no_output_____
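###Markdown
To make the three planned operations concrete before writing the model functions, here is a small optional sketch (not one of the exercises) that chains them on a single simulated vestibular trial; the window size of 10 samples and the 0.5 threshold are arbitrary illustration values, not the tutorial's parameters.
###Code
# optional illustration of the three planned operations on one vestibular trial
ves_trial = vestibular[0, :]                 # acceleration-like vestibular signal
vel_est = np.cumsum(ves_trial * 0.1)         # 1. integrate (0.1 s per sample -> velocity-like)
vel_smooth = my_moving_window(vel_est, window=10, FUN=np.mean)  # 2. filter
final_value = vel_smooth[-1]                 # summarize with the last sample
# 3. threshold (0.5 is an arbitrary example value)
if final_value > 0.5:
  print('example: self motion detected, estimate = %0.3f' % final_value)
else:
  print('example: no self motion detected (estimate = %0.3f)' % final_value)
###Output
_____no_output_____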
###Markdown
**Example output:**

**TD 6.2**: Draft perceived motion functions
Now we draft a set of functions, the first of which is used in the main model function (see above) and serves to generate perceived velocities. The other two are used in the first one. Only write help text and/or comments, you don't have to write the whole function. Each time ask yourself these questions:
* what sensory data is necessary?
* what other input does the function need, if any?
* which operations are performed on the input?
* what is the output?

(the number of arguments is correct)
**Template perceived motion**
###Code
# fill in the input arguments the function should have:
# write the help text for the function:
def my_perceived_motion(arg1, arg2, arg3):
'''
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
argument 3: explain the format and content of the third argument
Returns:
what output does the function generate?
Any further description?
'''
# structure your code into two functions: "my_selfmotion" and "my_worldmotion"
# write comments outlining the operations to be performed on the inputs by each of these functions
# use the elements from micro-tutorials 3, 4, and 5 (found in W1D2 Tutorial Part 1)
#
#
#
# what kind of output should this function produce?
return output
###Output
_____no_output_____
###Markdown
We've completed the `my_perceived_motion()` function for you below. Follow this example to complete the template for `my_selfmotion()` and `my_worldmotion()`. Write out the inputs and outputs, and the steps required to calculate the outputs from the inputs.
**Perceived motion function**
###Code
#Full perceived motion function
def my_perceived_motion(vis, ves, params):
'''
Takes sensory data and parameters and returns predicted percepts
Args:
vis (numpy.ndarray): 1xM array of optic flow velocity data
ves (numpy.ndarray): 1xM array of vestibular acceleration data
params: (dict) dictionary with named entries:
see my_train_illusion_model() for details
Returns:
[list of floats]: prediction for perceived self-motion based on
vestibular data, and prediction for perceived world-motion based on
perceived self-motion and visual data
'''
# estimate self motion based on only the vestibular data
# pass on the parameters
selfmotion = my_selfmotion(ves=ves,
params=params)
# estimate the world motion, based on the selfmotion and visual data
# pass on the parameters as well
worldmotion = my_worldmotion(vis=vis,
selfmotion=selfmotion,
params=params)
return [selfmotion, worldmotion]
###Output
_____no_output_____
###Markdown
**Template calculate self motion**
Put notes in the function below that describe the inputs, the outputs, and the steps that transform the inputs into the output, using elements from micro-tutorials 3, 4, and 5.
###Code
def my_selfmotion(arg1, arg2):
'''
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
Returns:
what output does the function generate?
Any further description?
'''
# what operations do we perform on the input?
# use the elements from micro-tutorials 3, 4, and 5
# 1.
# 2.
# 3.
# 4.
# what output should this function produce?
return output
###Output
_____no_output_____
###Markdown
**Template calculate world motion**
Put notes in the function below that describe the inputs, the outputs, and the steps that transform the inputs into the output, using elements from micro-tutorials 3, 4, and 5.
###Code
def my_worldmotion(arg1, arg2, arg3):
'''
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
argument 3: explain the format and content of the third argument
Returns:
what output does the function generate?
Any further description?
'''
# what operations do we perform on the input?
# use the elements from micro-tutorials 3, 4, and 5
# 1.
# 2.
# 3.
# what output should this function produce?
return output
###Output
_____no_output_____
###Markdown
Micro-tutorial 7 - implement model
###Code
#@title Video: implement the model
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='gtSOekY8jkw', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=gtSOekY8jkw
###Markdown
**Goal:** We write the components of the model in actual code.
For the operations we picked, there are functions ready to use:
* integration: `np.cumsum(data, axis=1)` (axis=1: per trial and over samples)
* filtering: `my_moving_window(data, window)` (window: int, default 3)
* average: `np.mean(data)`
* threshold: `if` (value > thr): ... `else`: ...

**TD 7.1:** Write code to estimate self motion
Use the operations to finish writing the function that will calculate an estimate of self motion. Fill in the descriptive list of items with actual operations. Use the function for estimating world motion below, which we've filled in for you!
**Template finish self motion function**
###Code
def my_selfmotion(ves, params):
'''
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
'''
###uncomment the code below and fill in with your code
## 1. integrate vestibular signal
#ves = np.cumsum(ves*(1/params['samplingrate']))
## 2. running window function to accumulate evidence:
#selfmotion = YOUR CODE HERE
## 3. take final value of self-motion vector as our estimate
#selfmotion =
  ## 4. compare to threshold. Hint: the threshold is stored in params['threshold']
  ## if selfmotion is higher than threshold: return value
  ## if it's lower than threshold: return 0
  #if YOUR CODE HERE
  #selfmotion = YOUR CODE HERE
  # comment this out when you've filled in the code above
raise NotImplementedError("Student excercise: estimate my_selfmotion")
return output
###Output
_____no_output_____
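###Markdown
Once you have filled in `my_selfmotion()`, a quick optional check (not part of the exercise) is to run it on a single trial and see whether the returned value is a plausible velocity; the threshold and filter windows below are simply the example values used elsewhere in this tutorial.
###Code
# optional quick check of your my_selfmotion() implementation on one trial
example_params = {'threshold': 0.33, 'filterwindows': [100, 50],
                  'FUN': np.mean, 'samplingrate': 10}
try:
  print('self-motion estimate, trial 0: %0.3f m/s' % my_selfmotion(vestibular[0, :], example_params))
except NotImplementedError:
  print('my_selfmotion() has not been completed yet')
###Output
_____no_output_____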
###Markdown
Estimate world motion
We have completed the `my_worldmotion()` function for you.
**World motion function**
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
'''
  Estimates world motion based on the optic flow signal, the estimate of self motion, and the parameters
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
'''
# running average to smooth/accumulate sensory evidence
visualmotion = my_moving_window(vis,
window=params['filterwindows'][1],
FUN=np.mean)
# take final value
visualmotion = visualmotion[-1]
# subtract selfmotion from value
worldmotion = visualmotion + selfmotion
# return final value
return worldmotion
###Output
_____no_output_____
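###Markdown
As a quick optional check of the completed `my_worldmotion()` function, the sketch below runs it on a single trial with an assumed self-motion estimate of 0; the sign of the result reflects the direction conventions of the simulated optic flow, which later cells correct for when assembling the predictions matrix.
###Code
# optional check of my_worldmotion() on one trial, assuming zero self motion
example_params = {'filterwindows': [100, 50], 'FUN': np.mean, 'samplingrate': 10}
wm_estimate = my_worldmotion(vis=opticflow[0, :], selfmotion=0.0, params=example_params)
print('example world-motion estimate, trial 0: %0.3f m/s' % wm_estimate)
###Output
_____no_output_____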
###Markdown
Micro-tutorial 8 - completing the model
###Code
#@title Video: completing the model
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='-NiHSv4xCDs', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=-NiHSv4xCDs
###Markdown
**Goal:** Make sure the model can speak to the hypothesis. Eliminate all the parameters that do not speak to the hypothesis.
Now that we have a working model, we can keep improving it, but at some point we need to decide that it is finished. Once we have a model that displays the properties of a system we are interested in, it should be possible to say something about our hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question. Here that means that our model should have illusory perception, and perhaps make similar judgments to those of the participants, but not much more.
To test this, we will run the model, store the output and plot the model's perceived self motion over perceived world motion, like we did with the actual perceptual judgments (it even uses the same plotting function).
**TD 8.1:** See if the model produces illusions
###Code
#@title Run to plot model predictions of motion estimates
# prepare to run the model again:
data = {'opticflow':opticflow, 'vestibular':vestibular}
params = {'threshold':0.6, 'filterwindows':[100,50], 'FUN':np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:,0:3] = judgments[:,0:3]
predictions[:,3] = modelpredictions['selfmotion']
predictions[:,4] = modelpredictions['worldmotion'] *-1
my_plot_percepts(datasets={'predictions':predictions}, plotconditions=True)
###Output
_____no_output_____
###Markdown
**Questions:**
* Why is the data distributed this way? How does it compare to the plot in TD 1.2?
* Did you expect to see this?
* Where do the model's predicted judgments for each of the two conditions fall?
* How does this compare to the behavioral data?

However, the main observation should be that **there are illusions**: the blue and red data points are mixed in each of the two sets of data. Does this mean the model can help us understand the phenomenon?

Micro-tutorial 9 - testing and evaluating the model
###Code
#@title Video: Background
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='5vnDOxN3M_k', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=5vnDOxN3M_k
###Markdown
**Goal:** Once we have finished the model, we need a description of how good it is. The question and goals we set in micro-tutorials 1 and 4 help here. There are multiple ways to evaluate a model. Aside from the obvious fact that we want to get insight into the phenomenon that is not directly accessible without the model, we always want to quantify how well the model agrees with the data.

Quantify model quality with $R^2$
Let's look at how well our model matches the actual judgment data.
###Code
#@title Run to plot predictions over data
my_plot_predictions_data(judgments, predictions)
###Output
_____no_output_____
###Markdown
When model predictions are correct, the red points in the figure above should lie along the identity line (a dotted black line here). Points off the identity line represent model prediction errors. While in each plot we see two clusters of dots that are fairly close to the identity line, there are also two clusters that are not. For the trials that those points represent, the model has an illusion while the participants don't, or vice versa.
We will use a straightforward, quantitative measure of how good the model is: $R^2$ (pronounced: "R-squared"), which can take values between 0 and 1, and expresses how much variance is explained by the relationship between two variables (here the model's predictions and the actual judgments). It is also called the [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), and is calculated here as the square of the correlation coefficient (r or $\rho$). Just run the chunk below:
###Code
#@title Run to calculate R^2
conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2])))
veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4]))
velpredict = np.concatenate((predictions[:,3],predictions[:,4]))
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(conditions,veljudgmnt)
print('conditions -> judgments R^2: %0.3f'%( r_value**2 ))
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(veljudgmnt,velpredict)
print('predictions -> judgments R^2: %0.3f'%( r_value**2 ))
###Output
conditions -> judgments R^2: 0.032
predictions -> judgments R^2: 0.256
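###Markdown
Since $R^2$ is computed here as the squared Pearson correlation, $R^2 = \rho_{xy}^2$ with $\rho_{xy} = \mathrm{cov}(x,y) / (\sigma_x \sigma_y)$, the same number can also be obtained directly with `np.corrcoef()`; this optional cell simply confirms the equivalence for the model predictions.
###Code
# optional: R^2 as the squared Pearson correlation, without linregress
r = np.corrcoef(veljudgmnt, velpredict)[0, 1]
print('predictions -> judgments R^2 (via np.corrcoef): %0.3f' % r**2)
###Output
_____no_output_____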
###Markdown
These $R^2$ values express how well the experimental conditions explain the participants' judgments, and how well the model's predicted judgments explain the participants' judgments.
You will learn much more about model fitting, quantitative model evaluation and model comparison tomorrow!
Perhaps the $R^2$ values don't seem very impressive, but the judgments produced by the participants are explained by the model's predictions better than by the actual conditions. In other words: the model tends to have the same illusions as the participants.

**TD 9.1** Varying the threshold parameter to improve the model
In the code below, see if you can find a better value for the threshold parameter, to reduce errors in the model's predictions.
**Testing thresholds**
###Code
# Testing thresholds
def test_threshold(threshold=0.33):
# prepare to run model
data = {'opticflow':opticflow, 'vestibular':vestibular}
params = {'threshold':threshold, 'filterwindows':[100,50], 'FUN':np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# get predictions in matrix
predictions = np.zeros(judgments.shape)
predictions[:,0:3] = judgments[:,0:3]
predictions[:,3] = modelpredictions['selfmotion']
predictions[:,4] = modelpredictions['worldmotion'] *-1
# get percepts from participants and model
conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2])))
veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4]))
velpredict = np.concatenate((predictions[:,3],predictions[:,4]))
# calculate R2
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(veljudgmnt,velpredict)
print('predictions -> judgments R2: %0.3f'%( r_value**2 ))
test_threshold(threshold=0.5)
###Output
predictions -> judgments R2: 0.267
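###Markdown
One way to explore TD 9.1 more systematically is to loop over several candidate thresholds; the optional cell below just calls the `test_threshold()` function defined above for a few arbitrarily chosen values.
###Code
# optional: scan a few candidate thresholds (values chosen only for illustration)
for candidate in [0.2, 0.33, 0.5, 0.8]:
  print('threshold = %0.2f' % candidate)
  test_threshold(threshold=candidate)
###Output
_____no_output_____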
###Markdown
**TD 9.2:** Credit assignment of self motion
When we look at the figure in **TD 8.1**, we can see a cluster that does seem very close to (1,0), just like in the actual data. The cluster of points at (1,0) is from the case where we conclude there is no self motion, and then set the self motion to 0. That value of 0 removes a lot of noise from the world-motion estimates, and all noise from the self-motion estimate. In the other case, where there is self motion, we still have a lot of noise (see also micro-tutorial 4).
Let's change our `my_selfmotion()` function to return a self motion of 1 when the vestibular signal indicates we are above threshold, and 0 when we are below threshold. Edit the function here.
**Template function for credit assignment of self motion**
###Code
# Template binary self-motion estimates
def my_selfmotion(ves, params):
'''
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
'''
# integrate signal:
ves = np.cumsum(ves*(1/params['samplingrate']))
# use running window to accumulate evidence:
selfmotion = my_moving_window(ves,
window=params['filterwindows'][0],
FUN=params['FUN'])
## take the final value as our estimate:
selfmotion = selfmotion[-1]
##########################################
# this last part will have to be changed
# compare to threshold, set to 0 if lower and else...
if selfmotion < params['threshold']:
selfmotion = 0
#uncomment the lines below and fill in with your code
#else:
#YOUR CODE HERE
  # comment this out when you've filled in the code above
raise NotImplementedError("Student excercise: modify with credit assignment")
return selfmotion
###Output
_____no_output_____
###Markdown
The function you just wrote will be used when we run the model again below.
###Code
#@title Run model credit assignment of self motion
# prepare to run the model again:
data = {'opticflow':opticflow, 'vestibular':vestibular}
params = {'threshold':0.33, 'filterwindows':[100,50], 'FUN':np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# now process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:,0:3] = judgments[:,0:3]
predictions[:,3] = modelpredictions['selfmotion']
predictions[:,4] = modelpredictions['worldmotion'] *-1
my_plot_percepts(datasets={'predictions':predictions}, plotconditions=False)
###Output
_____no_output_____
###Markdown
That looks much better, and closer to the actual data. Let's see if the $R^2$ values have improved:
###Code
#@title Run to calculate R^2 for model with self motion credit assignment
conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2])))
veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4]))
velpredict = np.concatenate((predictions[:,3],predictions[:,4]))
my_plot_predictions_data(judgments, predictions)
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(conditions,veljudgmnt)
print('conditions -> judgments R2: %0.3f'%( r_value**2 ))
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(velpredict,veljudgmnt)
print('predictions -> judgments R2: %0.3f'%( r_value**2 ))
###Output
_____no_output_____
###Markdown
While the model still predicts velocity judgments better than the conditions (i.e. the model predicts illusions in somewhat similar cases), the $R^2$ values are actually worse than those of the simpler model. What's really going on is that the same set of points that were model prediction errors in the previous model are also errors here. All we have done is reduce the spread.

Interpret the model's meaning
Here's what you should have learned:
1. A noisy vestibular acceleration signal can give rise to illusory motion.
2. However, disambiguating the optic flow by adding the vestibular signal simply adds a lot of noise. This is not a plausible thing for the brain to do.
3. Our other hypothesis - credit assignment - is more qualitatively correct, but our simulations were not able to match the frequency of the illusion on a trial-by-trial basis.

_It's always possible to refine our models to improve the fits._
There are many ways to try to do this. A few examples: we could implement a full sensory cue integration model, perhaps with Kalman filters (Week 2, Day 3), or we could add prior knowledge (at what time do the trains depart?). However, we decided that for now we have learned enough, so it's time to write it up.

Micro-tutorial 10 - publishing the model
###Code
#@title Video: Background
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='kf4aauCr5vA', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=kf4aauCr5vA
###Markdown
Neuromatch Academy: Week 1, Day 2, Tutorial 2
Modeling Practice: Model implementation and evaluation
__Content creators:__ Marius 't Hart, Paul Schrater, Gunnar Blohm
__Content reviewers:__ Norma Kuhn, Saeed Salehi, Madineh Sarvestani, Spiros Chavlis, Michael Waskom
---
Tutorial objectives
We are investigating a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks:
**Framing the question**
1. finding a phenomenon and a question to ask about it
2. understanding the state of the art
3. determining the basic ingredients
4. formulating specific, mathematically defined hypotheses
**Implementing the model**
5. selecting the toolkit
6. planning the model
7. implementing the model
**Model testing**
8. completing the model
9. testing and evaluating the model
**Publishing**
10. publishing models
We did steps 1-5 in Tutorial 1 and will cover steps 6-10 in Tutorial 2 (this notebook).
Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.stats import gamma
from IPython.display import YouTubeVideo
# @title Figure settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def my_moving_window(x, window=3, FUN=np.mean):
"""
Calculates a moving estimate for a signal
Args:
x (numpy.ndarray): a vector array of size N
window (int): size of the window, must be a positive integer
FUN (function): the function to apply to the samples in the window
Returns:
(numpy.ndarray): a vector array of size N, containing the moving
average of x, calculated with a window of size window
There are smarter and faster solutions (e.g. using convolution) but this
function shows what the output really means. This function skips NaNs, and
should not be susceptible to edge effects: it will simply use
all the available samples, which means that close to the edges of the
signal or close to NaNs, the output will just be based on fewer samples. By
default, this function will apply a mean to the samples in the window, but
this can be changed to be a max/min/median or other function that returns a
single numeric value based on a sequence of values.
"""
# if data is a matrix, apply filter to each row:
if len(x.shape) == 2:
output = np.zeros(x.shape)
for rown in range(x.shape[0]):
output[rown, :] = my_moving_window(x[rown, :],
window=window, FUN=FUN)
return output
# make output array of the same size as x:
output = np.zeros(x.size)
# loop through the signal in x
for samp_i in range(x.size):
values = []
# loop through the window:
for wind_i in range(int(1 - window), 1):
if ((samp_i + wind_i) < 0) or (samp_i + wind_i) > (x.size - 1):
# out of range
continue
# sample is in range and not nan, use it:
if not(np.isnan(x[samp_i + wind_i])):
values += [x[samp_i + wind_i]]
# calculate the mean in the window for this point in the output:
output[samp_i] = FUN(values)
return output
def my_plot_percepts(datasets=None, plotconditions=False):
if isinstance(datasets, dict):
# try to plot the datasets
# they should be named...
# 'expectations', 'judgments', 'predictions'
plt.figure(figsize=(8, 8)) # set aspect ratio = 1? not really
plt.ylabel('perceived self motion [m/s]')
plt.xlabel('perceived world motion [m/s]')
plt.title('perceived velocities')
# loop through the entries in datasets
# plot them in the appropriate way
for k in datasets.keys():
if k == 'expectations':
expect = datasets[k]
plt.scatter(expect['world'], expect['self'], marker='*',
color='xkcd:green', label='my expectations')
elif k == 'judgments':
judgments = datasets[k]
for condition in np.unique(judgments[:, 0]):
c_idx = np.where(judgments[:, 0] == condition)[0]
cond_self_motion = judgments[c_idx[0], 1]
cond_world_motion = judgments[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'world-motion condition judgments'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'self-motion condition judgments'
else:
c_label = f"condition [{condition:d}] judgments"
plt.scatter(judgments[c_idx, 3], judgments[c_idx, 4],
label=c_label, alpha=0.2)
elif k == 'predictions':
predictions = datasets[k]
for condition in np.unique(predictions[:, 0]):
c_idx = np.where(predictions[:, 0] == condition)[0]
cond_self_motion = predictions[c_idx[0], 1]
cond_world_motion = predictions[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'predicted world-motion condition'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'predicted self-motion condition'
else:
c_label = f"condition [{condition:d}] prediction"
plt.scatter(predictions[c_idx, 4], predictions[c_idx, 3],
marker='x', label=c_label)
else:
print("datasets keys should be 'hypothesis', \
'judgments' and 'predictions'")
if plotconditions:
# this code is simplified but only works for the dataset we have:
plt.scatter([1], [0], marker='<', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='world-motion stimulus', s=80)
plt.scatter([0], [1], marker='>', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='self-motion stimulus', s=80)
plt.legend(facecolor='xkcd:white')
plt.show()
else:
if datasets is not None:
print('datasets argument should be a dict')
raise TypeError
def my_plot_stimuli(t, a, v):
plt.figure(figsize=(10, 6))
plt.plot(t, a, label='acceleration [$m/s^2$]')
plt.plot(t, v, label='velocity [$m/s$]')
plt.xlabel('time [s]')
plt.ylabel('[motion]')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_motion_signals():
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = np.cumsum(a * dt)
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col',
sharey='row', figsize=(14, 6))
fig.suptitle('Sensory ground truth')
ax1.set_title('world-motion condition')
ax1.plot(t, -v, label='visual [$m/s$]')
ax1.plot(t, np.zeros(a.size), label='vestibular [$m/s^2$]')
ax1.set_xlabel('time [s]')
ax1.set_ylabel('motion')
ax1.legend(facecolor='xkcd:white')
ax2.set_title('self-motion condition')
ax2.plot(t, -v, label='visual [$m/s$]')
ax2.plot(t, a, label='vestibular [$m/s^2$]')
ax2.set_xlabel('time [s]')
ax2.set_ylabel('motion')
ax2.legend(facecolor='xkcd:white')
plt.show()
def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False,
addaverages=False, integrateVestibular=False,
addGroundTruth=False):
if addGroundTruth:
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = a
wm_idx = np.where(judgments[:, 0] == 0)
sm_idx = np.where(judgments[:, 0] == 1)
opticflow = opticflow.transpose()
wm_opticflow = np.squeeze(opticflow[:, wm_idx])
sm_opticflow = np.squeeze(opticflow[:, sm_idx])
if integrateVestibular:
vestibular = np.cumsum(vestibular * .1, axis=1)
if addGroundTruth:
v = np.cumsum(a * dt)
vestibular = vestibular.transpose()
wm_vestibular = np.squeeze(vestibular[:, wm_idx])
sm_vestibular = np.squeeze(vestibular[:, sm_idx])
X = np.arange(0, 10, .1)
fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col',
sharey='row', figsize=(15, 10))
fig.suptitle('Sensory signals')
my_axes[0][0].plot(X, wm_opticflow, color='xkcd:light red', alpha=0.1)
my_axes[0][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][0].plot(t, -v, color='xkcd:red')
if addaverages:
my_axes[0][0].plot(X, np.average(wm_opticflow, axis=1),
color='xkcd:red', alpha=1)
my_axes[0][0].set_title('optic-flow in world-motion condition')
my_axes[0][0].set_ylabel('velocity signal [$m/s$]')
my_axes[0][1].plot(X, sm_opticflow, color='xkcd:azure', alpha=0.1)
my_axes[0][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][1].plot(t, -v, color='xkcd:blue')
if addaverages:
my_axes[0][1].plot(X, np.average(sm_opticflow, axis=1),
color='xkcd:blue', alpha=1)
my_axes[0][1].set_title('optic-flow in self-motion condition')
my_axes[1][0].plot(X, wm_vestibular, color='xkcd:light red', alpha=0.1)
my_axes[1][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addaverages:
my_axes[1][0].plot(X, np.average(wm_vestibular, axis=1),
color='xkcd:red', alpha=1)
my_axes[1][0].set_title('vestibular signal in world-motion condition')
if addGroundTruth:
my_axes[1][0].plot(t, np.zeros(100), color='xkcd:red')
my_axes[1][0].set_xlabel('time [s]')
if integrateVestibular:
my_axes[1][0].set_ylabel('velocity signal [$m/s$]')
else:
my_axes[1][0].set_ylabel('acceleration signal [$m/s^2$]')
my_axes[1][1].plot(X, sm_vestibular, color='xkcd:azure', alpha=0.1)
my_axes[1][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[1][1].plot(t, v, color='xkcd:blue')
if addaverages:
my_axes[1][1].plot(X, np.average(sm_vestibular, axis=1),
color='xkcd:blue', alpha=1)
my_axes[1][1].set_title('vestibular signal in self-motion condition')
my_axes[1][1].set_xlabel('time [s]')
if returnaxes:
return my_axes
else:
plt.show()
def my_threshold_solution(selfmotion_vel_est, threshold):
is_move = (selfmotion_vel_est > threshold)
return is_move
def my_moving_threshold(selfmotion_vel_est, thresholds):
pselfmove_nomove = np.empty(thresholds.shape)
pselfmove_move = np.empty(thresholds.shape)
prop_correct = np.empty(thresholds.shape)
pselfmove_nomove[:] = np.NaN
pselfmove_move[:] = np.NaN
prop_correct[:] = np.NaN
for thr_i, threshold in enumerate(thresholds):
# run my_threshold that the students will write:
try:
is_move = my_threshold(selfmotion_vel_est, threshold)
except Exception:
is_move = my_threshold_solution(selfmotion_vel_est, threshold)
# store results:
pselfmove_nomove[thr_i] = np.mean(is_move[0:100])
pselfmove_move[thr_i] = np.mean(is_move[100:200])
# calculate the proportion classified correctly:
# (1-pselfmove_nomove) + ()
# Correct rejections:
p_CR = (1 - pselfmove_nomove[thr_i])
# correct detections:
p_D = pselfmove_move[thr_i]
# this is corrected for proportion of trials in each condition:
prop_correct[thr_i] = (p_CR + p_D) / 2
return [pselfmove_nomove, pselfmove_move, prop_correct]
def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):
plt.figure(figsize=(12, 8))
plt.title('threshold effects')
plt.plot([min(thresholds), max(thresholds)], [0, 0], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [0.5, 0.5], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [1, 1], ':',
color='xkcd:black')
plt.plot(thresholds, world_prop, label='world motion condition')
plt.plot(thresholds, self_prop, label='self motion condition')
plt.plot(thresholds, prop_correct, color='xkcd:purple',
label='correct classification')
plt.xlabel('threshold')
plt.ylabel('proportion correct or classified as self motion')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_predictions_data(judgments, predictions):
# conditions = np.concatenate((np.abs(judgments[:, 1]),
# np.abs(judgments[:, 2])))
# veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
# velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
# self:
# conditions_self = np.abs(judgments[:, 1])
veljudgmnt_self = judgments[:, 3]
velpredict_self = predictions[:, 3]
# world:
# conditions_world = np.abs(judgments[:, 2])
veljudgmnt_world = judgments[:, 4]
velpredict_world = predictions[:, 4]
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row',
figsize=(12, 5))
ax1.scatter(veljudgmnt_self, velpredict_self, alpha=0.2)
ax1.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax1.set_title('self-motion judgments')
ax1.set_xlabel('observed')
ax1.set_ylabel('predicted')
ax2.scatter(veljudgmnt_world, velpredict_world, alpha=0.2)
ax2.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax2.set_title('world-motion judgments')
ax2.set_xlabel('observed')
ax2.set_ylabel('predicted')
plt.show()
# @title Data retrieval
import os
fname="W1D2_data.npz"
if not os.path.exists(fname):
!wget https://osf.io/c5xyf/download -O $fname
filez = np.load(file=fname, allow_pickle=True)
judgments = filez['judgments']
opticflow = filez['opticflow']
vestibular = filez['vestibular']
###Output
_____no_output_____
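###Markdown
As an optional sanity check (not part of the tutorial), the cell below prints the shapes of the arrays loaded from OSF; based on the data-generation code earlier in this document, each should have one row per trial, with 5 columns for the judgments and 100 time samples for the two sensory signals.
###Code
# optional sanity check on the downloaded data
print('judgments: ', judgments.shape)
print('opticflow: ', opticflow.shape)
print('vestibular:', vestibular.shape)
###Output
_____no_output_____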
###Markdown
--- Section 6: Model planning
###Code
# @title Video 6: Planning
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1nC4y1h7yL', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://youtube.com/watch?v=dRTOFFigxa0
###Markdown
**Goal:** Identify the key components of the model and how they work together.
Our goal all along has been to model our perceptual estimates of sensory data. Now that we have some idea of what we want to do, we need to line up the components of the model: what are the input and output? Which computations are done, and in what order? Our model will have:
* **inputs**: the values the system has available - this can be broken down into _data:_ the sensory signals, and _parameters:_ the threshold and the window sizes for filtering
* **outputs**: these are the predictions our model will make - for this tutorial these are the perceptual judgments on each trial in m/s, just like the judgments participants made.
* **model functions**: A set of functions that perform the hypothesized computations.

We will define a set of functions that take our data and some parameters as input, can run our model, and output a prediction for the judgment data.
**Recap of what we've accomplished so far:**
To model perceptual estimates from our sensory data, we need to
1. _integrate:_ to ensure sensory information is in appropriate units
2. _filter:_ to reduce noise and set timescale
3. _threshold:_ to model detection

This will be done with these operations:
1. _integrate:_ `np.cumsum()`
2. _filter:_ `my_moving_window()`
3. _threshold:_ `if` with a comparison (`>` or `<`) and `else`

**_Planning our model:_**
We will now start putting all the pieces together. Normally you would sketch this yourself, but here is an overview of how the functions comprising the model are going to work:
Below is the main function with a detailed explanation of what the function is supposed to do, exactly what input is expected, and what output will be generated. The model is not complete, so it only returns nans (**n**ot-**a**-**n**umber) for now. However, this outlines how most model code works: it gets some measured data (the sensory signals) and a set of parameters as input, and as output returns a prediction on other measured data (the velocity judgments). The goal of this function is to define the top level of a simulation model which:
* receives all input
* loops through the cases
* calls functions that compute predicted values for each case
* outputs the predictions

**Main model function**
###Code
def my_train_illusion_model(sensorydata, params):
"""
Generate output predictions of perceived self-motion and perceived
world-motion velocity based on input visual and vestibular signals.
Args:
sensorydata: (dict) dictionary with two named entries:
opticflow: (numpy.ndarray of float) NxM array with N trials on rows
and M visual signal samples in columns
vestibular: (numpy.ndarray of float) NxM array with N trials on rows
and M vestibular signal samples in columns
params: (dict) dictionary with named entries:
threshold: (float) vestibular threshold for credit assignment
      filterwindows: (list of int) determines the strength of filtering for
        the vestibular and visual signals, respectively
integrate (bool): whether to integrate the vestibular signals, will
be set to True if absent
FUN (function): function used in the filter, will be set to
np.mean if absent
samplingrate (float): the number of samples per second in the
sensory data, will be set to 10 if absent
Returns:
dict with two entries:
selfmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived self motion
worldmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived world motion
"""
# sanitize input a little
if not('FUN' in params.keys()):
params['FUN'] = np.mean
if not('integrate' in params.keys()):
params['integrate'] = True
if not('samplingrate' in params.keys()):
params['samplingrate'] = 10
# number of trials:
ntrials = sensorydata['opticflow'].shape[0]
# set up variables to collect output
selfmotion = np.empty(ntrials)
worldmotion = np.empty(ntrials)
# loop through trials?
for trialN in range(ntrials):
# these are our sensory variables (inputs)
vis = sensorydata['opticflow'][trialN, :]
ves = sensorydata['vestibular'][trialN, :]
# generate output predicted perception:
selfmotion[trialN],\
worldmotion[trialN] = my_perceived_motion(vis=vis, ves=ves,
params=params)
return {'selfmotion': selfmotion, 'worldmotion': worldmotion}
# here is a mock version of my_perceived motion.
# so you can test my_train_illusion_model()
def my_perceived_motion(*args, **kwargs):
return [np.nan, np.nan]
# let's look at the predictions we generated for two sample trials (0,100)
# we should get a 1x2 vector of self-motion prediction and another
# for world-motion
sensorydata={'opticflow': opticflow[[0, 100], :0],
'vestibular': vestibular[[0, 100], :0]}
params={'threshold': 0.33, 'filterwindows': [100, 50]}
my_train_illusion_model(sensorydata=sensorydata, params=params)
###Output
_____no_output_____
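###Markdown
The planning text above lists three operations (integrate, filter, threshold); here is a small optional sketch (not one of the tutorial exercises) applying them to one vestibular trial, where the window size of 10 samples and the 0.5 threshold are arbitrary illustration values.
###Code
# optional illustration of integrate -> filter -> threshold on one vestibular trial
ves_trial = vestibular[0, :]
vel_est = np.cumsum(ves_trial * (1 / 10))                        # integrate at 10 samples/s
vel_smooth = my_moving_window(vel_est, window=10, FUN=np.mean)   # filter
final_value = vel_smooth[-1]                                     # take the last value
is_selfmotion = final_value > 0.5                                # threshold (arbitrary example)
print(f"final value: {final_value:.3f}, above example threshold: {is_selfmotion}")
###Output
_____no_output_____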
###Markdown
We've also completed the `my_perceived_motion()` function for you below. Follow this example to complete the template for `my_selfmotion()` and `my_worldmotion()`. Write out the inputs and outputs, and the steps required to calculate the outputs from the inputs.
**Perceived motion function**
###Code
# Full perceived motion function
def my_perceived_motion(vis, ves, params):
"""
Takes sensory data and parameters and returns predicted percepts
Args:
vis (numpy.ndarray) : 1xM array of optic flow velocity data
ves (numpy.ndarray) : 1xM array of vestibular acceleration data
params : (dict) dictionary with named entries:
see my_train_illusion_model() for details
Returns:
[list of floats] : prediction for perceived self-motion based on
vestibular data, and prediction for perceived
world-motion based on perceived self-motion and
visual data
"""
# estimate self motion based on only the vestibular data
# pass on the parameters
selfmotion = my_selfmotion(ves=ves, params=params)
# estimate the world motion, based on the selfmotion and visual data
# pass on the parameters as well
worldmotion = my_worldmotion(vis=vis, selfmotion=selfmotion, params=params)
return [selfmotion, worldmotion]
###Output
_____no_output_____
###Markdown
TD 6.1: Formulate purpose of the self motion function
Now we plan out the purpose of one of the remaining functions. **Only name input arguments, write help text and comments, _no code_.** The goal of this exercise is to make writing the code (in Micro-tutorial 7) much easier. Based on our work before the break, you should now be able to answer these questions for each function:
* what (sensory) data is necessary?
* what parameters does the function need, if any?
* which operations will be performed on the input?
* what is the output?

The number of arguments is correct.
**Template calculate self motion**
Name the _input arguments_, complete the _help text_, and add _comments_ in the function below to describe the inputs, the outputs, and operations using elements from the recap at the top of this notebook (or from micro-tutorials 3 and 4 in part 1), in order to plan out the function. Do not write any code.
###Code
def my_selfmotion(arg1, arg2):
"""
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
Returns:
what output does the function generate?
Any further description?
"""
# what operations do we perform on the input?
# use the elements from micro-tutorials 3, 4, and 5
# 1.
# 2.
# 3.
# 4.
# what output should this function produce?
return output
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_90e4d753.py)
**Template calculate world motion**
We have drafted the help text and written comments in the function below that describe the inputs, the outputs, and operations we use to estimate world motion, based on the recap above.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
  Estimates world motion based on the visual signal, the estimate of self motion, and the parameters
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
# 1. running window function
# 2. take final value
# 3. subtract selfmotion from value
# return final value
return output
###Output
_____no_output_____
###Markdown
--- Section 7: Model implementation
###Code
# @title Video 7: Implementation
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV18Z4y1u7yB', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://youtube.com/watch?v=DMSIt7t-LO8
###Markdown
**Goal:** We write the components of the model in actual code.
For the operations we picked, there are functions ready to use:
* integration: `np.cumsum(data, axis=1)` (axis=1: per trial and over samples)
* filtering: `my_moving_window(data, window)` (window: int, default 3)
* take last `selfmotion` value as our estimate
* threshold: `if` (value > thr): ... `else`: ...

TD 7.1: Write code to estimate self motion
Use the operations to finish writing the function that will calculate an estimate of self motion. Fill in the descriptive list of items with actual operations. Use the function for estimating world motion below, which we've filled in for you!
**Template finish self motion function**
###Code
# Self motion function
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict) : dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float) : an estimate of self motion in m/s
"""
# uncomment the code below and fill in with your code
# 1. integrate vestibular signal
# ves = np.cumsum(ves * (1 / params['samplingrate']))
# 2. running window function to accumulate evidence:
# selfmotion = ... YOUR CODE HERE
# 3. take final value of self-motion vector as our estimate
# selfmotion = ... YOUR CODE HERE
  # 4. compare to threshold. Hint: the threshold is stored in
  # params['threshold']
  # if selfmotion is higher than the threshold: return the value
  # if it's lower than the threshold: return 0
  # if ... YOUR CODE HERE
  # selfmotion = ... YOUR CODE HERE
  # Comment out the line below when your function is ready
  raise NotImplementedError("Student exercise: estimate my_selfmotion")
return output
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_53312239.py) Interactive Demo: Unit testingTesting whether the functions you wrote do what they are supposed to do is important, and is known as 'unit testing'. Here we will simplify this for the `my_selfmotion()` function by letting you vary the threshold and window size with sliders and see what the distribution of self-motion estimates looks like.
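Before playing with the slider, here is a minimal added unit-test sketch (assuming the `my_moving_window()` helper from the setup is available); it only checks properties that should hold regardless of the exact window bookkeeping.
###Code
# Added sanity checks (illustrative only) for the my_moving_window helper
import numpy as np

constant = np.ones(10)
smoothed = my_moving_window(constant, window=3, FUN=np.mean)
# the moving mean of a constant signal is that same constant everywhere
assert np.allclose(smoothed, constant)
# the output keeps the same length as the input
assert smoothed.size == constant.size
print("my_moving_window basic checks passed")
###Output
_____no_output_____
###Markdown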
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
def refresh(threshold=0, windowsize=100):
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
selfmotion_estimates = np.empty(200)
# get the estimates for each trial:
for trial_number in range(200):
ves = vestibular[trial_number, :]
selfmotion_estimates[trial_number] = my_selfmotion(ves, params)
plt.figure()
plt.hist(selfmotion_estimates, bins=20)
plt.xlabel('self-motion estimate')
plt.ylabel('frequency')
plt.show()
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
**Estimate world motion**We have completed the `my_worldmotion()` function for you below.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
  Estimates world motion based on the visual signal and the self-motion estimate
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
# running average to smooth/accumulate sensory evidence
visualmotion = my_moving_window(vis, window=params['filterwindows'][1],
FUN=np.mean)
# take final value
visualmotion = visualmotion[-1]
# subtract selfmotion from value
worldmotion = visualmotion + selfmotion
# return final value
return worldmotion
###Output
_____no_output_____
###Markdown
--- Section 8: Model completion
###Code
# @title Video 8: Completion
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1YK411H7oW', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://youtube.com/watch?v=EM-G8YYdrDg
###Markdown
**Goal:** Make sure the model can speak to the hypothesis. Eliminate all the parameters that do not speak to the hypothesis.Now that we have a working model, we can keep improving it, but at some point we need to decide that it is finished. Once we have a model that displays the properties of a system we are interested in, it should be possible to say something about our hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question. Here that means that our model should have illusory perception, and perhaps make similar judgments to those of the participants, but not much more.To test this, we will run the model, store the output and plot the model's perceived self motion over perceived world motion, like we did with the actual perceptual judgments (it even uses the same plotting function). TD 8.1: See if the model produces illusions
###Code
# @markdown Run to plot model predictions of motion estimates
# prepare to run the model again:
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.6, 'filterwindows': [100, 50], 'FUN': np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
my_plot_percepts(datasets={'predictions': predictions}, plotconditions=True)
###Output
_____no_output_____
###Markdown
**Questions:*** How does the distribution of data points compare to the plot in TD 1.2 or in TD 7.1?* Did you expect to see this?* Where do the model's predicted judgments for each of the two conditions fall?* How does this compare to the behavioral data?However, the main observation should be that **there are illusions**: the blue and red data points are mixed in each of the two clusters of data points. This means the model can help us understand the phenomenon. --- Section 9: Model evaluation
###Code
# @title Video 9: Evaluation
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1uK411H7EK', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://youtube.com/watch?v=bWLFyobm4Rk
###Markdown
**Goal:** Once we have finished the model, we need a description of how good it is. The question and goals we set in micro-tutorial 1 and 4 help here. There are multiple ways to evaluate a model. Aside from the obvious fact that we want to get insight into the phenomenon that is not directly accessible without the model, we always want to quantify how well the model agrees with the data.**Quantify model quality with $R^2$**Let's look at how well our model matches the actual judgment data.
###Code
# @markdown Run to plot predictions over data
my_plot_predictions_data(judgments, predictions)
###Output
_____no_output_____
###Markdown
When model predictions are correct, the red points in the figure above should lie along the identity line (a dotted black line here). Points off the identity line represent model prediction errors. While in each plot we see two clusters of dots that are fairly close to the identity line, there are also two clusters that are not. For the trials that those points represent, the model has an illusion while the participants don't or vice versa.We will use a straightforward, quantitative measure of how good the model is: $R^2$ (pronounced: "R-squared"), which can take values between 0 and 1, and expresses how much variance is explained by the relationship between two variables (here the model's predictions and the actual judgments). It is also called [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), and is calculated here as the square of the correlation coefficient (r or $\rho$). Just run the chunk below:
###Code
# @markdown Run to calculate R^2
conditions = np.concatenate((np.abs(judgments[:, 1]), np.abs(judgments[:, 2])))
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(conditions, veljudgmnt)
print(f"conditions -> judgments R^2: {r_value ** 2:0.3f}")
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
###Output
conditions -> judgments R^2: 0.032
predictions -> judgments R^2: 0.256
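###Markdown
As an added cross-check (a hedged sketch with invented numbers, not from the original notebook), the next cell shows that the $R^2$ reported above is simply the squared correlation: the squared $r$ from `stats.linregress` matches the squared Pearson correlation from `np.corrcoef`.
###Code
# Added cross-check: squared correlation from numpy vs. scipy on toy data
import numpy as np
from scipy import stats

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # made-up predictions
y = np.array([0.1, 0.2, 0.6, 0.7, 0.9])    # made-up judgments
r_np = np.corrcoef(x, y)[0, 1]
slope, intercept, r_sp, p_value, std_err = stats.linregress(x, y)
print(f"numpy R^2: {r_np ** 2:0.3f}, scipy R^2: {r_sp ** 2:0.3f}")
###Output
_____no_output_____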
###Markdown
These $R^2$s express how well the experimental conditions explain the participants' judgments and how well the model's predicted judgments explain the participants' judgments.You will learn much more about model fitting, quantitative model evaluation and model comparison tomorrow!Perhaps the $R^2$ values don't seem very impressive, but the judgments produced by the participants are explained better by the model's predictions than by the actual conditions. In other words: in a certain percentage of cases the model tends to have the same illusions as the participants. TD 9.1 Varying the threshold parameter to improve the modelIn the code below, see if you can find a better value for the threshold parameter, to reduce errors in the model's predictions.**Testing thresholds** Interactive Demo: optimizing the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
Varying the parameters this way allows you to increase the model's performance in predicting the actual data as measured by $R^2$. This is called model fitting, and you will learn to do it more rigorously in the coming weeks. TD 9.2: Credit assignment of self motionWhen we look at the figure in **TD 8.1**, we can see that one cluster does seem very close to (1,0), just like in the actual data. The cluster of points at (1,0) comes from the case where we conclude there is no self motion, and then set the self motion to 0. That value of 0 removes a lot of noise from the world-motion estimates, and all noise from the self-motion estimate. In the other case, where there is self motion, we still have a lot of noise (see also micro-tutorial 4).Let's change our `my_selfmotion()` function to return a self motion of 1 when the vestibular signal indicates we are above threshold, and 0 when we are below threshold. Edit the function here. Exercise 1: function for credit assignment of self motion
###Code
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
"""
# integrate signal:
ves = np.cumsum(ves * (1 / params['samplingrate']))
# use running window to accumulate evidence:
selfmotion = my_moving_window(ves, window=params['filterwindows'][0],
FUN=params['FUN'])
# take the final value as our estimate:
selfmotion = selfmotion[-1]
# compare to threshold, set to 0 if lower and else...
if selfmotion < params['threshold']:
selfmotion = 0
###########################################################################
  # Exercise: complete the credit assignment in the else branch, then remove the raise below to test your function
else:
selfmotion = ... #YOUR CODE HERE
raise NotImplementedError("Modify with credit assignment")
###########################################################################
return selfmotion
# Use the updated function to run the model and plot the data
# Uncomment below to test your function
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.33, 'filterwindows': [100, 50], 'FUN': np.mean}
#modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
#my_plot_percepts(datasets={'predictions': predictions}, plotconditions=False)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_51dce10c.py)*Example output:* That looks much better, and closer to the actual data. Let's see if the $R^2$ values have improved. Use the optimal values for the threshold and window size that you found previously. Interactive Demo: evaluating the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
While the model still predicts velocity judgments better than the conditions (i.e. the model predicts illusions in somewhat similar cases), the $R^2$ values are a little worse than those of the simpler model. What's really going on is that the same set of points that were model prediction errors in the previous model are also errors here. All we have done is reduce the spread. **Interpret the model's meaning**Here's what you should have learned from modeling the train illusion: 1. A noisy vestibular acceleration signal can give rise to illusory motion.2. However, disambiguating the optic flow by adding the vestibular signal simply adds a lot of noise. This is not a plausible thing for the brain to do.3. Our other hypothesis - credit assignment - is more qualitatively correct, but our simulations were not able to match the frequency of the illusion on a trial-by-trial basis.We decided that for now we have learned enough, so it's time to write it up. --- Section 10: Model publication!
###Code
# @title Video 10: Publication
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "//player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1M5411e7AG', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://youtube.com/watch?v=zm8x7oegN6Q
###Markdown
Neuromatch Academy: Week 1, Day 2, Tutorial 2 Modeling Practice: Model implementation and evaluation__Content creators:__ Marius 't Hart, Paul Schrater, Gunnar Blohm__Content reviewers:__ Norma Kuhn, Saeed Salehi, Madineh Sarvestani, Spiros Chavlis, Michael Waskom --- Tutorial objectivesWe are investigating a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks: **Framing the question**1. finding a phenomenon and a question to ask about it2. understanding the state of the art3. determining the basic ingredients4. formulating specific, mathematically defined hypotheses**Implementing the model**5. selecting the toolkit6. planning the model7. implementing the model**Model testing**8. completing the model9. testing and evaluating the model**Publishing**10. publishing modelsWe did steps 1-5 in Tutorial 1 and will cover steps 6-10 in Tutorial 2 (this notebook). Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.stats import gamma
from IPython.display import YouTubeVideo
# @title Figure settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# @title Helper functions
def my_moving_window(x, window=3, FUN=np.mean):
"""
Calculates a moving estimate for a signal
Args:
x (numpy.ndarray): a vector array of size N
window (int): size of the window, must be a positive integer
FUN (function): the function to apply to the samples in the window
Returns:
(numpy.ndarray): a vector array of size N, containing the moving
average of x, calculated with a window of size window
There are smarter and faster solutions (e.g. using convolution) but this
function shows what the output really means. This function skips NaNs, and
should not be susceptible to edge effects: it will simply use
all the available samples, which means that close to the edges of the
signal or close to NaNs, the output will just be based on fewer samples. By
default, this function will apply a mean to the samples in the window, but
this can be changed to be a max/min/median or other function that returns a
single numeric value based on a sequence of values.
"""
# if data is a matrix, apply filter to each row:
if len(x.shape) == 2:
output = np.zeros(x.shape)
for rown in range(x.shape[0]):
output[rown, :] = my_moving_window(x[rown, :],
window=window, FUN=FUN)
return output
# make output array of the same size as x:
output = np.zeros(x.size)
# loop through the signal in x
for samp_i in range(x.size):
values = []
# loop through the window:
for wind_i in range(int(1 - window), 1):
if ((samp_i + wind_i) < 0) or (samp_i + wind_i) > (x.size - 1):
# out of range
continue
# sample is in range and not nan, use it:
if not(np.isnan(x[samp_i + wind_i])):
values += [x[samp_i + wind_i]]
# calculate the mean in the window for this point in the output:
output[samp_i] = FUN(values)
return output
def my_plot_percepts(datasets=None, plotconditions=False):
if isinstance(datasets, dict):
# try to plot the datasets
# they should be named...
# 'expectations', 'judgments', 'predictions'
plt.figure(figsize=(8, 8)) # set aspect ratio = 1? not really
plt.ylabel('perceived self motion [m/s]')
plt.xlabel('perceived world motion [m/s]')
plt.title('perceived velocities')
# loop through the entries in datasets
# plot them in the appropriate way
for k in datasets.keys():
if k == 'expectations':
expect = datasets[k]
plt.scatter(expect['world'], expect['self'], marker='*',
color='xkcd:green', label='my expectations')
elif k == 'judgments':
judgments = datasets[k]
for condition in np.unique(judgments[:, 0]):
c_idx = np.where(judgments[:, 0] == condition)[0]
cond_self_motion = judgments[c_idx[0], 1]
cond_world_motion = judgments[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'world-motion condition judgments'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'self-motion condition judgments'
else:
c_label = f"condition [{condition:d}] judgments"
plt.scatter(judgments[c_idx, 3], judgments[c_idx, 4],
label=c_label, alpha=0.2)
elif k == 'predictions':
predictions = datasets[k]
for condition in np.unique(predictions[:, 0]):
c_idx = np.where(predictions[:, 0] == condition)[0]
cond_self_motion = predictions[c_idx[0], 1]
cond_world_motion = predictions[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'predicted world-motion condition'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'predicted self-motion condition'
else:
c_label = f"condition [{condition:d}] prediction"
plt.scatter(predictions[c_idx, 4], predictions[c_idx, 3],
marker='x', label=c_label)
else:
print("datasets keys should be 'hypothesis', \
'judgments' and 'predictions'")
if plotconditions:
# this code is simplified but only works for the dataset we have:
plt.scatter([1], [0], marker='<', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='world-motion stimulus', s=80)
plt.scatter([0], [1], marker='>', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='self-motion stimulus', s=80)
plt.legend(facecolor='xkcd:white')
plt.show()
else:
if datasets is not None:
print('datasets argument should be a dict')
raise TypeError
def my_plot_stimuli(t, a, v):
plt.figure(figsize=(10, 6))
plt.plot(t, a, label='acceleration [$m/s^2$]')
plt.plot(t, v, label='velocity [$m/s$]')
plt.xlabel('time [s]')
plt.ylabel('[motion]')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_motion_signals():
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = np.cumsum(a * dt)
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col',
sharey='row', figsize=(14, 6))
fig.suptitle('Sensory ground truth')
ax1.set_title('world-motion condition')
ax1.plot(t, -v, label='visual [$m/s$]')
ax1.plot(t, np.zeros(a.size), label='vestibular [$m/s^2$]')
ax1.set_xlabel('time [s]')
ax1.set_ylabel('motion')
ax1.legend(facecolor='xkcd:white')
ax2.set_title('self-motion condition')
ax2.plot(t, -v, label='visual [$m/s$]')
ax2.plot(t, a, label='vestibular [$m/s^2$]')
ax2.set_xlabel('time [s]')
ax2.set_ylabel('motion')
ax2.legend(facecolor='xkcd:white')
plt.show()
def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False,
addaverages=False, integrateVestibular=False,
addGroundTruth=False):
if addGroundTruth:
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = a
wm_idx = np.where(judgments[:, 0] == 0)
sm_idx = np.where(judgments[:, 0] == 1)
opticflow = opticflow.transpose()
wm_opticflow = np.squeeze(opticflow[:, wm_idx])
sm_opticflow = np.squeeze(opticflow[:, sm_idx])
if integrateVestibular:
vestibular = np.cumsum(vestibular * .1, axis=1)
if addGroundTruth:
v = np.cumsum(a * dt)
vestibular = vestibular.transpose()
wm_vestibular = np.squeeze(vestibular[:, wm_idx])
sm_vestibular = np.squeeze(vestibular[:, sm_idx])
X = np.arange(0, 10, .1)
fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col',
sharey='row', figsize=(15, 10))
fig.suptitle('Sensory signals')
my_axes[0][0].plot(X, wm_opticflow, color='xkcd:light red', alpha=0.1)
my_axes[0][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][0].plot(t, -v, color='xkcd:red')
if addaverages:
my_axes[0][0].plot(X, np.average(wm_opticflow, axis=1),
color='xkcd:red', alpha=1)
my_axes[0][0].set_title('optic-flow in world-motion condition')
my_axes[0][0].set_ylabel('velocity signal [$m/s$]')
my_axes[0][1].plot(X, sm_opticflow, color='xkcd:azure', alpha=0.1)
my_axes[0][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][1].plot(t, -v, color='xkcd:blue')
if addaverages:
my_axes[0][1].plot(X, np.average(sm_opticflow, axis=1),
color='xkcd:blue', alpha=1)
my_axes[0][1].set_title('optic-flow in self-motion condition')
my_axes[1][0].plot(X, wm_vestibular, color='xkcd:light red', alpha=0.1)
my_axes[1][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addaverages:
my_axes[1][0].plot(X, np.average(wm_vestibular, axis=1),
color='xkcd:red', alpha=1)
my_axes[1][0].set_title('vestibular signal in world-motion condition')
if addGroundTruth:
my_axes[1][0].plot(t, np.zeros(100), color='xkcd:red')
my_axes[1][0].set_xlabel('time [s]')
if integrateVestibular:
my_axes[1][0].set_ylabel('velocity signal [$m/s$]')
else:
my_axes[1][0].set_ylabel('acceleration signal [$m/s^2$]')
my_axes[1][1].plot(X, sm_vestibular, color='xkcd:azure', alpha=0.1)
my_axes[1][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[1][1].plot(t, v, color='xkcd:blue')
if addaverages:
my_axes[1][1].plot(X, np.average(sm_vestibular, axis=1),
color='xkcd:blue', alpha=1)
my_axes[1][1].set_title('vestibular signal in self-motion condition')
my_axes[1][1].set_xlabel('time [s]')
if returnaxes:
return my_axes
else:
plt.show()
def my_threshold_solution(selfmotion_vel_est, threshold):
is_move = (selfmotion_vel_est > threshold)
return is_move
def my_moving_threshold(selfmotion_vel_est, thresholds):
pselfmove_nomove = np.empty(thresholds.shape)
pselfmove_move = np.empty(thresholds.shape)
prop_correct = np.empty(thresholds.shape)
pselfmove_nomove[:] = np.NaN
pselfmove_move[:] = np.NaN
prop_correct[:] = np.NaN
for thr_i, threshold in enumerate(thresholds):
# run my_threshold that the students will write:
try:
is_move = my_threshold(selfmotion_vel_est, threshold)
except Exception:
is_move = my_threshold_solution(selfmotion_vel_est, threshold)
# store results:
pselfmove_nomove[thr_i] = np.mean(is_move[0:100])
pselfmove_move[thr_i] = np.mean(is_move[100:200])
# calculate the proportion classified correctly:
    # ((1 - pselfmove_nomove) + pselfmove_move) / 2
# Correct rejections:
p_CR = (1 - pselfmove_nomove[thr_i])
# correct detections:
p_D = pselfmove_move[thr_i]
# this is corrected for proportion of trials in each condition:
prop_correct[thr_i] = (p_CR + p_D) / 2
return [pselfmove_nomove, pselfmove_move, prop_correct]
def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):
plt.figure(figsize=(12, 8))
plt.title('threshold effects')
plt.plot([min(thresholds), max(thresholds)], [0, 0], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [0.5, 0.5], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [1, 1], ':',
color='xkcd:black')
plt.plot(thresholds, world_prop, label='world motion condition')
plt.plot(thresholds, self_prop, label='self motion condition')
plt.plot(thresholds, prop_correct, color='xkcd:purple',
label='correct classification')
plt.xlabel('threshold')
plt.ylabel('proportion correct or classified as self motion')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_predictions_data(judgments, predictions):
# conditions = np.concatenate((np.abs(judgments[:, 1]),
# np.abs(judgments[:, 2])))
# veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
# velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
# self:
# conditions_self = np.abs(judgments[:, 1])
veljudgmnt_self = judgments[:, 3]
velpredict_self = predictions[:, 3]
# world:
# conditions_world = np.abs(judgments[:, 2])
veljudgmnt_world = judgments[:, 4]
velpredict_world = predictions[:, 4]
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row',
figsize=(12, 5))
ax1.scatter(veljudgmnt_self, velpredict_self, alpha=0.2)
ax1.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax1.set_title('self-motion judgments')
ax1.set_xlabel('observed')
ax1.set_ylabel('predicted')
ax2.scatter(veljudgmnt_world, velpredict_world, alpha=0.2)
ax2.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax2.set_title('world-motion judgments')
ax2.set_xlabel('observed')
ax2.set_ylabel('predicted')
plt.show()
# @title Data retrieval
import os
fname="W1D2_data.npz"
if not os.path.exists(fname):
!wget https://osf.io/c5xyf/download -O $fname
filez = np.load(file=fname, allow_pickle=True)
judgments = filez['judgments']
opticflow = filez['opticflow']
vestibular = filez['vestibular']
###Output
_____no_output_____
###Markdown
--- Section 6: Model planning
###Code
# @title Video 6: Planning
video = YouTubeVideo(id='dRTOFFigxa0', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal:** Identify the key components of the model and how they work together.Our goal all along has been to model our perceptual estimates of sensory data.Now that we have some idea of what we want to do, we need to line up the components of the model: what are the input and output? Which computations are done and in what order? Our model will have:* **inputs**: the values the system has available - this can be broken down into _data:_ the sensory signals, _parameters:_ the threshold and the window sizes for filtering* **outputs**: these are the predictions our model will make - for this tutorial these are the perceptual judgments on each trial in m/s, just like the judgments participants made.* **model functions**: A set of functions that perform the hypothesized computations.We will define a set of functions that take our data and some parameters as input, can run our model, and output a prediction for the judgment data.**Recap of what we've accomplished so far:**To model perceptual estimates from our sensory data, we need to 1. _integrate:_ to ensure the sensory information is in appropriate units2. _filter:_ to reduce noise and set timescale3. _threshold:_ to model detectionThis will be done with these operations:1. _integrate:_ `np.cumsum()`2. _filter:_ `my_moving_window()`3. _threshold:_ `if` with a comparison (`>` or `<`) and `else`**_Planning our model:_**We will now start putting all the pieces together. Normally you would sketch this yourself, but here is an overview of how the functions comprising the model are going to work:Below is the main function with a detailed explanation of what the function is supposed to do, exactly what input is expected, and what output will be generated. The model is not complete, so it only returns nans (**n**ot-**a**-**n**umber) for now. However, this outlines how most model code works: it gets some measured data (the sensory signals) and a set of parameters as input, and as output returns a prediction on other measured data (the velocity judgments). The goal of this function is to define the top level of a simulation model which:* receives all input* loops through the cases* calls functions that compute predicted values for each case* outputs the predictions **Main model function**
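Before looking at the main function, here is a small added illustration (not part of the original plan; the toy signal, time step, and threshold are invented) of the three operations chained on a 2-trial array, using the `my_moving_window()` helper defined above.
###Code
# Added illustration of integrate -> filter -> threshold on a toy 2-trial signal
import numpy as np

toy = np.array([[0.0, 1.0, 1.0, 0.0],
                [0.0, 0.0, 0.0, 0.0]])               # two made-up "trials"
integrated = np.cumsum(toy * 0.1, axis=1)            # 1. integrate (dt = 0.1 assumed)
filtered = my_moving_window(integrated, window=2,    # 2. filter per trial
                            FUN=np.mean)
estimates = filtered[:, -1]                          # take the last sample per trial
detected = np.where(estimates > 0.05, estimates, 0)  # 3. threshold (0.05 assumed)
print(detected)
###Output
_____no_output_____
###Markdown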
###Code
def my_train_illusion_model(sensorydata, params):
"""
Generate output predictions of perceived self-motion and perceived
world-motion velocity based on input visual and vestibular signals.
Args:
sensorydata: (dict) dictionary with two named entries:
opticflow: (numpy.ndarray of float) NxM array with N trials on rows
and M visual signal samples in columns
vestibular: (numpy.ndarray of float) NxM array with N trials on rows
and M vestibular signal samples in columns
params: (dict) dictionary with named entries:
threshold: (float) vestibular threshold for credit assignment
filterwindow: (list of int) determines the strength of filtering for
the visual and vestibular signals, respectively
integrate (bool): whether to integrate the vestibular signals, will
be set to True if absent
FUN (function): function used in the filter, will be set to
np.mean if absent
samplingrate (float): the number of samples per second in the
sensory data, will be set to 10 if absent
Returns:
dict with two entries:
selfmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived self motion
worldmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived world motion
"""
# sanitize input a little
if not('FUN' in params.keys()):
params['FUN'] = np.mean
if not('integrate' in params.keys()):
params['integrate'] = True
if not('samplingrate' in params.keys()):
params['samplingrate'] = 10
# number of trials:
ntrials = sensorydata['opticflow'].shape[0]
# set up variables to collect output
selfmotion = np.empty(ntrials)
worldmotion = np.empty(ntrials)
# loop through trials?
for trialN in range(ntrials):
# these are our sensory variables (inputs)
vis = sensorydata['opticflow'][trialN, :]
ves = sensorydata['vestibular'][trialN, :]
# generate output predicted perception:
selfmotion[trialN],\
worldmotion[trialN] = my_perceived_motion(vis=vis, ves=ves,
params=params)
return {'selfmotion': selfmotion, 'worldmotion': worldmotion}
# here is a mock version of my_perceived motion.
# so you can test my_train_illusion_model()
def my_perceived_motion(*args, **kwargs):
return [np.nan, np.nan]
# let's look at the predictions we generated for two sample trials (0,100)
# we should get a 1x2 vector of self-motion prediction and another
# for world-motion
sensorydata={'opticflow': opticflow[[0, 100], :],
             'vestibular': vestibular[[0, 100], :]}
params={'threshold': 0.33, 'filterwindows': [100, 50]}
my_train_illusion_model(sensorydata=sensorydata, params=params)
###Output
_____no_output_____
###Markdown
We've also completed the `my_perceived_motion()` function for you below. Follow this example to complete the template for `my_selfmotion()` and `my_worldmotion()`. Write out the inputs and outputs, and the steps required to calculate the outputs from the inputs.**Perceived motion function**
###Code
# Full perceived motion function
def my_perceived_motion(vis, ves, params):
"""
Takes sensory data and parameters and returns predicted percepts
Args:
vis (numpy.ndarray) : 1xM array of optic flow velocity data
ves (numpy.ndarray) : 1xM array of vestibular acceleration data
params : (dict) dictionary with named entries:
see my_train_illusion_model() for details
Returns:
[list of floats] : prediction for perceived self-motion based on
vestibular data, and prediction for perceived
world-motion based on perceived self-motion and
visual data
"""
# estimate self motion based on only the vestibular data
# pass on the parameters
selfmotion = my_selfmotion(ves=ves, params=params)
# estimate the world motion, based on the selfmotion and visual data
# pass on the parameters as well
worldmotion = my_worldmotion(vis=vis, selfmotion=selfmotion, params=params)
return [selfmotion, worldmotion]
###Output
_____no_output_____
###Markdown
TD 6.1: Formulate purpose of the self motion functionNow we plan out the purpose of one of the remaining functions. **Only name input arguments, write help text and comments, _no code_.** The goal of this exercise is to make writing the code (in Micro-tutorial 7) much easier. Based on our work before the break, you should now be able to answer these questions for each function:* what (sensory) data is necessary? * what parameters does the function need, if any?* which operations will be performed on the input?* what is the output?The number of arguments is correct. **Template calculate self motion**Name the _input arguments_, complete the _help text_, and add _comments_ in the function below to describe the inputs, the outputs, and operations using elements from the recap at the top of this notebook (or from micro-tutorials 3 and 4 in part 1), in order to plan out the function. Do not write any code.
###Code
def my_selfmotion(arg1, arg2):
"""
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
Returns:
what output does the function generate?
Any further description?
"""
##################################################
# what operations do we perform on the input?
# use the elements from micro-tutorials 3, 4, and 5
# 1.
# 2.
# 3.
# 4.
# what output should this function produce?
##################################################
return output
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_06ea80b7.py) **Template calculate world motion**We have drafted the help text and written comments in the function below that describe the inputs, the outputs, and operations we use to estimate world motion, based on the recap above.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
  Estimates world motion based on the visual signal, the estimate of self motion, and the parameters
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
##################################################
# 1. running window function
# 2. take final value
# 3. subtract selfmotion from value
# return final value
##################################################
return output
###Output
_____no_output_____
###Markdown
--- Section 7: Model implementation
###Code
# @title Video 7: Implementation
video = YouTubeVideo(id='DMSIt7t-LO8', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal:** We write the components of the model in actual code.For the operations we picked, there are functions ready to use:* integration: `np.cumsum(data, axis=1)` (axis=1: per trial and over samples)* filtering: `my_moving_window(data, window)` (window: int, default 3)* take the last `selfmotion` value as our estimate* threshold: a comparison such as `if (value > threshold):` ... `else:` TD 7.1: Write code to estimate self motionUse the operations to finish writing the function that will calculate an estimate of self motion. Fill in the descriptive list of items with actual operations. Use the function for estimating world motion below, which we've filled in for you! Exercise 1: finish self motion function
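Before filling in the exercise below, here is a brief added illustration (with invented numbers, not part of the graded code) of the integrate-and-threshold idea on a tiny made-up vestibular trace, using plain numpy.
###Code
# Added illustration (hypothetical values): integrating and thresholding a toy signal
import numpy as np

toy_ves = np.array([0.0, 0.4, 0.8, 0.4, 0.0])  # made-up acceleration samples
toy_vel = np.cumsum(toy_ves * (1 / 10))        # integrate, assuming 10 samples/s
toy_estimate = toy_vel[-1]                     # last value as the estimate
if toy_estimate < 0.05:                        # invented threshold
  toy_estimate = 0.0
print(toy_estimate)
###Output
_____no_output_____
###Markdown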
###Code
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict) : dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float) : an estimate of self motion in m/s
"""
##################################################
## TODO for students: fill in ... in code below
  # Fill out the function and remove the error below
raise NotImplementedError("Student exercise: estimate my_selfmotion")
##################################################
# 1. integrate vestibular signal:
ves = np.cumsum(ves * (1 / params['samplingrate']))
# 2. running window function to accumulate evidence:
selfmotion = ...
# 3. take final value of self-motion vector as our estimate
selfmotion = ...
  # 4. compare to threshold. Hint: the threshold is stored in
# params['threshold']
# if selfmotion is higher than threshold: return value
# if it's lower than threshold: return 0
if ...:
selfmotion = ...
return selfmotion
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_4c0b8958.py) Interactive Demo: Unit testingTesting whether the functions you wrote do what they are supposed to do is important, and is known as 'unit testing'. Here we will simplify this for the `my_selfmotion()` function by letting you vary the threshold and window size with sliders and see what the distribution of self-motion estimates looks like.
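Before using the slider, here is a minimal added sanity check (assuming the `my_moving_window()` helper from the setup above): for an increasing signal, a trailing moving mean should never exceed the signal itself, and the output length should match the input.
###Code
# Added sanity checks (illustrative only) for the my_moving_window helper
import numpy as np

ramp = np.arange(10, dtype=float)
smoothed = my_moving_window(ramp, window=4, FUN=np.mean)
assert np.all(smoothed <= ramp)    # trailing mean of an increasing signal
assert smoothed.size == ramp.size  # same length as the input
print("moving-window sanity checks passed")
###Output
_____no_output_____
###Markdown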
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
def refresh(threshold=0, windowsize=100):
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
selfmotion_estimates = np.empty(200)
# get the estimates for each trial:
for trial_number in range(200):
ves = vestibular[trial_number, :]
selfmotion_estimates[trial_number] = my_selfmotion(ves, params)
plt.figure()
plt.hist(selfmotion_estimates, bins=20)
plt.xlabel('self-motion estimate')
plt.ylabel('frequency')
plt.show()
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
**Estimate world motion**We have completed the `my_worldmotion()` function for you below.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
  Estimates world motion based on the visual signal and the self-motion estimate
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
# running average to smooth/accumulate sensory evidence
visualmotion = my_moving_window(vis, window=params['filterwindows'][1],
FUN=np.mean)
# take final value
visualmotion = visualmotion[-1]
# subtract selfmotion from value
worldmotion = visualmotion + selfmotion
# return final value
return worldmotion
###Output
_____no_output_____
###Markdown
--- Section 8: Model completion
###Code
# @title Video 8: Completion
video = YouTubeVideo(id='EM-G8YYdrDg', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal:** Make sure the model can speak to the hypothesis. Eliminate all the parameters that do not speak to the hypothesis.Now that we have a working model, we can keep improving it, but at some point we need to decide that it is finished. Once we have a model that displays the properties of a system we are interested in, it should be possible to say something about our hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question. Here that means that our model should have illusory perception, and perhaps make similar judgments to those of the participants, but not much more.To test this, we will run the model, store the output and plot the model's perceived self motion over perceived world motion, like we did with the actual perceptual judgments (it even uses the same plotting function). TD 8.1: See if the model produces illusions
###Code
# @markdown Run to plot model predictions of motion estimates
# prepare to run the model again:
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.6, 'filterwindows': [100, 50], 'FUN': np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
my_plot_percepts(datasets={'predictions': predictions}, plotconditions=True)
###Output
_____no_output_____
###Markdown
**Questions:*** How does the distribution of data points compare to the plot in TD 1.2 or in TD 7.1?* Did you expect to see this?* Where do the model's predicted judgments for each of the two conditions fall?* How does this compare to the behavioral data?However, the main observation should be that **there are illusions**: the blue and red data points are mixed in each of the two clusters of data points. This means the model can help us understand the phenomenon. --- Section 9: Model evaluation
###Code
# @title Video 9: Evaluation
video = YouTubeVideo(id='bWLFyobm4Rk', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
**Goal:** Once we have finished the model, we need a description of how good it is. The question and goals we set in micro-tutorial 1 and 4 help here. There are multiple ways to evaluate a model. Aside from the obvious fact that we want to get insight into the phenomenon that is not directly accessible without the model, we always want to quantify how well the model agrees with the data.**Quantify model quality with $R^2$**Let's look at how well our model matches the actual judgment data.
###Code
# @markdown Run to plot predictions over data
my_plot_predictions_data(judgments, predictions)
###Output
_____no_output_____
###Markdown
When model predictions are correct, the red points in the figure above should lie along the identity line (a dotted black line here). Points off the identity line represent model prediction errors. While in each plot we see two clusters of dots that are fairly close to the identity line, there are also two clusters that are not. For the trials that those points represent, the model has an illusion while the participants don't or vice versa.We will use a straightforward, quantitative measure of how good the model is: $R^2$ (pronounced: "R-squared"), which can take values between 0 and 1, and expresses how much variance is explained by the relationship between two variables (here the model's predictions and the actual judgments). It is also called [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), and is calculated here as the square of the correlation coefficient (r or $\rho$). Just run the chunk below:
###Code
# @markdown Run to calculate R^2
conditions = np.concatenate((np.abs(judgments[:, 1]), np.abs(judgments[:, 2])))
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(conditions, veljudgmnt)
print(f"conditions -> judgments R^2: {r_value ** 2:0.3f}")
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
###Output
_____no_output_____
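###Markdown
As an added aside (a hedged sketch with invented data, not from the original notebook), the next cell demonstrates that the $R^2$ computed above is just the squared Pearson correlation coefficient, whichever library computes it.
###Code
# Added cross-check: squared correlation from numpy vs. scipy on toy data
import numpy as np
from scipy import stats

x = np.array([0.0, 0.25, 0.5, 0.75, 1.0])  # made-up predictions
y = np.array([0.1, 0.2, 0.6, 0.7, 0.9])    # made-up judgments
r_np = np.corrcoef(x, y)[0, 1]
slope, intercept, r_sp, p_value, std_err = stats.linregress(x, y)
print(f"numpy R^2: {r_np ** 2:0.3f}, scipy R^2: {r_sp ** 2:0.3f}")
###Output
_____no_output_____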
###Markdown
These $R^2$s express how well the experimental conditions explain the participants' judgments and how well the model's predicted judgments explain the participants' judgments.You will learn much more about model fitting, quantitative model evaluation and model comparison tomorrow!Perhaps the $R^2$ values don't seem very impressive, but the judgments produced by the participants are explained better by the model's predictions than by the actual conditions. In other words: in a certain percentage of cases the model tends to have the same illusions as the participants. TD 9.1 Varying the threshold parameter to improve the modelIn the code below, see if you can find a better value for the threshold parameter, to reduce errors in the model's predictions.**Testing thresholds** Interactive Demo: optimizing the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
Varying the parameters this way allows you to increase the model's performance in predicting the actual data as measured by $R^2$. This is called model fitting, and you will learn to do it more rigorously in the coming weeks. TD 9.2: Credit assignment of self motionWhen we look at the figure in **TD 8.1**, we can see that one cluster does seem very close to (1,0), just like in the actual data. The cluster of points at (1,0) comes from the case where we conclude there is no self motion, and then set the self motion to 0. That value of 0 removes a lot of noise from the world-motion estimates, and all noise from the self-motion estimate. In the other case, where there is self motion, we still have a lot of noise (see also micro-tutorial 4).Let's change our `my_selfmotion()` function to return a self motion of 1 when the vestibular signal indicates we are above threshold, and 0 when we are below threshold. Edit the function here. Exercise 2: function for credit assignment of self motion
###Code
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
"""
# integrate signal:
ves = np.cumsum(ves * (1 / params['samplingrate']))
# use running window to accumulate evidence:
selfmotion = my_moving_window(ves, window=params['filterwindows'][0],
FUN=params['FUN'])
# take the final value as our estimate:
selfmotion = selfmotion[-1]
###########################################################################
# Exercise: Complete credit assignment. Remove the next line to test your function
raise NotImplementedError("Modify with credit assignment")
###########################################################################
# compare to threshold, set to 0 if lower
if selfmotion < params['threshold']:
selfmotion = 0
else:
selfmotion = ...
return selfmotion
# Use the updated function to run the model and plot the data
# Uncomment below to test your function
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.33, 'filterwindows': [100, 50], 'FUN': np.mean}
# modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# my_plot_percepts(datasets={'predictions': predictions}, plotconditions=False)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_97a9e346.py)*Example output:* That looks much better, and closer to the actual data. Let's see if the $R^2$ values have improved. Use the optimal values for the threshold and window size that you found previously. Interactive Demo: evaluating the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
While the model still predicts velocity judgments better than the conditions (i.e. the model predicts illusions in somewhat similar cases), the $R^2$ values are a little worse than those of the simpler model. What's really going on is that the same set of points that were model prediction errors in the previous model are also errors here. All we have done is reduce the spread. **Interpret the model's meaning**Here's what you should have learned from modeling the train illusion: 1. A noisy vestibular acceleration signal can give rise to illusory motion.2. However, disambiguating the optic flow by adding the vestibular signal simply adds a lot of noise. This is not a plausible thing for the brain to do.3. Our other hypothesis - credit assignment - is more qualitatively correct, but our simulations were not able to match the frequency of the illusion on a trial-by-trial basis.We decided that for now we have learned enough, so it's time to write it up. --- Section 10: Model publication!
###Code
# @title Video 10: Publication
video = YouTubeVideo(id='zm8x7oegN6Q', width=854, height=480, fs=1)
print(f"Video available at https://youtube.com/watch?v={video.id}")
video
###Output
_____no_output_____
###Markdown
Neuromatch Academy: Week 1, Day 2, Tutorial 2 Tutorial objectivesWe are investigating a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks: **Framing the question**1. finding a phenomenon and a question to ask about it2. understanding the state of the art3. determining the basic ingredients4. formulating specific, mathematically defined hypotheses**Implementing the model**5. selecting the toolkit6. planning the model7. implementing the model**Model testing**8. completing the model9. testing and evaluating the model**Publishing**10. publishing modelsWe did steps 1-5 in Tutorial 1 and will cover steps 6-10 in Tutorial 2 (this notebook). Utilities Setup and Convenience FunctionsPlease run the following **3** chunks to have functions and data available.
###Code
#@title Utilities and setup
# set up the environment for this tutorial
import time # import time
import numpy as np # import numpy
import scipy as sp # import scipy
from scipy.stats import gamma # import gamma distribution
import math # import basic math functions
import random # import basic random number generator functions
import matplotlib.pyplot as plt # import matplotlib
from IPython import display
fig_w, fig_h = (12, 8)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
plt.style.use('ggplot')
%matplotlib inline
#%config InlineBackend.figure_format = 'retina'
from scipy.signal import medfilt
#@title Convenience functions: Plotting and Filtering
# define some convenience functions to be used later
def my_moving_window(x, window=3, FUN=np.mean):
'''
Calculates a moving estimate for a signal
Args:
x (numpy.ndarray): a vector array of size N
window (int): size of the window, must be a positive integer
FUN (function): the function to apply to the samples in the window
Returns:
(numpy.ndarray): a vector array of size N, containing the moving average
of x, calculated with a window of size window
There are smarter and faster solutions (e.g. using convolution) but this
function shows what the output really means. This function skips NaNs, and
should not be susceptible to edge effects: it will simply use
all the available samples, which means that close to the edges of the
signal or close to NaNs, the output will just be based on fewer samples. By
default, this function will apply a mean to the samples in the window, but
this can be changed to be a max/min/median or other function that returns a
single numeric value based on a sequence of values.
'''
# if data is a matrix, apply filter to each row:
if len(x.shape) == 2:
output = np.zeros(x.shape)
for rown in range(x.shape[0]):
output[rown,:] = my_moving_window(x[rown,:],window=window,FUN=FUN)
return output
# make output array of the same size as x:
output = np.zeros(x.size)
# loop through the signal in x
for samp_i in range(x.size):
values = []
# loop through the window:
    for wind_i in range(int(1 - window), 1):
if ((samp_i+wind_i) < 0) or (samp_i+wind_i) > (x.size - 1):
# out of range
continue
# sample is in range and not nan, use it:
if not(np.isnan(x[samp_i+wind_i])):
values += [x[samp_i+wind_i]]
# calculate the mean in the window for this point in the output:
output[samp_i] = FUN(values)
return output
def my_plot_percepts(datasets=None, plotconditions=False):
if isinstance(datasets,dict):
# try to plot the datasets
# they should be named...
# 'expectations', 'judgments', 'predictions'
fig = plt.figure(figsize=(8, 8)) # set aspect ratio = 1? not really
plt.ylabel('perceived self motion [m/s]')
plt.xlabel('perceived world motion [m/s]')
plt.title('perceived velocities')
# loop through the entries in datasets
# plot them in the appropriate way
for k in datasets.keys():
if k == 'expectations':
expect = datasets[k]
plt.scatter(expect['world'],expect['self'],marker='*',color='xkcd:green',label='my expectations')
elif k == 'judgments':
judgments = datasets[k]
for condition in np.unique(judgments[:,0]):
c_idx = np.where(judgments[:,0] == condition)[0]
cond_self_motion = judgments[c_idx[0],1]
cond_world_motion = judgments[c_idx[0],2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'world-motion condition judgments'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'self-motion condition judgments'
else:
c_label = 'condition [%d] judgments'%condition
plt.scatter(judgments[c_idx,3],judgments[c_idx,4], label=c_label, alpha=0.2)
elif k == 'predictions':
predictions = datasets[k]
for condition in np.unique(predictions[:,0]):
c_idx = np.where(predictions[:,0] == condition)[0]
cond_self_motion = predictions[c_idx[0],1]
cond_world_motion = predictions[c_idx[0],2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'predicted world-motion condition'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'predicted self-motion condition'
else:
c_label = 'condition [%d] prediction'%condition
plt.scatter(predictions[c_idx,4],predictions[c_idx,3], marker='x', label=c_label)
else:
print("datasets keys should be 'hypothesis', 'judgments' and 'predictions'")
if plotconditions:
# this code is simplified but only works for the dataset we have:
plt.scatter([1],[0],marker='<',facecolor='none',edgecolor='xkcd:black',linewidths=2,label='world-motion stimulus',s=80)
plt.scatter([0],[1],marker='>',facecolor='none',edgecolor='xkcd:black',linewidths=2,label='self-motion stimulus',s=80)
plt.legend(facecolor='xkcd:white')
plt.show()
else:
if datasets is not None:
print('datasets argument should be a dict')
raise TypeError
def my_plot_motion_signals():
dt = 1/10
a = gamma.pdf( np.arange(0,10,dt), 2.5, 0 )
t = np.arange(0,10,dt)
v = np.cumsum(a*dt)
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col', sharey='row', figsize=(14,6))
fig.suptitle('Sensory ground truth')
ax1.set_title('world-motion condition')
ax1.plot(t,-v,label='visual [$m/s$]')
ax1.plot(t,np.zeros(a.size),label='vestibular [$m/s^2$]')
ax1.set_xlabel('time [s]')
ax1.set_ylabel('motion')
ax1.legend(facecolor='xkcd:white')
ax2.set_title('self-motion condition')
ax2.plot(t,-v,label='visual [$m/s$]')
ax2.plot(t,a,label='vestibular [$m/s^2$]')
ax2.set_xlabel('time [s]')
ax2.set_ylabel('motion')
ax2.legend(facecolor='xkcd:white')
plt.show()
def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False, addaverages=False):
wm_idx = np.where(judgments[:,0] == 0)
sm_idx = np.where(judgments[:,0] == 1)
opticflow = opticflow.transpose()
wm_opticflow = np.squeeze(opticflow[:,wm_idx])
sm_opticflow = np.squeeze(opticflow[:,sm_idx])
vestibular = vestibular.transpose()
wm_vestibular = np.squeeze(vestibular[:,wm_idx])
sm_vestibular = np.squeeze(vestibular[:,sm_idx])
X = np.arange(0,10,.1)
fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row', figsize=(15,10))
fig.suptitle('Sensory signals')
my_axes[0][0].plot(X,wm_opticflow, color='xkcd:light red', alpha=0.1)
my_axes[0][0].plot([0,10], [0,0], ':', color='xkcd:black')
if addaverages:
my_axes[0][0].plot(X,np.average(wm_opticflow, axis=1), color='xkcd:red', alpha=1)
my_axes[0][0].set_title('world-motion optic flow')
my_axes[0][0].set_ylabel('[motion]')
my_axes[0][1].plot(X,sm_opticflow, color='xkcd:azure', alpha=0.1)
my_axes[0][1].plot([0,10], [0,0], ':', color='xkcd:black')
if addaverages:
my_axes[0][1].plot(X,np.average(sm_opticflow, axis=1), color='xkcd:blue', alpha=1)
my_axes[0][1].set_title('self-motion optic flow')
my_axes[1][0].plot(X,wm_vestibular, color='xkcd:light red', alpha=0.1)
my_axes[1][0].plot([0,10], [0,0], ':', color='xkcd:black')
if addaverages:
my_axes[1][0].plot(X,np.average(wm_vestibular, axis=1), color='xkcd:red', alpha=1)
my_axes[1][0].set_title('world-motion vestibular signal')
my_axes[1][0].set_xlabel('time [s]')
my_axes[1][0].set_ylabel('[motion]')
my_axes[1][1].plot(X,sm_vestibular, color='xkcd:azure', alpha=0.1)
my_axes[1][1].plot([0,10], [0,0], ':', color='xkcd:black')
if addaverages:
my_axes[1][1].plot(X,np.average(sm_vestibular, axis=1), color='xkcd:blue', alpha=1)
my_axes[1][1].set_title('self-motion vestibular signal')
my_axes[1][1].set_xlabel('time [s]')
if returnaxes:
return my_axes
else:
plt.show()
def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):
plt.figure(figsize=(12,8))
plt.title('threshold effects')
plt.plot([min(thresholds),max(thresholds)],[0,0],':',color='xkcd:black')
plt.plot([min(thresholds),max(thresholds)],[0.5,0.5],':',color='xkcd:black')
plt.plot([min(thresholds),max(thresholds)],[1,1],':',color='xkcd:black')
plt.plot(thresholds, world_prop, label='world motion')
plt.plot(thresholds, self_prop, label='self motion')
plt.plot(thresholds, prop_correct, color='xkcd:purple', label='correct classification')
plt.xlabel('threshold')
plt.ylabel('proportion correct or classified as self motion')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_predictions_data(judgments, predictions):
conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2])))
veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4]))
velpredict = np.concatenate((predictions[:,3],predictions[:,4]))
# self:
conditions_self = np.abs(judgments[:,1])
veljudgmnt_self = judgments[:,3]
velpredict_self = predictions[:,3]
# world:
conditions_world = np.abs(judgments[:,2])
veljudgmnt_world = judgments[:,4]
velpredict_world = predictions[:,4]
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row', figsize=(12,5))
ax1.scatter(veljudgmnt_self,velpredict_self, alpha=0.2)
ax1.plot([0,1],[0,1],':',color='xkcd:black')
ax1.set_title('self-motion judgments')
ax1.set_xlabel('observed')
ax1.set_ylabel('predicted')
ax2.scatter(veljudgmnt_world,velpredict_world, alpha=0.2)
ax2.plot([0,1],[0,1],':',color='xkcd:black')
ax2.set_title('world-motion judgments')
ax2.set_xlabel('observed')
ax2.set_ylabel('predicted')
plt.show()
#@title Data generation code (needs to go on OSF and be deleted here)
def my_simulate_data(repetitions=100, conditions=[(0,-1),(+1,0)] ):
"""
Generate simulated data for this tutorial. You do not need to run this
yourself.
Args:
repetitions: (int) number of repetitions of each condition (default: 100)
conditions: list of 2-tuples of floats, indicating the self velocity and
world velocity in each condition (default: returns data that is
good for exploration: [(0,-1),(+1,0)] but can be flexibly
extended)
The total number of trials used (ntrials) is equal to:
repetitions * len(conditions)
Returns:
dict with three entries:
'judgments': ntrials * 5 matrix
'opticflow': ntrials * 100 matrix
'vestibular': ntrials * 100 matrix
The default settings would result in data where the first 100 trials reflect a
situation where the world (other train) moves in one direction, supposedly
at 1 m/s (perhaps to the left: -1) while the participant does not move at
all (0), and 100 trials from a second condition, where the world does not
move, while the participant moves with 1 m/s in the opposite direction from
where the world is moving in the first condition (+1,0). The optic flow
should be the same, but the vestibular input is not.
"""
# reproducible output
np.random.seed(1937)
# set up some variables:
ntrials = repetitions * len(conditions)
# the following arrays will contain the simulated data:
judgments = np.empty(shape=(ntrials,5))
opticflow = np.empty(shape=(ntrials,100))
vestibular = np.empty(shape=(ntrials,100))
# acceleration:
a = gamma.pdf(np.arange(0,10,.1), 2.5, 0 )
# divide by 10 so that velocity scales from 0 to 1 (m/s)
# max acceleration ~ .308 m/s^2
# not realistic! should be about 1/10 of that
# velocity:
v = np.cumsum(a*.1)
# position: (not necessary)
#x = np.cumsum(v)
#################################
# REMOVE ARBITRARY SCALING & CORRECT NOISE PARAMETERS
vest_amp = 1
optf_amp = 1
# we start at the first trial:
trialN = 0
# we start with only a single velocity, but it should be possible to extend this
for conditionno in range(len(conditions)):
condition = conditions[conditionno]
for repetition in range(repetitions):
#
# generate optic flow signal
OF = v * np.diff(condition) # optic flow: difference between self & world motion
OF = (OF * optf_amp) # fairly large spike range
OF = OF + (np.random.randn(len(OF)) * .1) # adding noise
# generate vestibular signal
VS = a * condition[0] # vestibular signal: only self motion
VS = (VS * vest_amp) # less range
VS = VS + (np.random.randn(len(VS)) * 1.) # acceleration is a smaller signal, what is a good noise level?
# store in matrices, corrected for sign
#opticflow[trialN,:] = OF * -1 if (np.sign(np.diff(condition)) < 0) else OF
#vestibular[trialN,:] = VS * -1 if (np.sign(condition[1]) < 0) else VS
opticflow[trialN,:], vestibular[trialN,:] = OF, VS
#########################################################
# store conditions in judgments matrix:
judgments[trialN,0:3] = [ conditionno, condition[0], condition[1] ]
# vestibular SD: 1.0916052957046194 and 0.9112684509277528
# visual SD: 0.10228834313079663 and 0.10975472557444346
# generate judgments:
if (abs(np.average(np.cumsum(medfilt(VS/vest_amp,5)*.1)[70:90])) < 1):
###########################
# NO self motion detected
###########################
selfmotion_weights = np.array([.01,.01]) # there should be low/no self motion
worldmotion_weights = np.array([.01,.99]) # world motion is dictated by optic flow
else:
########################
# self motion DETECTED
########################
#if (abs(np.average(np.cumsum(medfilt(VS/vest_amp,15)*.1)[70:90]) - np.average(medfilt(OF,15)[70:90])) < 5):
if True:
####################
# explain all self motion by optic flow
selfmotion_weights = np.array([.01,.99]) # there should be lots of self motion, but determined by optic flow
worldmotion_weights = np.array([.01,.01]) # very low world motion?
else:
# we use both optic flow and vestibular info to explain both
selfmotion_weights = np.array([ 1, 0]) # motion, but determined by vestibular signal
worldmotion_weights = np.array([ 1, 1]) # very low world motion?
#
integrated_signals = np.array([
np.average( np.cumsum(medfilt(VS/vest_amp,15))[90:100]*.1 ),
np.average((medfilt(OF/optf_amp,15))[90:100])
])
selfmotion = np.sum(integrated_signals * selfmotion_weights)
worldmotion = np.sum(integrated_signals * worldmotion_weights)
#print(worldmotion,selfmotion)
judgments[trialN,3] = abs(selfmotion)
judgments[trialN,4] = abs(worldmotion)
# this ends the trial loop, so we increment the counter:
trialN += 1
return {'judgments':judgments,
'opticflow':opticflow,
'vestibular':vestibular}
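# Note on the expected output (a summary of the docstring above): with the
# default arguments (100 repetitions x 2 conditions) the simulated dataset
# below has 200 trials, so 'judgments' is 200x5 and 'opticflow' and
# 'vestibular' are both 200x100 (10 s of samples at 10 Hz).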
simulated_data = my_simulate_data()
judgments = simulated_data['judgments']
opticflow = simulated_data['opticflow']
vestibular = simulated_data['vestibular']
###Output
_____no_output_____
###Markdown
Micro-tutorial 6 - planning the model
###Code
#@title Video: Planning the model
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='daEtkVporBE', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=daEtkVporBE
###Markdown
**Goal:** Identify the key components of the model and how they work together.Our goal all along has been to model our perceptual estimates of sensory data.Now that we have some idea of what we want to do, we need to line up the components of the model: what are the input and output? Which computations are done and in what order? The figure below shows a generic model we will use to guide our code construction. Our model will have:* **inputs**: the values the system has available - for this tutorial the sensory information in a trial. We want to gather these together and plan how to process them. * **parameters**: unless we are lucky, our functions will have unknown parameters - we want to identify these and plan for them.* **outputs**: these are the predictions our model will make - for this tutorial these are the perceptual judgments on each trial. Ideally these are directly comparable to our data. * **Model functions**: A set of functions that perform the hypothesized computations.>Using Python (with Numpy and Scipy) we will define a set of functions that take our data and some parameters as input, can run our model, and output a prediction for the judgment data.Recap of what we've accomplished so far:To model perceptual estimates from our sensory data, we need to 1. _integrate_ to ensure sensory information is in appropriate units2. _reduce noise and set timescale_ by filtering3. _threshold_ to model detection Remember the kind of operations we identified:* integration: `np.cumsum()`* filtering: `my_moving_window()`* threshold: `if` with a comparison (`>` or `<`) and `else`We will collect all the components we've developed and design the code by:1. **identifying the key functions** we need2. **sketching the operations** needed in each. **_Planning our model:_**We know what we want the model to do, but we need to plan and organize the model into functions and operations. We're providing a draft of the first function. For each of the two other code chunks, write mostly comments and help text first. This should put into words what role each of the functions plays in the overall model, implementing one of the steps decided above. _______Below is the main function with a detailed explanation of what the function is supposed to do: what input is expected, and what output will be generated. The code is not complete, and only returns nans for now. However, this outlines how most model code works: it gets some measured data (the sensory signals) and a set of parameters as input, and as output returns a prediction on other measured data (the velocity judgments). The goal of this function is to define the top level of a simulation model which:* receives all input* loops through the cases* calls functions that compute predicted values for each case* outputs the predictions **TD 6.1**: Complete main model functionThe function `my_train_illusion_model()` below should call one other function: `my_perceived_motion()`. What input do you think this function should get? **Complete main model function**
###Code
def my_train_illusion_model(sensorydata, params):
'''
Generate output predictions of perceived self-motion and perceived world-motion velocity
based on input visual and vestibular signals.
Args (Input variables passed into function):
sensorydata: (dict) dictionary with two named entries:
opticflow: (numpy.ndarray of float) NxM array with N trials on rows
and M visual signal samples in columns
vestibular: (numpy.ndarray of float) NxM array with N trials on rows
and M vestibular signal samples in columns
params: (dict) dictionary with named entries:
threshold: (float) vestibular threshold for credit assignment
filterwindows: (list of int) determines the strength of filtering for
the visual and vestibular signals, respectively
integrate (bool): whether to integrate the vestibular signals, will
be set to True if absent
FUN (function): function used in the filter, will be set to
np.mean if absent
samplingrate (float): the number of samples per second in the
sensory data, will be set to 10 if absent
Returns:
dict with two entries:
selfmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived self motion
worldmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived world motion
'''
# sanitize input a little
if not('FUN' in params.keys()):
params['FUN'] = np.mean
if not('integrate' in params.keys()):
params['integrate'] = True
if not('samplingrate' in params.keys()):
params['samplingrate'] = 10
# number of trials:
ntrials = sensorydata['opticflow'].shape[0]
# set up variables to collect output
selfmotion = np.empty(ntrials)
worldmotion = np.empty(ntrials)
# loop through trials?
for trialN in range(ntrials):
#these are our sensory variables (inputs)
vis = sensorydata['opticflow'][trialN,:]
ves = sensorydata['vestibular'][trialN,:]
########################################################
# generate output predicted perception:
########################################################
# our inputs are vis, ves, and params
selfmotion[trialN], worldmotion[trialN] = [np.nan, np.nan]
########################################################
# replace above with
# selfmotion[trialN], worldmotion[trialN] = my_perceived_motion( ???, ???, params=params)
# and fill in question marks
########################################################
# comment this out when you've filled in the code above
raise NotImplementedError("Student exercise: generate predictions")
return {'selfmotion':selfmotion, 'worldmotion':worldmotion}
# uncomment the following lines to run the main model function:
## here is a mock version of my_perceived_motion().
## so you can test my_train_illusion_model()
#def my_perceived_motion(*args, **kwargs):
#return np.random.rand(2)
##let's look at the predictions we generated for two sample trials (0,100)
##we should get a 1x2 vector of self-motion prediction and another for world-motion
#sensorydata={'opticflow':opticflow[[0,100],:], 'vestibular':vestibular[[0,100],:]}
#params={'threshold':0.33, 'filterwindows':[100,50]}
#my_train_illusion_model(sensorydata=sensorydata, params=params)
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_685e0a13.py) **TD 6.2**: Draft perceived motion functionsNow we draft a set of functions, the first of which is used in the main model function (see above) and serves to generate perceived velocities. The other two are used in the first one. Only write help text and/or comments, you don't have to write the whole function. Each time ask yourself these questions:* what sensory data is necessary? * what other input does the function need, if any?* which operations are performed on the input?* what is the output?(the number of arguments is correct) **Template perceived motion**
###Code
# fill in the input arguments the function should have:
# write the help text for the function:
def my_perceived_motion(arg1, arg2, arg3):
'''
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
argument 3: explain the format and content of the third argument
Returns:
what output does the function generate?
Any further description?
'''
# structure your code into two functions: "my_selfmotion" and "my_worldmotion"
# write comments outlining the operations to be performed on the inputs by each of these functions
# use the elements from micro-tutorials 3, 4, and 5 (found in W1D2 Tutorial Part 1)
#
#
#
# what kind of output should this function produce?
return output
###Output
_____no_output_____
###Markdown
We've completed the `my_perceived_motion()` function for you below. Follow this example to complete the template for `my_selfmotion()` and `my_worldmotion()`. Write out the inputs and outputs, and the steps required to calculate the outputs from the inputs.**Perceived motion function**
###Code
#Full perceived motion function
def my_perceived_motion(vis, ves, params):
'''
Takes sensory data and parameters and returns predicted percepts
Args:
vis (numpy.ndarray): 1xM array of optic flow velocity data
ves (numpy.ndarray): 1xM array of vestibular acceleration data
params: (dict) dictionary with named entries:
see my_train_illusion_model() for details
Returns:
[list of floats]: prediction for perceived self-motion based on
vestibular data, and prediction for perceived world-motion based on
perceived self-motion and visual data
'''
# estimate self motion based on only the vestibular data
# pass on the parameters
selfmotion = my_selfmotion(ves=ves,
params=params)
# estimate the world motion, based on the selfmotion and visual data
# pass on the parameters as well
worldmotion = my_worldmotion(vis=vis,
selfmotion=selfmotion,
params=params)
return [selfmotion, worldmotion]
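# A minimal usage sketch (commented out because my_selfmotion() and
# my_worldmotion() are only completed later in this notebook; the parameter
# values below are just for illustration):
# example_params = {'threshold': 0.33, 'filterwindows': [100, 50],
# 'FUN': np.mean, 'samplingrate': 10}
# my_perceived_motion(vis=opticflow[0, :], ves=vestibular[0, :],
# params=example_params)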
###Output
_____no_output_____
###Markdown
**Template calculate self motion**Put notes in the function below that describe the inputs, the outputs, and the steps that transform the input into the output, using elements from micro-tutorials 3, 4, and 5.
###Code
def my_selfmotion(arg1, arg2):
'''
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
Returns:
what output does the function generate?
Any further description?
'''
# what operations do we perform on the input?
# use the elements from micro-tutorials 3, 4, and 5
# 1.
# 2.
# 3.
# 4.
# what output should this function produce?
return output
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_181325a9.py) **Template calculate world motion**Put notes in the function below that describe the inputs, the outputs, and the steps that transform the input into the output, using elements from micro-tutorials 3, 4, and 5.
###Code
def my_worldmotion(arg1, arg2, arg3):
'''
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
argument 3: explain the format and content of the third argument
Returns:
what output does the function generate?
Any further description?
'''
# what operations do we perform on the input?
# use the elements from micro-tutorials 3, 4, and 5
# 1.
# 2.
# 3.
# what output should this function produce?
return output
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_8f913582.py) Micro-tutorial 7 - implement model
###Code
#@title Video: implement the model
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='gtSOekY8jkw', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=gtSOekY8jkw
###Markdown
**Goal:** We write the components of the model in actual code.For the operations we picked, there are functions ready to use:* integration: `np.cumsum(data, axis=1)` (axis=1: per trial and over samples)* filtering: `my_moving_window(data, window)` (window: int, default 3)* average: `np.mean(data)`* threshold: if (value > thr): else: **TD 7.1:** Write code to estimate self motionUse the operations to finish writing the function that will calculate an estimate of self motion. Fill in the descriptive list of items with actual operations. Use the function for estimating world-motion below, which we've filled for you!**Template finish self motion function**
###Code
def my_selfmotion(ves, params):
'''
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
'''
###uncomment the code below and fill in with your code
## 1. integrate vestibular signal
#ves = np.cumsum(ves*(1/params['samplingrate']))
## 2. running window function to accumulate evidence:
#selfmotion = YOUR CODE HERE
## 3. take final value of self-motion vector as our estimate
#selfmotion =
## 4. compare to threshold. Hint: the threshold is stored in params['threshold']
## if selfmotion is higher than threshold: return value
## if it's lower than threshold: return 0
#if YOURCODEHERE
#selfmotion = YOURCODEHERE
# comment this out when you've filled in the code above
raise NotImplementedError("Student exercise: estimate my_selfmotion")
return output
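# Toy demonstration of the operations listed above (a sketch on a made-up
# signal, not the exercise solution; the array 'toy' is hypothetical):
# toy = np.array([0.0, 0.1, 0.3, 0.4, 0.3, 0.1]) # pretend acceleration samples
# integrated = np.cumsum(toy * 0.1) # 1. integrate (dt = 0.1 s)
# smoothed = my_moving_window(integrated, window=3) # 2. running window
# estimate = smoothed[-1] if smoothed[-1] > 0.05 else 0 # 3./4. last value, then threshold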
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_3ea16348.py) Estimate world motionWe have completed the `my_worldmotion()` function for you.**World motion function**
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
'''
Estimates world motion based on the optic flow signal and the estimate of self motion
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
'''
# running average to smooth/accumulate sensory evidence
visualmotion = my_moving_window(vis,
window=params['filterwindows'][1],
FUN=np.mean)
# take final value
visualmotion = visualmotion[-1]
# add selfmotion to the visual estimate (the optic flow reflects world motion minus self motion)
worldmotion = visualmotion + selfmotion
# return final value
return worldmotion
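# Example call (a sketch; the parameter values here are only for illustration):
# my_worldmotion(vis=opticflow[0, :], selfmotion=0.0,
# params={'filterwindows': [100, 50]})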
###Output
_____no_output_____
###Markdown
Micro-tutorial 8 - completing the model
###Code
#@title Video: completing the model
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='-NiHSv4xCDs', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=-NiHSv4xCDs
###Markdown
**Goal:** Make sure the model can speak to the hypothesis. Eliminate all the parameters that do not speak to the hypothesis.Now that we have a working model, we can keep improving it, but at some point we need to decide that it is finished. Once we have a model that displays the properties of a system we are interested in, it should be possible to say something about our hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question. Here that means that our model should have illusory perception, and perhaps make similar judgments to those of the participants, but not much more.To test this, we will run the model, store the output and plot the model's perceived self motion over perceived world motion, like we did with the actual perceptual judgments (it even uses the same plotting function). **TD 8.1:** See if the model produces illusions
###Code
#@title Run to plot model predictions of motion estimates
# prepare to run the model again:
data = {'opticflow':opticflow, 'vestibular':vestibular}
params = {'threshold':0.6, 'filterwindows':[100,50], 'FUN':np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:,0:3] = judgments[:,0:3]
predictions[:,3] = modelpredictions['selfmotion']
predictions[:,4] = modelpredictions['worldmotion'] *-1
my_plot_percepts(datasets={'predictions':predictions}, plotconditions=True)
###Output
_____no_output_____
###Markdown
**Questions:*** Why is the data distributed this way? How does it compare to the plot in TD 1.2?* Did you expect to see this?* Where do the model's predicted judgments for each of the two conditions fall?* How does this compare to the behavioral data?However, the main observation should be that **there are illusions**: the blue and red data points are mixed in each of the two sets of data. Does this mean the model can help us understand the phenomenon? Micro-tutorial 9 - testing and evaluating the model
###Code
#@title Video: Background
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='5vnDOxN3M_k', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=5vnDOxN3M_k
###Markdown
**Goal:** Once we have finished the model, we need a description of how good it is. The question and goals we set in micro-tutorial 1 and 4 help here. There are multiple ways to evaluate a model. Aside from the obvious fact that we want to get insight into the phenomenon that is not directly accessible without the model, we always want to quantify how well the model agrees with the data. Quantify model quality with $R^2$Let's look at how well our model matches the actual judgment data.
###Code
#@title Run to plot predictions over data
my_plot_predictions_data(judgments, predictions)
###Output
_____no_output_____
###Markdown
When model predictions are correct, the red points in the figure above should lie along the identity line (a dotted black line here). Points off the identity line represent model prediction errors. While in each plot we see two clusters of dots that are fairly close to the identity line, there are also two clusters that are not. For the trials that those points represent, the model has an illusion while the participants don't or vice versa.We will use a straightforward, quantitative measure of how good the model is: $R^2$ (pronounced: "R-squared"), which can take values between 0 and 1, and expresses how much variance is explained by the relationship between two variables (here the model's predictions and the actual judgments). It is also called [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), and is calculated here as the square of the correlation coefficient (r or $\rho$). Just run the chunk below:
###Code
#@title Run to calculate R^2
conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2])))
veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4]))
velpredict = np.concatenate((predictions[:,3],predictions[:,4]))
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(conditions,veljudgmnt)
print('conditions -> judgments R^2: %0.3f'%( r_value**2 ))
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(veljudgmnt,velpredict)
print('predictions -> judgments R^2: %0.3f'%( r_value**2 ))
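# Equivalent check (a sketch): since R^2 is computed here as the squared
# Pearson correlation, np.corrcoef gives the same number:
# r = np.corrcoef(veljudgmnt, velpredict)[0, 1]
# print('predictions -> judgments R^2: %0.3f'%( r**2 ))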
###Output
conditions -> judgments R^2: 0.032
predictions -> judgments R^2: 0.256
###Markdown
These $R^2$s express how well the experimental conditions explain the participants' judgments and how well the model's predicted judgments explain the participants' judgments.You will learn much more about model fitting, quantitative model evaluation and model comparison tomorrow!Perhaps the $R^2$ values don't seem very impressive, but the judgments produced by the participants are explained by the model's predictions better than by the actual conditions. In other words: the model tends to have the same illusions as the participants. **TD 9.1** Varying the threshold parameter to improve the modelIn the code below, see if you can find a better value for the threshold parameter, to reduce errors in the model's predictions.**Testing thresholds**
###Code
# Testing thresholds
def test_threshold(threshold=0.33):
# prepare to run model
data = {'opticflow':opticflow, 'vestibular':vestibular}
params = {'threshold':threshold, 'filterwindows':[100,50], 'FUN':np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# get predictions in matrix
predictions = np.zeros(judgments.shape)
predictions[:,0:3] = judgments[:,0:3]
predictions[:,3] = modelpredictions['selfmotion']
predictions[:,4] = modelpredictions['worldmotion'] *-1
# get percepts from participants and model
conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2])))
veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4]))
velpredict = np.concatenate((predictions[:,3],predictions[:,4]))
# calculate R2
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(veljudgmnt,velpredict)
print('predictions -> judgments R2: %0.3f'%( r_value**2 ))
test_threshold(threshold=0.5)
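# A possible parameter sweep (a sketch): compare R2 across a range of thresholds
# for thr in np.arange(0.2, 0.9, 0.1):
# test_threshold(threshold=thr)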
###Output
predictions -> judgments R2: 0.267
###Markdown
**TD 9.2:** Credit assignment of self motionWhen we look at the figure in **TD 8.1**, we can see that one cluster sits very close to (1,0), just like in the actual data. The cluster of points at (1,0) is from the case where we conclude there is no self motion, and then set the self motion to 0. That value of 0 removes a lot of noise from the world-motion estimates, and all noise from the self-motion estimate. In the other case, where there is self motion, we still have a lot of noise (see also micro-tutorial 4).Let's change our `my_selfmotion()` function to return a self motion of 1 when the vestibular signal indicates we are above threshold, and 0 when we are below threshold. Edit the function here.**Template function for credit assignment of self motion**
###Code
# Template binary self-motion estimates
def my_selfmotion(ves, params):
'''
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
'''
# integrate signal:
ves = np.cumsum(ves*(1/params['samplingrate']))
# use running window to accumulate evidence:
selfmotion = my_moving_window(ves,
window=params['filterwindows'][0],
FUN=params['FUN'])
## take the final value as our estimate:
selfmotion = selfmotion[-1]
##########################################
# this last part will have to be changed
# compare to threshold, set to 0 if lower and else...
if selfmotion < params['threshold']:
selfmotion = 0
#uncomment the lines below and fill in with your code
#else:
#YOUR CODE HERE
# comment this out when you've filled in the code above
raise NotImplementedError("Student exercise: modify with credit assignment")
return selfmotion
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_90571e21.py) The function you just wrote will be used when we run the model again below.
###Code
#@title Run model credit assigment of self motion
# prepare to run the model again:
data = {'opticflow':opticflow, 'vestibular':vestibular}
params = {'threshold':0.33, 'filterwindows':[100,50], 'FUN':np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# now process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:,0:3] = judgments[:,0:3]
predictions[:,3] = modelpredictions['selfmotion']
predictions[:,4] = modelpredictions['worldmotion'] *-1
my_plot_percepts(datasets={'predictions':predictions}, plotconditions=False)
###Output
_____no_output_____
###Markdown
That looks much better, and closer to the actual data. Let's see if the $R^2$ values have improved:
###Code
#@title Run to calculate R^2 for model with self motion credit assignment
conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2])))
veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4]))
velpredict = np.concatenate((predictions[:,3],predictions[:,4]))
my_plot_predictions_data(judgments, predictions)
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(conditions,veljudgmnt)
print('conditions -> judgments R2: %0.3f'%( r_value**2 ))
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(velpredict,veljudgmnt)
print('predictions -> judgments R2: %0.3f'%( r_value**2 ))
###Output
_____no_output_____
###Markdown
While the model still predicts velocity judgments better than the conditions (i.e. the model predicts illusions in somewhat similar cases), the $R^2$ values are actually worse than those of the simpler model. What's really going on is that the same set of points that were model prediction errors in the previous model are also errors here. All we have done is reduce the spread. Interpret the model's meaningHere's what you should have learned: 1. A noisy vestibular acceleration signal can give rise to illusory motion.2. However, disambiguating the optic flow by adding the vestibular signal simply adds a lot of noise. This is not a plausible thing for the brain to do.3. Our other hypothesis - credit assignment - is more qualitatively correct, but our simulations were not able to match the frequency of the illusion on a trial-by-trial basis._It's always possible to refine our models to improve the fits._There are many ways to try to do this. A few examples: we could implement a full sensory cue integration model, perhaps with Kalman filters (Week 2, Day 3), or we could add prior knowledge (at what time do the trains depart?). However, we decided that for now we have learned enough, so it's time to write it up. Micro-tutorial 10 - publishing the model
###Code
#@title Video: Background
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='kf4aauCr5vA', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=kf4aauCr5vA
###Markdown
Neuromatch Academy: Week 1, Day 2, Tutorial 2 Modeling Practice: Model implementation and evaluation__Content creators:__ Marius 't Hart, Paul Schrater, Gunnar Blohm__Content reviewers:__ Norma Kuhn, Saeed Salehi, Madineh Sarvestani, Spiros Chavlis, Michael Waskom --- Tutorial objectivesWe are investigating a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks: **Framing the question**1. finding a phenomenon and a question to ask about it2. understanding the state of the art3. determining the basic ingredients4. formulating specific, mathematically defined hypotheses**Implementing the model**5. selecting the toolkit6. planning the model7. implementing the model**Model testing**8. completing the model9. testing and evaluating the model**Publishing**10. publishing modelsWe did steps 1-5 in Tutorial 1 and will cover steps 6-10 in Tutorial 2 (this notebook). Setup
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
from scipy.stats import gamma
from IPython.display import YouTubeVideo
# @title Figure settings
import ipywidgets as widgets
%config InlineBackend.figure_format = 'retina'
plt.style.use("/share/dataset/COMMON/nma.mplstyle.txt")
# @title Helper functions
def my_moving_window(x, window=3, FUN=np.mean):
"""
Calculates a moving estimate for a signal
Args:
x (numpy.ndarray): a vector array of size N
window (int): size of the window, must be a positive integer
FUN (function): the function to apply to the samples in the window
Returns:
(numpy.ndarray): a vector array of size N, containing the moving
average of x, calculated with a window of size window
There are smarter and faster solutions (e.g. using convolution) but this
function shows what the output really means. This function skips NaNs, and
should not be susceptible to edge effects: it will simply use
all the available samples, which means that close to the edges of the
signal or close to NaNs, the output will just be based on fewer samples. By
default, this function will apply a mean to the samples in the window, but
this can be changed to be a max/min/median or other function that returns a
single numeric value based on a sequence of values.
"""
# if data is a matrix, apply filter to each row:
if len(x.shape) == 2:
output = np.zeros(x.shape)
for rown in range(x.shape[0]):
output[rown, :] = my_moving_window(x[rown, :],
window=window, FUN=FUN)
return output
# make output array of the same size as x:
output = np.zeros(x.size)
# loop through the signal in x
for samp_i in range(x.size):
values = []
# loop through the window:
for wind_i in range(int(1 - window), 1):
if ((samp_i + wind_i) < 0) or (samp_i + wind_i) > (x.size - 1):
# out of range
continue
# sample is in range and not nan, use it:
if not(np.isnan(x[samp_i + wind_i])):
values += [x[samp_i + wind_i]]
# calculate the mean in the window for this point in the output:
output[samp_i] = FUN(values)
return output
def my_plot_percepts(datasets=None, plotconditions=False):
if isinstance(datasets, dict):
# try to plot the datasets
# they should be named...
# 'expectations', 'judgments', 'predictions'
plt.figure(figsize=(8, 8)) # set aspect ratio = 1? not really
plt.ylabel('perceived self motion [m/s]')
plt.xlabel('perceived world motion [m/s]')
plt.title('perceived velocities')
# loop through the entries in datasets
# plot them in the appropriate way
for k in datasets.keys():
if k == 'expectations':
expect = datasets[k]
plt.scatter(expect['world'], expect['self'], marker='*',
color='xkcd:green', label='my expectations')
elif k == 'judgments':
judgments = datasets[k]
for condition in np.unique(judgments[:, 0]):
c_idx = np.where(judgments[:, 0] == condition)[0]
cond_self_motion = judgments[c_idx[0], 1]
cond_world_motion = judgments[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'world-motion condition judgments'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'self-motion condition judgments'
else:
c_label = f"condition [{condition:d}] judgments"
plt.scatter(judgments[c_idx, 3], judgments[c_idx, 4],
label=c_label, alpha=0.2)
elif k == 'predictions':
predictions = datasets[k]
for condition in np.unique(predictions[:, 0]):
c_idx = np.where(predictions[:, 0] == condition)[0]
cond_self_motion = predictions[c_idx[0], 1]
cond_world_motion = predictions[c_idx[0], 2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'predicted world-motion condition'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'predicted self-motion condition'
else:
c_label = f"condition [{condition:d}] prediction"
plt.scatter(predictions[c_idx, 4], predictions[c_idx, 3],
marker='x', label=c_label)
else:
print("datasets keys should be 'hypothesis', \
'judgments' and 'predictions'")
if plotconditions:
# this code is simplified but only works for the dataset we have:
plt.scatter([1], [0], marker='<', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='world-motion stimulus', s=80)
plt.scatter([0], [1], marker='>', facecolor='none',
edgecolor='xkcd:black', linewidths=2,
label='self-motion stimulus', s=80)
plt.legend(facecolor='xkcd:white')
plt.show()
else:
if datasets is not None:
print('datasets argument should be a dict')
raise TypeError
def my_plot_stimuli(t, a, v):
plt.figure(figsize=(10, 6))
plt.plot(t, a, label='acceleration [$m/s^2$]')
plt.plot(t, v, label='velocity [$m/s$]')
plt.xlabel('time [s]')
plt.ylabel('[motion]')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_motion_signals():
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = np.cumsum(a * dt)
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col',
sharey='row', figsize=(14, 6))
fig.suptitle('Sensory ground truth')
ax1.set_title('world-motion condition')
ax1.plot(t, -v, label='visual [$m/s$]')
ax1.plot(t, np.zeros(a.size), label='vestibular [$m/s^2$]')
ax1.set_xlabel('time [s]')
ax1.set_ylabel('motion')
ax1.legend(facecolor='xkcd:white')
ax2.set_title('self-motion condition')
ax2.plot(t, -v, label='visual [$m/s$]')
ax2.plot(t, a, label='vestibular [$m/s^2$]')
ax2.set_xlabel('time [s]')
ax2.set_ylabel('motion')
ax2.legend(facecolor='xkcd:white')
plt.show()
def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False,
addaverages=False, integrateVestibular=False,
addGroundTruth=False):
if addGroundTruth:
dt = 1 / 10
a = gamma.pdf(np.arange(0, 10, dt), 2.5, 0)
t = np.arange(0, 10, dt)
v = a
wm_idx = np.where(judgments[:, 0] == 0)
sm_idx = np.where(judgments[:, 0] == 1)
opticflow = opticflow.transpose()
wm_opticflow = np.squeeze(opticflow[:, wm_idx])
sm_opticflow = np.squeeze(opticflow[:, sm_idx])
if integrateVestibular:
vestibular = np.cumsum(vestibular * .1, axis=1)
if addGroundTruth:
v = np.cumsum(a * dt)
vestibular = vestibular.transpose()
wm_vestibular = np.squeeze(vestibular[:, wm_idx])
sm_vestibular = np.squeeze(vestibular[:, sm_idx])
X = np.arange(0, 10, .1)
fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col',
sharey='row', figsize=(15, 10))
fig.suptitle('Sensory signals')
my_axes[0][0].plot(X, wm_opticflow, color='xkcd:light red', alpha=0.1)
my_axes[0][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][0].plot(t, -v, color='xkcd:red')
if addaverages:
my_axes[0][0].plot(X, np.average(wm_opticflow, axis=1),
color='xkcd:red', alpha=1)
my_axes[0][0].set_title('optic-flow in world-motion condition')
my_axes[0][0].set_ylabel('velocity signal [$m/s$]')
my_axes[0][1].plot(X, sm_opticflow, color='xkcd:azure', alpha=0.1)
my_axes[0][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[0][1].plot(t, -v, color='xkcd:blue')
if addaverages:
my_axes[0][1].plot(X, np.average(sm_opticflow, axis=1),
color='xkcd:blue', alpha=1)
my_axes[0][1].set_title('optic-flow in self-motion condition')
my_axes[1][0].plot(X, wm_vestibular, color='xkcd:light red', alpha=0.1)
my_axes[1][0].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addaverages:
my_axes[1][0].plot(X, np.average(wm_vestibular, axis=1),
color='xkcd:red', alpha=1)
my_axes[1][0].set_title('vestibular signal in world-motion condition')
if addGroundTruth:
my_axes[1][0].plot(t, np.zeros(100), color='xkcd:red')
my_axes[1][0].set_xlabel('time [s]')
if integrateVestibular:
my_axes[1][0].set_ylabel('velocity signal [$m/s$]')
else:
my_axes[1][0].set_ylabel('acceleration signal [$m/s^2$]')
my_axes[1][1].plot(X, sm_vestibular, color='xkcd:azure', alpha=0.1)
my_axes[1][1].plot([0, 10], [0, 0], ':', color='xkcd:black')
if addGroundTruth:
my_axes[1][1].plot(t, v, color='xkcd:blue')
if addaverages:
my_axes[1][1].plot(X, np.average(sm_vestibular, axis=1),
color='xkcd:blue', alpha=1)
my_axes[1][1].set_title('vestibular signal in self-motion condition')
my_axes[1][1].set_xlabel('time [s]')
if returnaxes:
return my_axes
else:
plt.show()
def my_threshold_solution(selfmotion_vel_est, threshold):
is_move = (selfmotion_vel_est > threshold)
return is_move
def my_moving_threshold(selfmotion_vel_est, thresholds):
pselfmove_nomove = np.empty(thresholds.shape)
pselfmove_move = np.empty(thresholds.shape)
prop_correct = np.empty(thresholds.shape)
pselfmove_nomove[:] = np.NaN
pselfmove_move[:] = np.NaN
prop_correct[:] = np.NaN
for thr_i, threshold in enumerate(thresholds):
# run my_threshold that the students will write:
try:
is_move = my_threshold(selfmotion_vel_est, threshold)
except Exception:
is_move = my_threshold_solution(selfmotion_vel_est, threshold)
# store results:
pselfmove_nomove[thr_i] = np.mean(is_move[0:100])
pselfmove_move[thr_i] = np.mean(is_move[100:200])
# calculate the proportion classified correctly:
# (1-pselfmove_nomove) + ()
# Correct rejections:
p_CR = (1 - pselfmove_nomove[thr_i])
# correct detections:
p_D = pselfmove_move[thr_i]
# this is corrected for proportion of trials in each condition:
prop_correct[thr_i] = (p_CR + p_D) / 2
return [pselfmove_nomove, pselfmove_move, prop_correct]
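# Usage sketch (assumes a hypothetical vector 'v_est' of self-motion velocity
# estimates with the world-motion trials in rows 0-99 and the self-motion
# trials in rows 100-199, as in the rest of this notebook):
# thresholds = np.arange(0, 1.01, 0.01)
# nomove, move, correct = my_moving_threshold(v_est, thresholds)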
def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):
plt.figure(figsize=(12, 8))
plt.title('threshold effects')
plt.plot([min(thresholds), max(thresholds)], [0, 0], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [0.5, 0.5], ':',
color='xkcd:black')
plt.plot([min(thresholds), max(thresholds)], [1, 1], ':',
color='xkcd:black')
plt.plot(thresholds, world_prop, label='world motion condition')
plt.plot(thresholds, self_prop, label='self motion condition')
plt.plot(thresholds, prop_correct, color='xkcd:purple',
label='correct classification')
plt.xlabel('threshold')
plt.ylabel('proportion correct or classified as self motion')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_predictions_data(judgments, predictions):
# conditions = np.concatenate((np.abs(judgments[:, 1]),
# np.abs(judgments[:, 2])))
# veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
# velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
# self:
# conditions_self = np.abs(judgments[:, 1])
veljudgmnt_self = judgments[:, 3]
velpredict_self = predictions[:, 3]
# world:
# conditions_world = np.abs(judgments[:, 2])
veljudgmnt_world = judgments[:, 4]
velpredict_world = predictions[:, 4]
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row',
figsize=(12, 5))
ax1.scatter(veljudgmnt_self, velpredict_self, alpha=0.2)
ax1.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax1.set_title('self-motion judgments')
ax1.set_xlabel('observed')
ax1.set_ylabel('predicted')
ax2.scatter(veljudgmnt_world, velpredict_world, alpha=0.2)
ax2.plot([0, 1], [0, 1], ':', color='xkcd:black')
ax2.set_title('world-motion judgments')
ax2.set_xlabel('observed')
ax2.set_ylabel('predicted')
plt.show()
# @title Data retrieval
import os
fname="/share/dataset/W1D2/W1D2_data.npz"
# https://lib.tls.moe/file/OneDrive_CN/SummerSchool/dataset/W1D2/W1D2_data.npz
filez = np.load(file=fname, allow_pickle=True)
judgments = filez['judgments']
opticflow = filez['opticflow']
vestibular = filez['vestibular']
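# Quick sanity check (a sketch; the exact shapes depend on the data file):
# print(judgments.shape, opticflow.shape, vestibular.shape)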
###Output
_____no_output_____
###Markdown
--- Section 6: Model planning
###Code
# @title Video 6: Planning
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1nC4y1h7yL', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1nC4y1h7yL
###Markdown
**Goal:** Identify the key components of the model and how they work together.Our goal all along has been to model our perceptual estimates of sensory data.Now that we have some idea of what we want to do, we need to line up the components of the model: what are the input and output? Which computations are done and in what order? Our model will have:* **inputs**: the values the system has available - this can be broken down into _data:_ the sensory signals, _parameters:_ the threshold and the window sizes for filtering* **outputs**: these are the predictions our model will make - for this tutorial these are the perceptual judgments on each trial in m/s, just like the judgments participants made.* **model functions**: A set of functions that perform the hypothesized computations.We will define a set of functions that take our data and some parameters as input, can run our model, and output a prediction for the judgment data.**Recap of what we've accomplished so far:**To model perceptual estimates from our sensory data, we need to 1. _integrate:_ to ensure sensory information is in appropriate units2. _filter:_ to reduce noise and set timescale3. _threshold:_ to model detectionThis will be done with these operations:1. _integrate:_ `np.cumsum()`2. _filter:_ `my_moving_window()`3. _threshold:_ `if` with a comparison (`>` or `<`) and `else`**_Planning our model:_**We will now start putting all the pieces together. Normally you would sketch this yourself, but here is an overview of how the functions comprising the model are going to work:Below is the main function with a detailed explanation of what the function is supposed to do, exactly what input is expected, and what output will be generated. The model is not complete, so it only returns nans (**n**ot-**a**-**n**umber) for now. However, this outlines how most model code works: it gets some measured data (the sensory signals) and a set of parameters as input, and as output returns a prediction on other measured data (the velocity judgments). The goal of this function is to define the top level of a simulation model which:* receives all input* loops through the cases* calls functions that compute predicted values for each case* outputs the predictions **Main model function**
###Code
def my_train_illusion_model(sensorydata, params):
"""
Generate output predictions of perceived self-motion and perceived
world-motion velocity based on input visual and vestibular signals.
Args:
sensorydata: (dict) dictionary with two named entries:
opticflow: (numpy.ndarray of float) NxM array with N trials on rows
and M visual signal samples in columns
vestibular: (numpy.ndarray of float) NxM array with N trials on rows
and M vestibular signal samples in columns
params: (dict) dictionary with named entries:
threshold: (float) vestibular threshold for credit assignment
filterwindows: (list of int) determines the strength of filtering for
the visual and vestibular signals, respectively
integrate (bool): whether to integrate the vestibular signals, will
be set to True if absent
FUN (function): function used in the filter, will be set to
np.mean if absent
samplingrate (float): the number of samples per second in the
sensory data, will be set to 10 if absent
Returns:
dict with two entries:
selfmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived self motion
worldmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived world motion
"""
# sanitize input a little
if not('FUN' in params.keys()):
params['FUN'] = np.mean
if not('integrate' in params.keys()):
params['integrate'] = True
if not('samplingrate' in params.keys()):
params['samplingrate'] = 10
# number of trials:
ntrials = sensorydata['opticflow'].shape[0]
# set up variables to collect output
selfmotion = np.empty(ntrials)
worldmotion = np.empty(ntrials)
# loop through trials?
for trialN in range(ntrials):
# these are our sensory variables (inputs)
vis = sensorydata['opticflow'][trialN, :]
ves = sensorydata['vestibular'][trialN, :]
# generate output predicted perception:
selfmotion[trialN],\
worldmotion[trialN] = my_perceived_motion(vis=vis, ves=ves,
params=params)
return {'selfmotion': selfmotion, 'worldmotion': worldmotion}
# here is a mock version of my_perceived_motion().
# so you can test my_train_illusion_model()
def my_perceived_motion(*args, **kwargs):
return [np.nan, np.nan]
# let's look at the predictions we generated for two sample trials (0,100)
# we should get a 1x2 vector of self-motion prediction and another
# for world-motion
sensorydata={'opticflow': opticflow[[0, 100], :],
'vestibular': vestibular[[0, 100], :]}
params={'threshold': 0.33, 'filterwindows': [100, 50]}
my_train_illusion_model(sensorydata=sensorydata, params=params)
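# With the mock my_perceived_motion() above, this call returns a dict of NaNs:
# {'selfmotion': array([nan, nan]), 'worldmotion': array([nan, nan])}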
###Output
_____no_output_____
###Markdown
We've also completed the `my_perceived_motion()` function for you below. Follow this example to complete the template for `my_selfmotion()` and `my_worldmotion()`. Write out the inputs and outputs, and the steps required to calculate the outputs from the inputs.**Perceived motion function**
###Code
# Full perceived motion function
def my_perceived_motion(vis, ves, params):
"""
Takes sensory data and parameters and returns predicted percepts
Args:
vis (numpy.ndarray) : 1xM array of optic flow velocity data
ves (numpy.ndarray) : 1xM array of vestibular acceleration data
params : (dict) dictionary with named entries:
see my_train_illusion_model() for details
Returns:
[list of floats] : prediction for perceived self-motion based on
vestibular data, and prediction for perceived
world-motion based on perceived self-motion and
visual data
"""
# estimate self motion based on only the vestibular data
# pass on the parameters
selfmotion = my_selfmotion(ves=ves, params=params)
# estimate the world motion, based on the selfmotion and visual data
# pass on the parameters as well
worldmotion = my_worldmotion(vis=vis, selfmotion=selfmotion, params=params)
return [selfmotion, worldmotion]
###Output
_____no_output_____
###Markdown
TD 6.1: Formulate purpose of the self motion functionNow we plan out the purpose of one of the remaining functions. **Only name input arguments, write help text and comments, _no code_.** The goal of this exercise is to make writing the code (in Micro-tutorial 7) much easier. Based on our work before the break, you should now be able to answer these questions for each function:* what (sensory) data is necessary? * what parameters does the function need, if any?* which operations will be performed on the input?* what is the output?The number of arguments is correct. **Template calculate self motion**Name the _input arguments_, complete the _help text_, and add _comments_ in the function below to describe the inputs, the outputs, and operations using elements from the recap at the top of this notebook (or from micro-tutorials 3 and 4 in part 1), in order to plan out the function. Do not write any code.
###Code
def my_selfmotion(arg1, arg2):
"""
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
Returns:
what output does the function generate?
Any further description?
"""
# what operations do we perform on the input?
# use the elements from micro-tutorials 3, 4, and 5
# 1.
# 2.
# 3.
# 4.
# what output should this function produce?
return output
###Output
_____no_output_____
###Markdown
[*Click for solution*](https://github.com/erlichlab/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_90e4d753.py) **Template calculate world motion**We have drafted the help text and written comments in the function below that describe the inputs, the outputs, and operations we use to estimate world motion, based on the recap above.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
  Estimates world motion based on the visual signal, the estimate of self motion, and the model parameters
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
# 1. running window function
# 2. take final value
# 3. subtract selfmotion from value
# return final value
return output
###Output
_____no_output_____
###Markdown
--- Section 7: Model implementation
###Code
# @title Video 7: Implementation
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV18Z4y1u7yB', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV18Z4y1u7yB
###Markdown
**Goal:** We write the components of the model in actual code.For the operations we picked, there are functions ready to use:* integration: `np.cumsum(data, axis=1)` (axis=1: per trial and over samples)* filtering: `my_moving_window(data, window)` (window: int, default 3)* take the last `selfmotion` value as our estimate* threshold: `if value > threshold: ... else: ...` TD 7.1: Write code to estimate self motionUse these operations to finish writing the function that will calculate an estimate of self motion. Fill in the descriptive list of items with actual operations. Use the function for estimating world motion below, which we've filled in for you! Exercise 1: finish self motion function
###Code
# Self motion function
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict) : dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float) : an estimate of self motion in m/s
"""
# uncomment the code below and fill in with your code
# 1. integrate vestibular signal
# ves = np.cumsum(ves * (1 / params['samplingrate']))
# 2. running window function to accumulate evidence:
# selfmotion = ... YOUR CODE HERE
# 3. take final value of self-motion vector as our estimate
# selfmotion = ... YOUR CODE HERE
  # 4. compare to threshold. Hint: the threshold is stored in
# params['threshold']
# if selfmotion is higher than threshold: return value
# if it's lower than threshold: return 0
# if YOURCODEHERE
  # selfmotion = YOUR CODE HERE
# Comment this line when your function is ready
  raise NotImplementedError("Student exercise: estimate my_selfmotion")
return output
###Output
_____no_output_____
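###Markdown
Before peeking at the solution, here is one way the four steps can fit together (a sketch only: it assumes the notebook's `my_moving_window()` helper and the parameter names used by `my_train_illusion_model()`, and it is named `my_selfmotion_sketch` so it does not overwrite your own attempt).
###Code
# A possible completed version of the four steps above (sketch, separate name)
def my_selfmotion_sketch(ves, params):
  """Sketch of a completed self-motion estimate (same steps as the exercise)."""
  # 1. integrate the vestibular acceleration signal to get a velocity signal
  ves = np.cumsum(ves * (1 / params['samplingrate']))
  # 2. running window function to accumulate evidence
  selfmotion = my_moving_window(ves, window=params['filterwindows'][0],
                                FUN=params['FUN'])
  # 3. take the final value of the self-motion vector as our estimate
  selfmotion = selfmotion[-1]
  # 4. compare to the threshold: keep the value if above it, otherwise report 0
  if selfmotion < params['threshold']:
    selfmotion = 0
  return selfmotion
###Output
_____no_output_____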
###Markdown
[*Click for solution*](https://github.com/erlichlab/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_53312239.py) Interactive Demo: Unit testingTesting whether the functions you wrote do what they are supposed to do is important, and is known as 'unit testing'. Here we will simplify this for the `my_selfmotion()` function by letting you vary the threshold and window size with sliders and seeing what the distribution of self-motion estimates looks like.
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
def refresh(threshold=0, windowsize=100):
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
selfmotion_estimates = np.empty(200)
# get the estimates for each trial:
for trial_number in range(200):
ves = vestibular[trial_number, :]
selfmotion_estimates[trial_number] = my_selfmotion(ves, params)
plt.figure()
plt.hist(selfmotion_estimates, bins=20)
plt.xlabel('self-motion estimate')
plt.ylabel('frequency')
plt.show()
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
**Estimate world motion**We have completed the `my_worldmotion()` function for you below.
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
"""
  Estimates world motion based on the optic flow signal and the estimate of self motion
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
"""
# running average to smooth/accumulate sensory evidence
visualmotion = my_moving_window(vis, window=params['filterwindows'][1],
FUN=np.mean)
# take final value
visualmotion = visualmotion[-1]
  # combine the self-motion estimate with the visual motion (optic flow is measured relative to the observer)
worldmotion = visualmotion + selfmotion
# return final value
return worldmotion
###Output
_____no_output_____
###Markdown
--- Section 8: Model completion
###Code
# @title Video 8: Completion
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1YK411H7oW', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1YK411H7oW
###Markdown
**Goal:** Make sure the model can speak to the hypothesis. Eliminate all the parameters that do not speak to the hypothesis.Now that we have a working model, we can keep improving it, but at some point we need to decide that it is finished. Once we have a model that displays the properties of a system we are interested in, it should be possible to say something about our hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question. Here that means that our model should have illusory perception, and perhaps make similar judgments to those of the participants, but not much more.To test this, we will run the model, store the output and plot the models' perceived self motion over perceived world motion, like we did with the actual perceptual judgments (it even uses the same plotting function). TD 8.1: See if the model produces illusions
###Code
# @markdown Run to plot model predictions of motion estimates
# prepare to run the model again:
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.6, 'filterwindows': [100, 50], 'FUN': np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
my_plot_percepts(datasets={'predictions': predictions}, plotconditions=True)
###Output
_____no_output_____
###Markdown
**Questions:*** How does the distribution of data points compare to the plot in TD 1.2 or in TD 7.1?* Did you expect to see this?* Where do the model's predicted judgments for each of the two conditions fall?* How does this compare to the behavioral data?However, the main observation should be that **there are illusions**: the blue and red data points are mixed in each of the two clusters of data points. This means the model can help us understand the phenomenon. --- Section 9: Model evaluation
###Code
# @title Video 9: Evaluation
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1uK411H7EK', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1uK411H7EK
###Markdown
**Goal:** Once we have finished the model, we need a description of how good it is. The question and goals we set in micro-tutorial 1 and 4 help here. There are multiple ways to evaluate a model. Aside from the obvious fact that we want to get insight into the phenomenon that is not directly accessible without the model, we always want to quantify how well the model agrees with the data.**Quantify model quality with $R^2$**Let's look at how well our model matches the actual judgment data.
###Code
# @markdown Run to plot predictions over data
my_plot_predictions_data(judgments, predictions)
###Output
_____no_output_____
###Markdown
When model predictions are correct, the red points in the figure above should lie along the identity line (a dotted black line here). Points off the identity line represent model prediction errors. While in each plot we see two clusters of dots that are fairly close to the identity line, there are also two clusters that are not. For the trials that those points represent, the model has an illusion while the participants don't or vice versa.We will use a straightforward, quantitative measure of how good the model is: $R^2$ (pronounced: "R-squared"), which can take values between 0 and 1, and expresses how much variance is explained by the relationship between two variables (here the model's predictions and the actual judgments). It is also called [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), and is calculated here as the square of the correlation coefficient (r or $\rho$). Just run the chunk below:
###Code
# @markdown Run to calculate R^2
conditions = np.concatenate((np.abs(judgments[:, 1]), np.abs(judgments[:, 2])))
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(conditions, veljudgmnt)
print(f"conditions -> judgments R^2: {r_value ** 2:0.3f}")
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
###Output
conditions -> judgments R^2: 0.032
predictions -> judgments R^2: 0.256
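###Markdown
Because $R^2$ is computed here as the square of the correlation coefficient, an equivalent check (a small sketch that reuses the `veljudgmnt` and `velpredict` arrays from the cell above) is:
###Code
# R^2 as the squared Pearson correlation coefficient (should match the value above)
r = np.corrcoef(veljudgmnt, velpredict)[0, 1]
print(f"predictions -> judgments R^2 (via np.corrcoef): {r ** 2:0.3f}")
###Output
_____no_output_____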
###Markdown
These $R^2$s express how well the experimental conditions explain the participants' judgments and how well the model's predicted judgments explain the participants' judgments.You will learn much more about model fitting, quantitative model evaluation and model comparison tomorrow!Perhaps the $R^2$ values don't seem very impressive, but the judgments produced by the participants are explained by the model's predictions better than by the actual conditions. In other words: in a certain percentage of cases the model tends to have the same illusions as the participants. TD 9.1 Varying the threshold parameter to improve the modelIn the code below, see if you can find a better value for the threshold parameter to reduce errors in the model's predictions.**Testing thresholds** Interactive Demo: optimizing the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R^2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
Varying the parameters this way allows you to increase the model's performance in predicting the actual data as measured by $R^2$. This is called model fitting, and will be done better in the coming weeks. TD 9.2: Credit assignment of self motionWhen we look at the figure in **TD 8.1**, we can see one cluster does seem very close to (1,0), just like in the actual data. The cluster of points at (1,0) comes from the case where we conclude there is no self motion, and then set the self motion to 0. That value of 0 removes a lot of noise from the world-motion estimates, and all noise from the self-motion estimate. In the other case, where there is self motion, we still have a lot of noise (see also micro-tutorial 4).Let's change our `my_selfmotion()` function to return a self motion of 1 when the vestibular signal indicates we are above threshold, and 0 when we are below threshold. Edit the function here. Exercise 2: function for credit assignment of self motion
###Code
def my_selfmotion(ves, params):
"""
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
"""
# integrate signal:
ves = np.cumsum(ves * (1 / params['samplingrate']))
# use running window to accumulate evidence:
selfmotion = my_moving_window(ves, window=params['filterwindows'][0],
FUN=params['FUN'])
# take the final value as our estimate:
selfmotion = selfmotion[-1]
# compare to threshold, set to 0 if lower and else...
if selfmotion < params['threshold']:
selfmotion = 0
###########################################################################
# Exercise: Complete credit assignment. Remove the next line to test your function
else:
selfmotion = ... #YOUR CODE HERE
raise NotImplementedError("Modify with credit assignment")
###########################################################################
return selfmotion
# Use the updated function to run the model and plot the data
# Uncomment below to test your function
data = {'opticflow': opticflow, 'vestibular': vestibular}
params = {'threshold': 0.33, 'filterwindows': [100, 50], 'FUN': np.mean}
#modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
#my_plot_percepts(datasets={'predictions': predictions}, plotconditions=False)
###Output
_____no_output_____
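###Markdown
For reference, the description above only requires replacing the estimated magnitude with a fixed value of 1 whenever the threshold is exceeded. A sketch of that version (kept under a separate name so it does not overwrite your exercise):
###Code
def my_selfmotion_credit_sketch(ves, params):
  """Sketch: self motion is reported as 0 (below threshold) or 1 (above threshold)."""
  ves = np.cumsum(ves * (1 / params['samplingrate']))
  selfmotion = my_moving_window(ves, window=params['filterwindows'][0],
                                FUN=params['FUN'])
  selfmotion = selfmotion[-1]
  # credit assignment: all-or-none attribution of the motion to the self
  selfmotion = 0 if selfmotion < params['threshold'] else 1
  return selfmotion
###Output
_____no_output_____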
###Markdown
[*Click for solution*](https://github.com/erlichlab/course-content/tree/master//tutorials/W1D2_ModelingPractice/solutions/W1D2_Tutorial2_Solution_51dce10c.py)*Example output:* That looks much better, and closer to the actual data. Let's see if the $R^2$ values have improved. Use the optimal values for the threshold and window size that you found previously. Interactive Demo: evaluating the model
###Code
#@title
#@markdown Make sure you execute this cell to enable the widget!
data = {'opticflow': opticflow, 'vestibular': vestibular}
def refresh(threshold=0, windowsize=100):
# set parameters according to sliders:
params = {'samplingrate': 10, 'FUN': np.mean}
params['filterwindows'] = [windowsize, 50]
params['threshold'] = threshold
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
predictions = np.zeros(judgments.shape)
predictions[:, 0:3] = judgments[:, 0:3]
predictions[:, 3] = modelpredictions['selfmotion']
predictions[:, 4] = modelpredictions['worldmotion'] * -1
# plot the predictions:
my_plot_predictions_data(judgments, predictions)
# calculate R2
veljudgmnt = np.concatenate((judgments[:, 3], judgments[:, 4]))
velpredict = np.concatenate((predictions[:, 3], predictions[:, 4]))
slope, intercept, r_value,\
p_value, std_err = stats.linregress(veljudgmnt, velpredict)
print(f"predictions -> judgments R2: {r_value ** 2:0.3f}")
_ = widgets.interact(refresh, threshold=(-1, 2, .01), windowsize=(1, 100, 1))
###Output
_____no_output_____
###Markdown
While the model still predicts velocity judgments better than the conditions (i.e. the model predicts illusions in somewhat similar cases), the $R^2$ values are a little worse than those of the simpler model. What's really going on is that the same set of points that were model prediction errors in the previous model are also errors here. All we have done is reduce the spread. **Interpret the model's meaning**Here's what you should have learned from modeling the train illusion: 1. A noisy vestibular acceleration signal can give rise to illusory motion.2. However, disambiguating the optic flow by adding the vestibular signal simply adds a lot of noise. This is not a plausible thing for the brain to do.3. Our other hypothesis - credit assignment - is more qualitatively correct, but our simulations were not able to match the frequency of the illusion on a trial-by-trial basis.We decided that for now we have learned enough, so it's time to write it up. --- Section 10: Model publication!
###Code
# @title Video 10: Publication
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id='BV1M5411e7AG', width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
video
###Output
Video available at https://www.bilibili.com/video/BV1M5411e7AG
|
Prace_domowe/Praca_domowa2/Grupa2/MorgenPawel/Praca_domowa_2.ipynb | ###Markdown
Homework 2: Introduction to Machine Learning. Paweł Morgen
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import sklearn
import category_encoders as cat_enc
data = pd.read_csv("allegro-api-transactions.csv")
# data.head()
###Output
_____no_output_____
###Markdown
1. Encoding categorical variables
###Code
print('Ilość kategorii zmiennej it_location:', data.loc[:,'it_location'].unique().shape[0])
te=cat_enc.target_encoder.TargetEncoder(data)
encoded=te.fit_transform(data.loc[:,'it_location'],data.loc[:,'price'])
data['it_location_target_encoded'] = encoded
###Output
Ilość kategorii zmiennej it_location: 10056
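###Markdown
To make the target encoding more concrete, here is a toy illustration (a sketch with made-up city names and prices): each category is replaced by a smoothed mean of the target variable within that category.
###Code
# toy example of target encoding: categories become (smoothed) per-category target means
import pandas as pd
import category_encoders as ce

toy = pd.DataFrame({'city': ['A', 'A', 'B', 'B', 'C'],
                    'price': [10.0, 20.0, 30.0, 50.0, 40.0]})
toy_encoder = ce.TargetEncoder()
print(toy_encoder.fit_transform(toy[['city']], toy['price']))
###Output
_____no_output_____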
###Markdown
Advantages of target encoding over one-hot encoding: * More efficient use of memory (here we have more than 10,000 unique values; one-hot encoding would require more than 10,000 additional binary columns) * It carries some information about the relationship between the encoded variable and the target variable, which can improve model performance. Encoding the *main_category* variable: besides one-hot, we will use the *leave one out* method (`LeaveOneOutEncoder`) and the James-Stein method (`JamesSteinEncoder`). Note: I also tried hashing (`HashingEncoder`), but without success (it looked as if the program fell into an infinite loop).
###Code
print('Ilość kategorii zmiennej main_category:', data.loc[:,'main_category'].unique().shape[0])
from category_encoders import LeaveOneOutEncoder, JamesSteinEncoder
from sklearn.preprocessing import OneHotEncoder
one_hot_encoder = OneHotEncoder()
one_hot_encoded = one_hot_encoder.fit_transform(data.loc[:,'main_category'].to_numpy().reshape(-1,1))
james_stein_encoder = JamesSteinEncoder()
james_stein_encoded = james_stein_encoder.fit_transform(data.loc[:,'main_category'],
data.loc[:,'price'])
leave_1_encoder = LeaveOneOutEncoder(data)
leave_1_encoded = leave_1_encoder.fit_transform(data.loc[:,'main_category'],
data.loc[:,'price'])
data['main_one_hot_encoded'] = one_hot_encoded
data['main_leave_1_encoded'] = leave_1_encoded
data['main_js_encoded'] = james_stein_encoded
data.loc[:,['main_category','main_one_hot_encoded','main_leave_1_encoded','main_js_encoded']].head()
###Output
Ilość kategorii zmiennej main_category: 27
###Markdown
2. Imputing missing values
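Before the full comparison, here is a minimal sketch of what `KNNImputer` does: a missing value is replaced by the average of that feature over the k nearest rows (here k=2) in feature space; the array below is made up for illustration.
###Code
# toy example: the NaN in the second row is filled using its 2 nearest rows
import numpy as np
from sklearn.impute import KNNImputer

toy_X = np.array([[1.0, 2.0],
                  [2.0, np.nan],
                  [3.0, 6.0],
                  [8.0, 8.0]])
print(KNNImputer(n_neighbors=2).fit_transform(toy_X))
###Output
_____no_output_____
###Markdown
Now the full comparison between `KNNImputer` and a median-based `SimpleImputer` on the Allegro data: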
###Code
from sklearn.impute import KNNImputer, SimpleImputer
from sklearn.metrics import mean_squared_error
from sklearn import preprocessing
from math import sqrt
# For performance reasons we will use 1/50 of the records.
# We will run the tests on both the original and the standardized variables.
# We will also compare KNNImputer with a SimpleImputer that uses the median.
original_data_num = data.loc[np.random.randint(0, data.shape[0], data.shape[0] // 50),
['price', 'it_seller_rating', 'it_quantity']].reset_index(drop = True)
scaler = preprocessing.StandardScaler()
scaled_data_num = pd.DataFrame(scaler.fit_transform(original_data_num))
scaled_data_num.columns = original_data_num.columns
def run_tests(original_data_num):
errors = [[None] * 10 for i in range(4)]
imp = KNNImputer(n_neighbors = 2)
simple_imp = SimpleImputer(strategy = 'median')
for i in range(10):
missing_data_num = original_data_num.copy()
NA_indexes = np.random.randint(0, original_data_num.shape[0], original_data_num.shape[0] // 10)
missing_data_num.loc[NA_indexes, 'it_seller_rating'] = np.nan
knn_data_num = missing_data_num.copy()
knn_data_num = pd.DataFrame(imp.fit_transform(knn_data_num))
knn_data_num.columns = original_data_num.columns
errors[0][i] = sqrt(mean_squared_error(original_data_num.loc[NA_indexes, 'it_seller_rating'],
knn_data_num.loc[NA_indexes, 'it_seller_rating']))
median_data_num = missing_data_num
median_data_num.loc[:,'it_seller_rating'] = simple_imp.fit_transform(median_data_num.loc[:,'it_seller_rating'].to_numpy().reshape(-1,1))
errors[1][i] = sqrt(mean_squared_error(original_data_num.loc[NA_indexes, 'it_seller_rating'],
median_data_num.loc[NA_indexes, 'it_seller_rating']))
for i in range(10):
missing_data_num = original_data_num.copy()
NA_indexes = [np.random.randint(0, original_data_num.shape[0], original_data_num.shape[0] // 10),
np.random.randint(0, original_data_num.shape[0], original_data_num.shape[0] // 10)]
missing_data_num.loc[NA_indexes[0], 'it_seller_rating'] = np.nan
missing_data_num.loc[NA_indexes[1], 'it_quantity'] = np.nan
knn_data_num = missing_data_num.copy()
knn_data_num = pd.DataFrame(imp.fit_transform(knn_data_num))
knn_data_num.columns = original_data_num.columns
errors[2][i] = sqrt(mean_squared_error(original_data_num.loc[NA_indexes[0], 'it_seller_rating'],
knn_data_num.loc[NA_indexes[0], 'it_seller_rating'])) + sqrt(
mean_squared_error(original_data_num.loc[NA_indexes[1], 'it_quantity'],
knn_data_num.loc[NA_indexes[1], 'it_quantity']))
median_data_num = missing_data_num
median_data_num.loc[:,'it_seller_rating'] = simple_imp.fit_transform(median_data_num.loc[:,'it_seller_rating'].to_numpy().reshape(-1,1))
median_data_num.loc[:,'it_quantity'] = simple_imp.fit_transform(median_data_num.loc[:,'it_quantity'].to_numpy().reshape(-1,1))
# print(median_data_num)
errors[3][i] = sqrt(mean_squared_error(original_data_num.loc[NA_indexes[0], 'it_seller_rating'],
median_data_num.loc[NA_indexes[0], 'it_seller_rating'])) + sqrt(
mean_squared_error(original_data_num.loc[NA_indexes[1], 'it_quantity'],
median_data_num.loc[NA_indexes[1], 'it_quantity']))
return errors
errors = run_tests(original_data_num)
errors_stand = run_tests(scaled_data_num)
def plot_summary(errors):
error_df = pd.DataFrame({'knn1_column' : errors[0],
'median1_column' : errors[1],
'knn2_column' : errors[2],
'median2_column' : errors[3]})
df = pd.melt(error_df,
value_vars = ['knn1_column', 'median1_column', 'knn2_column', 'median2_column'],
value_name = 'RSME_value')
df = df.assign(n_with_NAs = df.loc[:, 'variable'].str.extract('([12]_column)'),
imp_type = df.loc[:,'variable'].str.replace('([12]_column)', ''))
sns.boxplot(x = df.loc[:,'n_with_NAs'],
y = df.loc[:,'RSME_value'],
hue = df.loc[:,'imp_type']).set_title('Imputation comparison')
data.loc[:,['it_quantity', 'it_seller_rating']].describe()
plot_summary(errors)
plot_summary(errors_stand)
###Output
_____no_output_____ |
00-Python3 Object and Data Structure Basics/01-statements, Indentation & comments.ipynb | ###Markdown
Python Statement, Indentation and Comments Python StatementInstructions that a Python interpreter can execute are called statements. For example, a = 1 is an assignment statement. if statement, for statement, while statement, etc. are other kinds of statements which will be discussed later. Multi-line statementIn Python, the end of a statement is marked by a newline character. But we can make a statement extend over multiple lines with the line continuation character (\). For example:
###Code
a = 1 + 2 + 3 + \
4 + 5 + 6 + \
7 + 8 + 9
###Output
_____no_output_____
###Markdown
This is an explicit line continuation. In Python, line continuation is implied inside parentheses ( ), brackets [ ], and braces { }. For instance, we can implement the above multi-line statement as:
###Code
a = (1 + 2 + 3 +
4 + 5 + 6 +
7 + 8 + 9)
###Output
_____no_output_____
###Markdown
Here, the surrounding parentheses ( ) do the line continuation implicitly. Same is the case with [ ] and { }. For example:
###Code
colors = ['red',
'blue',
'green']
###Output
_____no_output_____
###Markdown
We can also put multiple statements in a single line using semicolons, as follows:
###Code
a = 1; b = 2; c = 3
###Output
_____no_output_____
###Markdown
Python IndentationMost of the programming languages like C, C++, and Java use braces { } to define a block of code. Python, however, uses indentation.A code block (body of a function, loop, etc.) starts with indentation and ends with the first unindented line. The amount of indentation is up to you, but it must be consistent throughout that block.Generally, four whitespaces are used for indentation and are preferred over tabs. Here is an example.
###Code
for i in range(1,11):
print(i)
if i == 5:
break
###Output
1
2
3
4
5
###Markdown
The enforcement of indentation in Python makes the code look neat and clean. This results in Python programs that look similar and consistent.Indentation can be ignored in line continuation, but it's always a good idea to indent. It makes the code more readable. For example:
###Code
if True:
print('Hello')
a = 5
if True: print('Hello'); a = 5
###Output
Hello
###Markdown
both are valid and do the same thing, but the former style is clearer. Python Comments Comments are very important while writing a program. They describe what is going on inside a program, so that a person looking at the source code does not have a hard time figuring it out.You might forget the key details of the program you just wrote in a month's time. So taking the time to explain these concepts in the form of comments is always fruitful.In Python, we use the hash () symbol to start writing a comment.It extends up to the newline character. Comments are for programmers to better understand a program. Python Interpreter ignores comments.
###Code
#This is a comment
#print out Hello
print('Hello')
###Output
Hello
###Markdown
Multi-line commentsWe can have comments that extend over multiple lines. One way is to use the hash (`#`) symbol at the beginning of each line, for example `# This is a long comment`, `# and it extends`, `# to multiple lines` (a runnable version is shown in a cell below). Another way of doing this is to use triple quotes, either ''' or """. These triple quotes are generally used for multi-line strings, but they can be used as multi-line comments as well. Unless they are docstrings, they do not generate any extra code.
###Code
"""This is also a
perfect example of
multi-line comments"""
###Output
_____no_output_____
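###Markdown
A runnable version of the hash-based multi-line comment mentioned above:
###Code
# This is a long comment
# and it extends
# to multiple lines
print('The interpreter ignores all three comment lines above')
###Output
_____no_output_____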
###Markdown
Docstrings in PythonA docstring is short for documentation string.Python docstrings (documentation strings) are the string literals that appear right after the definition of a function, method, class, or module.Triple quotes are used while writing docstrings. For example:
###Code
def double(num):
"""Function to double the value"""
return 2*num
###Output
_____no_output_____
###Markdown
Docstrings appear right after the definition of a function, class, or a module. This separates docstrings from multiline comments using triple quotes.The docstrings are associated with the object as their __doc__ attribute.So, we can access the docstrings of the above function with the following lines of code:
###Code
def double(num):
"""Function to double the value"""
return 2*num
print(double.__doc__)
###Output
Function to double the value
|
eda/13-parking_w_gmap_info.ipynb | ###Markdown
Follow up
###Code
follow_up = stage1mrg[stage1mrg['Search_GPS'].isna()].groupby(['lat','lon','pid']).size().reset_index()
import googlemaps
with open('/Users/timlee/Dropbox/keys/google_api_key.txt','r') as f:
gmap_api_key = f.read()
gmaps = googlemaps.Client( key = gmap_api_key)
output = []
raw_json = []
for lat, lng, pid, ct in follow_up.values:
# print('Reverse pulling ... %s %s' %(lat,lng))
geocode_result = gmaps.reverse_geocode((lat,lng))
store_json = {
'lat' : lat,
'lng' : lng,
'pid' : pid,
'data': geocode_result
}
raw_json.append(store_json)
raw_json[0]
follow_up = []
for i, rj in enumerate(raw_json):
lat = rj['lat']
lng = rj['lng']
pid = rj['pid']
one_addr_details = gpsaddr_extract_json(rj['data'])
one_addr_details['orig_lat'] = float(lat)
one_addr_details['orig_lng'] = float(lng)
one_addr_details['pid'] = int(pid)
one_addr_details['Search_GPS'] = str(lat)+','+str(lng)
addr_collection.append(one_addr_details)
follow_up.append(one_addr_details)
follow_up_df = pd.DataFrame(follow_up)
follow_up_df.shape
follow_up_df.head()
backfill_dict = {pid: srch for pid, srch in follow_up_df[['pid','Search_GPS']].values}
stage1mrg.drop(columns=['related_addr','address_tags'], inplace=True)
for row in follow_up_df.values:
pid = row[-5]
mask = stage1mrg['pid']==pid
stage1mrg.loc[mask, 'Search_GPS'] = row[0]
stage1mrg.loc[mask, 'neighborhood'] = row[2]
stage1mrg.loc[mask, 'orig_lat'] = row[3]
stage1mrg.loc[mask, 'orig_lng'] = row[4]
stage1mrg.loc[mask, 'street_name'] = row[7]
stage1mrg.loc[mask, 'street_no'] = row[8]
stage1mrg.loc[mask, 'zipcode'] = row[9]
stage1mrg.drop(columns=['address_tags', 'related_addr'], inplace=True, errors='ignore')  # already dropped above; errors='ignore' avoids a KeyError
stage1mrg.reset_index(inplace=True)
stage1mrg.to_feather('../ref_data/gmaps_df_parking.feather')
###Output
_____no_output_____ |
StatsCan Historical Weather Data.ipynb | ###Markdown
1. Installing Cygwin. After downloading the Cygwin version compatible with your computer (32/64 bit), one reason the command line may not generate CSV files with data is that there may be a few installation missteps when setting up Cygwin. The following directions help set the program up properly and resolve issues: - When prompted with the “Available Download Sites” window, select the option “cygwin.mirrors.hoobly.com”. - Then, at the “Select Packages” step of the installation, search for the package “wget”, click on the web@default option, click on skip, and click next. 2. Using the command line to extract data. Run the following command in the Cygwin terminal: for year in `seq 2001 2018`;do for month in `seq 1 12`;do wget --content-disposition "http://climate.weather.gc.ca/climate_data/bulk_data_e.html?format=csv&stationID=31688&Year=${year}&Month=${month}&Day=14&timeframe=1&submit= Download+Data" ;done;done Then copy the data from c:\cygwin64\bin to the data directory and run the notebook cells below. https://www.reddit.com/r/Python/comments/52sw9q/opening_a_cygwin_terminal_with_a_python_script/ TODO: find a way to change the Cygwin download directory to the data directory.
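If Cygwin is not available, the same monthly files can also be fetched directly from Python (a sketch only: it assumes the `requests` package and the same endpoint and query parameters as the wget command above; the output file names are illustrative).
###Code
# pure-Python alternative to the Cygwin/wget loop (one CSV per month for station 31688)
import requests

url = "http://climate.weather.gc.ca/climate_data/bulk_data_e.html"
for year in range(2001, 2019):
    for month in range(1, 13):
        params = {"format": "csv", "stationID": 31688, "Year": year,
                  "Month": month, "Day": 14, "timeframe": 1, "submit": "Download Data"}
        resp = requests.get(url, params=params)
        # illustrative file name; place the files under the data/ folder used below
        with open("data/station31688_{}_{:02d}.csv".format(year, month), "wb") as f:
            f.write(resp.content)
###Output
_____no_output_____
###Markdown
With the monthly CSV files in the data directory, we can read and concatenate them: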
###Code
import pandas as pd
import numpy as np
import sys, os, subprocess
root = 'data/'
filenames = []
for path, subdirs, files in os.walk(root):
for name in files:
filenames.append(os.path.join(path, name))
filenames[0:5]
df = pd.concat( [ pd.read_csv(f,skiprows=15) for f in filenames ] )
df.info()
df.to_csv('weather_data_2002_2018.csv')
###Output
_____no_output_____ |
src/Part3/Quiz25.ipynb | ###Markdown
Imagine how they sound[](https://github.com/Dragon1573/PyChallenge-Tips/blob/master/LICENSE)[](http://www.pythonchallenge.com/pc/hex/lake.html) As we can see, the image is stitched together from $5\times5$ small tiles, and the level title mentions the keyword `sound`, so an audio file with a matching name probably exists. As usual, we check the page source first.
###Code
from requests import get
from bs4 import BeautifulSoup as Soup
""" Fetch the level's page source """
response = get(
'http://www.pythonchallenge.com/pc/hex/lake.html',
headers={'Authorization': 'Basic YnV0dGVyOmZseQ=='}
)
response = Soup(response.text, features='html5lib')
print(response.img)
print(response.img.next.next.strip())
###Output
<img src="lake1.jpg"/>
can you see the waves?
###Markdown
  `lake1.jpg`、`waves`?难道换成`lake1.wav`?而且关卡图片有25块小拼图,莫非需要25段音频拼接起来?
###Code
from io import BytesIO
import wave
""" 获取25个音频文件 """
archives = []
for k in range(1, 26):
response = get(
'http://www.pythonchallenge.com/pc/hex/lake{0}.wav'.format(k),
headers={'Authorization': 'Basic YnV0dGVyOmZseQ=='}
)
    archives.append(wave.open(BytesIO(response.content), mode='rb'))
""" Check the frame count of each audio file """
for audio in archives:
print('Frames: %d' % audio.getnframes(), end='\t')
###Output
Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800 Frames: 10800
###Markdown
  我们看到,每一个音频都有10.8kFrames,而每一个`RGB`像素需要3Frames,这样每个音频就能构成3600个像素,即一块$60\times60$的小拼图。将25个音频按$5\times5$拼接成1张$300 \times 300$的大图,就能获得答案。
###Code
from PIL import Image
""" 音频转图像,合成拼图 """
result = Image.new('RGB', (300, 300))
for k in range(25):
data = archives[k].readframes(archives[k].getnframes())
image = Image.frombytes('RGB', (60, 60), data)
result.paste(image, (60 * (k % 5), 60 * (k // 5)))
display(result)
###Output
_____no_output_____ |
Assignment_06_ver1.0.ipynb | ###Markdown
###Code
import numpy as np # linear algebra
import pandas as pd # data processing
from sklearn.model_selection import train_test_split
#!pip install pydataset
from pydataset import data
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score, confusion_matrix
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import RFE
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
import seaborn as sns  # needed for the heatmap and pairplot used below
cancerdata = load_breast_cancer()
print(cancerdata.DESCR)
can = pd.DataFrame(cancerdata.data,columns=cancerdata.feature_names)
can['diagnosis'] = cancerdata.target
can = pd.DataFrame(cancerdata.data,cancerdata.target)
can.isnull().sum().sum()
# This indicates no data cleaning required
# count how many 0s and 1s are present in the target (diagnosis)
pd.crosstab(index = cancerdata.target, columns = 'count')
can.describe().unstack()
# Create correlation matrix
df_features = pd.DataFrame(cancerdata.data, columns = cancerdata.feature_names)
df_target = pd.DataFrame(cancerdata.target, columns=['target'])
df = pd.concat([df_features, df_target], axis=1)
corr_mat = df.corr()
# Create mask
mask = np.zeros_like(corr_mat, dtype=np.bool)
mask[np.triu_indices_from(mask, k=1)] = True
# Plot heatmap
plt.figure(figsize=(15, 10))
sns.heatmap(corr_mat[corr_mat > 0.8], annot=True, fmt='.1f',
cmap='RdBu_r', vmin=-1, vmax=1,
mask=mask)
##I will use Univariate Feature Selection (sklearn.feature_selection.SelectKBest)
##to choose 5 features with the k highest scores.
##I choose 5 because from the heatmap I could see about 5 groups of features that are highly correlated.
from sklearn.feature_selection import SelectKBest, chi2
feature_selection = SelectKBest(chi2, k=5)
feature_selection.fit(df_features, df_target)
selected_features = df_features.columns[feature_selection.get_support()]
print("The five selected features are: ", list(selected_features))
X = pd.DataFrame(feature_selection.transform(df_features),
columns=selected_features)
X.head()
can = pd.DataFrame(cancerdata.data,columns=cancerdata.feature_names)
can['diagnosis'] = cancerdata.target
can.sample(5)
sns.pairplot(pd.concat([X, df['target']], axis=1), hue='target')
from sklearn.model_selection import train_test_split
y = df_target['target']
X_train, X_test, y_train, y_test = train_test_split(
X, y, train_size=0.10,test_size=0.90, random_state=42)
#Random Forest classifier
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=200)
rfc.fit(X_train, y_train)
y_pred = rfc.predict(X_test)
#Confusion matrix
from sklearn.metrics import confusion_matrix, classification_report
print("Confusion Matrix:\n", confusion_matrix(y_test, y_pred))
print("\n")
print("Classification Report:\n", classification_report(y_test, y_pred))
#PCA analysis to analyse distribution of the features
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(df_features)
features_scaled = scaler.transform(df_features)
features_scaled = pd.DataFrame(data=features_scaled,
columns=df_features.columns)
features_scaled.head(5)
df_scaled = pd.concat([features_scaled, df['target']], axis=1)
X_scaled = features_scaled
pca = PCA(n_components=2)
pca.fit(X_scaled)
X_pca = pca.transform(X_scaled)
plt.figure(figsize=(8, 8))
sns.scatterplot(X_pca[:, 0], X_pca[:, 1], hue=df['target'])
plt.title("PCA")
plt.xlabel("First Principal Component")
plt.xlabel("Second Principal Component")
X_train, X_test, y_train, y_test = train_test_split(cancerdata.data, cancerdata.target, stratify=cancerdata.target, train_size=0.10, test_size=0.90, random_state=42)
log_reg = LogisticRegression(max_iter=100000)
log_reg.fit(X_train, y_train)
print('Accuracy on the training set: {:.3f}'.format(log_reg.score(X_train,y_train)))
print('Accuracy on the test set: {:.3f}'.format(log_reg.score(X_test,y_test)))
#drawing the graph
training_accuracy = []
test_accuracy = []
# refit the logistic regression for each value (note: n_neighbors is not used by
# LogisticRegression; this loop is left over from a KNN template)
neighbors_setting = range(1,15)
for n_neighbors in neighbors_setting:
log_reg = LogisticRegression(max_iter=100000)
log_reg.fit(X_train,y_train)
training_accuracy.append(log_reg.score(X_train, y_train))
test_accuracy.append(log_reg.score(X_test, y_test))
plt.plot(neighbors_setting,training_accuracy, label='Accuracy of the training set')
plt.plot(neighbors_setting,test_accuracy, label='Accuracy of the test set')
plt.ylabel('Accuracy')
plt.xlabel('Number of Neighbors')
plt.legend()
X_train, X_test, y_train, y_test = train_test_split(cancerdata.data, cancerdata.target, stratify=cancerdata.target, train_size=0.20, test_size=0.80, random_state=42)
log_reg = LogisticRegression(max_iter=100000)
log_reg.fit(X_train, y_train)
print('Accuracy on the training set: {:.3f}'.format(log_reg.score(X_train,y_train)))
print('Accuracy on the test set: {:.3f}'.format(log_reg.score(X_test,y_test)))
training_accuracy = []
test_accuracy = []
# refit the logistic regression for each value (n_neighbors is not used by LogisticRegression)
neighbors_setting = range(1,15)
for n_neighbors in neighbors_setting:
log_reg = LogisticRegression(max_iter=100000)
log_reg.fit(X_train,y_train)
training_accuracy.append(log_reg.score(X_train, y_train))
test_accuracy.append(log_reg.score(X_test, y_test))
plt.plot(neighbors_setting,training_accuracy, label='Accuracy of the training set')
plt.plot(neighbors_setting,test_accuracy, label='Accuracy of the test set')
plt.ylabel('Accuracy')
plt.xlabel('Number of Neighbors')
plt.legend()
X_train, X_test, y_train, y_test = train_test_split(cancerdata.data, cancerdata.target, stratify=cancerdata.target, train_size=0.30, test_size=0.70, random_state=42)
log_reg = LogisticRegression(max_iter=100000)
log_reg.fit(X_train, y_train)
print('Accuracy on the training set: {:.3f}'.format(log_reg.score(X_train,y_train)))
print('Accuracy on the test set: {:.3f}'.format(log_reg.score(X_test,y_test)))
training_accuracy = []
test_accuracy = []
# refit the logistic regression for each value (n_neighbors is not used by LogisticRegression)
neighbors_setting = range(1,15)
for n_neighbors in neighbors_setting:
log_reg = LogisticRegression(max_iter=100000)
log_reg.fit(X_train,y_train)
training_accuracy.append(log_reg.score(X_train, y_train))
test_accuracy.append(log_reg.score(X_test, y_test))
plt.plot(neighbors_setting,training_accuracy, label='Accuracy of the training set')
plt.plot(neighbors_setting,test_accuracy, label='Accuracy of the test set')
plt.ylabel('Accuracy')
plt.xlabel('Number of Neighbors')
plt.legend()
X_train, X_test, y_train, y_test = train_test_split(cancerdata.data, cancerdata.target, stratify=cancerdata.target, train_size=0.40, test_size=0.60, random_state=42)
log_reg = LogisticRegression(max_iter=100000)
log_reg.fit(X_train, y_train)
print('Accuracy on the training set: {:.3f}'.format(log_reg.score(X_train,y_train)))
print('Accuracy on the test set: {:.3f}'.format(log_reg.score(X_test,y_test)))
training_accuracy = []
test_accuracy = []
# refit the logistic regression for each value (n_neighbors is not used by LogisticRegression)
neighbors_setting = range(1,15)
for n_neighbors in neighbors_setting:
log_reg = LogisticRegression(max_iter=100000)
log_reg.fit(X_train,y_train)
training_accuracy.append(log_reg.score(X_train, y_train))
test_accuracy.append(log_reg.score(X_test, y_test))
plt.plot(neighbors_setting,training_accuracy, label='Accuracy of the training set')
plt.plot(neighbors_setting,test_accuracy, label='Accuracy of the test set')
plt.ylabel('Accuracy')
plt.xlabel('Number of Neighbors')
plt.legend()
X_train, X_test, y_train, y_test = train_test_split(cancerdata.data, cancerdata.target, stratify=cancerdata.target, train_size=0.50, test_size=0.50, random_state=42)
log_reg = LogisticRegression(max_iter=100000)
log_reg.fit(X_train, y_train)
print('Accuracy on the training set: {:.3f}'.format(log_reg.score(X_train,y_train)))
print('Accuracy on the test set: {:.3f}'.format(log_reg.score(X_test,y_test)))
training_accuracy = []
test_accuracy = []
# refit the logistic regression for each value (n_neighbors is not used by LogisticRegression)
neighbors_setting = range(1,15)
for n_neighbors in neighbors_setting:
log_reg = LogisticRegression(max_iter=100000)
log_reg.fit(X_train,y_train)
training_accuracy.append(log_reg.score(X_train, y_train))
test_accuracy.append(log_reg.score(X_test, y_test))
plt.plot(neighbors_setting,training_accuracy, label='Accuracy of the training set')
plt.plot(neighbors_setting,test_accuracy, label='Accuracy of the test set')
plt.ylabel('Accuracy')
plt.xlabel('Number of Neighbors')
plt.legend()
X_train, X_test, y_train, y_test = train_test_split(cancerdata.data, cancerdata.target, stratify=cancerdata.target, train_size=0.60, test_size=0.40, random_state=42)
log_reg = LogisticRegression(max_iter=100000)
log_reg.fit(X_train, y_train)
print('Accuracy on the training set: {:.3f}'.format(log_reg.score(X_train,y_train)))
print('Accuracy on the test set: {:.3f}'.format(log_reg.score(X_test,y_test)))
training_accuracy = []
test_accuracy = []
# refit the logistic regression for each value (n_neighbors is not used by LogisticRegression)
neighbors_setting = range(1,15)
for n_neighbors in neighbors_setting:
log_reg = LogisticRegression(max_iter=100000)
log_reg.fit(X_train,y_train)
training_accuracy.append(log_reg.score(X_train, y_train))
test_accuracy.append(log_reg.score(X_test, y_test))
plt.plot(neighbors_setting,training_accuracy, label='Accuracy of the training set')
plt.plot(neighbors_setting,test_accuracy, label='Accuracy of the test set')
plt.ylabel('Accuracy')
plt.xlabel('Number of Neighbors')
plt.legend()
#Feature importance
from sklearn.ensemble import RandomForestClassifier
X_train, X_test, y_train, y_test = train_test_split(cancerdata.data, cancerdata.target, random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train,y_train)
n_feature = cancerdata.data.shape[1]
plt.barh(range(n_feature), forest.feature_importances_, align='center')
plt.yticks(np.arange(n_feature), cancerdata.feature_names)
plt.xlabel('Feature Importance')
plt.ylabel('Feature')
plt.show()
#----------------- Decision Tree
from sklearn.tree import DecisionTreeClassifier #Decision Tree
X_train, X_test, y_train, y_test = train_test_split(cancerdata.data, cancerdata.target, random_state=42)
training_accuracy = []
test_accuracy = []
max_dep = range(1,15)
for md in max_dep:
tree = DecisionTreeClassifier(max_depth=md,random_state=0)
tree.fit(X_train,y_train)
training_accuracy.append(tree.score(X_train, y_train))
test_accuracy.append(tree.score(X_test, y_test))
plt.plot(max_dep,training_accuracy, label='Accuracy of the training set')
plt.plot(max_dep, test_accuracy, label='Accuracy of the test set')
plt.ylabel('Accuracy')
plt.xlabel('Max Depth')
plt.legend()
###Output
_____no_output_____ |
data_processing.ipynb | ###Markdown
Initial Data Cleaning, Table Merging and Date Formatting. We need a clean dataset: no null or NaN entries, the same date format for every value, and the attributes arranged as columns, i.e. each meteorological or pollutant variable in its own column, among other properties described below. The data-cleaning process consists of a set of manipulations on the tables to produce an optimal dataset. The diagram of the cleaning performed is shown below: __Steps and general description of the notebook__ 1. __Downloading the tables:__ the pollutant and meteorology data are downloaded separately. The data used for training are manually verified by SEDEMA. In this notebook we merge the pollution and meteorology files of each year into a single file and drop the empty entries. 2. __Convert to a table with one variable per column:__ we go from one column indicating the measured attribute and another with the measurement value to one column per attribute holding the measurement value. 3. __Date formatting:__ the dates are converted to the **YY/m/d hh:mm** format with hours from 0 to 23, and we also generate columns of temporal information such as hour, day and month for each measurement. - __Input data:__ [Meteorology,](http://www.aire.cdmx.gob.mx/default.php?opc='aKBhnmI='&opcion=Zw==)[Pollution](http://www.aire.cdmx.gob.mx/default.php?opc='aKBhnmI='&opcion=Zg==) - __Maintainer:__ Daniel Bustillos - __Contact:__ [email protected] ___
###Code
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import pandas as pd
import matplotlib
import seaborn as sns
from datetime import datetime, timedelta
from datetime import timedelta
import datetime as dt
from tqdm import tqdm
###Output
_____no_output_____
###Markdown
Overview of the data used. Since 1986, Mexico City's Atmospheric Monitoring System has reported, on an hourly basis, the meteorological and pollution conditions that describe the atmosphere of the metropolitan area. The information is published in two forms: a database reviewed by SEDEMA experts to discard measurements caused by atypical pollution sources such as fires or malfunctions at the monitoring stations, or an unreviewed one containing the measurement exactly as recorded at the station. Inconsistencies in the information can produce erroneous values in the generated forecast, limiting model performance. For this reason, the monitoring data used to train the models are the expert-reviewed data. For training the models, the data cover the period from January 2014 to December 2018, accessible in the hourly data by pollutant and hourly meteorology sections. The meteorological and pollution variables used to develop the model are shown in the following table. The stations in operation are distributed across the metropolitan area, concentrated in the central zone of CDMX; the following figure shows their geographic position. As part of building the pollution forecast models, a set of operations must be applied to the data obtained from the [Monitoreo de Calidad del Aire de la Ciudad de México](http://www.aire.cdmx.gob.mx/default.php) page. As mentioned in the methodology file, the data to use are those verified by SEDEMA experts. The meteorology and pollution data can be obtained here: - [Meteorology](http://www.aire.cdmx.gob.mx/default.php?opc='aKBhnmI='&opcion=Zw==) - [Pollution](http://www.aire.cdmx.gob.mx/default.php?opc='aKBhnmI='&opcion=Zg==) We will join the dataframes with a pivot table and group them by the moment of the measurement. We define helper functions to format the dates: converting the 1-to-24 hour format to the 0-to-23 hour format. By default Python works with hours from 0 to 23, and it is convenient to use this format because many functions in Python and other libraries assume it. The original date format is d/m/YY h:m, and after applying the function it is YY/m/d hh:mm.
###Code
def time_converter(x):
x0 = x.split(" ")[0]
x0 = x0.split("/")
x1 = x.split(" ")[1]
if x1[:].endswith("24:00"):
        # Note that when the hour is 24 it must be converted to 00, and the date must
        # also be shifted to the next day: '19-05-01 24:00' becomes '19-05-02 00:00'.
        # With this in mind, we apply the following:
fecha_0 = x0[2]+"-"+x0[1]+"-"+x0[0]+" 23:00"
date = datetime.strptime(fecha_0, "%Y-%m-%d %H:%M")
new_time = date + timedelta(hours=1)
return new_time.strftime('%Y-%m-%d %H:%M')
else:
return x0[2]+"-"+x0[1]+"-"+x0[0]+" "+ x1[:]
###Output
_____no_output_____
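###Markdown
A quick check (a sketch) of `time_converter()` on a normal timestamp and on the 24:00 edge case described above:
###Code
# '01/05/2019' is day/month/year; 24:00 rolls over to 00:00 of the next day
print(time_converter('01/05/2019 09:00'))  # expected: '2019-05-01 09:00'
print(time_converter('01/05/2019 24:00'))  # expected: '2019-05-02 00:00'
###Output
_____no_output_____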
###Markdown
Let's define the year to clean:
###Code
target = "meteorologia"
target = "contaminantes"
anio = "2020"
###Output
_____no_output_____
###Markdown
Next we define a function that carries out the following steps: - Read the pollutant or meteorology file for the selected year. - Remove empty entries. - Build a pivot table to go from one column with the attribute name and another with its value to one column per attribute. - Convert the date column from d/m/yy hh:mm to yy/mm/dd hh:mm and switch the hour format from 1..24 to 0..23.
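The pivot step is the least obvious one, so here is a toy illustration first (a sketch with made-up values; 'MER' is only an example station code).
###Code
# toy long-format table: one row per (date, station, parameter) measurement
toy = pd.DataFrame({
    'date': ['01/01/2020 01:00', '01/01/2020 01:00', '01/01/2020 02:00', '01/01/2020 02:00'],
    'id_station': ['MER', 'MER', 'MER', 'MER'],
    'id_parameter': ['O3', 'PM10', 'O3', 'PM10'],
    'value': [12.0, 45.0, 14.0, 50.0],
})
# wide format: one row per (date, station) and one column per measured parameter
print(pd.pivot_table(toy, index=['date', 'id_station'], columns=['id_parameter'], values='value'))
###Output
_____no_output_____
###Markdown
Now let's inspect the real file for the selected year: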
###Code
met_2018 = pd.read_csv(str('./datasets/' + target + "/" + target + "_" + str(anio) + ".csv"),header=10) # leer archivo
if "cve_station" in met_2018.columns or "cve_parameter" in met_2018.columns:
met_2018.rename(columns={'cve_station': 'id_station', 'cve_parameter': 'id_parameter'}, inplace=True) # checar nombre columbas
met_2018['hora'] = met_2018['date'].astype(str).str[-5:-3].astype(int)
met_2018 = met_2018.dropna(subset=["value"]).reset_index(drop=True)#PM25
sns.distplot(met_2018["hora"], bins=24, kde=False, rug=True);
for hora in tqdm(range(1,25)): # values per station (assumes an 'estaciones' DataFrame created earlier)
estaciones.loc[:,hora] = met_2018[met_2018["hora"]==hora]["id_station"].value_counts().values
###Output
_____no_output_____
###Markdown
Let's wrap this process into a function; it will be applied to both meteorology and pollutants.
###Code
def formateo_csv(target, anio):
    # read the yearly file
met_2018 = pd.read_csv(str('./data/raw/' + target + "/" + target + "_" + str(anio) + ".csv"),header=10)
if "cve_station" in met_2018.columns or "cve_parameter" in met_2018.columns:
met_2018.rename(columns={'cve_station': 'id_station', 'cve_parameter': 'id_parameter'}, inplace=True)
    # drop the empty entries
met_2018 = met_2018.dropna(how='any')
met_2018 = met_2018.drop(['unit'], axis=1)
met_ACO = met_2018
met_ACO = met_ACO.reset_index(drop=False)
met_ACO = met_ACO[["date","id_station","id_parameter","value"]] # nos quedamos con las siguientes columnas:
    # Build a pivot table to go from one column naming the attribute
    # and another holding its value to one column per attribute.
met_ACO_hour = pd.pivot_table(met_ACO,index=["date","id_station"],columns=["id_parameter"])
met_ACO_hour = met_ACO_hour.reset_index(drop=False)
met_ACO_hour.columns = met_ACO_hour.columns.droplevel()
met_ACO_hour["id_station"] = met_ACO_hour.iloc[:,1]
met_ACO_hour["date"] = met_ACO_hour.iloc[:,0]
    # drop the empty column
met_ACO_hour = met_ACO_hour.drop([""],axis=1)
    # Convert the date column from d/m/yy hh:mm to yy/mm/dd hh:mm and switch the hours from 1..24 to 0..23.
met_ACO_hour['date'] = met_ACO_hour.apply(lambda row: time_converter(row['date']), axis=1)
met_ACO_hour['date'] = pd.to_datetime(met_ACO_hour['date'], format='%Y-%m-%d %H:%M')
met_ACO_hour = met_ACO_hour.rename(columns={'date': 'fecha'})
return(met_ACO_hour)
###Output
_____no_output_____
###Markdown
We run the previous function on the meteorology and pollutant data:
###Code
target1 = "meteorologia"
anio = "2019"
meteorologia = formateo_csv(target1, anio)
target2 = "contaminantes"
contaminacion = formateo_csv(target2, anio)
meteorologia.head()
###Output
_____no_output_____
###Markdown
Merging the DataFrames. We join the generated dataframes so we can work with both files at once:
###Code
data_hour_merge = pd.merge(meteorologia, contaminacion, on=["fecha","id_station"],how="outer")
###Output
_____no_output_____
###Markdown
We generate 3 columns with temporal information about the moment each measurement was taken; the line that strips the hour and minute from the date column is left commented out below.
###Code
data_hour_merge['hora'] = data_hour_merge['fecha'].astype(str).str[10:13].astype(int)
data_hour_merge['dia'] = data_hour_merge['fecha'].astype(str).str[8:10].astype(int)
data_hour_merge['mes'] = data_hour_merge['fecha'].astype(str).str[5:7].astype(int)
# data_hour_merge['fecha'] = data_hour_merge['fecha'].astype(str).str[0:10]
data_hour_merge.head(5)
###Output
_____no_output_____
###Markdown
Once we have verified that the process works correctly, we can combine the previous steps into a single function to speed up the cleaning of each year:
###Code
def data_parser(anio_1):
print(anio_1)
target1 = "meteorologia"
meteorologia = formateo_csv(target1, anio_1)
target2 = "contaminantes"
contaminacion = formateo_csv(target2, anio_1)
data_hour_merge = pd.merge(meteorologia, contaminacion, on=["fecha","id_station"],how="outer")
data_hour_merge['hora'] = data_hour_merge['fecha'].astype(str).str[10:13]
data_hour_merge['dia'] = data_hour_merge['fecha'].astype(str).str[8:10]
data_hour_merge['mes'] = data_hour_merge['fecha'].astype(str).str[5:7]
# data_hour_merge['fecha'] = data_hour_merge['fecha'].astype(str).str[0:10]
data_hour_merge.to_csv(str("./data/processed/met_cont_hora/cont_hora"+
str(anio_1) +".csv"), index=False)
###Output
_____no_output_____
###Markdown
We run the function for each year of interest (the cell below covers 2019 and 2020):
###Code
[data_parser(str(anio)) for anio in range(2019,2021)]
###Output
2019
2020
###Markdown
Feature Selection
###Code
for datum in ride_data2:
datum.pop("VendorID")
datum.pop("RatecodeID")
datum.pop("store_and_fwd_flag")
datum.pop("payment_type")
datum.pop("fare_amount")
datum.pop("extra")
datum.pop("mta_tax")
datum.pop("tip_amount")
datum.pop("tolls_amount")
datum.pop("improvement_surcharge")
datum.pop("total_amount")
datum.pop("congestion_surcharge")
amount_of_data_current = len(ride_data2)* len(ride_data2[0])
print("Current data: " + str(amount_of_data_current))
percentage_cut = round(amount_of_data_current * 100/amount_of_data_initial, 2)
print("Percentage of data removed: " + str(100 - percentage_cut) + "%")
print("Percentage of data left: " + str(percentage_cut) + "%")
###Output
Current data: 8218590
Percentage of data removed: 66.67%
Percentage of data left: 33.33%
###Markdown
NEED TO REMOVE DATA THAT HAS 0 DISTANCE
###Code
# build a filtered list: 'del datum' only deletes the loop variable and does not
# remove the entry from ride_data2, so we use a list comprehension instead
ride_data2 = [datum for datum in ride_data2 if float(datum["trip_distance"]) >= 0.5]
print(len(ride_data2))
###Output
1369765
###Markdown
NOW CLEAN THE DATA
###Code
import math
from datetime import datetime
def format_dates(date_begin: str, to_format: str):
first1 = datetime.fromisoformat(date_begin)
second1 = datetime.fromisoformat(to_format)
rounded = round((second1 - first1).total_seconds())
    base = 125  # bin width in seconds: timestamps are mapped to 125-second intervals
return round(rounded/base)
format_dates('2021-01-01 00:00:00', '2021-01-31 23:59:59')
from datetime import datetime
print(len(ride_data2))
ride_data3 = []
for datum in ride_data2:
first = datetime.fromisoformat(datum['tpep_pickup_datetime'])
second = datetime.fromisoformat(datum['tpep_dropoff_datetime'])
total_seconds = round((second-first).total_seconds())
pick_time = format_dates('2021-01-01 00:00:00' , datum['tpep_pickup_datetime'])
drop_time = format_dates('2021-01-01 00:00:00', datum["tpep_dropoff_datetime"])
if drop_time > pick_time >= 0:
datum["pickup_time"] = pick_time
datum["dropoff_time"] = drop_time
ride_data3.append(datum)
print(len(ride_data3))
for datum in ride_data3:
datum.pop("tpep_pickup_datetime")
datum.pop("tpep_dropoff_datetime")
datum.pop("passenger_count")
amount_of_data_current = len(ride_data3)* len(ride_data3[0])
print("Current data: " + str(amount_of_data_current))
percentage_cut = round(amount_of_data_current * 100/amount_of_data_initial, 2)
print("Percentage of data removed: " + str(100 - percentage_cut) + "%")
print("Percentage of data left: " + str(percentage_cut) + "%")
###Output
Current data: 6732980
Percentage of data removed: 72.69%
Percentage of data left: 27.31%
###Markdown
Test data
###Code
from operator import itemgetter
new_list = sorted(ride_data3, key=itemgetter('pickup_time'))
ride_data3 = new_list
def who_being_picked_up(pickup_time):
    drivers = []
    # search the cleaned records (ride_data3); raw ride_data2 entries may lack a 'pickup_time' key
    for datum2 in ride_data3:
        if datum2['pickup_time'] == pickup_time:
            drivers.append(datum2)
    return drivers
def run_some_iterations(number):
beans = []
segment = 10
for i in range(1, number):
total = round(number/segment)
if i % total == 0:
print(round(i * 100 / number), "%")
goat = who_being_picked_up(i)
beans.append(goat)
print(beans)
def find_last_time():
return ride_data3[-1]["pickup_time"]
print(find_last_time())
run_some_iterations(10)
###Output
10 %
###Markdown
CSV
###Code
import csv
def make_csv():
    # open the file in write mode; a context manager makes sure it is flushed and closed
    with open('data/cleandata/clean_data2.csv', 'w', newline='') as file:
        writer = csv.writer(file)
        writer.writerow(ride_data3[0].keys())
        for datum3 in ride_data3:
            writer.writerow(datum3.values())
make_csv()
###Output
_____no_output_____
###Markdown
Data Processing
First rule in data science: garbage in -> garbage out.
It is very important to clean the data and ensure it is possible to work with it.
###Code
import pandas as pd # used for data manipulation, Python Data Analysis Library
import numpy as np # used for numerical calculus, also pandas is built using numpy, Numerical Python
from sklearn.preprocessing import Binarizer, MinMaxScaler, StandardScaler # for feature extraction
###Output
_____no_output_____
###Markdown
Missing values
Missing values are tricky to deal with. A missing value is missing information; sometimes we can afford to lose that information if our dataset is large, and in that situation we can choose to delete the missing values.
###Code
# reading data
data = pd.read_csv(r"data/iris-with-errors.csv", header = 0)
# header is the row to be used as the header
print(f'Rows:\t{data.shape[0]:2.0f}\nCols:\t{data.shape[1]:2.0f}')
# Observe the first rows
# there are some Not a Number (NaN) values which may be a problem
# also, there are some duplicate rows
data.head(6)
# before solving the NaN problem
# note that the second line contains a ? value, we have to change it to NaN too
data = data.replace("?",np.nan)
# now we can solve the NaN problem
data = data.dropna()
data.head(6)
# solving the duplicated problem
data.duplicated() # tell us the duplicated rows
data = data.drop_duplicates()
data.head(5)
###Output
_____no_output_____
###Markdown
Next step
After removing duplicate and NaN rows we can work with the data.
Always, always be sure that your data is in good condition before running the machine learning analysis.
###Code
# first, we will work only with the length
data.columns # access the dataframe columns
# we can drop columns using the index or the names
# I'll go with the names
data = data.drop(['sepal_width','petal_width'], axis = 1)
data = data.drop(data.index[[0,2]], axis = 0)
###Output
_____no_output_____
###Markdown
Replacing Missing values
Sometimes we can't afford to delete missing values, so we replace them with something that won't harm our algorithm's performance. We can replace the values with:
- Mean
- Median
- Another measure
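A minimal pandas sketch of the idea, on a small hypothetical frame (the cells below do the same thing column by column with numpy):

```python
import numpy as np
import pandas as pd

# hypothetical toy frame with missing values
toy = pd.DataFrame({"sepal_length": [5.1, np.nan, 4.7],
                    "petal_length": [1.4, 1.3, np.nan]})

# replace each NaN with the mean of its column; toy.median() would work the same way
toy_filled = toy.fillna(toy.mean())
print(toy_filled)
```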
###Code
# let us reload the data again
data = pd.read_csv(r"data/iris-with-errors.csv", header = 0)
# let's replace the ? values with NaN
data.replace('?', np.nan, inplace = True)
print(data.shape)
data.head(5)
# the thing is, we have to estimate the mean value of the columns that have NaN to substitute them
# how do we do it? using numpy
# the array without the last column
X = np.array(data[data.columns[0:data.shape[1]-1]], dtype = float)
avrgs = np.nanmean(X, axis = 0)
for i in np.arange(0,X.shape[0]):
for j in np.arange(0,X.shape[1]):
if np.isnan(X[i,j]) == True:
X[i,j] = avrgs[j]
# we chose the mean value for the replacement, but we could use the median or any other measure
# reading file
data = pd.read_csv(r'data/iris.csv', header = 0)
print(f'shape = {data.shape}')
X = np.array(data[data.columns[0:data.shape[1]-1]], dtype = float)
Z = np.array(data[data.columns[0:data.shape[1]-1]], dtype = float)
# print('\nOriginal:')
# for i in range(X.shape[1]):
# print(f"Coluna {i} Maior: {max(X[:,i])}")
# print(f"Coluna {i} Menor: {min(X[:,i])}\n")
## functions to transform the data
# Normalizing
scaler = MinMaxScaler(feature_range = (0,1))
X = scaler.fit_transform(X)
# print('\n\nNormalized:')
# for i in range(X.shape[1]):
# print(f"Coluna {i} Maior: {max(X[:,i])}")
# print(f"Coluna {i} Menor: {min(X[:,i])}\n")
# Standardizing (z-score scaling)
scaler = StandardScaler().fit(Z)
Z = scaler.transform(Z)
# print('\n\nPadronized:')
# for i in range(Z.shape[1]):
# print(f"Coluna {i} Maior: {max(Z[:,i])}")
# print(f"Coluna {i} Menor: {min(Z[:,i])}\n")
## Binarization
X = np.array(data[data.columns[0:data.shape[1]-1]], dtype = float)
T = 0.2
# print('Limiar:', T)
# print('---------------------')
# change scale
scaler = MinMaxScaler(feature_range = (0,1))
X_norm = scaler.fit_transform(X)
X_norm
min(X_norm[:, i])  # quick check of the minimum of the last inspected column (i is left over from the loops above)
# binarization
binarizer = Binarizer(threshold = T).fit(X_norm)
binaryX = binarizer.transform(X_norm)
# binaryX
###Output
_____no_output_____
###Markdown
Users
###Code
import pandas as pd
users = pd.read_csv('data_old/sc_report_user.csv')
print(users.shape)
users.head()
del users['user_type']
users.head()
users.to_csv('data_table/user.csv', index = False)
###Output
_____no_output_____
###Markdown
Project
###Code
projects = pd.read_csv('data_old/sc_report_projects.csv')
print(projects.shape)
projects
###Output
(10, 2)
###Markdown
Geo maps in Data Studio
A Data Studio geo map requires you to provide 3 pieces of information:
- a geographic dimension, such as Country, City, Region, etc.
- a metric, such as Sessions, Units Sold, Population, etc.
- the map's zoom area
###Code
location = pd.read_csv('data_old/AdWords_API_Location_Criteria.csv')
location[location['Name'] == 'Bangkok'].head()
###Output
_____no_output_____
###Markdown
Latitude & Longitude
- Phaya Thai - Samsen Nai, Bangkok 10400, "13.774123, 100.538318"
- Lumphini - Pathum Wan, Bangkok 10330, "13.733438, 100.547931"
- Khlong Ton Sai - Khlong San, Bangkok 10600, "13.724766, 100.504329"
- Huai Khwang - Bangkok 10310, "13.760450, 100.568187"
- Suan Luang - Bangkok, "13.744777, 100.632045"
- Bang Yi Khan - Bang Phlat, Bangkok 10700, "13.769454, 100.491626"
- Chom Phon - Chatuchak, Bangkok 10900, "13.818133, 100.569217"
- Phaya Thai - Samsen Nai, Bangkok 10400, "13.778457, 100.546215"
###Code
location_list = ["13.774123, 100.538318",
"13.733438, 100.547931",
"13.724766, 100.504329",
"13.760450, 100.568187",
"13.744777, 100.632045",
"13.769454, 100.491626",
"13.818133, 100.569217"]
import numpy as np
from random import randint
location = []
for i in range(len(projects)):
location.append(location_list[randint(0,len(location_list)-1)])
projects['location'] = location
projects
projects.to_csv('data_table/project.csv', index = False)
###Output
_____no_output_____
###Markdown
Article
###Code
articles = pd.read_csv('data_old/sc_report_topics1.csv')
print(articles.shape)
articles.head()
if 'read_lenght' in articles:
del articles['read_lenght']
articles.head()
articles.to_csv('data_table/article.csv', index = False)
articles['id'].unique()
articles.min()
###Output
_____no_output_____
###Markdown
User-Project
###Code
user_project = pd.read_csv('data_old/user_project.csv')
print(user_project.shape)
user_project.head()
user_project_grouped = user_project.groupby(['user_id','project_id']).count()
user_project_grouped.head()
new_user_project = user_project.drop_duplicates(subset=['user_id', 'project_id'], keep='first')
new_user_project.groupby(['user_id','project_id']).count().head()
new_user_project.to_csv('data_table/user_project.csv', index = False)
###Output
_____no_output_____
###Markdown
Article-Project
###Code
article_project = pd.read_csv('data_old/article_project.csv')
print(article_project.shape)
article_project.head()
new_article_project = article_project.drop_duplicates(subset=['article_id', 'project_id'], keep='first')
new_article_project.shape
new_article_project.to_csv('data_table/article_project.csv', index = False)
###Output
_____no_output_____
###Markdown
Seen
###Code
import random
import time
def strTimeProp(start, end, format, prop):
"""Get a time at a proportion of a range of two formatted times.
start and end should be strings specifying times formated in the
given format (strftime-style), giving an interval [start, end].
prop specifies how a proportion of the interval to be taken after
start. The returned time will be in the specified format.
"""
stime = time.mktime(time.strptime(start, format))
etime = time.mktime(time.strptime(end, format))
ptime = stime + prop * (etime - stime)
return time.strftime(format, time.localtime(ptime))
def randomDate(start, end, prop):
return strTimeProp(start, end, '%Y-%m-%d %H:%M:%S', prop)
seen = pd.DataFrame(columns=['article_id','user_id','seen_at'])
seen
from random import randint
for i in range(36500):
article_id = randint(1,100)
user_id = randint(1,100)
seen_at = randomDate("2017-04-04 04:00:00", "2018-04-11 00:00:00", random.random())
seen_row = pd.DataFrame([[article_id, user_id, seen_at]], columns=['article_id','user_id','seen_at'])
seen = seen.append(seen_row, ignore_index=True)
print(seen.shape)
seen.head()
###Output
(36500, 3)
###Markdown
For someone to see an article, they must belong to a project the article was announced to. We get the number of people who have a chance of seeing an article from each project by merging the **article_project** table with **user_project**.
###Code
article_project_user = pd.merge(new_article_project, new_user_project, on='project_id')
article_project_user.groupby('project_id')['user_id'].count()
###Output
_____no_output_____
###Markdown
The numbers shown here are the counts of people who have a chance of seeing an article from each project, over all time.
###Code
article_project_user.head()
###Output
_____no_output_____
###Markdown
When we combine which projects each article was announced to, who belongs to those projects, and who among those people saw the article, we get the number of people who actually saw the article, as follows:
###Code
is_seen = pd.merge(article_project_user, seen, how='inner', left_on=['article_id','user_id'], right_on = ['article_id','user_id'])
print(is_seen.shape)
is_seen.head()
###Output
(57730, 5)
###Markdown
Remove duplicate rows (everything else is identical but seen_at does not match).
###Code
is_seen = is_seen.drop_duplicates(subset=['article_id', 'project_id', 'user_id', 'user_type'], keep='first')
print(is_seen.shape)
is_seen.head()
is_seen.to_csv('data_table/seen.csv', index = False)
###Output
_____no_output_____
###Markdown
Click: click can be treated as a sample of seen (in the case where the content can only be accessed through the app); take about 40%.
###Code
click = seen.sample(frac=0.4)
print(click.shape)
click.head()
###Output
(14600, 3)
###Markdown
Then proceed the same way as with Seen.
###Code
is_click = pd.merge(article_project_user, click, how='inner', left_on=['article_id','user_id'], right_on = ['article_id','user_id'])
print(is_click.shape)
is_click.head()
###Output
(23176, 5)
###Markdown
Add a click time a few minutes after the seen time.
###Code
from datetime import datetime
from datetime import timedelta
from random import randint
def addRandomMinute(time,max_minute):
now = datetime.strptime(time,'%Y-%m-%d %H:%M:%S')
now_plus_minute = now + timedelta(minutes = randint(1,max_minute))
return now_plus_minute.strftime('%Y-%m-%d %H:%M:%S')
addRandomMinute('2018-01-02 11:59:00',10)
new_is_click = is_click[:]
new_is_click['seen_at'] = is_click['seen_at'].apply(addRandomMinute, args=(10,))
new_is_click = new_is_click.rename(columns={'seen_at': 'clicked_at'})
new_is_click.head()
new_is_click.to_csv('data_table/click.csv', index = False)
###Output
_____no_output_____
###Markdown
Read: read is a sample of click; take about 70%.
###Code
read = click.sample(frac=0.7)
print(read.shape)
read.head()
###Output
(10220, 3)
###Markdown
Then proceed the same way as with Seen.
###Code
is_read = pd.merge(article_project_user, read, how='inner', left_on=['article_id','user_id'], right_on = ['article_id','user_id'])
print(is_read.shape)
is_read.head()
###Output
(16227, 5)
###Markdown
Add columns read_time (0-1200 seconds) and read_length (0-100%).
###Code
import numpy as np
len(is_read)
read_times = np.array([])
read_lengths = np.array([])
for i in range(len(is_read)):
read_times = np.append(read_times,[randint(0,1200)])
read_lengths = np.append(read_lengths,[randint(0,100)])
is_read['read_time'] = read_times
is_read['read_length'] = read_lengths
if 'seen_at' in is_read:
del is_read['seen_at']
is_read.head()
is_read.to_csv('data_table/read.csv', index = False)
###Output
_____no_output_____
###Markdown
Add column is_read
Count an article as read when $$\frac{\text{read time user}}{\text{read time article}} \geq 0.7$$ and $$\text{read length} > 70$$
###Code
articles.head()
new_read = pd.merge(is_read, articles[['id','read_time']], how='inner', left_on=['article_id'], right_on = ['id'])
print(new_read.shape)
new_read.head()
new_read['is_read'] = (new_read['read_time_x']/new_read['read_time_y'] > 0.7).astype(int)
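# A fuller flag combining both conditions from the formula above would be (sketch,
# assuming the read_length column is kept):
# new_read['is_read'] = ((new_read['read_time_x'] / new_read['read_time_y'] >= 0.7) &
#                        (new_read['read_length'] > 70)).astype(int)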
if 'id' in new_read:
del new_read['id']
if 'read_time_y' in new_read:
del new_read['read_time_y']
new_read = new_read.rename(columns={'read_time_x':'read_time'})
new_read.head()
###Output
_____no_output_____
###Markdown
Action: action is a sample of read; take about 20%.
###Code
action = is_read.sample(frac=0.2)
print(action.shape)
action.head()
###Output
(3245, 6)
###Markdown
Then proceed the same way as with Seen.
###Code
new_action = pd.merge(article_project_user, action[['article_id','user_id']], how='inner', left_on=['article_id','user_id'], right_on = ['article_id','user_id'])
print(new_action.shape)
new_action.head()
###Output
(8122, 4)
###Markdown
Add columns `save_at, share_at, love_at, sad_at, angry_at`
###Code
import numpy as np
save_at = np.array([])
share_at = np.array([])
love_at = np.array([])
sad_at = np.array([])
angry_at = np.array([])
for i in range(len(new_action)):
save_data = randomDate("2017-04-04 04:00:00", "2018-04-11 00:00:00", random.random()) if randint(0,1) == 1 else ""
save_at = np.append(save_at,[save_data])
share_data = randomDate("2017-04-04 04:00:00", "2018-04-11 00:00:00", random.random()) if randint(0,1) == 1 else ""
share_at = np.append(share_at,[share_data])
action_id = randint(1,3)
if action_id == 1:
love_at = np.append(love_at,[randomDate("2017-04-04 04:00:00", "2018-04-11 00:00:00", random.random())])
sad_at = np.append(sad_at,[""])
angry_at = np.append(angry_at,[""])
elif action_id == 2:
love_at = np.append(love_at,[""])
sad_at = np.append(sad_at,[randomDate("2017-04-04 04:00:00", "2018-04-11 00:00:00", random.random())])
angry_at = np.append(angry_at,[""])
else:
love_at = np.append(love_at,[""])
sad_at = np.append(sad_at,[""])
angry_at = np.append(angry_at,[randomDate("2017-04-04 04:00:00", "2018-04-11 00:00:00", random.random())])
new_action['save_at'] = save_at
new_action['share_at'] = share_at
new_action['love_at'] = love_at
new_action['sad_at'] = sad_at
new_action['angry_at'] = angry_at
new_action.head()
new_action.to_csv('data_table/action.csv', index = False)
from random import randint
randint(1,3)
###Output
_____no_output_____
###Markdown
Data Manipulation and Machine Learning Using Pandas - Python
Abedin Sherifi
Imports
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import os
import seaborn as sns
from scipy.stats import linregress
from scipy.stats import zscore
###Output
_____no_output_____
###Markdown
Directory: change to the working directory, list its files, and get the current directory
###Code
current_dir = os.chdir('/home/dino/Documents/Python_Tutorials/Data Processing/data')
list_files = os.listdir()
print(list_files)
os.getcwd()
###Output
_____no_output_____
###Markdown
Reading the csv file; showing the first 5 rows and all the columns; info regarding the data file; and the shape of the file, meaning number of rows by number of columns
###Code
df = pd.read_csv('auto-mpg.csv')
df.head()
df.info()
df.shape
df.describe()
###Output
_____no_output_____
###Markdown
A box plot is a method for graphically depicting groups of numerical data through their quartiles. The box extends from the Q1 to Q3 quartile values of the data, with a line at the median (Q2).
###Code
plt.figure(figsize=(15,15))
df.boxplot(by='cylinders', column=['mpg'], grid=False);
plt.style.use('seaborn') #seaborn, default, ggplot
plt.title('MPG Grouped By Cylinders')
plt.xlabel('Cylinders')
plt.ylabel('MPG')
###Output
_____no_output_____
###Markdown
A pie plot is a proportional representation of the numerical data in a column.
###Code
df.groupby('cylinders')["mpg"].count().plot(kind='pie')
plt.title('MPG Grouped By Cylinders')
plt.xlabel('Cylinders')
plt.ylabel('MPG')
###Output
_____no_output_____
###Markdown
A heatmap contains values representing various shades of the same color for each value to be plotted. Usually the darker shades of the chart represent higher values than the lighter shade. The varying intensity of color represents the measure of correlation.
###Code
plt.figure(figsize=(8,6))
corr = df.corr()
sns.heatmap(corr,
xticklabels=corr.columns.values,
yticklabels=corr.columns.values)
plt.show()
plt.figure(figsize=(15,10))
ax =sns.boxplot(data=df)
for label in ax.get_xticklabels():
label.set_ha("right")
label.set_rotation(45)
df.values
df.insert(9,'test',(df['mpg']/df['cylinders']).astype(float))
df.columns
df.index
###Output
_____no_output_____
###Markdown
Sorting a column in descending order; indexing the dataframe based on a column threshold; adding a column to the dataframe
###Code
df.sort_values('cylinders', ascending = False)
df['name']
df[df['mpg'] > 20]
df['Dino'] = 2 * df['cylinders']
print(df)
###Output
mpg cylinders displacement horsepower weight acceleration year \
0 18.0 8 307.0 130 3504 12.0 70
1 15.0 8 350.0 165 3693 11.5 70
2 18.0 8 318.0 150 3436 11.0 70
3 16.0 8 304.0 150 3433 12.0 70
4 17.0 8 302.0 140 3449 10.5 70
.. ... ... ... ... ... ... ...
393 27.0 4 140.0 86 2790 15.6 82
394 44.0 4 97.0 52 2130 24.6 82
395 32.0 4 135.0 84 2295 11.6 82
396 28.0 4 120.0 79 2625 18.6 82
397 31.0 4 119.0 82 2720 19.4 82
origin name test Dino
0 1 chevrolet chevelle malibu 2.250 16
1 1 buick skylark 320 1.875 16
2 1 plymouth satellite 2.250 16
3 1 amc rebel sst 2.000 16
4 1 ford torino 2.125 16
.. ... ... ... ...
393 1 ford mustang gl 6.750 8
394 2 vw pickup 11.000 8
395 1 dodge rampage 8.000 8
396 1 ford ranger 7.000 8
397 1 chevy s-10 7.750 8
[398 rows x 11 columns]
###Markdown
Min of a column; dropping duplicates on a column
###Code
df.mpg.min()
df['mpg'].cumsum
df.drop_duplicates(subset='name')
###Output
_____no_output_____
###Markdown
Value counts for a specific column; normalized value counts; aggregate min, max, sum for a specific column
###Code
df['name'].value_counts()
df['name'].value_counts(normalize=True)
df.groupby('name')['mpg'].agg([min, max, sum])
###Output
_____no_output_____
###Markdown
Looking up specific values within a column; dataframe sorting
###Code
df[df['name'].isin(['vw rabbit custom', 'amc concord'])]
df.sort_index()
###Output
_____no_output_____
###Markdown
Histogram plot of a specific column; different plot styles such as fivethirtyeight, seaborn, default, ggplot; line plot; scatter plot
###Code
df['cylinders'].hist()
plt.style.use('fivethirtyeight') #seaborn, default, ggplot
df.plot(x='mpg', y='cylinders', kind='scatter', alpha=0.5)
###Output
_____no_output_____
###Markdown
Checking whether any row of any column is NA; dropping NA rows; filling any NA with the value 0
###Code
df.isna().any()
df.dropna()
df.fillna(0)
###Output
_____no_output_____
###Markdown
Dataframe to csv file
###Code
df.to_csv('Dino_Test.csv')
for col in df.columns:
print(col, df[col].nunique(), len(df))
df
df.drop(['name'], axis=1, inplace=True)
df[['mpg', 'cylinders']].sort_values(by='cylinders').tail(10)
origin_map = {1: 'X', 2: 'Y', 3: 'Z'}
df['origin'].replace(origin_map, inplace=True)
df.head()
df.groupby('origin').mean().plot(kind='bar')
df.dtypes
df['mpg'].describe()
mpg_std = df['mpg'].std()
print(mpg_std)
plt.figure().set_size_inches(8, 6)
plt.semilogx(df['mpg'])
plt.semilogx(df['mpg'], df['mpg'] + mpg_std, 'b--')
plt.semilogx(df['mpg'], df['mpg'] - mpg_std, 'b--')
plt.fill_between(df['mpg'], df['mpg'] + mpg_std, df['mpg'] - mpg_std)
plt.ylabel('CV score +/- std error')
plt.xlabel('alpha')
plt.axhline(np.max(df['mpg']), linestyle='--', color='.5')
from scipy.stats import linregress
x = df['acceleration']
y = df['mpg']
stats = linregress(x, y)
m = stats.slope
b = stats.intercept
plt.scatter(x, y)
plt.plot(x, m*x + b, color="red") # I've added a color argument here
plt.savefig("figure.png")
mpgg = df['mpg']
accel = df['acceleration']
sns.kdeplot(data=mpgg)
plt.savefig('dino.pdf')
sns.distplot(df['mpg'])
import glob
print(glob.glob('*.csv'))
df_list = []
for file in glob.glob('*.csv'):
df = pd.read_csv(file)
df_list.append(df)
df = pd.concat(df_list)
df.shape
df.iloc[0:5, 0:3]
df[:1]
df[-1:]
df[df['name'].apply(lambda state: state[0] == 'p')].head()
#scatter plot weight/mpg
var = 'mpg'
data = pd.concat([df['weight'], df[var]], axis=1)
data.plot.scatter(x=var, y='weight', ylim=(0,10000), alpha=0.3);
var = 'cylinders'
data = pd.concat([df['weight'], df[var]], axis=1)
f, ax = plt.subplots(figsize=(8, 6))
fig = sns.boxplot(x=var, y="weight", data=data)
fig.axis(ymin=0, ymax=10000);
corrmat = df.corr()
f, ax = plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, vmax=.8, square=True);
sns.set()
cols = ['mpg', 'acceleration', 'weight']
sns.pairplot(df[cols], size = 2.5)
plt.show();
#histogram and normal probability plot
from scipy import stats
sns.distplot(df['mpg'], fit=stats.norm);
fig = plt.figure()
res = stats.probplot(df['mpg'], plot=plt)
df.head()
from sklearn import preprocessing
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import shutil
import os
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df, name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = "{}-{}".format(name, x)
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
# Encode text values to a single dummy variable. The new columns (which do not replace the old) will have a 1
# at every location where the original column (name) matches each of the target_values. One column is added for
# each target value.
def encode_text_single_dummy(df, name, target_values):
for tv in target_values:
l = list(df[name].astype(str))
l = [1 if str(x) == str(tv) else 0 for x in l]
name2 = "{}-{}".format(name, tv)
df[name2] = l
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df, name):
le = preprocessing.LabelEncoder()
df[name] = le.fit_transform(df[name])
return le.classes_
# Encode a numeric column as zscores
def encode_numeric_zscore(df, name, mean=None, sd=None):
if mean is None:
mean = df[name].mean()
if sd is None:
sd = df[name].std()
df[name] = (df[name] - mean) / sd
# Convert all missing values in the specified column to the median
def missing_median(df, name):
med = df[name].median()
df[name] = df[name].fillna(med)
# Convert all missing values in the specified column to the default
def missing_default(df, name, default_value):
df[name] = df[name].fillna(default_value)
# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df, target):
result = []
for x in df.columns:
if x != target:
result.append(x)
# find out the type of the target column. Is it really this hard? :(
target_type = df[target].dtypes
target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type
# Encode to int for classification, float otherwise. TensorFlow likes 32 bits.
if target_type in (np.int64, np.int32):
# Classification
dummies = pd.get_dummies(df[target])
return pd.DataFrame(df,columns=result).to_numpy(dtype=np.float32), dummies.to_numpy(dtype=np.float32)
else:
# Regression
return pd.DataFrame(df,columns=result).to_numpy(dtype=np.float32), pd.DataFrame(df,columns=[target]).to_numpy(dtype=np.float32)
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s)
# Regression chart.
def chart_regression(pred,y,sort=True):
t = pd.DataFrame({'pred' : pred, 'y' : y.flatten()})
if sort:
t.sort_values(by=['y'],inplace=True)
a = plt.plot(t['y'].tolist(),label='expected')
b = plt.plot(t['pred'].tolist(),label='prediction')
plt.ylabel('output')
plt.legend()
plt.show()
# Remove all rows where the specified column is +/- sd standard deviations
def remove_outliers(df, name, sd):
drop_rows = df.index[(np.abs(df[name] - df[name].mean()) >= (sd * df[name].std()))]
df.drop(drop_rows, axis=0, inplace=True)
# Encode a column to a range between normalized_low and normalized_high.
def encode_numeric_range(df, name, normalized_low=-1, normalized_high=1,
data_low=None, data_high=None):
if data_low is None:
data_low = min(df[name])
data_high = max(df[name])
df[name] = ((df[name] - data_low) / (data_high - data_low)) \
* (normalized_high - normalized_low) + normalized_low
os.getcwd()
###Output
_____no_output_____
###Markdown
In a neural network, the activation function is responsible for transforming the summed weighted input from the node into the activation of the node or output for that input. The rectified linear activation function or ReLU for short is a piecewise linear function that will output the input directly if it is positive, otherwise, it will output zero. It has become the default activation function for many types of neural networks because a model that uses it is easier to train and often achieves better performance.
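As a quick sketch (not part of the model below), ReLU is just an element-wise maximum with zero:

```python
import numpy as np

def relu(x):
    # rectified linear unit: negatives become 0, positives pass through unchanged
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0])))
```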
###Code
from keras.models import Sequential
from keras.layers.core import Dense, Activation
import pandas as pd
import io
import requests
import numpy as np
from sklearn import metrics
from keras.layers import Dropout
filename_read = "auto-mpg.csv"
df = pd.read_csv(filename_read,na_values=['NA','?'])
print(df.head())
cars = df['name']
df.drop('name',1,inplace=True)
missing_median(df, 'horsepower')
x,y = to_xy(df,"mpg")
model = Sequential()
model.add(Dense(100, input_dim=x.shape[1], activation='relu')) # Hidden 1
model.add(Dense(10, activation='relu')) # Hidden 2
model.add(Dense(1)) # Output
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x,y,verbose=2,epochs=100)
pred = model.predict(x)
print("Shape: {}".format(pred.shape))
print(pred)
chart_regression(pred.flatten(),y, sort=False)
###Output
_____no_output_____
###Markdown
Root Mean Square Error is the square root of the average of the squared differences between the estimated and the actual value of the variable/feature.
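As a sketch of the formula itself (the cell below uses sklearn's `mean_squared_error` on the actual predictions):

```python
import numpy as np

def rmse(y_true, y_pred):
    # square root of the mean of the squared differences
    return np.sqrt(np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2))

print(rmse([3.0, 5.0, 2.5], [2.5, 5.0, 4.0]))  # hypothetical values, not from the dataset
```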
###Code
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y))
print("Final score (RMSE): {}".format(score))
# Sample predictions
for i in range(10):
print("{}. Car name: {}, MPG: {}, predicted MPG: {}".format(i+1,cars[i],y[i],pred[i]))
###Output
1. Car name: chevrolet chevelle malibu, MPG: [18.], predicted MPG: [15.265084]
2. Car name: buick skylark 320, MPG: [15.], predicted MPG: [14.315575]
3. Car name: plymouth satellite, MPG: [18.], predicted MPG: [15.7501745]
4. Car name: amc rebel sst, MPG: [16.], predicted MPG: [16.1137]
5. Car name: ford torino, MPG: [17.], predicted MPG: [15.271221]
6. Car name: ford galaxie 500, MPG: [15.], predicted MPG: [9.591824]
7. Car name: chevrolet impala, MPG: [14.], predicted MPG: [9.530954]
8. Car name: plymouth fury iii, MPG: [14.], predicted MPG: [9.54981]
9. Car name: pontiac catalina, MPG: [14.], predicted MPG: [9.451338]
10. Car name: amc ambassador dpl, MPG: [15.], predicted MPG: [12.510649]
###Markdown
Ayiti Analytics Data Processing Bootcamp
Ayiti Analytics wants to expand its training centers throughout all the communes of the country. Your role as a data analyst is to help them realize this dream. The objective is to identify the three communes of the country most likely to host new training centers. Knowing that each cohort must have 30 students:
* How many applications must be received, on average, to select 25% women in each cohort
* What are the most effective communication channels (Alumni, Facebook, WhatsApp, Friend ...) that make a student likely to be selected
* What is the average number of university students who should participate in this program
* What would be the average number of applications per week
* How many weeks should we extend the application process to select 60 students per commune?
* If the whole bootcamp were online, which communes would be best, how many applications would we need to select 30 students, and what percentage of students would have a laptop, an internet connection, or both
* What are the most effective communication channels (Alumni, Facebook, WhatsApp, Friend ...) that make a woman likely to be selected

NB: Use the same framework as the BA project to complete this project.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from datetime import date
commune=pd.read_excel(r"commune.xlsx")
enroll = pd.read_csv(r"enroll.csv")
quest = pd.read_csv(r"quest.csv")
industry = pd.read_csv(r"industry.csv")
ord = pd.read_csv(r"ord.csv")
study_domain = pd.read_csv(r"study_domain.csv")
transaction = pd.read_csv(r"transaction.csv")
technology = pd.read_csv(r"technology.csv")
study_domain1 = pd.get_dummies(data=study_domain[["quest_id", "values"]], columns=['values'], prefix="", prefix_sep="")
study_domain2=study_domain1.groupby("quest_id").sum()
#study_domain= study_domain.drop(columns="key")
#study_domain.set_index('quest_id')
#study_domain
technologyy = pd.get_dummies(data=technology[["key", "quest_id", "values"]], columns=['values'], prefix="", prefix_sep="")
technologyyy=technologyy.groupby("quest_id").sum()
industry1=pd.get_dummies(data=industry[["quest_id","key","values"]], columns= ["values"], prefix="", prefix_sep="")
industry2= industry1.groupby("quest_id").sum()
#industry2
#quest1=quest.groupby("quest_id").sum()
quest['department'] = quest['department'].apply(lambda x : str(x))
quest['department']= quest['department'].apply(lambda x : x.upper())
quest['commune']= quest['commune'].apply(lambda x : x.upper())
quest
merge5=pd.merge(quest,commune, how = 'left', left_on=['department','commune'], right_on=['ADM1_PCODE','Commune_Id'])
#mergee=merge5.isna().sum()
#merge5=merge5.drop(columns=['Commune_en', 'modified_at'])
merge5['created_at'] =merge5['created_at'].apply(lambda x : str(x).split("T")[0])
merge11=pd.merge(left=merge5, right=study_domain2, how = 'left',on='quest_id')
transaction['Payment Method'] = 'Moncash'
ord['Payment Method'] = 'Credit Card/Paypal'
x = transaction.loc[:,['Payment Method','user_id']]
y = ord.loc[:,['Payment Method','user_id']]
trans_ord= pd.concat([x,y],axis=0)
enroll1=pd.merge(enroll,trans_ord, how = 'left',on = ['user_id'] )
#enroll1.shape
#enrol=enroll.groupby('user_id').sum()
enroll11= enroll1.loc[:,['Payment Method','user_id','quest_id']]
moy_enroll=enroll1['percentage_completed'].value_counts(ascending=True).mean()
moy_enroll
moy_enroll= moy_enroll/10
en=enroll1[enroll1['percentage_completed'] > moy_enroll]
en['percentage_completed'].to_frame()
merge200=pd.merge(left=en, right=merge5, how = 'left',on='quest_id')
prob_category(data=merge200,top_n =4 ,col="Commune_FR",abs_value ="Total",rel_value ="Percent",show_plot=True, title="",figsize=(10,5))
prob_category(data=merge200,top_n =4 ,col="hear_AA_1",abs_value ="Total",rel_value ="Percent",show_plot=True, title="",figsize=(10,5))
hearr.sort_values(by=('count','female'),ascending=False).head(5)
merge20=pd.merge(left=merge11, right=enroll11, how = 'left',on='quest_id')  # right-hand table assumed to be enroll11 (it shares the quest_id key)
merge20['created_at'].isnull().sum()
#merge20['dob'] = pd.to_datetime(merge20['dob'])
final_merge.set_index('quest_id')
final_merge
final_merge['dob'] = final_merge['dob'].astype(str)
final_merge['dob'].replace({'3 aout 1977':'03/08/1977'},inplace = True)
final_merge['dob'] = pd.to_datetime(final_merge['dob'])
def Calculate_Age(born) :
today = date(2021, 6, 18)
return today.year - born.year - ((today.month,today.day)< (born.month,born.day))
final_merge['Age'] = final_merge['dob'].apply(Calculate_Age)
final_merge.reset_index()
#check_for_nan = final_merge['Age'].isnull().sum()
#check_for_nan
move = final_merge.pop('Age')
final_merge.insert(3,'Age',move)
final_merge['Age'] = final_merge['Age'].fillna(final_merge['Age'].mean())
final_merge['Age'] = final_merge['Age'].astype(int)
final_merge['quest_id']
final_merge.columns
final_merge['Age'].isnull().sum()
for col in final_merge.columns:
print(f"{col} ->{final_merge[col].nunique()}")
g=pd.isnull(final_merge['Age'])
final_merge[g]
male = final_merge[final_merge.gender=="male"]
female = final_merge[final_merge.gender == "female"]
final_merge.reset_index()
final_merge.reset_index()
final_merge
result3 = pd.pivot_table(final_merge,'quest_id',index = ['gender'],columns=['hear_AA_1'],aggfunc=['count'],fill_value = 0,margins=True)
plt.figure(figsize=(20,15))
ax = result3.sort_index().T.plot(kind='bar',figsize=(15,6))
ylab = ax.set_ylabel('Number of Applicants')
xlab = ax.set_xlabel('Channel')
result3
def generate_barchart(data, title ="",abs_value ="Total",rel_value="Percent",figsize =(10,6)):
plt.figure(figsize=figsize)
axes = sns.barplot(data=data,y=data.index,x=abs_value)
i=0
for tot, perc in zip(data[abs_value],data[rel_value]):
axes.text(tot/2,
i,
str(np.round(perc*100,2))+ "%",
fontdict=dict(color='White',fontsize=12,horizontalalignment="center")
)
axes.text(tot+3,
i,
str(tot),
fontdict=dict(color='blue',fontsize=12,horizontalalignment="center")
)
i+=1
plt.title(title)
plt.show()
def prob_category(data,top_n,col="Pclass_letter", abs_value ="Total",rel_value ="Percent",show_plot=False, title="",figsize=()):
# absolute value
res1 = data[col].value_counts().to_frame()
res1.columns = [abs_value]
res2 = data[col].value_counts(normalize=True).to_frame()
res2.columns = [rel_value]
if not show_plot:
return pd.concat([res1,res2],axis=1).head(top_n)
else:
result = pd.concat([res1,res2],axis=1).head(top_n)
generate_barchart(data=result, title =title,abs_value =abs_value,rel_value=rel_value,figsize =figsize)
return result
#fifi=pd.pivot_table(final_merge,'quest_id',index = ['gender'],columns=['Commune_FR'],aggfunc=['count'],fill_value = 0,margins=True)
prob_category(data=final_merge,top_n =4 ,col="Commune_FR",abs_value ="Total",rel_value ="Percent",show_plot=True, title="",figsize=(10,5))
#fifi=pd.pivot_table(final_merge,'quest_id',index = ['Commune_FR'],columns=['gender'],aggfunc=['count'],fill_value = 0)
#fifi=fifi.iloc[]
#fifi=fifi.sort_values(by=('count','female','male'),ascending = False)
#prob_category(data=fifi,top_n =4 ,col="gender",abs_value ="Total",rel_value ="Percent",show_plot=True, title="",figsize=(10,5))
prob_category(data=final_merge ,top_n=7, col="education_level",abs_value ="Total",rel_value ="Percent",show_plot=True, title="",figsize=(10,10))
result999 =final_merge[(final_merge['education_level'] =='Bachelors (bacc +4)') | (final_merge['education_level'] =='Masters') | (final_merge['education_level'] =='Doctorate (PhD, MD, JD)') ]
result999
result999.shape[0]/final_merge.shape[0]
result2 = pd.pivot_table(final_merge,'quest_id',index = ['gender'],columns=['Commune_FR'],aggfunc=['count'],fill_value=0)
#res=result2.sort_values(by=('count','male'),ascending=False)
#res=res.iloc[:5,:]
#generate_barchart(data=res,title="Total et Percent By Sex",abs_value="Total",rel_value="Percent")
plt.figure(figsize=(10,6))
ax = result2.sort_index().T.plot(kind='bar',figsize=(15,6))
ylab = ax.set_ylabel('Number of Applicants')
xlab = ax.set_xlabel('Commune')
#prob_category(data=final_merge,top_n =4 ,col="hear_AA_1",abs_value ="Total",rel_value ="Percent",show_plot=True, title="",figsize=(10,5))
result2
result4 = pd.pivot_table(final_merge,'quest_id',index = ['gender'],columns=['university'],aggfunc=['count'],fill_value = 0,margins=True)
result4
final_merge
quest.set_index('gender')
a=quest.loc[:,['university']]
a
quest.columns
#gg= pd.get_dummies(data=quest[["quest_id", "gender",'university']], columns=['gender'], prefix="", prefix_sep="")
#gg=gg.groupby("university").sum()
def generate_barchar(data, title ="",abs_value ="Total",rel_value="Percent",figsize =(10,6)):
plt.figure(figsize=figsize)
axes = sns.barplot(data=data,y=data.index,x=abs_value)
i=0
for tot, perc in zip(data[abs_value],data[rel_value]):
axes.text(tot/2,
i,
str(np.round(perc,2))+ "%",
fontdict=dict(color='White',fontsize=12,horizontalalignment="center")
)
axes.text(tot+3,
i,
str(tot),
fontdict=dict(color='blue',fontsize=12,horizontalalignment="center")
)
i+=1
plt.title(title)
plt.show()
e = pd.pivot_table(final_merge,'quest_id',index='Commune_FR',columns=['internet_at_home','have_computer_home'],aggfunc = ['count'],fill_value=0)
#app = e.sort_values(by=('count','Yes','Yes'),ascending = False)
e
g=e.iloc[:,3:4]
g
#prob_category(data=g ,top_n=7, col='internet_at_home','have_computer_home',abs_value ="Total",rel_value ="Percent",show_plot=True, title="",figsize=(10,10))
both=g.sort_values(by=('count','Yes','Yes'),ascending = False)
#g['Percent'] = g[('count','Yes','Yes')]/g.shape[0]
#prob_category(data=g ,top_n=5, col=('count','Yes','Yes'),abs_value ="Total",rel_value ="Percent",show_plot=True, title="",figsize=(10,15))
#generate_barchart(g, title ="",abs_value =('count','Yes','Yes'),rel_value="Percent",figsize =(10,6))
both=both.iloc[:4,:]
both['Percent'] = both[('count','Yes','Yes')]/g.shape[0]
generate_barchar(both, title ="",abs_value =('count','Yes','Yes'),rel_value="Percent",figsize =(10,6))
both
resss=pd.pivot_table(final_merge,'quest_id',index = ['gender'],columns=['education_level'],aggfunc=['count'],fill_value = 0)
resss
final_merge.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 250 entries, 0 to 249
Data columns (total 58 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 250 non-null int64
1 gender 250 non-null object
2 dob 244 non-null datetime64[ns]
3 Age 250 non-null int32
4 commune 250 non-null object
5 created_at 250 non-null object
6 department 250 non-null object
7 education_level 250 non-null object
8 university 250 non-null object
9 study_domain 250 non-null object
10 current_employed 250 non-null object
11 formal_sector_job 250 non-null object
12 have_computer_home 250 non-null object
13 internet_at_home 250 non-null object
14 hear_AA_1 250 non-null object
15 after_AA 250 non-null object
16 quest_id 250 non-null object
17 Commune_FR 248 non-null object
18 Commune_Id 248 non-null object
19 Departement 248 non-null object
20 ADM1_PCODE 248 non-null object
21 Accounting 244 non-null float64
22 Computer Science 244 non-null float64
23 Economics 244 non-null float64
24 Electrical Engineering 244 non-null float64
25 Law 244 non-null float64
26 Management 244 non-null float64
27 Medicine 244 non-null float64
28 Statistics 244 non-null float64
29 other_x 244 non-null float64
30 Payment Method 65 non-null object
31 user_id 102 non-null float64
32 Bash 244 non-null float64
33 Excel 244 non-null float64
34 Git 244 non-null float64
35 Java 244 non-null float64
36 JavaScript 244 non-null float64
37 PHP 244 non-null float64
38 PowerBI or Tableau 244 non-null float64
39 Python 244 non-null float64
40 R 244 non-null float64
41 SQL 244 non-null float64
42 VBA 244 non-null float64
43 other_y 244 non-null float64
44 Communications 246 non-null float64
45 Consulting 246 non-null float64
46 Education 246 non-null float64
47 Energy 246 non-null float64
48 Finance 246 non-null float64
49 Healthcare 246 non-null float64
50 Insurance 246 non-null float64
51 Manufacturing 246 non-null float64
52 Marketing 246 non-null float64
53 Public Sector/ Non-Profit Agencies 246 non-null float64
54 Retail/ E-Commerce 246 non-null float64
55 Technology (Software/ Internet) 246 non-null float64
56 Transportation 246 non-null float64
57 other 246 non-null float64
dtypes: datetime64[ns](1), float64(36), int32(1), int64(1), object(19)
memory usage: 124.3+ KB
###Markdown
Data preparation for PRA2 (Data Visualization): raw_Space_Corrected.csv
###Code
import pandas as pd
import numpy as np
# Open file1
df_launch = pd.read_csv('raw_Space_Corrected.csv', dtype={'Rocket':np.float64})
print(df_launch.columns)
print(df_launch.shape)
# Drop columns
df_launch = df_launch.drop(['Unnamed: 0', 'Unnamed: 0.1'], axis=1)
# Country column
df_launch["Country"] = df_launch["Location"].apply(lambda location: location.split(", ")[-1])
# Replace wrong countries
df_launch['Country'] = df_launch['Country'].replace(['Yellow Sea'], 'China')
# https://en.wikipedia.org/wiki/Yellow_Sea
df_launch['Country'] = df_launch['Country'].replace(['Shahrud Missile Test Site'], 'Iran')
# https://www.shymkent.info/space/spaceports/shahrud-missile-test-site/
df_launch['Country'] = df_launch['Country'].replace(['Pacific Missile Range Facility'], 'USA')
# https://en.wikipedia.org/wiki/Pacific_Missile_Range_Facility
df_launch['Country'] = df_launch['Country'].replace(['Gran Canaria'], 'USA')
# https://nextspaceflight.com/launches/details/228
df_launch['Country'] = df_launch['Country'].replace(['Barents Sea'], 'Russia')
# https://nextspaceflight.com/launches/details/1344
df_launch['Country'] = df_launch['Country'].replace(['Pacific Ocean'], 'Ukraine')
# All launches from LP Odyssey (Pacific Ocean) used Zenit-3SL rocket model
# https://en.wikipedia.org/wiki/Zenit-3SL
df_launch['Country'] = df_launch['Country'].replace(['New Mexico'], 'USA')
# Time info
df_launch['Datum'] = pd.to_datetime(df_launch['Datum'])
df_launch['Year'] = df_launch['Datum'].apply(lambda datetime: datetime.year)
df_launch['Month'] = df_launch['Datum'].apply(lambda datetime: datetime.month)
df_launch['Hour'] = df_launch['Datum'].apply(lambda datetime: datetime.hour)
# Extract Cargo, Model and Serie from Detail
df_launch["Cargo"] = df_launch["Detail"].apply(lambda detail: detail.split(" | ")[1])
df_launch["Model"] = df_launch["Detail"].apply(lambda detail: detail.split(" | ")[0])
df_launch["Serie"] = df_launch["Model"].apply(lambda detail: detail.split(" ")[0])
df_launch["Serie"] = df_launch["Serie"].apply(lambda detail: detail.split("-")[0])
df_launch["Serie"] = df_launch["Serie"].apply(lambda detail: detail.split("/")[0])
df_launch['Serie'] = df_launch['Serie'].replace(["Shtil'"], 'Shtil')
df_launch = df_launch.drop(['Detail'], axis=1)
# Replace some Series
df_launch['Serie'] = df_launch['Serie'].replace(['Commercial'], 'Commercial Titan')
df_launch['Serie'] = df_launch['Serie'].replace(['Black'], 'Black Arrow')
df_launch['Serie'] = df_launch['Serie'].replace(['Blue'], 'Blue Scout')
df_launch['Serie'] = df_launch['Serie'].replace(['Feng'], 'Feng Bao')
df_launch['Serie'] = df_launch['Serie'].replace(['GSLV'], 'GSLV Mk')
df_launch['Serie'] = df_launch['Serie'].replace(['Long'], 'Long March')
df_launch['Serie'] = df_launch['Serie'].replace(['Space'], 'Space Shuttle')
# Replace some Company
df_launch['Company Name'] = df_launch['Company Name'].replace(["Arm??e de l'Air"], "Armè de l'Air")
# Reorder columns
df_launch = df_launch[['Country', 'Location', 'Year', 'Month', 'Hour', 'Datum',
'Company Name', 'Model', 'Serie', 'Cargo', 'Status Mission', ' Rocket', 'Status Rocket']]
df_launch.rename(columns={" Rocket": "Price_launch", "Company Name": "Company"}, inplace=True)
df_launch.head(3)
###Output
_____no_output_____
###Markdown
------- raw_all_rockets_from_1957.csv
###Code
# Open file
df_rockets = pd.read_csv('raw_all_rockets_from_1957.csv', dtype={'Payload to LEO': float, 'Payload to GTO':float})
print(df_rockets.columns)
print(df_rockets.shape)
# Liftoff Thrust
df_rockets['Liftoff Thrust'] = df_rockets['Liftoff Thrust'].str.replace(',','')
df_rockets['Liftoff Thrust'] = df_rockets['Liftoff Thrust'].fillna("0")
# Rocket Height
df_rockets['Rocket Height'] = df_rockets['Rocket Height'].str.replace(' m','')
# Stages
df_rockets['Stages'] = df_rockets['Stages'].apply(str).str.replace('.0','')
# Strap-ons
df_rockets['Strap-ons'] = df_rockets['Strap-ons'].apply(str).str.replace('.0','')
# Price
df_rockets['Price'] = df_rockets['Price'].str.replace(' million','').str.replace('$','').str.replace(',','')
df_rockets[df_rockets['Price'] == "5000.0"] = np.nan # https://en.wikipedia.org/wiki/Energia#Development (commas were stripped above, so match without the comma)
# Fairing Diameter
df_rockets['Fairing Diameter'] = df_rockets['Fairing Diameter'].str.replace(' m','')
# Fairing Height
df_rockets['Fairing Height'] = df_rockets['Fairing Height'].str.replace(' m','')
df_rockets.rename(columns={"Name": "Model"}, inplace=True)
df_rockets = df_rockets.drop(['Unnamed: 0'], axis=1)
df_rockets.head()
df_rockets.isna().sum()
###Output
_____no_output_____
###Markdown
Merging the two datasets
###Code
df = pd.merge(left=df_launch, right=df_rockets, left_on='Model', right_on='Model', how='left')
df.head()
df.columns
# Reorder columns
df = df[['Country', 'Location', 'Year', 'Month', 'Hour', 'Datum', 'Status Mission', 'Price_launch',
'Company', 'Model', 'Serie', 'Cargo', 'Price', 'Status Rocket', 'Status', 'Liftoff Thrust',
'Stages', 'Strap-ons', 'Rocket Height', 'Fairing Diameter', 'Fairing Height',
'Payload to LEO', 'Payload to GTO' ,'Wiki']]
# Remove NaN
df = df.loc[pd.notnull(df['Stages'])]
df.isnull().sum()
df.dtypes
df.to_csv('launches_rockets.csv', header=True, index=False)
###Output
_____no_output_____
###Markdown
For coherence measurement in FieldTrip
###Code
trial_len = 2 # second
remove_first = 0.5 # second
delay = np.arange(-5,5.25,0.25) / 10
features = ['envelop','lipaparature']
# export to matlab filedtrip format
D = np.round(abs(delay *resample_freq),decimals=0)
for s in range(0,len(subject_name)):
save_path = data_path + '/python/data/preprocessed/'+subject_name[s]+'_eegEMAdownsampled_'\
+str(resample_freq)+'.pkl'
data = pd.read_pickle(save_path)
EEG = []
EMA = []
A = []
for i in range(0,data.shape[0]):
t = np.argmin(abs(data.iloc[i]['eegTime'] - 0))
eeg = data.iloc[i]['eeg'][:,t:]
ema = np.stack(data.iloc[i][features].get_values())
if(eeg.shape[1]>ema.shape[1]):
ema = np.pad(ema, ((0,0),(0,eeg.shape[1]-ema.shape[1])), 'constant')
elif(ema.shape[1]>eeg.shape[1]):
eeg = np.pad(eeg, ((0,0),(0,ema.shape[1]-eeg.shape[1])), 'constant')
EEG.append(eeg)
EMA.append(ema)
A.append(eeg.shape[1])
B = np.zeros((data.shape[0],59,max(A)))
C = np.zeros((data.shape[0],len(features),max(A)))
for i in range(0,B.shape[0]):
B[i,:,:EEG[i].shape[1]] = EEG[i]
C[i,:,:EMA[i].shape[1]] = EMA[i]
# all data
EEG = B
EMA = C
# remove first
EEG = EEG[:,:,int(remove_first*100):]
EMA = EMA[:,:,int(remove_first*100):]
for d in range(0,len(delay)):
t = int(D[d])
if(delay[d]<0):
ema = EMA[:,:,t:]
eeg = EEG[:,:,:-t]
elif(delay[d]==0):
ema = EMA
eeg = EEG
else:
ema = EMA[:,:,:-t]
eeg = EEG[:,:,t:]
A = np.concatenate((eeg,ema),axis=1)
if(trial_len*resample_freq<=A.shape[2]):
A = A[:,:,:trial_len*resample_freq+1]
save_path = data_path + '/python/data/coherence_analysis_matlab/'+subject_name[s]+\
'-trialLength'+str(trial_len)+'-delay'+str(delay[d])+'.mat'
scipy.io.savemat(save_path, {'data':A,'label':np.stack(info.ch_names)})
else:
print('---error-----trial length is bigger')
###Output
_____no_output_____
###Markdown
Loading the Keras packageWe begin by loading keras and the other packages
###Code
import keras
import os
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import preprocessing, svm, linear_model
%matplotlib inline
###Output
_____no_output_____
###Markdown
First, define the `symbol_to_path` function to return the path of a CSV file, and then define a function `get_data` to get the 'Adj Close' column of the dataframe.
###Code
#Define a function to return the path of csv
def symbol_to_path(symbol, base_dir="NASDAQ"):
return os.path.join(base_dir, "{}.csv".format(str(symbol)))
#Define a function to get DataFrame 'Adj Close'
def get_data(symbol):
df = pd.read_csv(symbol_to_path(symbol), usecols=['Adj Close'])
data0 = np.array(df)
if len(data0 == 188):
data1 = data0
return data1
###Output
_____no_output_____
###Markdown
Plot the normalized price of 5 different stocks and SPDR S&P 500 ETF (SPY)
###Code
symbols = ['AAPL', 'GOOG', 'TSLA', 'TURN', 'FLWS']
def get_stock(symbols, dates):
df=pd.DataFrame(index=dates)
df_temp = pd.read_csv('SPY.csv', index_col = 'Date', parse_dates = True, usecols=['Date','Adj Close'], na_values=['nan'])
df_temp = df_temp.rename(columns = {'Adj Close':'SPY'})
df = df.join(df_temp)
df = df.dropna(subset=["SPY"])
for symbol in symbols:
df_temp = pd.read_csv(symbol_to_path(symbol), index_col = 'Date', parse_dates = True, usecols=['Date','Adj Close'], na_values=['nan'])
df_temp = df_temp.rename(columns = {'Adj Close':symbol})
df = df.join(df_temp)
return df
def plot_data(df, title = "Stock price"):
ax = df.plot(title = title, fontsize = 10, grid = True)
ax.set_xlabel("Date")
ax.set_ylabel("Price")
plt.show()
def normalize_data(df):
df = df/df.iloc[0,:]
return df
def plot_stock():
start_date='2017-01-01'
end_date='2017-9-29'
dates=pd.date_range(start_date,end_date)
df = get_stock(symbols, dates)
df = normalize_data(df)
plot_data(df)
plot_stock()
###Output
_____no_output_____
###Markdown
For future use, we read stock data from the folder 'NASDAQ'. Because we only predict the adjusted close price of the stock, we only use this column for each stock.
###Code
import glob
path_lst=glob.glob(r'NASDAQ/*.csv')
stocks = []
for path in path_lst:
df = pd.read_csv(path)
df = df.fillna(method='ffill')
stk = np.array(df['Adj Close'])
stk = stk[np.logical_not(np.isnan(stk))]
#stk_preprocessing = preprocessing.scale(stk)
if(len(stk)==188):
stocks.append(stk)
###Output
_____no_output_____
###Markdown
Print several stock prices in a figure. Slice the stock data into non-overlapping 30-day windows. For each slice, the first 29 days' prices are the input while the 30th day's price is the output. Remember, the last day of the data should not be involved, because it is considered to be 'future'!
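A sketch of the windowing on a hypothetical toy series, to make the 29-input / 1-target split explicit:

```python
import numpy as np

toy_prices = np.arange(1, 91, dtype=float)  # hypothetical 90-day price series
# non-overlapping 30-day windows, as in the cell below
windows = [toy_prices[i:i + 30] for i in range(0, len(toy_prices) - 30, 30)]
X_toy = np.array([w[:29] for w in windows])  # first 29 days -> input
y_toy = np.array([w[29] for w in windows])   # 30th day -> target
print(X_toy.shape, y_toy.shape)              # (2, 29) (2,)
```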
###Code
'''
slice_len = 30 ## pretend we have all stock data except the last day.
stocks_sliced = []
for i in range(len(stocks)):
pointer = 0
stk = stocks[i]
while(pointer + slice_len < len(stk)):
stocks_sliced.append(stk[pointer:pointer+slice_len])
pointer = pointer+slice_len
stocks_sliced = np.array(stocks_sliced)
print(np.shape(stocks_sliced))
'''
slice_len = 30  # window length: 29 days of input, the 30th day is the target (reused later in the notebook)
stocks_sliced = []
for i in range(len(stocks)):
    pointer = 0
    stk = stocks[i]
    while(pointer + slice_len < len(stk)):
        stocks_sliced.append(stk[pointer:pointer+slice_len])
        pointer = pointer+slice_len
stocks_sliced = np.array(stocks_sliced)
X_tr = stocks_sliced[:,0:29]
lastday = stocks_sliced[:,29]
day_before_lastday = stocks_sliced[:,28]
y_tr = np.array([])
for i in range(len(lastday)):
if(lastday[i]>day_before_lastday[i]):
y_tr = np.append(y_tr,[1])
else:
y_tr = np.append(y_tr,[0])
X_tr = np.array(X_tr)
X_tr = preprocessing.scale(X_tr)
import random
X_ts_pre = random.sample(stocks, 2000)
X_ts = []
y_ts = np.array([])
k = 0
for i in X_ts_pre:
if len(i) > 29:
X_ts.append(i[len(i)-30:len(i)-1])
if i[len(i)-1]>i[len(i)-2]:
y_ts = np.append(y_ts,[1])
else:
y_ts = np.append(y_ts,[0])
X_ts = np.array(X_ts)
X_ts = preprocessing.scale(X_ts)
###Output
_____no_output_____
###Markdown
Next, we run a logistic regression model. The parameter `C` is the inverse of the regularization strength (a large `C` such as 1e5 means very weak regularization). We then fit the model.
###Code
# logistic regression
logreg = linear_model.LogisticRegression(C=1e5)
logreg.fit(X_tr, y_tr)
###Output
_____no_output_____
###Markdown
We can next calculate the accuracy on the test data.
###Code
y_ts_pred = logreg.predict(X_ts)
acc = np.mean(y_ts == y_ts_pred)
print("Accuracy on training data = %f" % acc)
###Output
Accuracy on training data = 0.533000
###Markdown
For the SVM model, we transform the labels to 1 and -1: 1 means the stock price increased compared to the previous day, and -1 means it decreased.
###Code
X_tr_svm = stocks_sliced[:,0:29]
lastday = stocks_sliced[:,29]
day_before_lastday = stocks_sliced[:,28]
y_tr_svm = np.array([])
for i in range(len(lastday)):
if(lastday[i]>day_before_lastday[i]):
y_tr_svm = np.append(y_tr_svm,[1])
else:
y_tr_svm = np.append(y_tr_svm,[-1])
X_tr_svm = np.array(X_tr_svm)
import random
X_ts_pre = random.sample(stocks, 2000)
X_ts_svm = []
y_ts_svm = np.array([])
k = 0
for i in X_ts_pre:
if len(i) > 29:
X_ts_svm.append(i[len(i)-30:len(i)-1])
if i[len(i)-1]>i[len(i)-2]:
y_ts_svm = np.append(y_ts_svm,[1])
else:
y_ts_svm = np.append(y_ts_svm,[-1])
X_ts_svm = np.array(X_ts_svm)
print(X_ts_svm.shape)
stocks_sliced = []
for i in range(len(stocks)):
pointer = 0
stk = stocks[i]
while(pointer + 30 < len(stk)):
stocks_sliced.append(stk[pointer:pointer+30])
pointer = pointer+30
stocks_sliced = np.array(stocks_sliced)
###Output
_____no_output_____
###Markdown
Market momentum is measured by continually taking price differences over a fixed time interval. To construct a 10-day momentum value, divide the last closing price by the closing price 10 days ago and subtract 1; this gives the percentage increase or decrease of the price compared to 10 days ago. Finally, we take the mean of 20 such 10-day momentum values as one feature.
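For a single day this is just $M_{10}(t) = P_t / P_{t-10} - 1$; for example (hypothetical prices), if the close 10 days ago was 100 and today's close is 105, the 10-day momentum is $105/100 - 1 = 0.05$, a 5% rise.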
###Code
#10-day momentum
def momentum(stocks_sliced):
mt = []
for i in range(20):
mt.append(stocks_sliced[:,i+10]/stocks_sliced[:,i]-1)
mt = np.array(mt).T
mt = np.mean(mt, axis=1)[:,None]
return mt
###Output
_____no_output_____
###Markdown
The simplest form of a moving average, appropriately known as a simple moving average (SMA), is calculated by taking the arithmetic mean of a given set of values. For example, to calculate a basic 10-day moving average you would add up the closing prices from the past 10 days and then divide the result by 10. Finally, we take the mean of 20 such 10-day simple moving averages as one feature.
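For reference, pandas can produce the same 10-day simple moving average with a rolling window (a sketch on a hypothetical series; the cell below builds it manually with numpy over the sliced windows):

```python
import pandas as pd

prices = pd.Series([10.0, 11.0, 12.0, 11.5, 12.5, 13.0,
                    12.0, 12.5, 13.5, 14.0, 14.5])  # hypothetical prices
sma_10 = prices.rolling(window=10).mean()  # NaN until 10 observations are available
print(sma_10.tail(3))
```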
###Code
#10-day simple moving average
def sma(stocks_sliced):
mean = []
for i in range(20):
mean.append(np.mean(stocks_sliced[:,i:i+10], axis=1))
mean = np.array(mean).T
sma1 = (mean/stocks_sliced[:,0:20])-1
sma = np.mean(sma1, axis=1)[:,None]
return sma
def sma1(stocks_sliced):
mean = []
for i in range(20):
mean.append(np.mean(stocks_sliced[:,i:i+10], axis=1))
mean = np.array(mean).T
sma1 = (mean/stocks_sliced[:,0:20])-1
return sma1[:,0:20]
###Output
_____no_output_____
###Markdown
There are three lines that compose Bollinger Bands: a simple moving average (middle band) and an upper and lower band. These bands move with the price, widening or narrowing as volatility increases or decreases, respectively. The position of the bands and how the price acts in relation to the bands provides information about how strong the trend is and potential bottoming or topping signals.
* Middle Band = 10-day simple moving average (SMA)
* Upper Band = 10-day SMA + (10-day standard deviation of price x 2)
* Lower Band = 10-day SMA - (10-day standard deviation of price x 2)
In our code we take the 10-day stock value minus the 10-day SMA, divide by 2 times the 10-day standard deviation of price, and subtract 1 to get the Bollinger Band percentage.
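A minimal sketch of the three bands on a hypothetical 10-day window (the cell below computes a normalized variant of this over the sliced windows):

```python
import numpy as np

window = np.array([100.0, 101.5, 99.8, 102.2, 103.0,
                   101.1, 100.4, 102.8, 103.5, 104.0])  # hypothetical 10-day prices
middle = window.mean()             # 10-day SMA (middle band)
upper = middle + 2 * window.std()  # upper band
lower = middle - 2 * window.std()  # lower band
print(round(lower, 2), round(middle, 2), round(upper, 2))
```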
###Code
#bollinger brands
def bb(stocks_sliced):
std = []
for i in range(20):
std.append(np.std(stocks_sliced[:,i:i+10], axis=1))
std = np.array(std).T
bb = (stocks_sliced[:,0:20]-sma1(stocks_sliced))/(2*std)-1  # divide by 2*std, as described above
bb = np.mean(bb, axis=1)[:,None]
return bb
mt = momentum(stocks_sliced)
sma = sma(stocks_sliced)
bb = bb(stocks_sliced)
Xtr = np.column_stack((mt,sma,bb))
ntr = 15000
nts = Xtr.shape[0]-ntr
X_tr = Xtr[:ntr,:]
ytr = y_tr_svm[:ntr]
X_ts = Xtr[ntr:ntr+nts,:]
yts = y_tr_svm[ntr:ntr+nts]
###Output
(2466, 3)
###Markdown
Next, we run an SVM model and construct the SVC with its parameters.
###Code
svc = svm.SVC(probability=False,kernel="rbf",C=2.8,gamma=.0073,verbose=10)
svc.fit(X_tr,ytr)
###Output
[LibSVM]
###Markdown
We can next calculate the accuracy on the held-out test samples.
###Code
yhat_ts = svc.predict(X_ts)
acc = np.mean(yhat_ts == yts)
print('Accuracy = {0:f}'.format(acc))
X_tr = stocks_sliced[:,0:slice_len -1]
lastday = stocks_sliced[:,slice_len -1]
day_before_lastday = stocks_sliced[:,slice_len -2]
y_tr = np.array([])
for i in range(len(lastday)):
if(lastday[i]>day_before_lastday[i]):
y_tr = np.append(y_tr,[1])
else:
y_tr = np.append(y_tr,[0])
X_tr = np.array(X_tr)
import random
X_ts_pre = random.sample(stocks, 200)
X_ts = []
y_ts = np.array([])
k = 0
for i in X_ts_pre:
if len(i) > slice_len -1:
X_ts.append(i[len(i)-slice_len:len(i)-1])
if i[len(i)-1]>i[len(i)-2]:
y_ts = np.append(y_ts,[1])
else:
y_ts = np.append(y_ts,[0])
X_ts = np.array(X_ts)
X_ts = np.expand_dims(X_ts, axis=2)
X_tr = np.expand_dims(X_tr, axis=2)
###Output
_____no_output_____
###Markdown
then, we clear the backend of keras
###Code
import keras.backend as K
K.clear_session()
###Output
_____no_output_____
###Markdown
Import the Keras model and layer subpackages and build the 1D convolutional network.
###Code
from keras.models import Model, Sequential
from keras.layers import Dense, Activation
from keras.layers import Conv1D, Flatten, Dropout
model = Sequential()
model.add(Conv1D(input_shape = (slice_len -1,1),filters = 4,kernel_size=5,activation='relu',name = 'conv1D1'))
model.add(Conv1D(filters = 2,kernel_size=3,activation='relu',name = 'conv1D2'))
model.add(Flatten())
model.add(Dense(60, input_shape=(nin,), activation='sigmoid', name='hidden'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid', name='output'))
model.summary()
from keras import optimizers
opt = optimizers.Adam(lr=0.001)
model.compile(optimizer=opt,
loss='binary_crossentropy',
metrics=['accuracy'])
model.fit(X_tr, y_tr, epochs=20, batch_size=100, validation_data=(X_ts,y_ts))
###Output
Train on 2911 samples, validate on 200 samples
Epoch 1/20
2911/2911 [==============================] - 0s - loss: 0.7445 - acc: 0.5002 - val_loss: 0.6958 - val_acc: 0.5050
Epoch 2/20
2911/2911 [==============================] - 0s - loss: 0.7217 - acc: 0.5191 - val_loss: 0.6972 - val_acc: 0.5300
Epoch 3/20
2911/2911 [==============================] - 0s - loss: 0.6963 - acc: 0.5407 - val_loss: 0.7008 - val_acc: 0.5500
Epoch 4/20
2911/2911 [==============================] - 0s - loss: 0.6931 - acc: 0.5335 - val_loss: 0.7001 - val_acc: 0.5100
Epoch 5/20
2911/2911 [==============================] - 0s - loss: 0.6821 - acc: 0.5589 - val_loss: 0.7045 - val_acc: 0.5250
Epoch 6/20
2911/2911 [==============================] - 0s - loss: 0.6771 - acc: 0.5575 - val_loss: 0.7057 - val_acc: 0.5050
Epoch 7/20
2911/2911 [==============================] - 0s - loss: 0.6750 - acc: 0.5654 - val_loss: 0.7143 - val_acc: 0.4750
Epoch 8/20
2911/2911 [==============================] - 0s - loss: 0.6720 - acc: 0.5658 - val_loss: 0.7084 - val_acc: 0.5050
Epoch 9/20
2911/2911 [==============================] - 0s - loss: 0.6706 - acc: 0.5730 - val_loss: 0.7154 - val_acc: 0.5000
Epoch 10/20
2911/2911 [==============================] - 0s - loss: 0.6672 - acc: 0.5782 - val_loss: 0.7157 - val_acc: 0.5000
Epoch 11/20
2911/2911 [==============================] - 0s - loss: 0.6630 - acc: 0.5919 - val_loss: 0.7181 - val_acc: 0.4600
Epoch 12/20
2911/2911 [==============================] - 0s - loss: 0.6662 - acc: 0.5857 - val_loss: 0.7143 - val_acc: 0.5100
Epoch 13/20
2911/2911 [==============================] - 0s - loss: 0.6620 - acc: 0.5895 - val_loss: 0.7175 - val_acc: 0.5150
Epoch 14/20
2911/2911 [==============================] - 0s - loss: 0.6561 - acc: 0.5891 - val_loss: 0.7230 - val_acc: 0.4750
Epoch 15/20
2911/2911 [==============================] - 0s - loss: 0.6574 - acc: 0.5970 - val_loss: 0.7259 - val_acc: 0.4650
Epoch 16/20
2911/2911 [==============================] - 0s - loss: 0.6574 - acc: 0.5929 - val_loss: 0.7206 - val_acc: 0.5400
Epoch 17/20
2911/2911 [==============================] - 0s - loss: 0.6538 - acc: 0.5995 - val_loss: 0.7232 - val_acc: 0.4800
Epoch 18/20
2911/2911 [==============================] - 0s - loss: 0.6519 - acc: 0.5995 - val_loss: 0.7284 - val_acc: 0.4700
Epoch 19/20
2911/2911 [==============================] - 0s - loss: 0.6545 - acc: 0.6091 - val_loss: 0.7282 - val_acc: 0.4850
Epoch 20/20
2911/2911 [==============================] - 0s - loss: 0.6515 - acc: 0.6067 - val_loss: 0.7306 - val_acc: 0.4800
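###Markdown
To see how the classifier converges, one can keep the History object returned by `model.fit` and plot the accuracy curves. The sketch below is illustrative: it assumes the `fit` call above is re-run with its return value assigned to a variable named `history` (my own name), and it uses the `acc`/`val_acc` keys reported by this Keras version.
###Code
# Illustrative sketch: plot training vs. validation accuracy
# Assumes: history = model.fit(X_tr, y_tr, epochs=20, batch_size=100, validation_data=(X_ts, y_ts))
import matplotlib.pyplot as plt
plt.plot(history.history['acc'], label='train accuracy')
plt.plot(history.history['val_acc'], label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()
###Output
_____no_output_____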
###Markdown
Ranking part: we train a second model to predict the ratio of the last day's price to the previous day's price, which can be used to rank stocks.
###Code
y_ranking_tr = lastday/day_before_lastday
import random
X_ranking_ts_pre = random.sample(stocks, 200)
X_ranking_ts = []
y_ranking_ts = np.array([])
k = 0
for i in X_ranking_ts_pre:
if len(i) > slice_len -1:
X_ranking_ts.append(i[len(i)-slice_len:len(i)-1])
y_ranking_ts = np.append(y_ranking_ts,i[len(i)-1]/i[len(i)-2])
X_ranking_ts = np.array(X_ranking_ts)
X_ranking_ts = np.expand_dims(X_ranking_ts, axis=2)
# X_tr = np.expand_dims(X_tr, axis=2)
print(X_ranking_ts[1])
K.clear_session()
model_r = Sequential()
model_r.add(Conv1D(input_shape = (slice_len -1,1),filters = 4,kernel_size=5,activation='relu',name = 'conv1D1'))
model_r.add(Conv1D(filters = 2,kernel_size=3,activation='relu',name = 'conv1D2'))
model_r.add(Flatten())
model_r.add(Dense(60, input_shape=(nin,), activation='sigmoid', name='hidden'))
model_r.add(Dropout(0.5))
model_r.add(Dense(1, activation='linear', name='output'))
model_r.summary()
opt_r = optimizers.Adam(lr=0.001)
model_r.compile(optimizer=opt_r,
loss='mean_squared_error')
model_r.fit(X_tr, y_ranking_tr, epochs=10, batch_size=100, validation_data=(X_ranking_ts,y_ranking_ts))
###Output
Train on 2911 samples, validate on 200 samples
Epoch 1/10
2911/2911 [==============================] - 0s - loss: 15.2629 - val_loss: 4.3344
Epoch 2/10
2911/2911 [==============================] - 0s - loss: 15.1241 - val_loss: 4.2732
Epoch 3/10
2911/2911 [==============================] - 0s - loss: 15.2994 - val_loss: 4.3203
Epoch 4/10
2911/2911 [==============================] - 0s - loss: 15.3500 - val_loss: 4.3218
Epoch 5/10
2911/2911 [==============================] - 0s - loss: 15.3498 - val_loss: 4.3330
Epoch 6/10
2911/2911 [==============================] - 0s - loss: 15.1777 - val_loss: 4.3666
Epoch 7/10
2911/2911 [==============================] - 0s - loss: 15.2919 - val_loss: 4.2962
Epoch 8/10
2911/2911 [==============================] - 0s - loss: 15.1247 - val_loss: 4.3282
Epoch 9/10
2911/2911 [==============================] - 0s - loss: 15.0422 - val_loss: 4.3600
Epoch 10/10
2911/2911 [==============================] - 0s - loss: 14.9919 - val_loss: 4.3034
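###Markdown
With the regression model trained, its predictions can be turned into a ranking of the test stocks by sorting on the predicted last-day/previous-day price ratio. A minimal sketch (variable names such as `ranked_idx` are mine):
###Code
# Illustrative sketch: rank test stocks by the predicted next-day price ratio
import numpy as np
pred_ratio = model_r.predict(X_ranking_ts).ravel()
ranked_idx = np.argsort(pred_ratio)[::-1] # highest predicted ratio first
print(ranked_idx[:10]) # indices of the top-10 predicted gainers
print(pred_ratio[ranked_idx[:10]])
###Output
_____no_output_____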
###Markdown
Data processing pipeline for eBird data > Walkthrough for data processing steps to build the Birds of a Feather birding partner recommender from eBird observation data. Contents:1. Read relevant columns from eBird raw data (obtainable on https://ebird.org/science/download-ebird-data-products) [step 1]2. Group observations by user and extract features for that user [step 2]3. Extract pairs of users [step 3]4. Create georeferenced shapefile with users [step 4]5. Find user names with the eBird API [step 5]6. Scrape user profiles from eBird with a webbot [step 6] 1. Read raw eBird data > Reads eBird data *.txt* by chunks using pandas and writes the chunks to a *.csv* with observations on rows and a subset of columns used for feature extraction by the data processing script. Usage:
###Code
!python utils/data_processing/read_ebird.py -h
###Output
usage: eBird database .txt file muncher [-h] [--input_txt INPUT_TXT]
[--period PERIOD] [--output OUTPUT]
optional arguments:
-h, --help show this help message and exit
--input_txt INPUT_TXT, -i INPUT_TXT
path to eBird database file
--period PERIOD, -p PERIOD
start year to end year separated by a dash
--output OUTPUT, -o OUTPUT
path to output csv file
###Markdown
2. Process eBird data > Reads the observations *.csv* from the previous step, sorts observations by the **OBSERVER ID** column, chunks observations by **OBSERVER ID**, and compiles all observation rows for a user into a single row with features for that user. Finding the centroid for a user takes $O(n^{2})$; be advised this may take a considerable time for users with > 100000 observations. See usage below:
###Code
!python utils/data_processing/process_ebird.py -h
###Output
usage: Script to process eBird observations into user data [-h]
[--input_csv INPUT_CSV]
[--cores CORES]
[--output OUTPUT]
optional arguments:
-h, --help show this help message and exit
--input_csv INPUT_CSV, -i INPUT_CSV
path to observations .csv file
--cores CORES, -c CORES
number of cores for parallel processing
--output OUTPUT, -o OUTPUT
path to output csv file
###Markdown
3. Extract pairs of users > Reads observation *.csv* file from step 1 and user features from step 2 to create a *.csv* with a subset of users that have paired eBird activity. Pairs are found looking for users that share a unique **GROUP IDENTIFIER** from the observations data. Usage:
###Code
!python utils/data_processing/extract_pairs.py -h
###Output
usage: Script to get all pairs of users within observations
[-h] [--input_obs INPUT_OBS] [--input_users INPUT_USERS]
[--cores CORES] [--output OUTPUT]
optional arguments:
-h, --help show this help message and exit
--input_obs INPUT_OBS, -i INPUT_OBS
path to observations .csv file
--input_users INPUT_USERS, -u INPUT_USERS
path to users .csv file
--cores CORES, -c CORES
number of cores for parallel processing
--output OUTPUT, -o OUTPUT
path to output csv file
###Markdown
4. Create georeferenced dataset > Converts the latitude and longitude columns of the step 2 user-features *.csv* into shapely Points. Writes the new data frame as a *.shp* file readable by GIS software and geopandas. Used to filter matches by distance in the app. See usage:
###Code
!python utils/data_processing/get_shapefile.py -h
###Output
usage: copies a .csv dataframe with latitude and longitude columns into a GIS shapefile
[-h] [--input_csv INPUT_CSV] [--output_shp OUTPUT_SHP]
optional arguments:
-h, --help show this help message and exit
--input_csv INPUT_CSV, -i INPUT_CSV
Path to .csv file.
--output_shp OUTPUT_SHP, -o OUTPUT_SHP
path to output shapefile
###Markdown
5. Find user names using eBird API > Uses checklist identifiers from user features (step 2) to find user profile names with the eBird API and add them to the georeferenced dataset (step 4). See usage:
###Code
!python utils/data_processing/add_user_names.py -h
###Output
usage: Adds users ID column to users shapefile [-h] [--users_shp USERS_SHP]
[--counties_shp COUNTIES_SHP]
optional arguments:
-h, --help show this help message and exit
--users_shp USERS_SHP, -u USERS_SHP
path to users shapefile
--counties_shp COUNTIES_SHP, -c COUNTIES_SHP
path to counties shapefile
###Markdown
6. Scrape user profiles from eBird > Uses webbot, checklist identifiers from step 2 and user profile names from step 5 to find links to public user profile for each user. Defaults to the unique checklist IDs when profiles are not found (only ~25% of eBird users currently have public profiles). Profile column added to *.shp* file from step 4 and is provided to recommendations. See usage:
###Code
!python utils/data_processing/get_ebird_profile.py -h
###Output
usage: Uses webbot to extract user profile urls from ebird [-h]
[--input_users INPUT_USERS]
[--output_txt OUTPUT_TXT]
optional arguments:
-h, --help show this help message and exit
--input_users INPUT_USERS, -i INPUT_USERS
path to users dataframe with checklist IDs to search
for profiles
--output_txt OUTPUT_TXT, -o OUTPUT_TXT
path to .txt file where user profile urls will be
written to
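###Markdown
Putting the six steps together, an end-to-end run of the pipeline could look like the commands below. All file names (`ebd_raw.txt`, `observations.csv`, `users.csv`, `pairs.csv`, `users.shp`, `counties.shp`, `profiles.txt`) are placeholders of my own; substitute the real paths for your eBird download and shapefiles.
###Code
# Illustrative end-to-end run; every path below is a placeholder
!python utils/data_processing/read_ebird.py -i ebd_raw.txt -p 2015-2019 -o observations.csv
!python utils/data_processing/process_ebird.py -i observations.csv -c 4 -o users.csv
!python utils/data_processing/extract_pairs.py -i observations.csv -u users.csv -c 4 -o pairs.csv
!python utils/data_processing/get_shapefile.py -i users.csv -o users.shp
!python utils/data_processing/add_user_names.py -u users.shp -c counties.shp
!python utils/data_processing/get_ebird_profile.py -i users.shp -o profiles.txt
###Output
_____no_output_____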
###Markdown
Ayiti Analytics Data Processing Bootcamp Ayiti Analytics wants to expand its training centers throughout all the communes of the country. Your role as a data analyst is to help them realize this dream. The objective is to identify the three communes of the country most likely to host new training centers. Knowing that each cohort must have 30 students: * How many applications must be received, on average, to select 25% women in each cohort* What are the most effective communication channels (Alumni, Facebook, WhatsApp, Friend ...) that make a student likely to be selected* What is the average number of university students who should participate in this program* What will be the average number of applications per week that we could expect* How many weeks should we extend the application process to select 60 students per commune?* If the whole bootcamp were run online, which communes would be best, how many applications would we need to select 30 students, and what percentage of students would have a laptop, an internet connection, or both at the same time* What are the most effective communication channels (Alumni, Facebook, WhatsApp, Friend ...) that make a woman likely to be selected NB: Use the same framework as the BA project to complete this project Data Analysis Steps: Retrieve Dataset, Data Cleansing, Data Processing, Univariate Data Analysis, Multivariate Data Analysis Here are the libraries used for the project
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Retrieving and Cleaning Data for Commune Dataset
###Code
#Import Data from Commune Dataset File
commune_path = "commune.xlsx"
commune_data = pd.read_excel(commune_path)
commune_data.head()
# Removing a repeated column
drop_cols=['Commune_en']
commune_data.drop(columns=drop_cols, inplace=True)
#Rename the columns for a better use and set Commune_id to Index of the Dataset
commune_cols=['commune','commune_id', 'departement','departement_id']
commune_data.columns=commune_cols
commune_data=commune_data.set_index('commune_id')
commune_data.head()
#Function to check for null values in Dataset
def check_null(data):
null=data.isna().sum()
return null
#Check for null values in Commune Dataset
check_null(commune_data)
###Output
_____no_output_____
###Markdown
Retrieving and Cleaning Data for Quest Dataset
###Code
#Import Data from Quest Dataset file
quest_path="quest.csv"
quest_data=pd.read_csv(quest_path, index_col=0)
quest_data.head(2)
check_null(quest_data)
# Removing some unnecessary columns
drop_cols=['modified_at','formal_sector_job','after_AA','department']
quest_data.drop(columns=drop_cols, inplace=True)
#Rename the columns of the Dataset for a better use
quest_cols=['gender','dob','commune_id','created_at','education_level','university','study_domain','current_employed','computer_home','internet_home','hear_AA','quest_id']
quest_data.columns=quest_cols
#set the data in quest.commune_id to upper
quest_data['commune_id']=quest_data['commune_id'].str.upper()
#replace a wrong date format in the dataset
quest_data['dob'] = quest_data['dob'].replace(['3 aout 1977'],'03/08/1977')
#set commune_id to index
quest_data=quest_data.set_index('commune_id')
#set dob column to datetime type
quest_data['dob'] = pd.to_datetime(quest_data['dob'])
#Fill the null values in the dob column with the mean date of birth
quest_data.dob=quest_data['dob'].fillna(value=quest_data.dob.mean())
#replace null value in study_domain with the mode value
quest_data['study_domain'] = quest_data['study_domain'].replace(['[]'],quest_data['study_domain'].mode())
quest_data.head()
#check null values
check_null(quest_data)
#Function to check for duplicated rows
def check_duplicate(data):
duplicate=data.duplicated().sum()
return duplicate
check_duplicate(quest_data)
#shape of dataset before merge
print(quest_data.shape)
print(commune_data.shape)
dataset = pd.merge(left =commune_data,right=quest_data,how="inner",on="commune_id")
dataset
###Output
_____no_output_____
###Markdown
Retrieving and Cleaning Data from Enroll, Transaction & Ord Dataset
###Code
#Import Data from Enroll Dataset file
enroll_path="enroll.csv"
enroll_data=pd.read_csv(enroll_path, index_col=0)
enroll_data.head(2)
#Selected the needed columns
cols=['user_id','quest_id']
enroll_data=enroll_data.loc[:,cols]
enroll_data.head(2)
#Import Data from Transaction Dataset file
trans_path="transaction.csv"
trans_data=pd.read_csv(trans_path, index_col=0)
trans_data.head(2)
#Selected the needed columns
cols=['user_id']
trans_data=trans_data.loc[:,cols]
enroll_trans_data=pd.merge(left =enroll_data,right=trans_data,how="inner",on="user_id")
enroll_trans_data['payment_method']='MonCash'
enroll_trans_data=enroll_trans_data.loc[:,['quest_id','payment_method']]
enroll_trans_data.head(2)
#Import Data from Ord Dataset file
ord_path="ord.csv"
ord_data=pd.read_csv(ord_path, index_col=0)
ord_data.head(2)
#Selected the needed columns
cols=['user_id','quest_id']
ord_data=ord_data.loc[:,cols]
ord_data.head(2)
enroll_ord_data=pd.merge(left =enroll_data,right=ord_data,how="inner",on="user_id")
enroll_ord_data['payment_method']='CreditCard'
enroll_ord_data=enroll_ord_data.loc[:,['quest_id_x','payment_method']]
enroll_ord_data=enroll_ord_data.rename(columns={'quest_id_x':'quest_id'})
enroll_ord_data.head(2)
#Let's concatenate the dataframe
concatenation = pd.concat([enroll_ord_data,enroll_trans_data],axis = 0)
concatenation.head(25)
final_dataset = pd.merge(dataset,concatenation,how = 'left', left_on = 'quest_id', right_on= 'quest_id')
#final.reset_index(inplace = True ,level = 0)
final_dataset['payment_method'] = final_dataset['payment_method'].fillna('No payment')
final_dataset.head()
check_null(final_dataset)
#Function that calculates age from date of birth
from datetime import datetime, date
def age(dob):
today = date.today()
return today.year - dob.year - ((today.month,today.day)< (dob.month,dob.day))
final_dataset['dob'] = pd.to_datetime(final_dataset['dob'])
final_dataset['age'] = final_dataset['dob'].apply(age)
final_dataset.loc[:,['dob','age']]
plt.figure(figsize=(14,6))
plt.style.use('seaborn-darkgrid')
plt.hist(final_dataset.age,bins=20,alpha =0.5,color="blue")
plt.title("Age Distribution")
plt.show()
gender_total=final_dataset.groupby(by=['gender']).gender.count().to_frame()
#gender_total.rename(columns={"Sex": "Total"},inplace=True)
gender_total.columns=['Total']
gender_total
ax=gender_total.plot(kind='barh')
fig=ax.get_figure()
fig.set_size_inches(7, 7)
#prob_category(data=gender_total, col="gender", abs_value="Total", )
gender_commune=final_dataset.groupby(['commune']).gender.count().to_frame()
gender_commune=gender_commune.sort_values(by=['gender'] ,ascending=False)
gender_commune=gender_commune.iloc[:4,:]
gender_commune
ax=gender_commune.plot(kind='barh')
fig=ax.get_figure()
fig.set_size_inches(7, 7)
my_pivot = pd.pivot_table(data=final_dataset,index="hear_AA",columns="gender",values ="quest_id",aggfunc="count")
my_pivot=my_pivot.sort_values(['female','male'], ascending=False)
my_pivot
ax=my_pivot.plot(kind='barh')
fig = ax.get_figure()
# Change the plot dimensions (width, height)
fig.set_size_inches(7, 7)
def generate_barchart(data="", title ="",abs_value ="",rel_value="",figsize =(10,6)):
plt.figure(figsize=figsize)
axes = sns.barplot(data=data,x=data.index,y=abs_value)
i=0
for tot, perc in zip(data[abs_value],data[rel_value]):
axes.text(i,
tot/2,
str(np.round(perc*100,2))+ "%",
fontdict=dict(color='White',fontsize=12,horizontalalignment="center")
)
axes.text(i,
tot+ 3,
str(tot),
fontdict=dict(color='blue',fontsize=12,horizontalalignment="center")
)
i+=1
plt.title(title)
plt.show()
#generate_barchart(data=my_pivot,title="Total et Percent By Sex",abs_value="gender",rel_value="gender")
my_pivot2 = pd.pivot_table(data=final_dataset,index="commune",columns="gender",values ="quest_id",aggfunc="count")
my_pivot2=my_pivot2.sort_values(['female','male'], ascending=False)
my_pivot2=my_pivot2.iloc[:4,:]
ax=my_pivot2.plot(kind='barh')
fig = ax.get_figure()
# Change the plot dimensions (width, height)
fig.set_size_inches(7, 7)
# compute the absolute and relative frequencies of a categorical variable
def prob_category(data,col="Pclass_letter", abs_value ="Total",rel_value ="Percent",show_plot=False, title=""):
# absolute value
res1 = data[col].value_counts().to_frame()
res1.columns = [abs_value]
res2 = data[col].value_counts(normalize=True).to_frame()
res2.columns = [rel_value]
if not show_plot:
return pd.concat([res1,res2],axis=1)
else:
result = pd.concat([res1,res2],axis=1)
generate_barchart(data=result, title =title,abs_value =abs_value,rel_value=rel_value,figsize =(10,6))
return result
gender=prob_category(final_dataset, col='gender', show_plot=True, title='Distribution')
prob_female = final_dataset[final_dataset.gender == "female"].shape[0] / 0.25
prob_female
final_dataset[final_dataset.gender == "female"].shape[0]
total =dataset.groupby(by=["gender"]).departement.count().to_frame()
total.columns = ["% Per Departement"]
def prob_category(data,col="", abs_value ="",rel_value ="",show_plot=False, title=""):
# absolute value
res1 = data[col].value_counts().to_frame()
res1.columns = [abs_value]
res2 = data[col].value_counts(normalize=True).to_frame()
res2.columns = [rel_value]
if not show_plot:
return pd.concat([res1,res2],axis=1)
else:
result = pd.concat([res1,res2],axis=1)
generate_barchart(data=result, title =title,abs_value =abs_value,rel_value=rel_value,figsize =(10,6))
return result
###Output
_____no_output_____
###Markdown
3.- What is the average number of university students who should participate in this program
###Code
university = pd.pivot_table(data=final_dataset,index="commune",columns="education_level",aggfunc="count",fill_value=0)
university=university.loc[:,['Bachelors (bacc +4)','Masters','Doctorate (PhD, MD, JD)']]
university
#university=university.sort_values(['female','male'], ascending=False)
#university=university.iloc[:4,:]
['Bachelors (bacc +4)','Masters','Doctorate (PhD, MD, JD)']
uni=final_dataset['education_level'].unique()
uni
###Output
_____no_output_____
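###Markdown
One of the guiding questions asks for the average number of applications per week. A hedged sketch of how this could be computed from the `created_at` timestamp is shown below; the weekly resampling and the variable names are my own choices.
###Code
# Illustrative sketch: average number of applications per week
final_dataset['created_at'] = pd.to_datetime(final_dataset['created_at'])
apps_per_week = final_dataset.set_index('created_at').resample('W')['quest_id'].count()
print('Average applications per week:', apps_per_week.mean())
###Output
_____no_output_____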
###Markdown
Data processing. Cleaning and transformations; the output will be ready for modeling.
###Code
# settings
import pandas as pd
from itertools import chain
# data path
path_input = "https://raw.githubusercontent.com/yoselalberto/ia_proyecto_final/main/data/celulares.csv"
path_salida = 'work/data/processed/celulares_procesados.csv'
# these data have the right format to display on screen:
path_salida_formato = 'work/data/processed/celulares_formato.csv'
# more dependencies
import janitor
# fix a formatting error in the values of each instance
def replace_string(dataframe, string = ','):
    # remove the unwanted character
df = dataframe.copy()
# column by column
for columna in df.columns.values:
df[columna] = df[columna].str.replace(string, '')
return df
# lowercase all dataframe
def df_lowercase(dataframe):
# lowercase all columns
df = dataframe.copy()
for columna in df.columns.values:
df[columna] = df[columna].str.lower()
return df
# coerce columns to numeric
def df_numeric(dataframe, columns):
df = dataframe.copy()
df[columns] = df[columns].apply(pd.to_numeric, errors='coerce')
return df
# group the previous cleaning functions
def df_clean(dataframe, string, columns_to_numeric):
df = dataframe.copy()
#
df_2 = replace_string(dataframe, string)
df_3 = df_lowercase(df_2)
df_4 = df_numeric(df_3, columns = columns_to_numeric)
return df_4
# partial cleaning (no lowercasing)
def df_clean_parcial(dataframe, string, columns_to_numeric):
df = dataframe.copy()
#
df_2 = replace_string(dataframe, string)
df_3 = df_numeric(df_2, columns = columns_to_numeric)
return df_3
# wrap the processing steps in functions
def clean_tecnologia(dataframe):
df = dataframe.copy()
    # lookup table
tabla_tecnologias = pd.DataFrame(
{'tecnologia' : ['2g/3g/4g/4glte/5g', '4glte', '4g/gsm', '2g/3g/4g/4glte/gsm', '4g', '5g', '3g/4g/gsm', '4g/4glte/gsm/lte', '2g/3g/lte', '3g/lte'],
'tecnologia_mejor' : ['5g', '4glte', '4g', '4glte', '4g', '5g', '4g', '4glte', '4glte', '4glte']}
)
    # substitution
df_salida = df.merge(tabla_tecnologias, how = "left").drop(columns = {'tecnologia'}).rename(columns = {'tecnologia_mejor': 'tecnologia'})
    # output
return df_salida
# processor
def clean_procesador(dataframe):
df = dataframe.copy()
#
df['procesador'] = df.procesador.str.split().str.get(0).str.replace('\d+', '')
    # output
return df
# clean operating systems
def clean_os(dataframe):
df = dataframe.copy()
#
df['sistema_operativo']= df.sistema_operativo.str.extract(r'(android|ios)', expand = False)
    # output
return df
# chain steps
def df_procesamiento(dataframe):
df = dataframe.copy()
# steps
df_tecnologia = clean_tecnologia(df)
df_procesador = clean_procesador(df_tecnologia)
df_os = clean_os(df_procesador)
    # result
return df_os
df_prueba = pd.read_csv(path_input, dtype = 'str')
df_prueba.head(1)
# data loading
df_raw = pd.read_csv(path_input, dtype = 'str').clean_names()
df_raw
# rename columns
nombres = {"nombre_del_producto": 'producto_nombre', 'memoria_interna': 'memoria'}
df_inicio = df_raw.rename(columns = nombres)
# initial cleaning
columns_numeric = ['peso', 'camara_trasera', 'camara_frontal', 'ram', 'memoria', 'precio']
#
df_limpio = df_clean(df_inicio, ',', columns_numeric).drop_duplicates().reset_index(drop = True)
df_limpio
# transform the columns
df_procesado = df_procesamiento(df_limpio)
# save
df_procesado.to_csv(path_salida, index = False)
###Output
_____no_output_____
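###Markdown
As a quick check of the cleaning helpers, the sketch below applies `df_clean` to a tiny hand-made DataFrame (a toy example of my own, not part of the catalog).
###Code
# Toy check of the cleaning helpers (illustrative only)
toy = pd.DataFrame({'marca': ['Samsung', 'APPLE'],
                    'precio': ['4,999', '12,499'],
                    'ram': ['4', '6']})
print(df_clean(toy, ',', columns_to_numeric=['precio', 'ram']))
# expected: lowercase brand names and numeric 'precio' / 'ram' columns
###Output
_____no_output_____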
###Markdown
Recommendation to display. The following processing formats the DataFrame that will be shown.
###Code
# cleaning
df_limpio_parcial_inicio = df_clean_parcial(df_inicio, ',', columns_numeric).drop_duplicates().reset_index(drop = True)
df_limpio_parcial = clean_procesador(df_limpio_parcial_inicio)
# reorder columns
df_limpio_parcial_orden = df_limpio_parcial[['producto_nombre', 'marca', 'color', 'sistema_operativo', 'memoria', 'ram', 'precio', 'camara_trasera', 'camara_frontal', 'pantalla', 'tecnologia', 'procesador', 'peso']]
# column names
df_limpio_parcial_orden.columns = ['Nombre', 'Marca', 'Color', 'Sistema operativo', 'Memoria', 'Ram', 'Precio', 'Camara Trasera', 'Camara Frontal', 'Pantalla', 'Tecnologia', 'Procesador', 'Peso']
df_limpio_parcial_orden['Peso'] = df_limpio_parcial_orden['Peso'] * 1000
# lowercase the product names
df_limpio_parcial_orden['producto_nombre'] = df_limpio_parcial_orden['Nombre'].str.lower()
df_limpio_parcial_orden
# save the nicely formatted data
df_limpio_parcial_orden.to_csv(path_salida_formato, index = False)
###Output
_____no_output_____
###Markdown
Data Preprocessing: here we select two overlapping 5-second audio segments from both the start and the end of each recording, assuming that bird audio is most likely to be present near the beginning and end of the audio file. Then we split the dataset into training and test sets.
###Code
import os
import random
import pandas as pd
from sklearn.model_selection import train_test_split
import numpy as np
import librosa
import soundfile
''' list of bird samples in path'''
path = os.path.join(os.getcwd(),'train_short_audio')
bird_samples = [name for name in os.listdir(path)]
bird_sample_numbers = [(name,len([name_1 for name_1 in os.listdir(os.path.join(path, name)) if os.path.isfile(os.path.join( os.path.join(path,name), name_1)) ])) for name in bird_samples ]
bird_sample_numbers
class SplitAudio():
''' split the audio file to four 5 second snippets (2 clips in the
beginning and 2 in the end with overlap)'''
def __init__(self,sig_path,time_sample_size,sr = 32000,overlap_min = 0.05,overlap_max = 0.5):
self.sig_path = sig_path
self.time_sample_size = time_sample_size
self.overlap_min = overlap_min
self.overlap_max = overlap_max
self.sr = sr
def __call__ (self,save_path,bird,name):
x,sr = librosa.load(os.path.join(self.sig_path,bird,name),sr = self.sr)
total_duration = len(x)
#seg = int(np.floor(total_duration/(img_time_diff*self.sr)))
overlap = random.uniform(self.overlap_min,self.overlap_max)
save_path_2 = os.path.join(save_path,name[:-4])
seg_list = [0]
if total_duration > (2 - overlap) * self.time_sample_size * self.sr:
seg_list = seg_list + [int(np.ceil((1-overlap)*self.time_sample_size*self.sr))]
if total_duration > 2*self.time_sample_size*self.sr:
seg_list = seg_list + [int(np.floor(total_duration - ((1 - overlap)*self.time_sample_size + self.time_sample_size)*self.sr)),int(np.floor(total_duration - ( self.time_sample_size)*self.sr))]
if not os.path.exists(save_path_2):
os.makedirs(save_path_2)
j = 0
for i in seg_list:
# Get start and stop sample
s_start = i #int(max(0,(second - time_sample_size) * 32000))
s_end = i + self.time_sample_size*self.sr#int( min(second * 32000,total_duration))
out = os.path.join(save_path_2,"mel_"+str(j)+"_"+name[:-4]+".ogg")
j+=1
soundfile.write(out,x[s_start:s_end],samplerate = self.sr)
###Output
_____no_output_____
###Markdown
Generate Audio chunks
###Code
segmented_audio_path = os.getcwd() + '\\train_samples'
sig_path = os.getcwd() + '\\train_short_audio'
if not os.path.exists(sig_path):
os.makedirs(sig_path)
time_sample_size = 5
split_audio = SplitAudio(sig_path,time_sample_size)
for bird in bird_samples:
save_path = os.path.join(segmented_audio_path,bird)
if not os.path.exists(save_path):
os.makedirs(save_path)
file_list = [name for name in os.listdir(os.path.join(sig_path, bird)) ]
for name in file_list:
split_audio(save_path,bird,name)
# Compute the spectrogram and apply the mel scale
'''clip nocall files from train soundscapes. These files would be added later for audio augmentation as a source of noise'''
sc_list = pd.read_csv('train_soundscape_labels.csv')
sc_list = sc_list[sc_list.birds == 'nocall']
sc_list["fileprefix"] = sc_list["audio_id"].apply(str)+"_"+sc_list["site"].apply(str)
path = os.getcwd() + '\\train_soundscapes'
def getprefix(x):
x = x.split("_")
return x[0]+"_"+x[1]
sc_audio_names = pd.DataFrame(data = [name for name in os.listdir(path)],columns = ["filename"])
sc_audio_names["fileprefix"] = sc_audio_names.apply(lambda x: getprefix(x[0]) ,axis = 1)
i = 0
outpath = os.path.join(os.getcwd(),"train_samples")
if not os.path.exists(outpath):
os.makedirs(outpath)
for _,row in sc_audio_names.iterrows():
y,_ = librosa.load(os.path.join(path,row[0]),sr = 32000)
out_path_1 = os.path.join(outpath,'nocall',row[1])
if not os.path.exists(out_path_1):
os.makedirs(out_path_1)
for _,subrow in sc_list[sc_list.fileprefix == row[1]].iterrows():
s_start = (subrow[3] - 5)*32000 #int(max(0,(second - time_sample_size) * 32000))
s_end = subrow[3]*32000
out = os.path.join(out_path_1,subrow[0]+".ogg")
soundfile.write(out,y[s_start:s_end],samplerate = 32000)
###Output
filename 10534_SSW_20170429.ogg
fileprefix 10534_SSW
Name: 0, dtype: object
filename 11254_COR_20190904.ogg
fileprefix 11254_COR
Name: 1, dtype: object
filename 14473_SSW_20170701.ogg
fileprefix 14473_SSW
Name: 2, dtype: object
filename 18003_COR_20190904.ogg
fileprefix 18003_COR
Name: 3, dtype: object
filename 20152_SSW_20170805.ogg
fileprefix 20152_SSW
Name: 4, dtype: object
filename 21767_COR_20190904.ogg
fileprefix 21767_COR
Name: 5, dtype: object
filename 26709_SSW_20170701.ogg
fileprefix 26709_SSW
Name: 6, dtype: object
filename 26746_COR_20191004.ogg
fileprefix 26746_COR
Name: 7, dtype: object
filename 2782_SSW_20170701.ogg
fileprefix 2782_SSW
Name: 8, dtype: object
filename 28933_SSW_20170408.ogg
fileprefix 28933_SSW
Name: 9, dtype: object
filename 31928_COR_20191004.ogg
fileprefix 31928_COR
Name: 10, dtype: object
filename 42907_SSW_20170708.ogg
fileprefix 42907_SSW
Name: 11, dtype: object
filename 44957_COR_20190923.ogg
fileprefix 44957_COR
Name: 12, dtype: object
filename 50878_COR_20191004.ogg
fileprefix 50878_COR
Name: 13, dtype: object
filename 51010_SSW_20170513.ogg
fileprefix 51010_SSW
Name: 14, dtype: object
filename 54955_SSW_20170617.ogg
fileprefix 54955_SSW
Name: 15, dtype: object
filename 57610_COR_20190904.ogg
fileprefix 57610_COR
Name: 16, dtype: object
filename 7019_COR_20190904.ogg
fileprefix 7019_COR
Name: 17, dtype: object
filename 7843_SSW_20170325.ogg
fileprefix 7843_SSW
Name: 18, dtype: object
filename 7954_COR_20190923.ogg
fileprefix 7954_COR
Name: 19, dtype: object
###Markdown
Arrange files and split into training and test sets
###Code
segmented_audio_path = os.getcwd() + '\\train_samples'
sig_path = os.getcwd() + '\\train_short_audio'
#create list of images with label
birds = [name for name in os.listdir(segmented_audio_path)]
bird_numbers = [[(name,name_1) for name_1 in os.listdir(os.path.join(segmented_audio_path, name)) ]
for name in birds ]
bird_numbers = [name for sublist in bird_numbers for name in sublist]
bird_numbers = [[(bird,name,name_1) for name_1 in os.listdir(os.path.join(segmented_audio_path,bird, name)) ]
for bird,name in bird_numbers]
bird_numbers = [name for sublist in bird_numbers for name in sublist]
train_metadata_1 = pd.DataFrame(data = bird_numbers,columns = ['primary_label','folder','filename'])
train_metadata_1['key'] = train_metadata_1['primary_label']+train_metadata_1['folder']+'.ogg'
train_metadata_2 = pd.read_csv('train_metadata.csv')
train_metadata_2['key'] = train_metadata_2['primary_label'].astype(str)+train_metadata_2['filename'].astype(str)
train_metadata = train_metadata_1.set_index(['key']).join(train_metadata_2.set_index(['key']),on = 'key',lsuffix = '',rsuffix='_y',how = 'left').reset_index()[['primary_label','folder','secondary_labels','filename']]
train_metadata.replace(np.nan,'[]',inplace = True)
#create train_dev and test set
train_metadata['secondary_labels'] = train_metadata['secondary_labels'].apply(lambda x: x.replace("[","").replace("]","").replace("'","").replace(" ","").split(","))
valid_labels = train_metadata.primary_label.unique()
train_metadata['secondary_labels'] = train_metadata['secondary_labels'].apply(lambda x: list(set(x) & set(valid_labels)))
metadata_to_split = train_metadata.loc[:,['folder','primary_label']].drop_duplicates()
x_train_dev,x_test,y_train_dev,y_test = train_test_split(metadata_to_split['folder'],metadata_to_split['primary_label'],test_size = 0.05,stratify = metadata_to_split['primary_label'])
train_dev = train_metadata[train_metadata['folder'].isin(x_train_dev.to_list())]
test = train_metadata[train_metadata['folder'].isin(x_test.to_list())]
#save train and test csv's
train_dev.reset_index(inplace = True)
test.reset_index(inplace = True)
#split train_dev to train and dev sets
metadata_to_split = train_dev.loc[:,['folder','primary_label']].drop_duplicates()
x_train,x_dev,y_train,y_dev = train_test_split(metadata_to_split['folder'],metadata_to_split['primary_label'],test_size = 0.1,stratify = metadata_to_split['primary_label'])
train = train_dev[train_dev['folder'].isin(x_train.to_list())]
dev = train_dev[train_dev['folder'].isin(x_dev.to_list())]
#save train and test csv's
train.reset_index(inplace = True)
dev.reset_index(inplace = True)
bird
base_dir = os.getcwd() + '\\train_test_dev_set'
copy_dir = os.getcwd() + '\\train_samples'
os.makedirs(os.path.join(base_dir,'train'))
os.makedirs(os.path.join(base_dir,'test'))
os.makedirs(os.path.join(base_dir,'dev'))
train.to_csv(os.path.join(base_dir,'train','train.csv'))
test.to_csv(os.path.join(base_dir,'test','test.csv'))
dev.to_csv(os.path.join(base_dir,'dev','dev.csv'))
import shutil
for bird in birds:
train_bird_to = os.path.join(base_dir,'train',bird)
test_bird_to = os.path.join(base_dir,'test',bird)
dev_bird_to = os.path.join(base_dir,'dev',bird)
os.makedirs(train_bird_to)
os.makedirs(test_bird_to)
os.makedirs(dev_bird_to)
copy_files_from = os.path.join(copy_dir,bird)
train_copy = train[train['primary_label']==bird].loc[:,['folder','filename']]
test_copy = test[test['primary_label']==bird].loc[:,['folder','filename']]
dev_copy = dev[dev['primary_label']==bird].loc[:,['folder','filename']]
for i,train_row in train_copy.iterrows():
shutil.copy(os.path.join(copy_files_from,train_row[0],train_row[1]),train_bird_to)
for i,test_row in test_copy.iterrows():
shutil.copy(os.path.join(copy_files_from,test_row[0],test_row[1]),test_bird_to)
for i,dev_row in dev_copy.iterrows():
shutil.copy(os.path.join(copy_files_from,dev_row[0],dev_row[1]),dev_bird_to)
###Output
_____no_output_____
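###Markdown
Before training, it can be useful to confirm how many clips ended up in each split. A minimal sketch, assuming the `base_dir` folder structure created above (the per-split `.csv` file is counted too):
###Code
# Illustrative sketch: count the files copied into each split
for split in ['train', 'dev', 'test']:
    n_files = sum(len(files) for _, _, files in os.walk(os.path.join(base_dir, split)))
    print(split, n_files)
###Output
_____no_output_____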
###Markdown
Imports
###Code
from IPython.display import clear_output
!pip install path.py
!pip install pytorch3d
clear_output()
import numpy as np
import math
import random
import os
import plotly.graph_objects as go
import plotly.express as px
import torch
from torch.utils.data import Dataset, DataLoader, Subset
from torchvision import transforms, utils
from path import Path
random.seed = 42
!wget http://3dvision.princeton.edu/projects/2014/3DShapeNets/ModelNet10.zip
!unzip -q ModelNet10.zip
path = Path("ModelNet10")
folders = [dir for dir in sorted(os.listdir(path)) if os.path.isdir(path/dir)]
clear_output()
classes = {folder: i for i, folder in enumerate(folders)}
classes
def default_transforms():
return transforms.Compose([
PointSampler(1024),
Normalize(),
RandomNoise(),
ToSorted(),
ToTensor()
])
!gdown https://drive.google.com/uc?id=1CVwVxdfUfP6TRcVUjjJvQeRcgCGcnSO_
from helping import *
clear_output()
###Output
_____no_output_____
###Markdown
Data Preprocessing (optional)
###Code
with open(path/"dresser/train/dresser_0001.off", 'r') as f:
verts, faces = read_off(f)
i, j, k = np.array(faces).T
x, y, z = np.array(verts).T
# len(x)
# visualize_rotate([go.Mesh3d(x=x, y=y, z=z, color='lightpink', opacity=0.50, i=i,j=j,k=k)]).show()
# visualize_rotate([go.Scatter3d(x=x, y=y, z=z, mode='markers')]).show()
# pcshow(x, y, z)
pointcloud = PointSampler(1024)((verts, faces))
# pcshow(*pointcloud.T)
norm_pointcloud = Normalize()(pointcloud)
# pcshow(*norm_pointcloud.T)
noisy_pointcloud = RandomNoise()(norm_pointcloud)
# pcshow(*noisy_pointcloud.T)
rot_pointcloud = RandomRotation_z()(noisy_pointcloud)
# pcshow(*rot_pointcloud.T)
sorted_pointcloud = ToSorted()(rot_pointcloud)
# pcshow(*sorted_pointcloud.T)
tensor_pointcloud = ToTensor()(sorted_pointcloud)
###Output
_____no_output_____
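###Markdown
A quick check of the transform chain (illustrative; it assumes the `ToTensor` helper from `helping.py` returns a (1024, 3) float tensor):
###Code
# Confirm the type and shape produced by the preprocessing chain (illustrative only)
print(type(tensor_pointcloud), tensor_pointcloud.shape)
###Output
_____no_output_____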
###Markdown
Creating Loaders for Final Progress Report Redefine classes
###Code
class PointCloudData(Dataset):
def __init__(self, root_dir, valid=False, folder="train", transform=default_transforms(), folders=None):
self.root_dir = root_dir
if not folders:
folders = [dir for dir in sorted(os.listdir(root_dir)) if os.path.isdir(root_dir/dir)]
self.classes = {folder: i for i, folder in enumerate(folders)}
self.transforms = transform
self.valid = valid
self.pcs = []
for category in self.classes.keys():
new_dir = root_dir/Path(category)/folder
for file in os.listdir(new_dir):
if file.endswith('.off'):
sample = {}
with open(new_dir/file, 'r') as f:
verts, faces = read_off(f)
sample['pc'] = (verts, faces)
sample['category'] = category
self.pcs.append(sample)
def __len__(self):
return len(self.pcs)
def __getitem__(self, idx):
pointcloud = self.transforms(self.pcs[idx]['pc'])
category = self.pcs[idx]['category']
return pointcloud, self.classes[category]
class PointCloudDataPre(Dataset):
def __init__(self, root_dir, valid=False, folder="train", transform=default_transforms(), folders=None):
self.root_dir = root_dir
if not folders:
folders = [dir for dir in sorted(os.listdir(root_dir)) if os.path.isdir(root_dir/dir)]
self.classes = {folder: i for i, folder in enumerate(folders)}
self.transforms = transform
self.valid = valid
self.pcs = []
for category in self.classes.keys():
new_dir = root_dir/Path(category)/folder
for file in os.listdir(new_dir):
if file.endswith('.off'):
sample = {}
with open(new_dir/file, 'r') as f:
verts, faces = read_off(f)
sample['pc'] = self.transforms((verts, faces))
sample['category'] = category
self.pcs.append(sample)
def __len__(self):
return len(self.pcs)
def __getitem__(self, idx):
pointcloud = self.pcs[idx]['pc']
category = self.pcs[idx]['category']
return pointcloud, self.classes[category]
class PointCloudDataBoth(Dataset):
def __init__(self, root_dir, valid=False, folder="train", static_transform=default_transforms(), later_transform=None, folders=None):
self.root_dir = root_dir
if not folders:
folders = [dir for dir in sorted(os.listdir(root_dir)) if os.path.isdir(root_dir/dir)]
self.classes = {folder: i for i, folder in enumerate(folders)}
self.static_transform = static_transform
self.later_transform = later_transform
self.valid = valid
self.pcs = []
for category in self.classes.keys():
new_dir = root_dir/Path(category)/folder
for file in os.listdir(new_dir):
if file.endswith('.off'):
sample = {}
with open(new_dir/file, 'r') as f:
verts, faces = read_off(f)
sample['pc'] = self.static_transform((verts, faces))
sample['category'] = category
self.pcs.append(sample)
def __len__(self):
return len(self.pcs)
def __getitem__(self, idx):
pointcloud = self.pcs[idx]['pc']
if self.later_transform is not None:
pointcloud = self.later_transform(pointcloud)
category = self.pcs[idx]['category']
return pointcloud, self.classes[category]
!mkdir drive/MyDrive/Thesis/dataloaders/final
###Output
_____no_output_____
###Markdown
Overfitting - all augmentations applied before training
###Code
BATCH_SIZE = 48
trs = transforms.Compose([
PointSampler(1024),
ToSorted(),
Normalize(),
ToTensor()
])
beds_train_dataset = PointCloudDataPre(path, folders=['bed'], transform=trs)
beds_valid_dataset = PointCloudDataPre(path, folder='test', folders=['bed'], transform=trs)
beds_train_loader = DataLoader(dataset=beds_train_dataset, shuffle=True, batch_size=BATCH_SIZE, drop_last=True)
beds_valid_loader = DataLoader(dataset=beds_valid_dataset, batch_size=BATCH_SIZE, drop_last=True)
!mkdir dataloader_beds_pre
torch.save(beds_train_loader, 'dataloader_beds_pre/trainloader.pth')
torch.save(beds_valid_loader, 'dataloader_beds_pre/validloader.pth')
!mkdir drive/MyDrive/Thesis/dataloaders/final
!cp -r dataloader_beds_pre drive/MyDrive/Thesis/dataloaders/final
###Output
mkdir: cannot create directory ‘dataloader_beds_pre’: File exists
mkdir: cannot create directory ‘drive/MyDrive/Thesis/dataloaders/final’: File exists
###Markdown
Underfitting - all augmentations applied during training
###Code
BATCH_SIZE = 48
trs = transforms.Compose([
PointSampler(1024),
ToSorted(),
Normalize(),
RandomNoise(),
ToTensor()
])
beds_train_dataset = PointCloudData(path, folders=['bed'], transform=trs)
beds_valid_dataset = PointCloudData(path, folder='test', folders=['bed'], transform=trs)
beds_train_loader = DataLoader(dataset=beds_train_dataset, num_workers=4, shuffle=True, batch_size=BATCH_SIZE, drop_last=True)
beds_valid_loader = DataLoader(dataset=beds_valid_dataset, num_workers=4, batch_size=BATCH_SIZE, drop_last=True)
!mkdir dataloader_beds_dur
torch.save(beds_train_loader, 'dataloader_beds_dur/trainloader.pth')
torch.save(beds_valid_loader, 'dataloader_beds_dur/validloader.pth')
!cp -r dataloader_beds_dur drive/MyDrive/Thesis/dataloaders/final
###Output
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning:
This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
###Markdown
Both - static and dynamic transformations
###Code
BATCH_SIZE = 48
static_trs = transforms.Compose([
PointSampler(1024),
ToSorted(),
Normalize(),
])
dynamic_trs = transforms.Compose([
RandomNoise(),
ToTensor()
])
beds_train_dataset = PointCloudDataBoth(path, folders=['bed'], static_transform=static_trs, later_transform=dynamic_trs)
beds_valid_dataset = PointCloudDataBoth(path, folder='test', folders=['bed'], static_transform=static_trs)
beds_train_loader = DataLoader(dataset=beds_train_dataset, shuffle=True, batch_size=BATCH_SIZE, drop_last=True)
beds_valid_loader = DataLoader(dataset=beds_valid_dataset, batch_size=BATCH_SIZE, drop_last=True)
!mkdir dataloader_beds_both
torch.save(beds_train_loader, 'dataloader_beds_both/trainloader.pth')
torch.save(beds_valid_loader, 'dataloader_beds_both/validloader.pth')
!cp -r dataloader_beds_both drive/MyDrive/Thesis/dataloaders/final
###Output
mkdir: cannot create directory ‘dataloader_beds_both’: File exists
###Markdown
Two classes: beds and tables
###Code
BATCH_SIZE = 48
static_trs = transforms.Compose([
PointSampler(1024),
ToSorted(),
Normalize(),
])
dynamic_trs = transforms.Compose([
RandomNoise(),
ToTensor()
])
beds_train_dataset = PointCloudDataBoth(path, folders=['bed', 'table'], static_transform=static_trs, later_transform=dynamic_trs)
beds_valid_dataset = PointCloudDataBoth(path, folder='test', folders=['bed', 'table'], static_transform=trs)
beds_train_loader = DataLoader(dataset=beds_train_dataset, shuffle=True, batch_size=BATCH_SIZE, drop_last=True)
beds_valid_loader = DataLoader(dataset=beds_valid_dataset, batch_size=BATCH_SIZE, drop_last=True)
!mkdir dataloader_beds_tables
torch.save(beds_train_loader, 'dataloader_beds_tables/trainloader.pth')
torch.save(beds_valid_loader, 'dataloader_beds_tables/validloader.pth')
!cp -r dataloader_beds_tables drive/MyDrive/Thesis/dataloaders/final
###Output
_____no_output_____
###Markdown
For 512
###Code
!mkdir drive/MyDrive/Thesis/dataloaders/final512
###Output
_____no_output_____
###Markdown
Overfitting - all augmentations applied before training
###Code
BATCH_SIZE = 48
trs = transforms.Compose([
PointSampler(512),
ToSorted(),
Normalize(),
ToTensor()
])
beds_train_dataset = PointCloudDataPre(path, folders=['bed'], transform=trs)
beds_valid_dataset = PointCloudDataPre(path, folder='test', folders=['bed'], transform=trs)
beds_train_loader = DataLoader(dataset=beds_train_dataset, shuffle=True, batch_size=BATCH_SIZE, drop_last=True)
beds_valid_loader = DataLoader(dataset=beds_valid_dataset, batch_size=BATCH_SIZE, drop_last=True)
!mkdir dataloader_beds_pre
torch.save(beds_train_loader, 'dataloader_beds_pre/trainloader.pth')
torch.save(beds_valid_loader, 'dataloader_beds_pre/validloader.pth')
!mkdir drive/MyDrive/Thesis/dataloaders/final
!cp -r dataloader_beds_pre drive/MyDrive/Thesis/dataloaders/final512
###Output
mkdir: cannot create directory ‘dataloader_beds_pre’: File exists
mkdir: cannot create directory ‘drive/MyDrive/Thesis/dataloaders/final’: File exists
###Markdown
Underfitting - all augmentations applied during training
###Code
BATCH_SIZE = 48
trs = transforms.Compose([
PointSampler(512),
ToSorted(),
Normalize(),
ToTensor()
])
beds_train_dataset = PointCloudData(path, folders=['bed'], transform=trs)
beds_valid_dataset = PointCloudData(path, folder='test', folders=['bed'], transform=trs)
beds_train_loader = DataLoader(dataset=beds_train_dataset, num_workers=4, shuffle=True, batch_size=BATCH_SIZE, drop_last=True)
beds_valid_loader = DataLoader(dataset=beds_valid_dataset, num_workers=4, batch_size=BATCH_SIZE, drop_last=True)
!mkdir dataloader_beds_dur
torch.save(beds_train_loader, 'dataloader_beds_dur/trainloader.pth')
torch.save(beds_valid_loader, 'dataloader_beds_dur/validloader.pth')
!cp -r dataloader_beds_dur drive/MyDrive/Thesis/dataloaders/final512
###Output
/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py:481: UserWarning:
This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
###Markdown
Both - static and dynamic transformations
###Code
BATCH_SIZE = 48
static_trs = transforms.Compose([
PointSampler(512),
ToSorted(),
Normalize(),
])
dynamic_trs = transforms.Compose([
RandomNoise(),
ToTensor()
])
beds_train_dataset = PointCloudDataBoth(path, folders=['bed'], static_transform=static_trs, later_transform=dynamic_trs)
beds_valid_dataset = PointCloudDataBoth(path, folder='test', folders=['bed'], static_transform=static_trs)
beds_train_loader = DataLoader(dataset=beds_train_dataset, shuffle=True, batch_size=BATCH_SIZE, drop_last=True)
beds_valid_loader = DataLoader(dataset=beds_valid_dataset, batch_size=BATCH_SIZE, drop_last=True)
!mkdir dataloader_beds_both
torch.save(beds_train_loader, 'dataloader_beds_both/trainloader.pth')
torch.save(beds_valid_loader, 'dataloader_beds_both/validloader.pth')
!cp -r dataloader_beds_both drive/MyDrive/Thesis/dataloaders/final512
###Output
mkdir: cannot create directory ‘dataloader_beds_both’: File exists
###Markdown
Two classes: beds and tables
###Code
BATCH_SIZE = 48
static_trs = transforms.Compose([
PointSampler(512),
ToSorted(),
Normalize(),
])
dynamic_trs = transforms.Compose([
RandomNoise(),
ToTensor()
])
beds_train_dataset = PointCloudDataBoth(path, folders=['bed', 'table'], static_transform=static_trs, later_transform=dynamic_trs)
beds_valid_dataset = PointCloudDataBoth(path, folder='test', folders=['bed', 'table'], static_transform=trs)
beds_train_loader = DataLoader(dataset=beds_train_dataset, shuffle=True, batch_size=BATCH_SIZE, drop_last=True)
beds_valid_loader = DataLoader(dataset=beds_valid_dataset, batch_size=BATCH_SIZE, drop_last=True)
!mkdir dataloader_beds_tables
torch.save(beds_train_loader, 'dataloader_beds_tables/trainloader.pth')
torch.save(beds_valid_loader, 'dataloader_beds_tables/validloader.pth')
!cp -r dataloader_beds_tables drive/MyDrive/Thesis/dataloaders/final
###Output
_____no_output_____
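###Markdown
Because each loader above is persisted with `torch.save`, it can be re-loaded in a later session with `torch.load`, provided the dataset and transform classes defined in this notebook are importable when unpickling. A minimal sketch using one of the Drive paths created above:
###Code
# Illustrative sketch: reload a saved DataLoader and inspect one batch
# the PointCloudData* classes and transforms must be defined in the session when unpickling
train_loader = torch.load('drive/MyDrive/Thesis/dataloaders/final/dataloader_beds_both/trainloader.pth')
points, labels = next(iter(train_loader))
print(points.shape, labels.shape) # expected: torch.Size([48, 1024, 3]) and torch.Size([48])
###Output
_____no_output_____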
###Markdown
Loading data
###Code
data_path = Path('data')
item_categories = pd.read_csv(data_path / 'item_categories.csv')
items = pd.read_csv(data_path / 'items.csv')
shops = pd.read_csv(data_path / 'shops.csv')
train = pd.read_csv(data_path / 'sales_train.csv')
test = pd.read_csv(data_path / 'test.csv')
groupby_cols = ['date_block_num', 'shop_id', 'item_id']
###Output
_____no_output_____
###Markdown
Outliers
###Code
train = train[train.item_price < 100000]
train = train[train.item_cnt_day < 1001]
median = train[(train.shop_id == 32) & (train.item_id == 2973) & (train.date_block_num == 4) & (
train.item_price > 0)].item_price.median()
train.loc[train.item_price < 0, 'item_price'] = median
train.loc[train.shop_id == 0, 'shop_id'] = 57
test.loc[test.shop_id == 0, 'shop_id'] = 57
train.loc[train.shop_id == 1, 'shop_id'] = 58
test.loc[test.shop_id == 1, 'shop_id'] = 58
train.loc[train.shop_id == 10, 'shop_id'] = 11
test.loc[test.shop_id == 10, 'shop_id'] = 11
test['date_block_num'] = 34
###Output
_____no_output_____
###Markdown
Add new features
###Code
category = items[['item_id', 'item_category_id']].drop_duplicates()
category.set_index(['item_id'], inplace=True)
category = category.item_category_id
train['category'] = train.item_id.map(category)
item_categories['meta_category'] = item_categories.item_category_name.apply(lambda x: x.split(' ')[0])
item_categories['meta_category'] = pd.Categorical(item_categories.meta_category).codes
item_categories.set_index(['item_category_id'], inplace=True)
meta_category = item_categories.meta_category
train['meta_category'] = train.category.map(meta_category)
shops['city'] = shops.shop_name.apply(lambda x: str.replace(x, '!', '')).apply(lambda x: x.split(' ')[0])
shops['city'] = pd.Categorical(shops['city']).codes
city = shops.city
train['city'] = train.shop_id.map(city)
year = pd.concat([train.date_block_num, train.date.apply(lambda x: int(x.split('.')[2]))], axis=1).drop_duplicates()
year.set_index(['date_block_num'], inplace=True)
year = year.date.append(pd.Series([2015], index=[34]))
month = pd.concat([train.date_block_num, train.date.apply(lambda x: int(x.split('.')[1]))], axis=1).drop_duplicates()
month.set_index(['date_block_num'], inplace=True)
month = month.date.append(pd.Series([11], index=[34]))
all_shops_items = []
for block_num in train['date_block_num'].unique():
unique_shops = train[train['date_block_num'] == block_num]['shop_id'].unique()
unique_items = train[train['date_block_num'] == block_num]['item_id'].unique()
all_shops_items.append(np.array(list(itertools.product([block_num], unique_shops, unique_items)), dtype='int32'))
df = pd.DataFrame(np.vstack(all_shops_items), columns=groupby_cols, dtype='int32')
df = df.append(test, sort=True)
df['ID'] = df.ID.fillna(-1).astype('int32')
df['year'] = df.date_block_num.map(year)
df['month'] = df.date_block_num.map(month)
df['category'] = df.item_id.map(category)
df['meta_category'] = df.category.map(meta_category)
df['city'] = df.shop_id.map(city)
train['category'] = train.item_id.map(category)
###Output
_____no_output_____
###Markdown
Aggregations data
###Code
%%time
gb = train.groupby(by=groupby_cols, as_index=False).agg({'item_cnt_day': ['sum']})
gb.columns = [val[0] if val[-1] == '' else '_'.join(val) for val in gb.columns.values]
gb.rename(columns={'item_cnt_day_sum': 'target'}, inplace=True)
df = pd.merge(df, gb, how='left', on=groupby_cols)
gb = train.groupby(by=['date_block_num', 'item_id'], as_index=False).agg({'item_cnt_day': ['sum']})
gb.columns = [val[0] if val[-1] == '' else '_'.join(val) for val in gb.columns.values]
gb.rename(columns={'item_cnt_day_sum': 'target_item'}, inplace=True)
df = pd.merge(df, gb, how='left', on=['date_block_num', 'item_id'])
gb = train.groupby(by=['date_block_num', 'shop_id'], as_index=False).agg({'item_cnt_day': ['sum']})
gb.columns = [val[0] if val[-1] == '' else '_'.join(val) for val in gb.columns.values]
gb.rename(columns={'item_cnt_day_sum': 'target_shop'}, inplace=True)
df = pd.merge(df, gb, how='left', on=['date_block_num', 'shop_id'])
gb = train.groupby(by=['date_block_num', 'category'], as_index=False).agg({'item_cnt_day': ['sum']})
gb.columns = [val[0] if val[-1] == '' else '_'.join(val) for val in gb.columns.values]
gb.rename(columns={'item_cnt_day_sum': 'target_category'}, inplace=True)
df = pd.merge(df, gb, how='left', on=['date_block_num', 'category'])
gb = train.groupby(by=['date_block_num', 'item_id'], as_index=False).agg({'item_price': ['mean', 'max']})
gb.columns = [val[0] if val[-1] == '' else '_'.join(val) for val in gb.columns.values]
gb.rename(columns={'item_price_mean': 'target_price_mean', 'item_price_max': 'target_price_max'}, inplace=True)
df = pd.merge(df, gb, how='left', on=['date_block_num', 'item_id'])
df['target_price_mean'] = np.minimum(df['target_price_mean'], df['target_price_mean'].quantile(0.99))
df['target_price_max'] = np.minimum(df['target_price_max'], df['target_price_max'].quantile(0.99))
df.fillna(0, inplace=True)
df['target'] = df['target'].clip(0, 20)
df['target_zero'] = (df['target'] > 0).astype('int32')
###Output
_____no_output_____
###Markdown
Mean encoded features
###Code
%%time
for enc_cols in [['shop_id', 'category'], ['shop_id', 'item_id'], ['shop_id'], ['item_id']]:
col = '_'.join(['enc', *enc_cols])
col2 = '_'.join(['enc_max', *enc_cols])
df[col] = np.nan
df[col2] = np.nan
for d in tqdm_notebook(df.date_block_num.unique()):
f1 = df.date_block_num < d
f2 = df.date_block_num == d
gb = df.loc[f1].groupby(enc_cols)[['target']].mean().reset_index()
enc = df.loc[f2][enc_cols].merge(gb, on=enc_cols, how='left')[['target']].copy()
enc.set_index(df.loc[f2].index, inplace=True)
df.loc[f2, col] = enc['target']
gb = df.loc[f1].groupby(enc_cols)[['target']].max().reset_index()
enc = df.loc[f2][enc_cols].merge(gb, on=enc_cols, how='left')[['target']].copy()
enc.set_index(df.loc[f2].index, inplace=True)
df.loc[f2, col2] = enc['target']
###Output
_____no_output_____
###Markdown
Downcast
###Code
def downcast_dtypes(df):
float32_cols = [c for c in df if df[c].dtype == 'float64']
int32_cols = [c for c in df if df[c].dtype in ['int64', 'int16', 'int8']]
df[float32_cols] = df[float32_cols].astype(np.float32)
df[int32_cols] = df[int32_cols].astype(np.int32)
return df
df.fillna(0, inplace=True)
df = downcast_dtypes(df)
###Output
_____no_output_____
###Markdown
Lag features
###Code
%%time
shift_range = [1, 2, 3, 4, 5, 12]
shifted_columns = [c for c in df if 'target' in c]
for shift in tqdm_notebook(shift_range):
shifted_data = df[groupby_cols + shifted_columns].copy()
shifted_data['date_block_num'] = shifted_data['date_block_num'] + shift
foo = lambda x: '{}_lag_{}'.format(x, shift) if x in shifted_columns else x
shifted_data = shifted_data.rename(columns=foo)
df = pd.merge(df, shifted_data, how='left', on=groupby_cols).fillna(0)
df = downcast_dtypes(df)
del shifted_data
gc.collect()
sleep(1)
###Output
_____no_output_____
###Markdown
Features Interaction
###Code
df['target_trend_1_2'] = df['target_lag_1'] - df['target_lag_2']
df['target_predict_1_2'] = df['target_lag_1'] * 2 - df['target_lag_2']
df['target_trend_3_4'] = df['target_lag_1'] + df['target_lag_2'] - df['target_lag_3'] - df['target_lag_4']
df['target_predict_3_4'] = (df['target_lag_1'] + df['target_lag_2']) * 2 - df['target_lag_3'] - df['target_lag_4']
df['target_item_trend_1_2'] = df['target_item_lag_1'] - df['target_item_lag_2']
df['target_item_trend_3_4'] = df['target_item_lag_1'] + df['target_item_lag_2'] - df['target_item_lag_3'] - df['target_item_lag_4']
df['target_shop_trend_1_2'] = df['target_shop_lag_1'] - df['target_shop_lag_2']
df['target_shop_trend_3_4'] = df['target_shop_lag_1'] + df['target_shop_lag_2'] - df['target_shop_lag_3'] - df['target_shop_lag_4']
###Output
_____no_output_____
###Markdown
Save processed data
###Code
df = downcast_dtypes(df)
df.to_pickle('df.pkl')
###Output
_____no_output_____
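###Markdown
A natural next step, not shown in this notebook, is a time-based split of the processed frame: months before 33 for training, month 33 for validation, and month 34 (the test rows) for prediction. The sketch below is my own; it also drops the current-month target aggregates, which would leak the label.
###Code
# Illustrative time-based split of the processed dataframe (not part of the original notebook)
df = pd.read_pickle('df.pkl')
# current-month target aggregates (no lag/trend/predict suffix) leak the label, so drop them
leaky = [c for c in df.columns
         if c.startswith('target') and not any(s in c for s in ('lag', 'trend', 'predict'))]
feature_cols = [c for c in df.columns if c not in leaky and c != 'ID']
X_train = df.loc[df.date_block_num < 33, feature_cols]
y_train = df.loc[df.date_block_num < 33, 'target']
X_valid = df.loc[df.date_block_num == 33, feature_cols]
y_valid = df.loc[df.date_block_num == 33, 'target']
X_test = df.loc[df.date_block_num == 34, feature_cols]
print(X_train.shape, X_valid.shape, X_test.shape)
###Output
_____no_output_____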
###Markdown
Show and Tell: A Neural Image Caption Generator Data processing
###Code
from keras import backend as K
from keras.models import Model, Sequential
from keras.layers import Input, Dense, LSTM, Embedding, Dropout
from keras.utils import to_categorical
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing.image import load_img, img_to_array
from keras.applications.vgg19 import VGG19, preprocess_input
import numpy as np
import h5py
import string
import pickle
from os import listdir
from os.path import join, isdir, isfile, exists
meta_info = {
'image_dir': 'Flicker8k_Dataset/',
'train_list': 'Flickr8k_text/Flickr_8k.trainImages.txt',
'dev_list': 'Flickr8k_text/Flickr_8k.devImages.txt',
'test_list': 'Flickr8k_text/Flickr_8k.testImages.txt',
'text_dir': 'Flickr8k_text/'
}
print(listdir(meta_info['image_dir'])[:5])
###Output
['1000268201_693b08cb0e.jpg', '1001773457_577c3a7d70.jpg', '1002674143_1b742ab4b8.jpg', '1003163366_44323f5815.jpg', '1007129816_e794419615.jpg']
###Markdown
Image preprocessing
###Code
""" feature extract CNN model
This paper used GoogLeNet (InceptionV1) which got good grades in ImageNet 2014
but for convenience of implementation, I used various models including InceptionV3 in built-in module of keras.
My model has the best performance at VGG19.
"""
def model_select(model_name):
if model_name == 'VGG16':
from keras.applications.vgg16 import VGG16, preprocess_input
model = VGG16() # 4096
elif model_name == 'VGG19':
from keras.applications.vgg19 import VGG19, preprocess_input
model = VGG19() # 4096
elif model_name == 'ResNet50':
from keras.applications.resnet50 import ResNet50, preprocess_input
        model = ResNet50() # 2048
elif model_name == 'InceptionV3':
from keras.applications.inception_v3 import InceptionV3, preprocess_input
model = InceptionV3() # 2048,
elif model_name == 'InceptionResNetV2':
from keras.applications.inception_resnet_v2 import InceptionResNetV2, preprocess_input
model = InceptionResNetV2() # 1536,
return model
model_name = 'VGG19'
base_model = model_select(model_name)
# using FC2 layer output
cnn_model = Model(inputs=base_model.inputs, outputs=base_model.layers[-2].output)
cnn_model.summary()
###Output
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 224, 224, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, 56, 56, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_conv4 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, 28, 28, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_conv4 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, 14, 14, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_conv4 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, 7, 7, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 25088) 0
_________________________________________________________________
fc1 (Dense) (None, 4096) 102764544
_________________________________________________________________
fc2 (Dense) (None, 4096) 16781312
=================================================================
Total params: 139,570,240
Trainable params: 139,570,240
Non-trainable params: 0
_________________________________________________________________
###Markdown
Image to feature
###Code
"""
The training set is usually the larger one,
so I prefer to test the pipeline on the validation (dev) set first.
"""
dev_features = {}
dev_h5 = 'dev_features.h5'
with h5py.File(dev_h5, 'w') as h5f:
with open(meta_info['dev_list']) as f:
c = 0 # count
contents = f.read()
for line in contents.split('\n'):
if line == '': # last line or error line
print(c)
continue
if c % 100 == 0:
print(c)
            # Unlike the other models, the Inception models expect a larger input size (299x299).
if model_name.find('Inception') != -1:
target_size = (299, 299)
else:
target_size = (224, 224)
img_path = line
img = load_img(meta_info['image_dir'] + img_path, target_size=target_size)
img = img_to_array(img)
img = img.reshape((1, img.shape[0], img.shape[1], img.shape[2]))
img = preprocess_input(img)
feature = cnn_model.predict(img)
h5f.create_dataset(img_path.split('.')[0], data=feature)
c += 1
# feature test
with h5py.File('dev_features.h5', 'r') as h5f:
print(h5f['2090545563_a4e66ec76b'][:])
print(h5f['2090545563_a4e66ec76b'][:].shape)
train_features = {}
train_h5 = 'train_features.h5'
with h5py.File(train_h5, 'w') as h5f:
with open(meta_info['train_list']) as f:
c = 0 # count
contents = f.read()
for line in contents.split('\n'):
if line == '': # last line or error line
print(c)
continue
if c % 1000 == 0:
print(c)
if model_name.find('Inception') != -1:
target_size = (299, 299)
else:
target_size = (224, 224)
img_path = line
img = load_img(meta_info['image_dir'] + img_path, target_size=target_size)
img = img_to_array(img)
img = img.reshape((1, img.shape[0], img.shape[1], img.shape[2]))
img = preprocess_input(img)
feature = cnn_model.predict(img)
h5f.create_dataset(img_path.split('.')[0], data=feature)
c += 1
test_features = {}
test_h5 = 'test_features.h5'
with h5py.File(test_h5, 'w') as h5f:
with open(meta_info['test_list']) as f:
c = 0 # count
contents = f.read()
for line in contents.split('\n'):
if line == '': # last line or error line
print(c)
continue
if c % 100 == 0:
print(c)
if model_name.find('Inception') != -1:
target_size = (299, 299)
else:
target_size = (224, 224)
img_path = line
img = load_img(meta_info['image_dir'] + img_path, target_size=target_size)
img = img_to_array(img)
img = img.reshape((1, img.shape[0], img.shape[1], img.shape[2]))
img = preprocess_input(img)
feature = cnn_model.predict(img)
h5f.create_dataset(img_path.split('.')[0], data=feature)
c += 1
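# The three extraction loops above (dev / train / test) share the same body and differ only in
# the image list, the output file, and the logging interval. A possible refactor (a sketch, not
# what was run above) wraps that shared logic in one helper:
def extract_features_to_h5(list_path, h5_path, log_every=100):
    """Extract a CNN feature vector for every image listed in list_path and store it in h5_path."""
    target_size = (299, 299) if model_name.find('Inception') != -1 else (224, 224)
    with h5py.File(h5_path, 'w') as out, open(list_path) as f:
        c = 0
        for line in f.read().split('\n'):
            if line == '':  # last line or error line
                continue
            if c % log_every == 0:
                print(c)
            img = img_to_array(load_img(meta_info['image_dir'] + line, target_size=target_size))
            img = img.reshape((1,) + img.shape)
            out.create_dataset(line.split('.')[0], data=cnn_model.predict(preprocess_input(img)))
            c += 1

# e.g. extract_features_to_h5(meta_info['dev_list'], 'dev_features.h5')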
###Output
0
100
200
300
400
500
600
700
800
900
1000
###Markdown
Text preprocessing
###Code
""" full captions to dictionary
The dictionary has full dataset(training, validation, and test captions),
and numbers are eliminated from all captions.
Removing numbers improves performance (by about 3 points for bleu-1)
"""
captions = dict()
words = set()
with open(join(meta_info['text_dir'], 'Flickr8k.token.txt')) as f:
contents = f.read()
n_captions = 0
for line in contents.split('\n'):
if line == '':
print(n_captions)
continue
if n_captions % 10000 == 0:
print(n_captions)
file, caption = line.split('\t')
table = str.maketrans('', '', string.punctuation)
caption2 = []
for word in caption.split():
# remove number
if word.isalpha():
caption2.append(word.translate(table))
caption = ' '.join(caption2)
img_id = file.split('.')[0]
if img_id in captions.keys():
captions[img_id].append(caption)
else:
captions[img_id] = [caption]
n_captions += 1
[words.add(word) for word in caption.split()]
print('number of images: %d' % len(captions))
print('number of captions: %d' % n_captions)
print('number of words: %d' % len(words))
# train set caption test
print(captions['2513260012_03d33305cf'])
# dev set caption test
print(captions['2090545563_a4e66ec76b'])
# test set caption test
print(captions['3385593926_d3e9c21170'])
""" Only dev captions are taken from the full captions set.
Unlike above caption, this captions has sign of start and end for sequence.
Each [CLS], [SEP], based BERT
keras' tokenizer removes <>, so need to further processing in this process.
"""
dev_captions = dict()
dev_words = set()
with open(join(meta_info['text_dir'], 'Flickr_8k.devImages.txt')) as f:
contents = f.read()
n_dev_captions = 0
for line in contents.split('\n'):
if line == '':
print(n_dev_captions)
continue
if n_dev_captions % 10000 == 0:
print(n_dev_captions)
file = line.split('.')[0]
for caption in captions[file]:
# start sign: [CLS]
# end sign: [SEP]
caption = '[CLS] ' + caption + ' [SEP]'
caption = caption.replace('\n', '')
if file in dev_captions.keys():
dev_captions[file].append(caption)
else:
dev_captions[file] = [caption]
n_dev_captions += 1
[dev_words.add(word) for word in caption.split()]
print('number of images: %d' % len(dev_captions))
print('number of captions: %d' % n_dev_captions)
print('number of words: %d' % len(dev_words))
# dev set caption test
print(dev_captions['2090545563_a4e66ec76b'])
"""
Unlike the dev set, the training set must also track the maximum number of words in a single sentence.
The variable M plays that role.
"""
train_captions = dict()
train_words = set()
M = 0 # max length in single sentence
with open(join(meta_info['text_dir'], 'Flickr_8k.trainImages.txt')) as f:
contents = f.read()
n_train_captions = 0
for line in contents.split('\n'):
if line == '':
print(n_train_captions)
continue
if n_train_captions % 10000 == 0:
print(n_train_captions)
file = line.split('.')[0]
for caption in captions[file]:
caption = '[CLS] ' + caption + ' [SEP]'
caption = caption.replace('\n', '')
if file in train_captions.keys():
train_captions[file].append(caption)
else:
train_captions[file] = [caption]
n_train_captions += 1
t = caption.split()
if len(t) > M:
M = len(t)
[train_words.add(word) for word in t]
# n_vocabs = len(train_words)  # all words, based on str.split()
print('number of images: %d' % len(train_captions))
print('number of captions: %d' % n_train_captions)
print('number of words: %d' % len(train_words))
# print('vocabulary size: %d' % n_vocabs)
print('max number of words in single sentence: %d' % M)
# train set caption test
print(train_captions['2513260012_03d33305cf'])
test_captions = dict()
test_words = set()
with open(join(meta_info['text_dir'], 'Flickr_8k.testImages.txt')) as f:
contents = f.read()
n_test_captions = 0
for line in contents.split('\n'):
if line == '':
print(n_test_captions)
continue
if n_test_captions % 10000 == 0:
print(n_test_captions)
file = line.split('.')[0]
for caption in captions[file]:
caption = '[CLS] ' + caption + ' [SEP]'
caption = caption.replace('\n', '')
if file in test_captions.keys():
test_captions[file].append(caption)
else:
test_captions[file] = [caption]
n_test_captions += 1
[test_words.add(word) for word in caption.split()]
print('number of images: %d' % len(test_captions))
print('number of captions: %d' % n_test_captions)
print('number of words: %d' % len(test_words))
# test set caption test
print(test_captions['3385593926_d3e9c21170'])
""" make tokenizer using keras.
Making tokenizer, only use train captions.
"""
def make_tokenizer(captions):
texts = []
for _, caption_list in captions.items():
for caption in caption_list:
texts.append(caption)
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
return tokenizer
tokenizer = make_tokenizer(train_captions)
n_vocabs = len(tokenizer.word_index) + 1 # because index 0, plus 1
print('number of vocabulary: %d' % n_vocabs)
# print(tokenizer.word_index)
with open('tokenizer.pkl', 'wb') as f:
pickle.dump(tokenizer, f, protocol=pickle.HIGHEST_PROTOCOL)
with open('tokenizer.pkl', 'rb') as f:
tokenizer = pickle.load(f)
# print(len(tokenizer.word_index))
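# Quick check (sketch): with Keras' default Tokenizer filters, bracket characters like '[' and ']'
# are stripped and text is lower-cased, so the '[CLS]' / '[SEP]' markers end up in the vocabulary
# as the plain words 'cls' and 'sep'.
print('cls' in tokenizer.word_index, 'sep' in tokenizer.word_index)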
""" Make sequence, Make next word based ground truth.
If single sentence consisting of N words, N + 1(because nd sign) sequences are created.
Ex) Hi, I am a boy.
sequence -> next word
[] [] [] [] [Hi] -> I
[] [] [] [Hi] [I] -> am
[] [] [Hi] [I] [am] -> a
...
[Hi] [I] [am] [a] [boy] -> '[SEP]' (end sign)
"""
train_sequences = list()
train_next_word = list()
c = 0
train_sequences_h5 = 'train_sequences.h5'
train_next_word_h5 = 'train_next_word.h5'
h5f1 = h5py.File(train_sequences_h5, 'w')
h5f2 = h5py.File(train_next_word_h5, 'w')
for img_id, captions in train_captions.items():
# print(img_id)
Xtrain = list()
ytrain = list()
for caption in captions:
sequence = tokenizer.texts_to_sequences([caption])[0]
for i in range(1, len(sequence)): # except start sign
if c % 100000 == 0:
print(c)
train_sequences.append(pad_sequences([sequence[:i]], M)[0])
Xtrain.append(pad_sequences([sequence[:i]], M)[0])
train_next_word.append(to_categorical([sequence[i]], num_classes=n_vocabs)[0])
ytrain.append(to_categorical([sequence[i]], num_classes=n_vocabs)[0])
c += 1
h5f1.create_dataset(img_id, data=Xtrain)
h5f2.create_dataset(img_id, data=ytrain)
h5f1.close()
h5f2.close()
print(c)
# test sequences and next word
print(train_sequences[0])
print(train_next_word[0])
print(train_sequences[1])
print(train_next_word[1])
dev_sequences = list()
dev_next_word = list()
c = 0
dev_sequences_h5 = 'dev_sequences.h5'
dev_next_word_h5 = 'dev_next_word.h5'
h5f1 = h5py.File(dev_sequences_h5, 'w')
h5f2 = h5py.File(dev_next_word_h5, 'w')
for img_id, captions in dev_captions.items():
# print(img_id)
Xdev = list()
ydev = list()
for caption in captions:
text = tokenizer.texts_to_sequences([caption])[0]
for i in range(1, len(text)):
if c % 10000 == 0:
print(c)
dev_sequences.append(pad_sequences([text[:i]], M)[0])
Xdev.append(pad_sequences([text[:i]], M)[0])
dev_next_word.append(to_categorical([text[i]], num_classes=n_vocabs)[0])
ydev.append(to_categorical([text[i]], num_classes=n_vocabs)[0])
c += 1
h5f1.create_dataset(img_id, data=Xdev)
h5f2.create_dataset(img_id, data=ydev)
h5f1.close()
h5f2.close()
print(c)
test_sequences = list()
test_next_word = list()
c = 0
test_sequences_h5 = 'test_sequences.h5'
test_next_word_h5 = 'test_next_word.h5'
h5f1 = h5py.File(test_sequences_h5, 'w')
h5f2 = h5py.File(test_next_word_h5, 'w')
for img_id, captions in test_captions.items():
# print(img_id)
Xtest = list()
ytest = list()
for caption in captions:
text = tokenizer.texts_to_sequences([caption])[0]
for i in range(1, len(text)):
if c % 10000 == 0:
print(c)
test_sequences.append(pad_sequences([text[:i]], M)[0])
Xtest.append(pad_sequences([text[:i]], M)[0])
test_next_word.append(to_categorical([text[i]], num_classes=n_vocabs)[0])
ytest.append(to_categorical([text[i]], num_classes=n_vocabs)[0])
c += 1
h5f1.create_dataset(img_id, data=Xtest)
h5f2.create_dataset(img_id, data=ytest)
h5f1.close()
h5f2.close()
print(c)
###Output
0
10000
20000
30000
40000
50000
58389
###Markdown
Data processing ends here. The code below does not need to be reviewed. h5 -> Pickle
###Code
train_sequences = list()
train_next_word = list()
c = 0
train_sequences_pkl = 'train_sequences.pkl'
train_next_word_pkl = 'train_next_word.pkl'
X = dict()
Y = dict()
for img_id, captions in train_captions.items():
# print(img_id)
Xtrain = list()
ytrain = list()
for caption in captions:
text = tokenizer.texts_to_sequences([caption])[0]
for i in range(1, len(text)):
if c % 100000 == 0:
print(c)
train_sequences.append(pad_sequences([text[:i]], M)[0])
Xtrain.append(pad_sequences([text[:i]], M)[0])
train_next_word.append(to_categorical([text[i]], num_classes=n_vocabs)[0])
ytrain.append(to_categorical([text[i]], num_classes=n_vocabs)[0])
c += 1
X[img_id] = Xtrain
Y[img_id] = ytrain
with open(train_sequences_pkl, 'wb') as f:
pickle.dump(X, f, protocol=pickle.HIGHEST_PROTOCOL)
with open(train_next_word_pkl, 'wb') as f:
pickle.dump(Y, f, protocol=pickle.HIGHEST_PROTOCOL)
print(c)
with open(train_sequences_pkl, 'rb') as f:
test = pickle.load(f)
print(test['2513260012_03d33305cf'])
###Output
_____no_output_____
###Markdown
not needed
###Code
train_id_word = dict()
train_word_id = dict()
for i, word in enumerate(train_words):
    train_id_word[i] = word
    train_word_id[word] = i
print(len(train_id_word))
print(len(train_word_id))
dev_id_word = dict()
dev_word_id = dict()
for i, word in enumerate(dev_words):
dev_id_word[i] = word
dev_word_id[word] = i
print(len(dev_id_word))
print(len(dev_word_id))
sequences = list()
nextwords = list()
data = {}
for captions in train_captions.items():
# print(captions)
data[captions[0]] = []
for caption in captions[1]:
t = []
for word in caption.split():
t.append(train_word_id[word])
data[captions[0]].append(t)
# print(data)
print(len(data))
id_seq = {}
id_y = {}
c = 0
for key, value in data.items():
sub_seqs = []
Y = []
for seq in value:
for i in range(1, len(seq)):
if c % 100000 == 0:
print(c)
sub_seqs.append(sequence.pad_sequences([seq[:i]], max_length)[0])
y = to_categorical([seq[i]], num_classes=n_vocab + 1)
Y.append(y[0])
c += 1
id_seq[key] = sub_seqs
id_y[key] = Y
print(c)
# print(id_seq)
h5file_path = 'train_id_seq.h5'
with h5py.File(h5file_path, 'w') as h5f:
for key, value in id_seq.items():
h5f.create_dataset(key, data=value)
# print(feature_np)
# np.squeeze(feature_np)
# print(feature_np.shape)
h5file_path = 'train_id_seq.h5'
with h5py.File(h5file_path, 'r') as h5f:
print(h5f['667626_18933d713e'][:])
# print(feature_np)
# np.squeeze(feature_np)
# print(feature_np.shape)
h5file_path = 'train_id_y.h5'
with h5py.File(h5file_path, 'w') as h5f:
for key, value in id_y.items():
h5f.create_dataset(key, data=value)
# print(feature_np)
# np.squeeze(feature_np)
# print(feature_np.shape)
h5file_path = 'train_id_y.h5'
with h5py.File(h5file_path, 'r') as h5f:
print(h5f['667626_18933d713e'][:])
# print(feature_np)
# np.squeeze(feature_np)
# print(feature_np.shape)
sequences = list()
nextwords = list()
data = {}
for captions in dev_captions.items():
# print(captions)
data[captions[0]] = []
for caption in captions[1]:
t = []
for word in caption.split():
t.append(dev_word_id[word])
data[captions[0]].append(t)
# print(data)
print(len(data))
id_seq = {}
id_y = {}
c = 0
for key, value in data.items():
sub_seqs = []
Y = []
for seq in value:
for i in range(1, len(seq)):
if c % 10000 == 0:
print(c)
sub_seqs.append(sequence.pad_sequences([seq[:i]], max_length, padding='post')[0])
y = to_categorical([seq[i]], num_classes=n_vocab)
Y.append(y[0])
c += 1
id_seq[key] = sub_seqs
id_y[key] = Y
print(c)
# print(id_seq)
h5file_path = 'dev_id_seq.h5'
with h5py.File(h5file_path, 'w') as h5f:
for key, value in id_seq.items():
h5f.create_dataset(key, data=value)
# print(feature_np)
# np.squeeze(feature_np)
# print(feature_np.shape)
h5file_path = 'dev_id_seq.h5'
with h5py.File(h5file_path, 'r') as h5f:
print(h5f['2090545563_a4e66ec76b'][:])
# print(feature_np)
# np.squeeze(feature_np)
# print(feature_np.shape)
h5file_path = 'dev_id_y.h5'
with h5py.File(h5file_path, 'w') as h5f:
for key, value in id_y.items():
h5f.create_dataset(key, data=value)
# print(feature_np)
# np.squeeze(feature_np)
# print(feature_np.shape)
h5file_path = 'dev_id_y.h5'
with h5py.File(h5file_path, 'r') as h5f:
print(h5f['2090545563_a4e66ec76b'][:])
# print(feature_np)
# np.squeeze(feature_np)
# print(feature_np.shape)
###Output
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]]
###Markdown
Data processing. This Jupyter Notebook helps us convert binary attribute(s) to +/-1 and categorical attribute(s) to one-hot.
###Code
import numpy as np
from sklearn.preprocessing import OneHotEncoder
###Output
_____no_output_____
###Markdown
We load the data that was cleaned in the `data cleaning` step.
###Code
Xy = np.loadtxt('data_cleaned.dat', dtype = 'str')
print(Xy.shape)
print(Xy)
###Output
(372, 20)
[['48.0' '80.0' '1.02' ... 'no' 'no' 'ckd']
['7.0' '50.0' '1.02' ... 'no' 'no' 'ckd']
['62.0' '80.0' '1.01' ... 'no' 'yes' 'ckd']
...
['12.0' '80.0' '1.02' ... 'no' 'no' 'notckd']
['17.0' '60.0' '1.025' ... 'no' 'no' 'notckd']
['58.0' '80.0' '1.025' ... 'no' 'no' 'notckd']]
###Markdown
Attributes We find the number of unique values for each column to get an idea of which variables are continuous, which are binary, and which are categorical. It depends on the data, but as a rule of thumb (sketched in code after the output below): nu = 2 --> binary; nu = 3 or 4 --> category; nu > 4 --> continuous. Of course, we still have to inspect the data in detail.
###Code
X = Xy[:,:-1]
l,n = X.shape
nu = np.array([len(np.unique(X[:,i])) for i in range(n)])
print('number of uniques of each variable:')
print(nu)
###Output
number of uniques of each variable:
[ 74 10 5 6 6 2 2 2 141 111 76 113 42 2 2 2 2 2
2]
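###Markdown
As a rough sketch (not part of the original pipeline), the rule of thumb above can be expressed in code; the thresholds are assumptions and the result should still be checked against the data by hand.
###Code
# Hypothetical helper: map the number of unique values per column to a variable type,
# following the heuristic above (2 -> binary, 3 or 4 -> category, > 4 -> continuous).
def guess_variable_type(nu):
    types = np.ones(len(nu))           # default: continuous (type 1)
    types[nu == 2] = 2                 # binary (type 2)
    types[(nu == 3) | (nu == 4)] = 3   # category (type 3)
    return types

print(guess_variable_type(nu))
###Output
_____no_output_____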
###Markdown
We then define the variable types: 1 = continuous, 2 = binary, 3 = category.
###Code
variable_type = np.ones(n) # continuous
variable_type[5:8] = 2 # binary
variable_type[13:] = 2 # binary
print(variable_type)
###Output
[1. 1. 1. 1. 1. 2. 2. 2. 1. 1. 1. 1. 1. 2. 2. 2. 2. 2. 2.]
###Markdown
We now convert the binary variables to +/-1 and the categorical variables to one-hot.
###Code
def convert_binary_and_category(x,variable_type):
"""
    convert binary variables to +/-1 and categorical variables to one-hot; continuous variables are left unchanged.
"""
onehot_encoder = OneHotEncoder(sparse=False,categories='auto')
# create 2 initial columns
x_new = np.zeros((x.shape[0],2))
for i,i_type in enumerate(variable_type):
if i_type == 1: # continuous
x_new = np.hstack((x_new,x[:,i][:,np.newaxis]))
elif i_type == 2: # binary
unique_value = np.unique(x[:,i])
x1 = np.array([-1. if value == unique_value[0] else 1. for value in x[:,i]])
x_new = np.hstack((x_new,x1[:,np.newaxis]))
else: # category
x1 = onehot_encoder.fit_transform(x[:,i].reshape(-1,1))
x_new = np.hstack((x_new,x1))
# drop the 2 initial column
x_new = x_new[:,2:]
return x_new.astype(float)
# convert X
X_new = convert_binary_and_category(X,variable_type)
print(X_new.shape)
print(X_new)
###Output
(372, 19)
[[48. 80. 1.02 ... -1. -1. -1. ]
[ 7. 50. 1.02 ... -1. -1. -1. ]
[62. 80. 1.01 ... 1. -1. 1. ]
...
[12. 80. 1.02 ... -1. -1. -1. ]
[17. 60. 1.025 ... -1. -1. -1. ]
[58. 80. 1.025 ... -1. -1. -1. ]]
###Markdown
Target
###Code
## target
y = Xy[:,-1]
print(np.unique(y,return_counts=True))
# convert target to 0 and 1
y_new = np.ones(y.shape[0])
y_new[y =='notckd'] = 0
print(np.unique(y_new,return_counts=True))
# combine X and y
Xy_new = np.hstack((X_new,y_new[:,np.newaxis]))
np.savetxt('data_processed.dat',Xy_new,fmt='%f')
###Output
_____no_output_____
###Markdown
Load and prepare data
###Code
df = pd.read_csv(fullpath)
print(df.head())
print(df.columns)
print(df.info())
df['pd_aux'] = pd.to_datetime(df['publish_date'], format = '%Y-%m-%d %H:%M:%S', errors = 'coerce')
date_time_now = datetime.datetime.now()
age = date_time_now - df['pd_aux']
age = age.apply(lambda x: x.days)
df['age'] = age
df = df.drop('pd_aux', axis = 1)
print(df.info)
#print(df[['publish_date', 'age']])
#print(min(df['age']))
print(df.loc[8][:])
###Output
url https://www.goodreads.com/book/show/50209349-u...
title Unti Swanson Novel #7: A Novel
author Peter Swanson
num_ratings 0
num_reviews 0
avg_rating 0
num_pages 320.0
language []
publish_date []
genres []
characters NaN
series []
asin []
rating_histogram NaN
original_publish_year []
isbn []
isbn13 9780062980052.0
awards []
places []
age NaN
Name: 8, dtype: object
###Markdown
Explore data Scatter plot: num_ratings vs age of book
###Code
scatter = plt.figure()
ax = scatter.add_subplot(111)
ax.scatter(df['age'], np.log(df['num_ratings']))
#ax.set(yscale = "log")
#ax.set_ylim(0, 1000000)
plt.show()
###Output
C:\Users\Johannes Heyn\Anaconda3\lib\site-packages\pandas\core\series.py:679: RuntimeWarning: divide by zero encountered in log
result = getattr(ufunc, method)(*inputs, **kwargs)
###Markdown
Scatter plot: num_reviews vs age of book
###Code
scatter = plt.figure()
ax = scatter.add_subplot(111)
ax.scatter(df['age'], df['num_reviews'])
ax.set_ylim(0, 75000)
plt.show()
###Output
_____no_output_____
###Markdown
There's one remarkable outlier with more than 6x as many ratings as the book with the second-highest number of ratings. That book is "The Hunger Games" by Suzanne Collins, and it is just a later edition of the 2008 best-seller; the full entry is shown below. Unfortunately, there doesn't appear to be an obvious correlation between the age of a book and its number of reviews or ratings.
###Code
print(df.loc[np.argmax(df['num_ratings'])][:])
###Output
url https://www.goodreads.com/book/show/49494289-t...
title The Hunger Games
author Suzanne Collins
num_ratings 6154931
num_reviews 168431
avg_rating 4.33
num_pages 387.0
language English
publish_date 2019-12-19 00:00:00
genres ['Teen', 'Young Adult', 'Fantasy', 'Dystopia',...
characters ['Katniss Everdeen', 'Peeta Mellark', 'Cato (H...
series The Hunger Games #1
asin B002MQYOFW
rating_histogram {'5': 3325309, '4': 1855402, '3': 719581, '2':...
original_publish_year 2008.0
isbn []
isbn13 []
awards ['Locus Award Nominee for Best Young Adult Boo...
places ['District 12, Panem', 'Capitol, Panem', 'Panem']
age 162
Name: 3291, dtype: object
###Markdown
y - the inspected (target) value, x - the data fed to the model (features)
###Code
x = dataset.iloc[:,:-1].values
y = dataset.iloc[:,-1].values
###Output
_____no_output_____
###Markdown
Transforming missing values
###Code
from sklearn.preprocessing import Imputer
imputer = Imputer(missing_values='NaN', strategy='mean', axis=0)
cleanResult = imputer.fit(x[:, 1:3])
x[:, 1:3] = cleanResult.transform(x[:, 1:3])
print(x)
###Output
[['France' 44.0 72000.0]
['Spain' 27.0 48000.0]
['Germany' 30.0 54000.0]
['Spain' 38.0 61000.0]
['Germany' 40.0 63777.77777777778]
['France' 35.0 58000.0]
['Spain' 38.77777777777778 52000.0]
['France' 48.0 79000.0]
['Germany' 50.0 83000.0]
['France' 37.0 67000.0]]
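###Markdown
Side note: `Imputer` was removed from scikit-learn in later releases. On a newer installation the same step would use `SimpleImputer` from `sklearn.impute`; the sketch below is an assumption about such an environment and would replace the cell above rather than run after it.
###Code
# Hedged alternative for scikit-learn >= 0.20 (not what was executed above).
import numpy as np
from sklearn.impute import SimpleImputer

simple_imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
x[:, 1:3] = simple_imputer.fit_transform(x[:, 1:3])
###Output
_____no_output_____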
###Markdown
Transforming text to index (Encoding categorical data)
###Code
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelEncoder_X = LabelEncoder()
x[:, 0] = labelEncoder_X.fit_transform(x[:,0])
print(x)
###Output
[[0 44.0 72000.0]
[2 27.0 48000.0]
[1 30.0 54000.0]
[2 38.0 61000.0]
[1 40.0 63777.77777777778]
[0 35.0 58000.0]
[2 38.77777777777778 52000.0]
[0 48.0 79000.0]
[1 50.0 83000.0]
[0 37.0 67000.0]]
###Markdown
Transforming indexes to columns with 1 & 0
###Code
oneHotEncoder = OneHotEncoder(categorical_features=[0])
x = oneHotEncoder.fit_transform(x).toarray()
print(y)
labelEncoder_Y = LabelEncoder()
y = labelEncoder_Y.fit_transform(y)
print(y)
###Output
[0 1 0 0 1 1 0 1 0 1]
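###Markdown
Side note: the `categorical_features` argument of `OneHotEncoder` was also removed from newer scikit-learn releases. There, the usual pattern is a `ColumnTransformer`; the sketch below assumes such a version and assumes `x` still holds the raw columns (i.e. it would replace the one-hot cell above, not run after it).
###Code
# Hedged sketch for newer scikit-learn: one-hot encode column 0, pass the rest through.
# The encoded columns come first in the output, followed by the remaining columns.
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

ct = ColumnTransformer([('country_ohe', OneHotEncoder(), [0])], remainder='passthrough')
x_encoded = ct.fit_transform(x)
print(x_encoded)
###Output
_____no_output_____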
###Markdown
Splitting dataset Training set and Test set
###Code
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
Feature scaling
###Code
from sklearn.preprocessing import StandardScaler
sc_x = StandardScaler()
x_train = sc_x.fit_transform(x_train)
x_test = sc_x.transform(x_test)
###Output
_____no_output_____
###Markdown
_Do the dummy variables get scaled and lose their identity?_
###Code
print(x_test)
###Output
[[-1. 2.64575131 -0.77459667 -1.45882927 -0.90166297]
[-1. 2.64575131 -0.77459667 1.98496442 2.13981082]]
###Markdown
Table of Contents: 1 Load libraries, 2 Split articles into sentences, 3 Split audio files into sentences, 4 Make pairs of audio. Load libraries
###Code
import os
import librosa
import IPython.display as ipd
import pysrt
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import pickle
import random
###Output
_____no_output_____
###Markdown
Split articles into sentences
###Code
data_dir = "./data/"
articles = []
for article_file in next(os.walk(data_dir + "article/"))[2]:
with open(data_dir + "article/" + article_file, encoding='utf-8') as f:
article = f.read()
articles.append(article.split('. '))
articles[6]
import re
def remove_nonletter_and_lowercase(s):
s = re.sub('\d+', ' ', s)
s = re.sub('[\W]+', ' ', s.lower()).strip()
return s
# Remove any non-word character and digit
for article in articles:
for i in range(len(article)):
article[i] = remove_nonletter_and_lowercase(article[i])
articles[6][1]
###Output
_____no_output_____
###Markdown
Split audio files into sentences
###Code
SEARCH_WORD_RANGE = 15
ACCEPTED_MATCH_RATE_TEMP = 0.4
ACCEPTED_MATCH_RATE_SUB = 0.25
def group_subs_of_each_sentence(subs, sentences, verbose=False):
sentences_start_idx = []
i = 0
for sentence in sentences:
start_idx = None
i_cur = i
match_count = 0
for word in sentence.split(' '):
for j in range(i, min(i + SEARCH_WORD_RANGE, len(subs))):
if remove_nonletter_and_lowercase(subs[j].text) == word:
match_count = match_count + 1
if start_idx is None:
start_idx = j
i = j + 1
break
if (match_count / len(sentence.split(' ')) < ACCEPTED_MATCH_RATE_TEMP) \
or (match_count / (i - i_cur) < ACCEPTED_MATCH_RATE_SUB):
start_idx = None
i = i_cur
# Debug
if verbose:
if start_idx is None:
print("'" + sentence + "' is missing after:", end='')
if len(sentences_start_idx) > 0:
k = len(sentences_start_idx) - 1
while sentences_start_idx[k] is None:
k = k - 1
# k = 0
# while sentences_start_idx[k] is None:
# k = k + 1
for j in range(sentences_start_idx[k], i):
while k < len(sentences_start_idx) and sentences_start_idx[k] is None:
k = k + 1
if k < len(sentences_start_idx) and j == sentences_start_idx[k]:
print("'")
print("'", end='')
k = k + 1
print(subs[j].text, end=' ');
print("'")
print("")
sentences_start_idx.append(start_idx)
sentences_start_idx.append(len(subs))
sentences_time = []
for i in range(len(sentences_start_idx) - 1):
if sentences_start_idx[i] is None:
sentences_time.append((None, None))
continue
start_time = subs[sentences_start_idx[i]].start.to_time()
j = i + 1
while sentences_start_idx[j] is None:
j = j + 1
end_time = subs[sentences_start_idx[j] - 1].end.to_time()
sentences_time.append((start_time, end_time))
return sentences_time
group_subs_of_each_sentence(pysrt.open(data_dir + "audio/17021218_DoanDinhDung/01.srt"),
articles[0],
verbose=True)
student_audio_segments_dict = {}
audio_dir = data_dir + "audio/"
for student in next(os.walk(audio_dir))[1]:
subscript_dir = audio_dir + student + "/"
articles_audio_segments = []
for file in next(os.walk(subscript_dir))[2]:
if file.endswith(".srt"):
article_id = int(file[0:2]) - 1
audio_segment = group_subs_of_each_sentence(pysrt.open(subscript_dir + file),
articles[article_id])
articles_audio_segments.append(audio_segment)
student_audio_segments_dict[student] = articles_audio_segments
# Save to a file
pickle.dump(student_audio_segments_dict, open(data_dir + "speaker_audio_segments_dict.pkl", 'wb'))
with open(data_dir + "speaker_audio_segments_dict.pkl", 'rb') as f:
student_audio_segments_dict = pickle.load(f)
import datetime
def datetime_time_to_seconds(time):
return time.hour * 3600 + time.minute * 60 + time.second + time.microsecond / 1000000
datetime_time_to_seconds(datetime.time(0, 0, 1, 170000))
DEFAULT_SAMPLING_RATE = 22050
def extract_segments_from_audio(audio_file_path, intervals):
segments = []
sample, sr = librosa.load(audio_file_path)
for interval in intervals:
if interval[0] is None or interval[1] is None:
segments.append(None)
continue
start_idx = int(datetime_time_to_seconds(interval[0]) * sr)
end_idx = int(datetime_time_to_seconds(interval[1]) * sr)
segments.append(sample[start_idx:end_idx + 1])
return segments
segments = extract_segments_from_audio(data_dir + "audio/17021218_DoanDinhDung/01.wav",
student_audio_segments_dict['17021218_DoanDinhDung'][0])
# Let's try play an audio array
ipd.Audio(segments[1], rate=DEFAULT_SAMPLING_RATE)
# Save waveforms to files
for student in next(os.walk(audio_dir))[1]:
subscript_dir = audio_dir + student + "/"
for file in next(os.walk(subscript_dir))[2]:
if file.endswith(".wav"):
article_id = int(file[0:2]) - 1
audio_segments = extract_segments_from_audio(subscript_dir + file,
student_audio_segments_dict[student][article_id])
for i in range(len(audio_segments)):
if audio_segments[i] is not None:
audio_data = np.asarray(audio_segments[i])
waveform_dir = data_dir + "waveform/" + student + "/" + file[0:2] + "/"
if not os.path.exists(waveform_dir):
os.makedirs(waveform_dir)
np.save(waveform_dir + str(i) + ".npy", audio_data)
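# Quick check (sketch): a saved segment can be reloaded and played back; the path below is
# illustrative and only exists for segments that were not None.
# wav = np.load(data_dir + "waveform/17021218_DoanDinhDung/01/1.npy")
# ipd.Audio(wav, rate=DEFAULT_SAMPLING_RATE)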
###Output
_____no_output_____
###Markdown
Make pairs of audio
###Code
students_segments_indices = []
students = list(student_audio_segments_dict.keys())
for student in students:
student_segments_indices = []
for i in range(len(student_audio_segments_dict[student])):
for j in range(len(student_audio_segments_dict[student][i])):
if student_audio_segments_dict[student][i][j] == (None, None):
student_segments_indices.append(None)
else:
student_segments_indices.append((i, j))
students_segments_indices.append(student_segments_indices)
# Different speakers same sentence
DSSS_LEN = 300000
audio_pairs = []
for k in range(len(students_segments_indices[0])):
for i in range(len(students_segments_indices)):
for j in range(i + 1, len(students_segments_indices)):
if students_segments_indices[i][k] is not None\
and students_segments_indices[j][k] is not None:
audio_info_1 = [students[i], students_segments_indices[i][k][0], students_segments_indices[i][k][1]]
audio_info_2 = [students[j], students_segments_indices[j][k][0], students_segments_indices[j][k][1]]
if random.randrange(2) == 1:
audio_info_1, audio_info_2 = audio_info_2, audio_info_1
audio_pairs.append(audio_info_1 + audio_info_2)
random.shuffle(audio_pairs)
audio_pairs_dsss_df = pd.DataFrame(audio_pairs[:DSSS_LEN], columns=['student_I_id', 'article_I_id', 'sentence_I_id',
'student_II_id', 'article_II_id', 'sentence_II_id'])
audio_pairs_dsss_df
# Different speakers different sentences
DSDS_LEN = 100000
audio_pairs = []
for i1 in range(len(students_segments_indices)):
for j1 in range(len(students_segments_indices[i1])):
if students_segments_indices[i1][j1] is not None:
audio_info_1 = [students[i1], students_segments_indices[i1][j1][0], students_segments_indices[i1][j1][1]]
for i2 in range(i1 + 1, len(students_segments_indices)):
for j2 in range(len(students_segments_indices[i2])):
if j1 != j2 and students_segments_indices[i2][j2] is not None:
audio_info_2 = [students[i2], students_segments_indices[i2][j2][0], students_segments_indices[i2][j2][1]]
if random.randrange(2) == 1:
audio_pairs.append(audio_info_1 + audio_info_2)
else:
audio_pairs.append(audio_info_2 + audio_info_1)
random.shuffle(audio_pairs)
audio_pairs_dsds_df = pd.DataFrame(audio_pairs[:DSDS_LEN], columns=['student_I_id', 'article_I_id', 'sentence_I_id',
'student_II_id', 'article_II_id', 'sentence_II_id'])
audio_pairs_dsds_df
# Same speakers different sentences
SSDS_LEN = 600000
audio_pairs = []
for k in range(len(students_segments_indices)):
for i in range(len(students_segments_indices[k])):
for j in range(i + 1, len(students_segments_indices[k])):
if students_segments_indices[k][i] is not None\
and students_segments_indices[k][j] is not None:
audio_info_1 = [students[k], students_segments_indices[k][i][0], students_segments_indices[k][i][1]]
audio_info_2 = [students[k], students_segments_indices[k][j][0], students_segments_indices[k][j][1]]
if random.randrange(2) == 1:
audio_info_1, audio_info_2 = audio_info_2, audio_info_1
audio_pairs.append(audio_info_1 + audio_info_2)
random.shuffle(audio_pairs)
audio_pairs_ssds_df = pd.DataFrame(audio_pairs[:SSDS_LEN], columns=['student_I_id', 'article_I_id', 'sentence_I_id',
'student_II_id', 'article_II_id', 'sentence_II_id'])
audio_pairs_ssds_df
audio_pairs_df = pd.concat([audio_pairs_dsss_df, audio_pairs_dsds_df, audio_pairs_ssds_df], ignore_index=True)
audio_pairs_df
plt.title("Same speaker")
(audio_pairs_df['student_I_id'] == audio_pairs_df['student_II_id']).value_counts().plot.bar()
plt.title("Same sentence")
audio_pairs_df.apply(lambda row: (row['article_I_id'] == row['article_II_id'])\
and (row['sentence_I_id'] == row['sentence_II_id']), axis=1).value_counts().plot.bar()
# Number of tests per speaker
pd.concat([audio_pairs_df['student_I_id'], audio_pairs_df['student_II_id']], ignore_index=True).value_counts().describe()
# Shuffle the dataframe rows
audio_pairs_df = audio_pairs_df.sample(frac=1).reset_index(drop=True)
audio_pairs_df
# Save to csv file
audio_pairs_df.to_csv(data_dir + "audio_sentence_pairs_full.csv", index=False)
###Output
_____no_output_____
###Markdown
DATA 603 Project US COVID-19 Mortality Modelling Imports and Utility
###Code
import re
import mysql.connector
import pandas as pd
from mysql.connector import errorcode
# SQL Query Function
# Reference: MySQL Developer's guide. Accessed November 18
# https://dev.mysql.com/doc/connector-python/en/connector-python-example-cursor-select.html
# https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor.html
def run_sql(query):
df = None
try:
cnx = mysql.connector.connect(option_files=['connection.conf', 'password.conf'])
cur = cnx.cursor()
cur.execute(query)
res = cur.fetchall()
# https://stackoverflow.com/questions/5010042/mysql-get-column-name-or-alias-from-query
col_names = [i[0] for i in cur.description]
df = pd.DataFrame(res, columns=col_names)
cur.close()
except mysql.connector.Error as err:
if err.errno == errorcode.ER_ACCESS_DENIED_ERROR:
print('Something is wrong with your user name or password')
elif err.errno == errorcode.ER_BAD_DB_ERROR:
print('Database does not exist')
else:
print(err)
else:
cnx.close()
return df
###Output
_____no_output_____
###Markdown
SQL Load
###Code
df_raw = run_sql('''select cdc_report_dt,
age_group,
`Race and ethnicity (combined)`,
sex,
count(*) as reported_cases,
sum(case current_status when "Laboratory-confirmed case" then 1 else 0 end) as confirmed_cases,
sum(case hosp_yn when "Yes" then 1 else 0 end) as hosp,
sum(case icu_yn when "Yes" then 1 else 0 end) as icu,
sum(case medcond_yn when "Yes" then 1 else 0 end) as medcond,
sum(case death_yn when "Yes" then 1 else 0 end) as deaths
from covid_19_us
where -- cdc_report_dt >= "2020-04-01" and
age_group != "NA"
and age_group != "Unknown"
and `Race and ethnicity (combined)` != "NA"
and `Race and ethnicity (combined)` != "Unknown"
and `Race and ethnicity (combined)` != "Native Hawaiian/Other Pacific Islander, Non-Hispanic" -- 0.2% of cases
-- and `Race and ethnicity (combined)` != "American Indian/Alaska Native, Non-Hispanic " -- 0.7% of cases
and (sex = "Male" or sex = "Female")
group by cdc_report_dt, `Race and ethnicity (combined)`, age_group, sex;''')
###Output
_____no_output_____
###Markdown
Basic Clean and Save
###Code
df_raw.rename(columns={'cdc_report_dt':'date',
'Race and ethnicity (combined)':'race_ethnicity',
'confirmed_cases':'cases'}, inplace=True)
df_us = df_raw.drop(df_raw[df_raw.deaths == 0].index)
cutoff_date = pd.to_datetime('2020-04-01', format='%Y-%m-%d', errors='coerce')
# drop rows ref:
# https://stackoverflow.com/questions/13851535/delete-rows-from-a-pandas-dataframe-based-on-a-conditional-expression-involving
df_us.drop(df_us[df_us.date < cutoff_date].index, inplace=True)
display(df_us)
df_us.to_csv("us_age_race_sex.csv", index=False)
###Output
_____no_output_____
###Markdown
Advanced Processing
###Code
df_i = df_raw.set_index(['date','age_group','race_ethnicity','sex'])
df_1wk = df_raw[['date','age_group','race_ethnicity','sex','deaths']].copy()
# Subtract days ref:
# https://stackoverflow.com/questions/20480897/pandas-add-one-day-to-column
df_1wk['date'] = df_1wk.date - pd.DateOffset(7)
cutoff_date = pd.to_datetime('2020-04-01', format='%Y-%m-%d', errors='coerce')
df_1wk.drop(df_1wk[df_1wk.date < cutoff_date].index, inplace=True)
df_1wk.set_index(['date','age_group','race_ethnicity','sex'], inplace=True)
df_1wk = df_i.join(df_1wk, rsuffix='_1wk').dropna()  # suffix the shifted frame so 'deaths_1wk' holds deaths reported one week later (consistent with the 2-week version below)
df_1wk.reset_index(inplace=True)
# df_1wk.drop('date', inplace=True)
df_1wk.drop(['date','deaths','reported_cases'], axis=1, inplace=True)
df_1wk.drop(df_1wk[df_1wk.deaths_1wk == 0].index, inplace=True)
display(df_1wk)
df_1wk.to_csv("us_1week_delay.csv", index=False)
df_2wk = df_raw[['date','age_group','race_ethnicity','sex','deaths']].copy()
df_2wk['date'] = df_2wk.date - pd.DateOffset(14)
cutoff_date = pd.to_datetime('2020-04-01', format='%Y-%m-%d', errors='coerce')
df_2wk.drop(df_2wk[df_2wk.date < cutoff_date].index, inplace=True)
df_2wk.set_index(['date','age_group','race_ethnicity','sex'], inplace=True)
df_2wk = df_i.join(df_2wk, rsuffix='_2wk').dropna()
df_2wk.reset_index(inplace=True)
df_2wk.drop(['date','deaths','reported_cases'], axis=1, inplace=True)
df_2wk.drop(df_2wk[df_2wk.deaths_2wk == 0].index, inplace=True)
display(df_2wk)
df_2wk.to_csv("us_2week_delay.csv", index=False)
###Output
_____no_output_____
###Markdown
Rolling Average
###Code
df_roll = df_raw[['date','age_group','race_ethnicity','sex','deaths']].copy()
df_roll['date'] = df_roll.date - pd.DateOffset(14)
df_roll.set_index(['date','age_group','race_ethnicity','sex'], inplace=True)
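# grouping on levels 1-3 (age_group, race_ethnicity, sex) keeps each stratum separate,
# so the 14-day rolling mean of deaths is computed within a stratum rather than across the whole table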
df_roll = df_roll.groupby(level=[1,2,3], as_index=False, dropna=True).rolling(14)['deaths'].mean().reset_index(level=[0,1,2], drop=True)
df_roll = df_i.join(df_roll, rsuffix='_roll').dropna()
df_roll.reset_index(inplace=True)
cutoff_date = pd.to_datetime('2020-04-01', format='%Y-%m-%d', errors='coerce')
df_roll.drop(df_roll[df_roll.date < cutoff_date].index, inplace=True)
df_roll.drop(['date','deaths','reported_cases'], axis=1, inplace=True)
df_roll.drop(df_roll[df_roll.deaths_roll == 0].index, inplace=True)
display(df_roll)
df_roll.to_csv("us_rolling.csv", index=False)
###Output
_____no_output_____
###Markdown
Sanity Check
###Code
run_sql('''select `Race and ethnicity (combined)`, count(*) from covid_19_us
group by `Race and ethnicity (combined)`;''')
###Output
_____no_output_____
###Markdown
Scratch Code
###Code
df_roll2 = df_raw[['date','age_group','race_ethnicity','sex','deaths']].copy()
df_roll2['date'] = df_roll2.date - pd.DateOffset(12)
df_roll2.set_index(['date','age_group','race_ethnicity','sex'], inplace=True)
df_roll2 = df_roll2.groupby(level=[1,2,3], as_index=False, dropna=False).rolling(7)['deaths'].mean()
df_roll2 = df_roll2.reset_index(level=[0,1,2], drop=True)
df_roll2 = df_i.join(df_roll2, rsuffix='_roll').dropna()
# display(df_roll2)
df_roll2a = df_raw[['date','age_group','race_ethnicity','sex','cases']].copy()
# df_roll2a.drop(['reported_cases'], axis=1, inplace=True)
df_roll2a['date'] = df_roll2a.date - pd.DateOffset(0)
df_roll2a.set_index(['date','age_group','race_ethnicity','sex'], inplace=True)
df_roll2a = df_roll2a.groupby(level=[1,2,3], as_index=False, dropna=True).rolling(7)['cases'].mean()
df_roll2a = df_roll2a.reset_index(level=[0,1,2], drop=True)
df_roll2 = df_roll2.join(df_roll2a, rsuffix='_roll').dropna()
# display(df_roll2)
df_roll2a = df_raw[['date','age_group','race_ethnicity','sex','hosp']].copy()
df_roll2a['date'] = df_roll2a.date - pd.DateOffset(7)
df_roll2a.set_index(['date','age_group','race_ethnicity','sex'], inplace=True)
df_roll2a = df_roll2a.groupby(level=[1,2,3], as_index=False, dropna=True).rolling(4)['hosp'].mean()
df_roll2a = df_roll2a.reset_index(level=[0,1,2], drop=True)
df_roll2 = df_roll2.join(df_roll2a, rsuffix='_roll').dropna()
# display(df_roll2)
df_roll2a = df_raw[['date','age_group','race_ethnicity','sex','icu']].copy()
df_roll2a['date'] = df_roll2a.date - pd.DateOffset(10)
df_roll2a.set_index(['date','age_group','race_ethnicity','sex'], inplace=True)
df_roll2a = df_roll2a.groupby(level=[1,2,3], as_index=False, dropna=True).rolling(4)['icu'].mean()
df_roll2a = df_roll2a.reset_index(level=[0,1,2], drop=True)
df_roll2 = df_roll2.join(df_roll2a, rsuffix='_roll').dropna()
# display(df_roll2)
df_roll2a = df_raw[['date','age_group','race_ethnicity','sex','medcond']].copy()
df_roll2a['date'] = df_roll2a.date - pd.DateOffset(0)
df_roll2a.set_index(['date','age_group','race_ethnicity','sex'], inplace=True)
df_roll2a = df_roll2a.groupby(level=[1,2,3], as_index=False, dropna=True).rolling(7)['medcond'].mean()
df_roll2a = df_roll2a.reset_index(level=[0,1,2], drop=True)
df_roll2 = df_roll2.join(df_roll2a, rsuffix='_roll').dropna()
# display(df_roll2)
df_roll2.reset_index(inplace=True)
cutoff_date = pd.to_datetime('2020-04-01', format='%Y-%m-%d', errors='coerce')
df_roll2.drop(df_roll2[df_roll2.date < cutoff_date].index, inplace=True)
df_roll2.drop(['date','deaths','reported_cases','cases','hosp','icu','medcond'], axis=1, inplace=True)
df_roll2.drop(df_roll2[df_roll2.deaths_roll == 0].index, inplace=True)
df_roll2.drop(df_roll2[df_roll2.cases_roll == 0].index, inplace=True)
df_roll2.drop(df_roll2[df_roll2.hosp_roll == 0].index, inplace=True)
df_roll2.drop(df_roll2[df_roll2.icu_roll == 0].index, inplace=True)
df_roll2.drop(df_roll2[df_roll2.medcond_roll == 0].index, inplace=True)
display(df_roll2)
df_roll2.to_csv("us_roll_all.csv", index=False)
df_roll2 = df_raw[['date','age_group','race_ethnicity','sex','deaths']].copy()
df_roll2['date'] = df_roll2.date - pd.DateOffset(12)
df_roll2.set_index(['date','age_group','race_ethnicity','sex'], inplace=True)
df_roll2 = df_roll2.groupby(level=[1,2,3], as_index=False, dropna=False).rolling(4)['deaths'].mean()
df_roll2 = df_roll2.reset_index(level=[0,1,2], drop=True)
df_roll2 = df_i.join(df_roll2, rsuffix='_roll').dropna()
display(df_roll2)
df_roll2a = df_raw[['date','age_group','race_ethnicity','sex','hosp']].copy()
df_roll2a['date'] = df_roll2a.date - pd.DateOffset(7)
df_roll2a.set_index(['date','age_group','race_ethnicity','sex'], inplace=True)
df_roll2 = df_roll2.join(df_roll2a, rsuffix='_off7').dropna()
# display(df_roll2)
df_roll2a = df_raw[['date','age_group','race_ethnicity','sex','icu']].copy()
df_roll2a['date'] = df_roll2a.date - pd.DateOffset(10)
df_roll2a.set_index(['date','age_group','race_ethnicity','sex'], inplace=True)
df_roll2 = df_roll2.join(df_roll2a, rsuffix='_off10').dropna()
# display(df_roll2)
df_roll2.reset_index(inplace=True)
cutoff_date = pd.to_datetime('2020-04-01', format='%Y-%m-%d', errors='coerce')
df_roll2.drop(df_roll2[df_roll2.date < cutoff_date].index, inplace=True)
df_roll2.drop(['date','deaths','reported_cases','hosp','icu'], axis=1, inplace=True)
df_roll2.drop(df_roll2[df_roll2.deaths_roll == 0].index, inplace=True)
display(df_roll2)
df_roll2.to_csv("us_roll_off.csv", index=False)
df_adv.reset_index()
df_adv.to_csv("us_covid_adv.csv")
# age = run_sql('''select cdc_report_dt, age_group, count(*) from covid_19_us group by cdc_report_dt, age_group;''')
# age.columns = ['date', 'age', 'cases']
# age_ind = age.set_index(['date', 'age'])
# display(age_ind.unstack())
# data_frame = run_sql("""
# select med.*, onset from
# (select cdc_report_dt,
# count(*) as total_cases,
# sum(case current_status when "Laboratory-confirmed case" then 1 else 0 end) as confirmed_cases,
# sum(case sex when "Male" then 1 else 0 end) as male,
# sum(case age_group when "0 - 9 Years" then 1
# when "10 - 19 Years" then 1
# when "20 - 29 Years" then 1
# else 0 end) as age0_29,
# sum(case age_group when "30 - 39 Years" then 1 else 0 end) as age30_39,
# sum(case age_group when "40 - 49 Years" then 1 else 0 end) as age40_49,
# sum(case age_group when "50 - 59 Years" then 1 else 0 end) as age50_59,
# sum(case age_group when "60 - 69 Years" then 1 else 0 end) as age60_69,
# sum(case age_group when "70 - 79 Years" then 1 else 0 end) as age70_79,
# sum(case age_group when "80+ Years" then 1 else 0 end) as age80_up,
# sum(case `Race and ethnicity (combined)` when "Asian, Non-Hispanic" then 1 else 0 end) as r_asian,
# sum(case `Race and ethnicity (combined)` when "Multiple/Other, Non-Hispanic" then 1 else 0 end) as r_mult,
# sum(case `Race and ethnicity (combined)` when "Black, Non-Hispanic" then 1 else 0 end) as r_black,
# sum(case `Race and ethnicity (combined)` when "Hispanic/Latino" then 1 else 0 end) as r_hisp,
# sum(case hosp_yn when "Yes" then 1 else 0 end) as hosp,
# sum(case icu_yn when "Yes" then 1 else 0 end) as icu,
# sum(case medcond_yn when "Yes" then 1 else 0 end) as medcond,
# sum(case death_yn when "Yes" then 1 else 0 end) as deaths
# from covid_19_us
# group by cdc_report_dt) as med
# join
# (select onset_dt, count(*) as onset from covid_19_us
# where onset_dt != "0000-00-00"
# group by onset_dt) as onset
# on cdc_report_dt = onset_dt;""")
# data_frame.head()
# # display(data_frame)
# data_frame.columns = ["date", "total", "conf", "male", "age0_29", "age30_39", "age40_49",
# "age50_59", "age60_69", "age70_79", "age80_up", "r_asian",
# "r_mult", "r_black", "r_hisp", "hosp", "icu", "medcond", "deaths", "onset"]
# df = data_frame.set_index("date")
# display(df)
# df.to_csv("us_totals_category.csv")
df2.loc[(df2["deaths"] == 0) & (df2["reported_cases"] > 10)].sort_values(by="reported_cases", ascending=False)
df2.loc[(df2["deaths"] == 0) & (df2["reported_cases"] > 100)].sort_values(by="reported_cases", ascending=False)
# df2.loc[(df2["deaths"] == 0) & (df2["reported_cases"] > 1000)].sort_values(by="reported_cases", ascending=False)
df2.loc[(df2["deaths"] == 0)
& (df2["reported_cases"] > 100)
& (df2["age_group"] != "0 - 9 Years")
& (df2["age_group"] != "10 - 19 Years")
& (df2["age_group"] != "20 - 29 Years")
& (df2["age_group"] != "30 - 39 Years")].sort_values(by="reported_cases", ascending=False)
df3 = run_sql('''select cdc_report_dt,
age_group,
`Race and ethnicity (combined)`,
sex,
count(*) as reported_cases,
sum(case current_status when "Laboratory-confirmed case" then 1 else 0 end) as confirmed_cases,
sum(case hosp_yn when "Yes" then 1 else 0 end) as hosp,
sum(case icu_yn when "Yes" then 1 else 0 end) as icu,
sum(case medcond_yn when "Yes" then 1 else 0 end) as medcond,
sum(case death_yn when "Yes" then 1 else 0 end) as deaths
from covid_19_us
where -- cdc_report_dt >= "2020-04-01"
age_group != "NA"
and age_group != "Unknown"
and `Race and ethnicity (combined)` != "NA"
and `Race and ethnicity (combined)` != "Unknown"
and (sex = "Male" or sex = "Female")
group by cdc_report_dt, `Race and ethnicity (combined)`, age_group, sex;''')
# df3["d_1week"] = df3["deaths"]
df3_i = df3.set_index(['cdc_report_dt','age_group','Race and ethnicity (combined)','sex'])
###Output
_____no_output_____
###Markdown
Stablecoin Billionaires Descriptive Analysis of the Ethereum-based Stablecoin ecosystem by Anton Wahrstätter, 01.07.2020 Script to prepare the data
###Code
import pandas as pd
import numpy as np
from datetime import datetime
from collections import Counter
###Output
_____no_output_____
###Markdown
Data
###Code
#tether
tether_chunk_0 = 'data/tether/transfer/0_tether_transfer_4638568-8513778.csv'
tether_chunk_1 = 'data/tether/transfer/1_tether_transfer_8513799-8999999.csv'
tether_chunk_2 = 'data/tether/transfer/2_tether_transfer_9000000-9799999.csv'
tether_chunk_3 = 'data/tether/transfer/3_tether_transfer_9800000-10037842.csv'
tether_chunk_4 = 'data/tether/transfer/4_tether_transfer_10037843-10176690.csv'
tether_chunk_5 = 'data/tether/transfer/5_tether_transfer_10176691-10370273.csv'
tether_chunk_0_1 = 'data/tether/transfer/0_tether_transfer_4638568-8999999.csv'
tether_transfer = 'data/tether/transfer/tether_transfers.csv'
tether_issue = 'data/tether/issue/tether_issue.csv'
tether_destroyedblackfunds = 'data/tether/destroyedblackfunds/tether_destroyedblackfunds.csv'
tether_tx_count_to = 'plots/tether/tether_tx_count_to.csv'
tether_tx_count_from = 'plots/tether/tether_tx_count_from.csv'
#usdc
usdc_transfer = 'data/usdc/transfer/0_usdc_transfer_6082465-10370273.csv'
usdc_mint = 'data/usdc/mint/usdc_mint.csv'
usdc_burn = 'data/usdc/burn/usdc_burn.csv'
usdc_tx_count_to = 'plots/usdc/usdc_tx_count_to.csv'
usdc_tx_count_from = 'plots/usdc/usdc_tx_count_from.csv'
#paxos
paxos_transfer = 'data/paxos/transfer/0_paxos_transfer_6294931-10370273.csv'
paxos_mint = 'data/paxos/supplyincreased/paxos_supplyincreased.csv'
paxos_burn = 'data/paxos/supplydecreased/paxos_supplydecreased.csv'
paxos_tx_count_to = 'plots/paxos/paxos_tx_count_to.csv'
paxos_tx_count_from = 'plots/paxos/paxos_tx_count_from.csv'
#dai
dai_transfer = 'data/dai/transfer/0_dai_transfer_8928158-10370273.csv'
dai_mint = 'data/dai/mint/dai_mint.csv'
dai_burn = 'data/dai/burn/dai_burn.csv'
dai_tx_count_to = 'plots/dai/dai_tx_count_to.csv'
dai_tx_count_from = 'plots/dai/dai_tx_count_from.csv'
#trueusd
trueusd_transfer = 'data/trueusd/transfer/0_trueUSD_transfer_5198636-10370273.csv'
trueusd_mint = 'data/trueusd/mint/trueusd_mint.csv'
trueusd_mint_old = 'data/trueusd/mint/trueusd_mint_old.csv'
trueusd_burn = 'data/trueusd/burn/trueusd_burn.csv'
trueusd_burn_old = 'data/trueusd/burn/trueusd_burn_old.csv'
#binanceusd
binanceusd_transfer = 'data/binanceusd/transfer/0_binanceusd_transfer_8493105-10370273.csv'
binanceusd_mint = 'data/binanceusd/supplyincreased/binanceusd_supplyincreased.csv'
binanceusd_burn = 'data/binanceusd/supplydecreased/binanceusd_supplydecreased.csv'
binanceusd_tx_count_to = 'plots/binanceusd/binanceusd_tx_count_to.csv'
binanceusd_tx_count_from = 'plots/binanceusd/binanceusd_tx_count_from.csv'
#husd
husd_transfer = 'data/husd/transfer/0_husd_transfer_8174400-10370273.csv'
husd_mint = 'data/husd/issue/husd_issue.csv'
husd_burn = 'data/husd/redeem/husd_redeem.csv'
husd_tx_count_to = 'plots/husd/husd_tx_count_to.csv'
husd_tx_count_from = 'plots/husd/husd_tx_count_from.csv'
###Output
_____no_output_____
###Markdown
Concatenate datasets
###Code
def concentrate_data():
df = pd.concat([pd.read_csv(tether_chunk_0),
pd.read_csv(tether_chunk_1),
pd.read_csv(tether_chunk_2),
pd.read_csv(tether_chunk_3),
pd.read_csv(tether_chunk_4),
pd.read_csv(tether_chunk_5)], ignore_index=True)
df.to_csv('data/tether/transfer/tether_transfers.csv', index=False)
return
###Output
_____no_output_____
###Markdown
Prepare Transfer Data Balances
###Code
#works great for up to 18 decimals
#needs much RAM
pd.options.mode.chained_assignment = None
def get_balances(_df, decimals):
token = _df.split('/')[1]
print("Start with {}".format(token))
df = pd.read_csv(_df)
froms = df[['txfrom', 'txvalue']]
froms['txvalue'] = froms['txvalue'].apply(lambda x: int(x)*-1)
tos = df[['txto', 'txvalue']]
tos['txvalue'] = tos['txvalue'].apply(lambda x: int(x))
outs = froms.groupby("txfrom").sum().reset_index().rename(columns={"txfrom":"txto"})
ins = tos.groupby("txto").sum().reset_index()
balance = outs.append(ins).groupby("txto").sum()
balance = balance / 10**decimals
balance = balance.reset_index().rename(columns={"txto":"address"}).set_index("address").sort_values('txvalue')
balance.to_csv('plots/{}/{}_balances.csv'.format(token, token))
get_balances(tether_transfer, 6)
get_balances(binanceusd_transfer, 18)
get_balances(husd_transfer, 8)
get_balances(dai_transfer, 18)
get_balances(trueusd_transfer, 18)
get_balances(usdc_transfer, 6)
get_balances(paxos_transfer, 18)
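# note: sai_transfer is not defined in the data cell above; its path must be set before the next call can run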
get_balances(sai_transfer, 18)
###Output
_____no_output_____
###Markdown
Remove burned tokens from Tether balances
###Code
df = pd.read_csv('plots/tether/tether_balances.csv', index_col='address')
burn = pd.read_csv(tether_destroyedblackfunds).loc[:,['address', 'txvalue']].set_index('address')
burn['txvalue'] = burn['txvalue'] /-10**6
_df = df.append(burn)
_df.loc['0xc6cde7c39eb2f0f0095f41570af89efc2c1ea828',:]=108850 # bitfinex multisig
_df = _df.groupby(_df.index).sum().sort_values('txvalue')
_df.to_csv('plots/tether/tether_balances.csv')
###Output
_____no_output_____
###Markdown
###Code
# deprecated, but needs less RAM
# works well for coins with a small number of decimals
def get_balances(dflist, decimals=0, chunked=False):
counter = 0
for chunk in dflist:
token = chunk.split('/')[1]
_chunk = pd.read_csv(chunk)
froms = _chunk[['txfrom', 'txvalue']].set_index('txfrom')
froms['txvalue'] = froms['txvalue'].apply(lambda x: int(x)/(10**decimals)*-1)
tos = _chunk[[ 'txto', 'txvalue']].set_index('txto')
tos['txvalue'] = tos['txvalue'].apply(lambda x: int(x)/(10**decimals))
df = tos.append(froms)
df = df.groupby(df.index).sum()
if chunked:
df.to_csv('plots/{}/{}_balances_chunk_{}.csv'.format(token, token, counter))
else:
df.to_csv('plots/{}/{}_balances.csv'.format(token, token))
counter += 1
return
usdt = [tether_chunk_0, tether_chunk_1, tether_chunk_2, tether_chunk_3, tether_chunk_4, tether_chunk_5]
get_balances(usdt, chunked=True)
get_balances([usdc_transfer])
get_balances([paxos_transfer], 18)
get_balances([dai_transfer])
get_balances([trueusd_transfer], 18)
get_balances([husd_transfer])
get_balances([binanceusd_transfer], 18)
###Output
_____no_output_____
###Markdown
Concatenate Chunks (Balances)
###Code
aa = pd.read_csv('plots/tether/tether_balances_chunk_0.csv', index_col=0)
bb = pd.read_csv('plots/tether/tether_balances_chunk_1.csv', index_col=0)
cc = pd.read_csv('plots/tether/tether_balances_chunk_2.csv', index_col=0)
dd = pd.read_csv('plots/tether/tether_balances_chunk_3.csv', index_col=0)
ee = pd.read_csv('plots/tether/tether_balances_chunk_4.csv', index_col=0)
ff = pd.read_csv('plots/tether/tether_balances_chunk_5.csv', index_col=0)
df = aa.append([bb,cc,dd,ee,ff])
df = df.groupby(df.index).sum()
df = df.sort_values('txvalue')
df.to_csv('plots/tether/tether_balances.csv')
###Output
_____no_output_____
###Markdown
Transfer Counter
###Code
def from_to_tx_counter(dflist):
from_count = Counter()
to_count = Counter()
for chunk in dflist:
token = chunk.split('/')[1]
df = pd.read_csv(chunk)
froms = Counter(df['txfrom'])
tos = Counter(df['txto'])
from_count = from_count + froms
to_count = to_count + tos
df_from = pd.DataFrame(dict(from_count).values(), index=dict(from_count).keys()).rename(columns={0: 'txs'})
df_to = pd.DataFrame(dict(to_count).values(), index=dict(to_count).keys()).rename(columns={0: 'txs'})
df_from.to_csv('plots/{}/{}_tx_count_from.csv'.format(token, token))
df_to.to_csv('plots/{}/{}_tx_count_to.csv'.format(token, token))
return
usdt = [tether_chunk_0, tether_chunk_1, tether_chunk_2, tether_chunk_3, tether_chunk_4, tether_chunk_5]
from_to_tx_counter(usdt)
from_to_tx_counter([usdc_transfer])
from_to_tx_counter([paxos_transfer])
from_to_tx_counter([dai_transfer])
from_to_tx_counter([trueusd_transfer])
from_to_tx_counter([husd_transfer])
from_to_tx_counter([binanceusd_transfer])
from_to_tx_counter([sai_transfer])
###Output
_____no_output_____
###Markdown
Create plot data: Transfers over date
###Code
def extract_data(df):
dates = df.apply(lambda x: str(datetime.utcfromtimestamp(x))[:10])
txs = list(Counter(dates).values())
a = dates.iloc[0]
b = dates.iloc[-1]
idx = pd.date_range(a,b)
df = pd.DataFrame(txs, index=np.unique(dates), columns=['txs'] )
df.index = pd.DatetimeIndex(df.index)
df = df.reindex(idx, fill_value=0)
da, tx = df.index.tolist(), df['txs'].tolist()
return da, tx
def create_plot_txs_over_date(df):
dates = []
txs = []
token = df.split('/')[1]
for chunk in pd.read_csv(df, chunksize=100000):
da, tx = extract_data(chunk['timestamp'])
dates = dates + da
txs = txs + tx
df = pd.DataFrame({'dates': dates, 'txs': txs}).groupby('dates', as_index=False).sum()
df.to_csv('plots/{}/{}_txs_over_date.csv'.format(token, token))
return
create_plot_txs_over_date(tether_transfer)
create_plot_txs_over_date(usdc_transfer)
create_plot_txs_over_date(paxos_transfer)
create_plot_txs_over_date(dai_transfer)
create_plot_txs_over_date(trueusd_transfer)
create_plot_txs_over_date(binanceusd_transfer)
create_plot_txs_over_date(husd_transfer)
create_plot_txs_over_date(sai_transfer)
###Output
_____no_output_____
###Markdown
Transfers to new addresses
###Code
tos = set()
def _add(x):
global tos
tos.add(x)
return 1
def extract_uniques(df):
dates = df['timestamp'].apply(lambda x: str(datetime.utcfromtimestamp(x))[:10])
un = df['txto'].apply(lambda x: _add(x) if x not in tos else 0)
a = dates.iloc[0]
b = dates.iloc[-1]
idx = pd.date_range(a,b)
df = pd.DataFrame({'dates':dates, 'uniques':un})
df = df.groupby('dates', as_index = False).sum()
df = df.set_index('dates')
df.index = pd.DatetimeIndex(df.index)
df = df.reindex(idx, fill_value=0)
return df.index.tolist(), df['uniques'].tolist()
def create_plot_unique_recipients_over_date(df):
global tos
dates = []
txs = []
for i in df:
token = i.split('/')[1]
for chunk in pd.read_csv(i, chunksize=1000000):
da, tx = extract_uniques(chunk[['timestamp', 'txto']])
dates = dates + da
txs = txs + tx
df = pd.DataFrame({'dates': dates, 'txs': txs}).groupby('dates', as_index=False).sum()
df.to_csv('plots/{}/{}_unique_recipients_over_date.csv'.format(token, token))
tos = set()
return
df = [tether_chunk_0, tether_chunk_1, tether_chunk_2, tether_chunk_3, tether_chunk_4, tether_chunk_5]
create_plot_unique_recipients_over_date(df)
create_plot_unique_recipients_over_date([usdc_transfer])
create_plot_unique_recipients_over_date([paxos_transfer])
create_plot_unique_recipients_over_date([husd_transfer])
create_plot_unique_recipients_over_date([binanceusd_transfer])
create_plot_unique_recipients_over_date([trueusd_transfer])
create_plot_unique_recipients_over_date([dai_transfer])
create_plot_unique_recipients_over_date([sai_transfer])
###Output
_____no_output_____
###Markdown
Transfers from new addresses
###Code
froms = set()
def _add(x):
global froms
froms.add(x)
return 1
def extract_uniques(df):
dates = df['timestamp'].apply(lambda x: str(datetime.utcfromtimestamp(x))[:10])
un = df['txfrom'].apply(lambda x: _add(x) if x not in froms else 0)
a = dates.iloc[0]
b = dates.iloc[-1]
idx = pd.date_range(a,b)
df = pd.DataFrame({'dates':dates, 'uniques':un})
df = df.groupby('dates', as_index = False).sum()
df = df.set_index('dates')
df.index = pd.DatetimeIndex(df.index)
df = df.reindex(idx, fill_value=0)
return df.index.tolist(), df['uniques'].tolist()
def create_plot_unique_senders_over_date(df):
dates = []
txs = []
for i in df:
token = i.split('/')[1]
for chunk in pd.read_csv(i, chunksize=10000000):
da, tx = extract_uniques(chunk[['timestamp', 'txfrom']])
dates = dates + da
txs = txs + tx
df = pd.DataFrame({'dates': dates, 'txs': txs}).groupby('dates', as_index=False).sum()
df.to_csv('plots/{}/{}_unique_senders_over_date.csv'.format(token, token))
return
df = [tether_chunk_0, tether_chunk_1, tether_chunk_2, tether_chunk_3, tether_chunk_4, tether_chunk_5]
create_plot_unique_senders_over_date(df)
create_plot_unique_senders_over_date([usdc_transfer])
create_plot_unique_senders_over_date([paxos_transfer])
create_plot_unique_senders_over_date([husd_transfer])
create_plot_unique_senders_over_date([binanceusd_transfer])
create_plot_unique_senders_over_date([trueusd_transfer])
create_plot_unique_senders_over_date([dai_transfer])
create_plot_unique_senders_over_date([sai_transfer])
###Output
_____no_output_____
###Markdown
Unique Transfers per day: Unique Recipients
###Code
# timestamp of the first transfer event of each contract
ts_tether = 1511827200
ts_usdc = 1536537600
ts_paxos = 1536537600
ts_dai = 1573603200
ts_sai = 1513555200
ts_trueusd = 1520208000
ts_husd = 1563580800
ts_binanceusd = 1568073600
def get_unique_recipients_per_day(df, ts):
unique = dict()
counter = 0
token = df.split('/')[1]
df = pd.read_csv(df)[['timestamp', 'txto']]
while ts + 86400*counter < 1593561600:
timefrom = ts + 86400*counter
timeto = ts + 86400*(counter+1)
uniques = len(df[(df['timestamp'] >=timefrom) & (df['timestamp'] < timeto)]['txto'].unique())
date = str(datetime.utcfromtimestamp(timefrom))[:10]
if date in unique.keys():
unique[date] += uniques
else:
unique[date] = uniques
counter += 1
_df = pd.DataFrame(unique.values(), index=unique.keys()).rename(columns={0:'txs'})
_df.to_csv('plots/{}/{}_unique_recipients_per_day_over_date.csv'.format(token, token))
get_unique_recipients_per_day(tether_transfer, ts_tether)
get_unique_recipients_per_day(usdc_transfer, ts_usdc)
get_unique_recipients_per_day(paxos_transfer, ts_paxos)
get_unique_recipients_per_day(dai_transfer, ts_dai)
get_unique_recipients_per_day(sai_transfer, ts_sai)
get_unique_recipients_per_day(husd_transfer, ts_husd)
get_unique_recipients_per_day(trueusd_transfer, ts_trueusd)
get_unique_recipients_per_day(binanceusd_transfer, ts_binanceusd)
###Output
_____no_output_____
###Markdown
Unique Senders
###Code
# timestamp of the first transfer event of each contract
ts_tether = 1511827200
ts_usdc = 1536537600
ts_paxos = 1536537600
ts_dai = 1573603200
ts_sai = 1513555200
ts_trueusd = 1520208000
ts_husd = 1563580800
ts_binanceusd = 1568073600
def get_unique_senders_per_day(df, ts):
unique = dict()
counter = 0
token = df.split('/')[1]
df = pd.read_csv(df)[['timestamp', 'txfrom']]
while ts + 86400*counter < 1593561600:
timefrom = ts + 86400*counter
timeto = ts + 86400*(counter+1)
uniques = len(df[(df['timestamp'] >=timefrom) & (df['timestamp'] < timeto)]['txfrom'].unique())
date = str(datetime.utcfromtimestamp(timefrom))[:10]
if date in unique.keys():
unique[date] += uniques
else:
unique[date] = uniques
counter += 1
_df = pd.DataFrame(unique.values(), index=unique.keys()).rename(columns={0:'txs'})
_df.to_csv('plots/{}/{}_unique_senders_per_day_over_date.csv'.format(token, token))
get_unique_senders_per_day(tether_transfer, ts_tether)
get_unique_senders_per_day(usdc_transfer, ts_usdc)
get_unique_senders_per_day(paxos_transfer, ts_paxos)
get_unique_senders_per_day(dai_transfer, ts_dai)
get_unique_senders_per_day(sai_transfer, ts_sai)
get_unique_senders_per_day(husd_transfer, ts_husd)
get_unique_senders_per_day(trueusd_transfer, ts_trueusd)
get_unique_senders_per_day(binanceusd_transfer, ts_binanceusd)
###Output
_____no_output_____
###Markdown
Average transfer value
###Code
def create_plot_avg_txvalue(df, token, decimals):
df = df[['timestamp', 'txvalue']]
dates = df['timestamp'].apply(lambda x: str(datetime.utcfromtimestamp(x))[:10])
txvalue = df['txvalue']
df = pd.DataFrame({'dates': dates, 'txvalue': txvalue.astype(float)/(10**decimals)})
a = dates.iloc[0]
b = dates.iloc[-1]
idx = pd.date_range(a,b)
df = df.groupby('dates', as_index=False).mean()
df = df.set_index('dates')
df.index = pd.DatetimeIndex(df.index)
df = df.reindex(idx, fill_value=0)
df.to_csv('plots/{}/{}_avg_value_over_date.csv'.format(token, token))
return
###Output
_____no_output_____
###Markdown
Average gas price
###Code
def create_plot_avg_gas(df, token):
df = df[['timestamp', 'gas_price', 'gas_used']]
dates = df['timestamp'].apply(lambda x: str(datetime.utcfromtimestamp(x))[:10])
df = pd.DataFrame({'dates': dates, 'gas': df['gas_price']*df['gas_used']/(10**18)})
a = dates.iloc[0]
b = dates.iloc[-1]
idx = pd.date_range(a,b)
df = df.groupby('dates', as_index=False).mean()
df = df.set_index('dates')
df.index = pd.DatetimeIndex(df.index)
df = df.reindex(idx, fill_value=0)
df.to_csv('plots/{}/{}_avg_gas_over_date.csv'.format(token, token))
return
###Output
_____no_output_____
###Markdown
Run both
###Code
for i in [(paxos_transfer, 18), (usdc_transfer, 6), (husd_transfer, 8),
(dai_transfer, 18), (sai_transfer, 18), (trueusd_transfer, 18), (binanceusd_transfer, 18)]:
df = pd.read_csv(i[0])
token = i[0].split('/')[1]
create_plot_avg_txvalue(df, token, i[1])
create_plot_avg_gas(df, token)
###Output
_____no_output_____
###Markdown
Circulating Supply: Prepare tokens without explicit Mint/Burn events (DAI) by treating transfers from and to the zero address as mints and burns
###Code
df = pd.read_csv(dai_transfer)
mint = df[df['txfrom'] == '0x0000000000000000000000000000000000000000'].reset_index().drop('index', axis = 1).rename(columns={'txfrom': 'address'})
burn = df[df['txto'] == '0x0000000000000000000000000000000000000000'].reset_index().drop('index', axis = 1).rename(columns={'txfrom': 'address'})
mint.to_csv('data/dai/mint/dai_mint.csv')
burn.to_csv('data/dai/burn/dai_burn.csv')
###Output
_____no_output_____
###Markdown
Create Plot for circulating supply
###Code
def create_plot_circulating_amount(df_mint, df_burn):
token = df_mint.split('/')[1]
_issue = pd.read_csv(df_mint)
_destroyedblackfunds = pd.read_csv(df_burn)
if type(_issue['txvalue'][0])==type(str()):
_issue['txvalue'] = _issue['txvalue'].astype(float)
_destroyedblackfunds['txvalue'] = _destroyedblackfunds['txvalue'].astype(float)
dbf = _destroyedblackfunds.loc[:, ['timestamp', 'txvalue']]
iss = _issue.loc[:, ['timestamp', 'txvalue']]
dbf['txvalue'] = dbf['txvalue']*-1
dfis = pd.concat([dbf,iss])
dfis = dfis.sort_values('timestamp', axis = 0).reset_index().loc[:,['timestamp', 'txvalue']]
dfis['utc'] = dfis['timestamp'].apply(lambda x: str(datetime.utcfromtimestamp(x))[0:10])
dfis = dfis[['utc', 'txvalue']]
dfis = dfis.groupby('utc').sum()
a = dfis.index[0]
b = dfis.index[-1]
idx = pd.date_range(a,b)
dfis.index = pd.DatetimeIndex(dfis.index)
    circulating_amount = dfis.reindex(idx, fill_value=0)
    circulating_amount.to_csv('plots/{}/{}_circulating_supply.csv'.format(token, token))
    return circulating_amount
create_plot_circulating_amount(tether_issue, tether_destroyedblackfunds)
create_plot_circulating_amount(usdc_mint, usdc_burn)
create_plot_circulating_amount(paxos_mint, paxos_burn)
create_plot_circulating_amount(dai_mint, dai_burn)
create_plot_circulating_amount(trueusd_mint, trueusd_burn)
create_plot_circulating_amount(husd_mint, husd_burn)
create_plot_circulating_amount(binanceusd_mint, binanceusd_burn)
###Output
_____no_output_____
###Markdown
Cumulated Balances
###Code
cumsum = pd.Series()
def create_cum_sum(df, st):
global cumsum
cs = df.cumsum() + st
end = cs.iloc[-1]
cumsum = cumsum.append(cs)
return end
def create_cum_bal(df):
global cumsum
start = 0
token = df.split("/")[1]
for i in pd.read_csv(df, chunksize = 1000000):
i = i['txvalue']
i = i[i>0]
if len(i) > 0:
start = create_cum_sum(i, st = start)
y = cumsum/cumsum.iloc[-1] #Supply 01 July 20
x = (np.arange(start = 0 , stop = len(cumsum), step = 1)/len(cumsum))*100
df = pd.read_csv('plots/{}/{}_balances.csv'.format(token, token))
df = df.rename(columns={'Unnamed: 0': 'address', 'txvalue': 'balance'})
df = df[df['balance']>0]
df = df.reset_index()[['address', 'balance']]
df['cum'] = cumsum.reset_index()[0]
df.to_csv('plots/{}/{}_positive_cumulated_balances.csv'.format(token, token))
cumsum = pd.Series()
create_cum_bal('plots/tether/tether_balances.csv')
create_cum_bal('plots/usdc/usdc_balances.csv')
create_cum_bal('plots/paxos/paxos_balances.csv')
create_cum_bal('plots/dai/dai_balances.csv')
create_cum_bal('plots/binanceusd/binanceusd_balances.csv')
create_cum_bal('plots/husd/husd_balances.csv')
create_cum_bal('plots/trueusd/trueusd_balances.csv')
create_cum_bal('plots/sai/sai_balances.csv')
#Tether fix
df2 = pd.read_csv('plots/tether/tether_positive_cumulated_balances.csv', index_col=0)
df2.iloc[1:].reset_index().loc[:, 'address':'cum'].to_csv('plots/tether/tether_positive_cumulated_balances.csv')
###Output
_____no_output_____
###Markdown
Global Transfer count over whole ecosystem
###Code
fr_tether = pd.read_csv(tether_tx_count_from, index_col='Unnamed: 0')
to_tether = pd.read_csv(tether_tx_count_to, index_col='Unnamed: 0')
fr_dai = pd.read_csv(dai_tx_count_from, index_col='Unnamed: 0')
to_dai = pd.read_csv(dai_tx_count_to, index_col='Unnamed: 0')
fr_usdc = pd.read_csv(usdc_tx_count_from, index_col='Unnamed: 0')
to_usdc = pd.read_csv(usdc_tx_count_to, index_col='Unnamed: 0')
fr_paxos = pd.read_csv(paxos_tx_count_from, index_col='Unnamed: 0')
to_paxos = pd.read_csv(paxos_tx_count_to, index_col='Unnamed: 0')
fr_trueusd = pd.read_csv(trueusd_tx_count_from, index_col='Unnamed: 0')
to_trueusd = pd.read_csv(trueusd_tx_count_to, index_col='Unnamed: 0')
fr_binanceusd = pd.read_csv(binanceusd_tx_count_from, index_col='Unnamed: 0')
to_binanceusd = pd.read_csv(binanceusd_tx_count_to, index_col='Unnamed: 0')
fr_husd = pd.read_csv(husd_tx_count_from, index_col='Unnamed: 0')
to_husd = pd.read_csv(husd_tx_count_to, index_col='Unnamed: 0')
fr_new = fr_tether.append([fr_dai, fr_usdc, fr_paxos, fr_trueusd, fr_binanceusd, fr_husd])
fr_new = fr_new.groupby(fr_new.index)['txs'].sum()
fr_new.to_csv('plots/summary/from.csv')
to_new = to_tether.append([to_dai, to_usdc, to_paxos, to_trueusd, to_binanceusd, to_husd])
to_new = to_new.groupby(to_new.index)['txs'].sum()
to_new.to_csv('plots/summary/to.csv')
###Output
_____no_output_____
###Markdown
Plot some left, center and right images
###Code
from keras.preprocessing.image import img_to_array, load_img
plt.rcParams['figure.figsize'] = (12, 6)
i = 0
for camera in ["left", "center", "right"]:
image = load_img("data/"+data_frame.iloc[1090][camera].strip())
image = img_to_array(image).astype(np.uint8)
plt.subplot(1, 3, i+1)
plt.imshow(image)
plt.axis('off')
plt.title(camera)
i += 1
###Output
_____no_output_____
###Markdown
Plot the same images but crop to remove the sky and car bonnet
###Code
# With cropping
plt.rcParams['figure.figsize'] = (12, 6)
i = 0
for camera in ["left", "center", "right"]:
image = load_img("data/"+data_frame.iloc[1090][camera].strip())
image = img_to_array(image).astype(np.uint8)
image = image[55:135, :, :]
plt.subplot(1, 3, i+1)
plt.imshow(image)
plt.axis('off')
plt.title(camera)
i += 1
###Output
_____no_output_____
###Markdown
Same images but resized
###Code
# With cropping then resizing
#plt.figure()
plt.rcParams['figure.figsize'] = (6, 3)
i = 0
for camera in ["left", "center", "right"]:
image = load_img("data/"+data_frame.iloc[7100][camera].strip())
image = img_to_array(image).astype(np.uint8)
image = image[55:135, :, :]
image = imresize(image, (32, 16, 3))
plt.subplot(1, 3, i+1)
plt.imshow(image)
plt.axis('off')
plt.title(camera)
i += 1
###Output
_____no_output_____
###Markdown
Converted to HSV colour space and showing only the S channel
###Code
# With cropping then resizing then HSV
i = 0
for camera in ["left", "center", "right"]:
image = load_img("data/"+data_frame.iloc[7100][camera].strip())
image = img_to_array(image).astype(np.uint8)
image = image[55:135, :, :]
image = imresize(image, (32, 16, 3))
hsv = cv2.cvtColor(image.astype("uint8"), cv2.COLOR_RGB2HSV)
hsv[:, :, 0] = hsv[:, :, 0] * 0
hsv[:, :, 2] = hsv[:, :, 2] * 0
plt.subplot(1, 3, i+1)
plt.imshow(hsv)
plt.axis('off')
plt.title(camera)
i += 1
###Output
_____no_output_____
###Markdown
Converted to YUV colour space and showing only the V channel
###Code
# With cropping then resizing then YUV
i = 0
for camera in ["left", "center", "right"]:
image = load_img("data/"+data_frame.iloc[7100][camera].strip())
image = img_to_array(image).astype(np.uint8)
image = image[55:135, :, :]
image = imresize(image, (32, 16, 3))
yuv = cv2.cvtColor(image.astype("uint8"), cv2.COLOR_RGB2YUV)
    yuv[:, :, 0] = yuv[:, :, 0] * 0
    yuv[:, :, 1] = yuv[:, :, 1] * 0
plt.subplot(1, 3, i+1)
plt.imshow(yuv)
plt.axis('off')
plt.title(camera)
i += 1
###Output
_____no_output_____
###Markdown
Show some examples from Track 2 with cropping, HSV (only S channel)
###Code
#plt.figure()
plt.rcParams['figure.figsize'] = (6, 3)
i = 0
for track2_image_file in ["data/track_2_1.jpg", "data/track_2_2.jpg", "data/track_2_3.jpg"]:
track2_image = load_img(track2_image_file)
track2_image = img_to_array(track2_image).astype(np.uint8)
track2_image = track2_image[55:135, :, :]
track2_image = imresize(track2_image, (32, 16, 3))
    hsv = cv2.cvtColor(track2_image.astype("uint8"), cv2.COLOR_RGB2HSV)
    hsv[:, :, 0] = hsv[:, :, 0] * 0
    hsv[:, :, 2] = hsv[:, :, 2] * 0
    plt.subplot(1, 3, i+1)
    plt.imshow(hsv)
plt.axis('off')
i += 1
###Output
_____no_output_____
###Markdown
The S channel in the HSV colour space looks promising, as the result is very similar for both track 1 and track 2, which has very bad shadowing. Remove the data frame header row and create the train/validation split
###Code
# Remove header
data_frame = data_frame.iloc[1:]
# shuffle the data (frac=1 means 100% of the data)
data_frame = data_frame.sample(frac=1).reset_index(drop=True)
# 80-20 training validation split
training_split = 0.8
num_rows_training = int(data_frame.shape[0]*training_split)
print(num_rows_training)
training_data = data_frame.loc[0:num_rows_training-1]
validation_data = data_frame.loc[num_rows_training:]
# release the main data_frame from memory
data_frame = None
###Output
6428
###Markdown
Routines for reading and processing images
###Code
def read_images(img_dataframe):
#from IPython.core.debugger import Tracer
#Tracer()() #this one triggers the debugger
imgs = np.empty([len(img_dataframe), 160, 320, 3])
angles = np.empty([len(img_dataframe)])
j = 0
for i, row in img_dataframe.iterrows():
# Randomly pick left, center, right camera image and adjust steering angle
# as necessary
camera = np.random.choice(["center", "left", "right"])
imgs[j] = imread("data/" + row[camera].strip())
steering = row["steering"]
if camera == "left":
steering += 0.25
elif camera == "right":
steering -= 0.25
angles[j] = steering
j += 1
#for i, path in enumerate(img_paths):
# print("data/" + path)
# imgs[i] = imread("data/" + path)
return imgs, angles
def resize(imgs, shape=(32, 16, 3)):
"""
Resize images to shape.
"""
height, width, channels = shape
imgs_resized = np.empty([len(imgs), height, width, channels])
for i, img in enumerate(imgs):
imgs_resized[i] = imresize(img, shape)
#imgs_resized[i] = cv2.resize(img, (16, 32))
return imgs_resized
def normalize(imgs):
"""
Normalize images between [-1, 1].
"""
#return imgs / (255.0 / 2) - 1
return imgs / 255.0 - 0.5
def augment_brightness(images):
"""
:param image: Input image
:return: output image with reduced brightness
"""
new_imgs = np.empty_like(images)
for i, image in enumerate(images):
#rgb = toimage(image)
        # convert to HSV so that it's easy to adjust brightness
hsv = cv2.cvtColor(image.astype("uint8"), cv2.COLOR_RGB2HSV)
# randomly generate the brightness reduction factor
# Add a constant so that it prevents the image from being completely dark
random_bright = .25+np.random.uniform()
# Apply the brightness reduction to the V channel
hsv[:,:,2] = hsv[:,:,2]*random_bright
# Clip the image so that no pixel has value greater than 255
hsv[:, :, 2] = np.clip(hsv[:, :, 2], a_min=0, a_max=255)
        # convert back to RGB
new_imgs[i] = cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)
return new_imgs
def preprocess(imgs):
#imgs_processed = resize(imgs)
#imgs_processed = rgb2gray(imgs_processed)
imgs_processed = normalize(imgs)
return imgs_processed
###Output
_____no_output_____
###Markdown
Generator function (not yielding here as we want to just show the images) - displays 3 images from the batch and then the same images augmented
###Code
def gen_batches(data_frame, batch_size):
"""
    Samples a random batch from the input data frame and displays raw and augmented images.
    :param data_frame: The input data frame with image paths and steering angles.
    :param batch_size: The size of each minibatch.
"""
#while True:
df_batch = data_frame.sample(n=batch_size)
images_raw, angles_raw = read_images(df_batch)
plt.figure()
# Show a sample of 3 images
for i in range(3):
plt.subplot(2, 3, i+1)
plt.imshow(images_raw[i].astype("uint8"))
plt.axis("off")
plt.title("%.8f" % angles_raw[i])
# Augment data by altering brightness of images
#plt.figure()
augmented_imgs = augment_brightness(images_raw)
for i in range(3):
plt.subplot(2, 3, i+4)
plt.imshow(augmented_imgs[i].astype("uint8"))
plt.axis('off')
plt.title("%.8f" % angles_raw[i])
#batch_imgs, batch_angles = augment(preprocess(batch_imgs_raw), angles_raw)
# batch_imgs, batch_angles = augment(batch_imgs_raw, angles_raw)
# batch_imgs = preprocess(batch_imgs)
# yield batch_imgs, batch_angles
gen_batches(training_data, 3)
###Output
_____no_output_____
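###Markdown
For completeness, a minimal sketch (an assumption, not the project's final training code) of the yielding generator that training could use, built from the read_images, augment_brightness and preprocess helpers defined above:
###Code
def gen_training_batches(data_frame, batch_size):
    """Endlessly yield (images, angles) batches sampled at random from data_frame."""
    while True:
        df_batch = data_frame.sample(n=batch_size)
        batch_imgs_raw, batch_angles = read_images(df_batch)
        # brightness augmentation followed by normalization, as in the cells above
        batch_imgs = preprocess(augment_brightness(batch_imgs_raw))
        yield batch_imgs, batch_angles
###Output
_____no_output_____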
###Markdown
Quick Draw Project Data Processing
###Code
import os
import csv
import time
import json
import gzip
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm
# Retrieve all the csv file names
data_path = './data/'
training_path = './data/train_simplified/'
training_files = []
for path in os.listdir(training_path):
if path.endswith('.csv'):
training_files.append(path)
def load_image_csv(file):
"""
Reads a specific image csv file from the file name.
Uses Python built-in csv library with no dependencies.
file: file name with full directory
returns: full list of lists with all the data from the csv file
"""
result = []
with open(file) as csvfile:
current = csv.reader(csvfile)
for row in current:
result.append(row)
return result
def load_data(quant):
"""
Reads in all the data directly from the csv files.
quant: indicates the amount of data going to be stored and returned
0 ~ 1 would be the proportion of data
>= 1 would be the number of rows from each file
returns: dictionary of {word: stroke_list}
"""
all_images = {}
for file in tqdm(training_files):
name = file.split('.')[0]
current = pd.read_csv(training_path + file)
if quant >= 1:
count = quant
else:
count = int(len(current) * quant)
current = current[:count]
current = current.values.tolist()
all_images[name] = current
return all_images
# Stores data in Json file. 10 percent of training data would be around 2.5GB.
def json_store(file, data):
with open(file, 'w') as f:
json.dump(data, f)
# Loads data from Json file.
def json_load(file):
with open(file, 'r') as f:
result = json.load(f)
return result
def show_image(strokes):
"""
Takes the list of strokes as input and shows the image with matplotlib
"""
point_sets = []
# Separate the strokes and stores the points in different arrays
for stroke in strokes:
current = []
for x,y in zip(stroke[0], stroke[1]):
current.append([x,255-y]) # Subtracting from 255 as images appear to be inverted
current = np.array(current)
point_sets.append(current)
# Shows the image on a canvas with size 256*256
# The fixed size is to regulate the shown image
plt.plot([0,0,255,255,0], [0,255,255,0,0], '#999999') # Grey border
for group in point_sets:
plt.plot(group[:,0], group[:,1], 'k-') # Each stroke
plt.xlim((0, 255))
plt.ylim((0, 255))
plt.axis('scaled')
plt.axis('off')
plt.show()
# Loads 1000 rows from each file.
if input('y to confirm load') == 'y':
data_1000 = load_data(1000)
json_store(data_path + 'data_1000.json', data_1000)
data_1000 = json_load(data_path + 'data_1000.json')
sample_image = data_1000['airplane'][1][1]
sample_image = eval(sample_image)
show_image(sample_image)
with open('./data/test_1000.gz', 'wb') as f:
for x in data_1000.keys():
count = 0
for item in data_1000[x]:
if count > 10:
continue
sketch = item[1]
binary = ''.join(format(i, '08b') for i in bytearray(x+sketch, encoding ='utf-8'))
            f.write(binary.encode('utf-8'))
count += 1
s = data_1000['airplane'][0][1]
res = ''.join(format(i, '08b') for i in bytearray(s, encoding ='utf-8'))
print(res)  # bit-string encoding of a single sample sketch
###Output
_____no_output_____
###Markdown
**Data processing** In this notebook we read and store the data from the collection of Slovenian news articles and preprocess it. Environment setup
###Code
!pip install classla
import zipfile
import tarfile
import json
import os
import classla
classla.download('sl')
from gensim.utils import simple_preprocess
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
import sys
sys.path.insert(0, '/content/drive/MyDrive/Colab Notebooks/')
from utils import read_json_file, save_articles, prepare_dataframe, visualize_articles_by_media, read_preprocessed_specific_media, dataframe_info
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from collections import OrderedDict, defaultdict
###Output
_____no_output_____
###Markdown
Helper functions
###Code
def save_extracted_articles(articles, dir):
"""
Save articles to a file.
:param articles: articles to be saved
"""
for media in articles.keys():
filename = dir + media
with open(filename, 'w', encoding='utf8') as fp:
json.dump(articles[media], fp)
def read_data_json(json_file, articles_by_media):
"""
This function reads a single json file and it returns a dictionary of articles
in json_file
:param json_file: json file
:param articles_by_media: a dictionary of media names as keys and articles as
values
:return: articles_by_media (new articles added)
"""
data = json.load(json_file)
articles_full = data['articles']['results'] # a dictionary (JSON) of all articles' metadata
for article in articles_full:
body = article['body']
media = article['source']['title']
title = article['title']
if media not in articles_by_media.keys():
articles_by_media[media] = {}
articles_by_media[media]['body'] = []
articles_by_media[media]['title'] = []
articles_by_media[media]['body'].append(body)
articles_by_media[media]['title'].append(title)
return articles_by_media
def read_data_zip(filepath, save_path):
"""
Read and save data from a zip file of dataset of Slovenian articles. A zip file contains 7 tar.gz
files, each one for a year from 2014 and 2020.
:param filepath: path to the data zip file
:param save_path: path to save folder
"""
with zipfile.ZipFile(filepath, 'r') as zip_file:
year = 2014
for year_file in zip_file.namelist()[1:8]:
articles_by_media = {}
zip_file.extract(year_file)
tar = tarfile.open(year_file)
for member in tar.getmembers()[1:]:
json_file = tar.extractfile(member.name)
articles_by_media = read_data_json(json_file, articles_by_media)
try:
save_extracted_articles(articles_by_media, save_path)
except FileNotFoundError as err:
print(err)
year += 1
def preprocess_articles(articles, stop_words, nlp):
"""
Preprocess a list of raw articles. Remove words in stop_words list and are
shorter than 4 words from each article from article list and lemmatize each
word with nlp pipeline.
:param articles: list of strings to preprocess
:param stop_words: list of words to be removed from articles
:param nlp: classla pipeline for word lemmatization
:return preprocessed_articles: a list of preprocessed articles (lists of lemmas)
"""
preprocessed_articles = [] # list of preprocessed articles
for article in articles:
preprocessed_body = [] # a list of words of a single article
for token in simple_preprocess(article, min_len=4, max_len=25):
            # words shorter than four characters are already removed by simple_preprocess (min_len=4)
if token not in stop_words:
preprocessed_body.append(token)
doc = nlp(' '.join(preprocessed_body))
lemmas = [word.lemma for sent in doc.sentences for word in sent.words]
preprocessed_articles.append(lemmas)
return preprocessed_articles
def preprocess_media_articles(media_list, load_dir, save_dir):
"""
Preprocess articles from media_list files in load_dir and save them to save_dir
:param media_list: a list of media names we want to preprocess
:param load_dir: a path to directory of files with raw articles
:param save_dir: a path to directory where preprocessed files will be saved
"""
stop_words = stopwords.words('slovene')
new_sw = ["href", "http", "https", "quot", "nbsp", "mailto", "mail", "getty", "foto", "images", "urbanec", "sportid"]
stop_words.extend(new_sw)
filepath = '/content/drive/MyDrive/Colab Notebooks/stopwords'
with open(filepath, 'r') as f:
additional_stopwords = f.read().splitlines()
stop_words.extend(additional_stopwords)
stop_words = list(set(stop_words))
config = {
'processors': 'tokenize, lemma', # Comma-separated list of processors to use
'lang': 'sl', # Language code for the language to build the Pipeline in
'tokenize_pretokenized': True, # Use pretokenized text as input and disable tokenization
'use_gpu': True
}
nlp = classla.Pipeline(**config)
for file in os.listdir(load_dir):
if file not in media_list:
continue
save_filepath = save_dir + file
if os.path.exists(save_filepath):
print("File ", file, " already exists")
continue
if not os.path.exists(save_dir):
os.mkdir(save_dir)
load_filepath = load_dir + file
articles = read_json_file(load_filepath)
df = pd.DataFrame.from_dict(articles)
df['word_length'] = df.body.apply(lambda x: len(str(x).split()))
df = df.loc[df['word_length'] > 25]
df = df.drop_duplicates(subset='title', keep="last")
df = df.drop('word_length', axis=1)
articles = df.to_dict('list')
print(f"Preprocessing file: {file} with {len(articles['body'])} articles")
preprocessed_articles = preprocess_articles(articles['body'], stop_words, nlp)
save_articles(preprocessed_articles, save_filepath)
print(f"File saved to {save_filepath}!\n**********************")
###Output
_____no_output_____
###Markdown
Main Setting the constants
###Code
YEAR = 2018
media_list = ['Dnevnik', 'Siol.net Novice', '24ur.com', 'MMC RTV Slovenija']
# Change the following two lines if necessary.
# load_dir is the path to the folder containing the raw data
load_dir = f'/content/drive/MyDrive/Colab Notebooks/raw_articles/{YEAR}/'
# save_dir is the path to the folder where the processed data will be saved
save_dir = f'/content/drive/MyDrive/Colab Notebooks/preprocessed_articles/{YEAR}/'
###Output
_____no_output_____
###Markdown
**Preprocessing the articles** This part reads the articles of the media listed in media_list, removes those whose body is shorter than 25 words and those that share the same title within a single medium (duplicates). Each article is then split into words (tokenized), all words that appear in stop_words (words that carry little meaning, e.g. da, tako, in...) or are shorter than 4 letters are removed, and the remaining words are lemmatized (converted to their base form).
###Code
preprocess_media_articles(media_list, load_dir, save_dir)
###Output
_____no_output_____
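###Markdown
As a quick sanity check, a minimal sketch (reusing the same gensim and classla calls as preprocess_articles above; the sample sentence is just an illustration) of what tokenization and lemmatization produce for a short Slovenian sentence:
###Code
# Illustrative check of the preprocessing steps on a made-up sentence
sample = "Novinarji so poročali o novih ukrepih vlade."
nlp_check = classla.Pipeline(processors='tokenize, lemma', lang='sl', tokenize_pretokenized=True, use_gpu=False)
tokens = simple_preprocess(sample, min_len=4, max_len=25)
doc = nlp_check(' '.join(tokens))
print(tokens)
print([word.lemma for sent in doc.sentences for word in sent.words])
###Output
_____no_output_____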
###Markdown
**Post-processing** For some media, certain parts of the text appear in many articles, so it makes sense to remove these recurring parts at least from the already preprocessed articles. *Slovenska tiskovna agencija STA:* every article starts in the form 'Ljubljana, 29. oktobra (STA)' - 'article body', so these parts are removed. *24ur.com*: many articles start with a passage about enabling cookies on the website; this part is removed. *Siol.net Novice*: many articles start with text about the so-called *termometer*, which explains to the reader its role in reporting the article's popularity; these parts are removed as well. In addition, articles with fewer than 25 words are removed.
###Code
df = prepare_dataframe(media_list, YEAR)
# FOR STA PREPROCESSED BODIES
print(df.loc[df.media == 'Slovenska tiskovna agencija STA', 'preprocessed_body'])
# df.loc[df.media == 'Slovenska tiskovna agencija STA', 'preprocessed_body'] = df.loc[df.media == 'Slovenska tiskovna agencija STA', 'preprocessed_body'].apply(lambda x: x[1:])
# If the output of the next line no longer contains a place name or a month at the start of each list, the recurring parts have been removed
print(df.loc[df.media == 'Slovenska tiskovna agencija STA', 'preprocessed_body'])
# # FOR Siol PREPROCESSED BODIES
# df.loc[df.media == 'Siol.net Novice', 'preprocessed_body'] = df.loc[df.media == 'Siol.net Novice', 'preprocessed_body'].apply(lambda x: x[10:] if x[0] == 'termometer' else x)
# If the output of the next line equals '['ne']', the recurring parts of the text have been removed
print(df.loc[df.media == 'Siol.net Novice', 'preprocessed_body'].apply(lambda x: 'ja' if x[0] == 'termometer' else 'ne').unique())
# # FOR 24ur.com PREPROCESSED BODIES
# df.loc[df.media == '24ur.com', 'preprocessed_body'] = df.loc[df.media == '24ur.com', 'preprocessed_body'].apply(lambda x: x[10:] if 'piškotek' in x[:9] else x)
# If the output of the next line equals '['ne']', the recurring parts of the text have been removed
print(df.loc[df.media == '24ur.com', 'preprocessed_body'].apply(lambda x: 'ja' if 'piškotek' in x[:9] else 'ne').unique())
# save_preprocessed_articles(df.loc[df.media == 'Slovenska tiskovna agencija STA', 'preprocessed_body'].to_list(), '/content/gdrive/MyDrive/Colab Notebooks/preprocessed_articles/'+ str(2017) + '/' + 'Slovenska tiskovna agencija STA')
# save_preprocessed_articles(df.loc[df.media == 'Siol.net Novice', 'preprocessed_body'].to_list(), '/content/gdrive/MyDrive/Colab Notebooks/preprocessed_articles/'+ str(YEAR) + '/' + 'Siol.net Novice')
# save_preprocessed_articles(df.loc[df.media == '24ur.com', 'preprocessed_body'].to_list(), '/content/gdrive/MyDrive/Colab Notebooks/preprocessed_articles/'+ str(YEAR) + '/' + '24ur.com')
###Output
_____no_output_____
###Markdown
**Presentation of the final data** The number of articles of each medium in the selected year.
###Code
count = {}
for f in os.listdir(load_dir):
if os.path.isfile(f'{load_dir}{f}'):
articles = read_json_file(f'{load_dir}{f}')
count[f] = len(articles['body'])
count = dict(sorted(count.items(), key=lambda item: item[1], reverse=True)[:20])
visualize_articles_by_media(list(count.keys()), list(count.values()))
df = prepare_dataframe(media_list, YEAR)
###Output
_____no_output_____
###Markdown
Number of articles of the selected media in the selected year
###Code
count_articles = df.media.value_counts().to_dict()
media_names = list(count_articles.keys())
counts = list(count_articles.values())
print(f'Total number of articles: {sum(counts)}')
visualize_articles_by_media(media_names, counts)
###Output
_____no_output_____
###Markdown
Number of words in the articles of the selected media (overall)
###Code
dataframe_info(df, 'word_length')
###Output
_____no_output_____
###Markdown
Number of words in the articles of the selected media (each medium separately)
###Code
for media in media_list:
print(f'\n{media}')
dataframe_info(df.loc[df.media == media], 'word_length', media)
###Output
_____no_output_____
###Markdown
Collecting Data from MIDI files Here, I get the sequence of notes and their durations for each song in a dataset of classical music.
###Code
midi_files = utils.get_midi_files('data\\classical')
midi_files[:5]
len(midi_files)
songs_notes = []
for midi_file in tqdm(midi_files):
mid = utils.parse_midi_file(midi_file)
if mid is None: continue
    # pitch classes of the song (e.g. C#, A, G) together with each note's duration type
notes = np.array(utils.get_notes(mid, chord_root=True, duration_type=True))
songs_notes.append(notes)
len(songs_notes)
songs_notes = np.array(songs_notes)
songs_notes[0].shape
np.save('processed_data/classical_songs_notes.npy', songs_notes)
###Output
_____no_output_____
###Markdown
Vectorize Data Vectorize the data into a format suitable for training a generative model
###Code
with open('processed_data/classical_songs_notes.npy', 'rb') as data:
data = np.load(data, allow_pickle=True)
song = data[0]
###Output
_____no_output_____
###Markdown
The data is currently a sequence of pitch-duration pairs, where pitches 0-11 correspond to the notes C through B and durations are strings giving the duration type.
###Code
song
###Output
_____no_output_____
###Markdown
Each song is mapped to two integer sequences: one unique class for each pitch and one for each duration type.
###Code
song.shape
song_pitches, song_durs = utils.map_song(song)
song_pitches.shape, song_durs.shape
print(song_pitches[:20])
print(song_durs[:20])
###Output
[ 2 2 2 5 5 7 7 3 7 10 5 5 2 0 10 2 3 5 3 0]
[0 3 0 3 0 2 0 2 0 3 1 0 2 0 2 2 0 2 2 0]
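###Markdown
utils.map_song belongs to this project's utils module; purely as an illustration of the idea (not the actual implementation), such a class mapping can be built by giving every distinct value an integer index:
###Code
# Hypothetical sketch: map categorical values (e.g. duration-type strings) to integer classes
def map_to_classes(values):
    vocab = {v: i for i, v in enumerate(sorted(set(values)))}
    return np.array([vocab[v] for v in values]), vocab

classes, vocab = map_to_classes(['quarter', 'eighth', 'quarter', 'half'])
print(classes, vocab)
###Output
_____no_output_____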
###Markdown
Now do this for all songs.
###Code
vec_songs = []
for song in tqdm(data):
vec_songs.append(utils.map_song(song))
vec_songs = np.array(vec_songs)
np.save('processed_data/vectorized_classical_songs2.npy', vec_songs)
###Output
_____no_output_____
###Markdown
Data Processing Importing The Libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing The Dataset
###Code
dataset = pd.read_csv('Data.csv')
x = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
print(x)
print(y)
###Output
['No' 'Yes' 'No' 'No' 'Yes' 'Yes' 'No' 'Yes' 'No' 'Yes']
###Markdown
Handling The Missing Data Methods:
1. Ignore the row(s) that have missing values by removing them from the dataset, to avoid causing problems. Note: only apply this if such rows make up about 1% of the entire dataset; otherwise, do not use this method (a short sketch of this option follows the imputer example below).
2. Replace the missing data with the average of that column. The following is the implementation of this method:
###Code
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
imputer.fit(x[:, 1:3])
x[:, 1:3] = imputer.transform(x[:, 1:3])
print(x)
###Output
[['France' 44.0 72000.0]
['Spain' 27.0 48000.0]
['Germany' 30.0 54000.0]
['Spain' 38.0 61000.0]
['Germany' 40.0 63777.77777777778]
['France' 35.0 58000.0]
['Spain' 38.77777777777778 52000.0]
['France' 48.0 79000.0]
['Germany' 50.0 83000.0]
['France' 37.0 67000.0]]
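###Markdown
For reference, a minimal sketch of method 1 (dropping rows with missing values), applied to a fresh copy of the dataset so the imputed x above stays untouched:
###Code
# Method 1 sketch: drop every row of the raw dataset that contains a missing value
dataset_dropped = dataset.dropna()
print(dataset_dropped.shape)
###Output
_____no_output_____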
###Markdown
Encoding Categorical Data
###Code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
ct = ColumnTransformer(transformers=[('encoder', OneHotEncoder(), [0])], remainder='passthrough')
x = np.array(ct.fit_transform(x))
print(x)
###Output
[[1.0 0.0 0.0 44.0 72000.0]
[0.0 0.0 1.0 27.0 48000.0]
[0.0 1.0 0.0 30.0 54000.0]
[0.0 0.0 1.0 38.0 61000.0]
[0.0 1.0 0.0 40.0 63777.77777777778]
[1.0 0.0 0.0 35.0 58000.0]
[0.0 0.0 1.0 38.77777777777778 52000.0]
[1.0 0.0 0.0 48.0 79000.0]
[0.0 1.0 0.0 50.0 83000.0]
[1.0 0.0 0.0 37.0 67000.0]]
###Markdown
Encoding The Dependent Variable
###Code
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
y = le.fit_transform(y)
print(y)
###Output
[0 1 0 0 1 1 0 1 0 1]
###Markdown
Splitting The Dataset Into The Training Set & The Test Set
###Code
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.2, random_state = 1)
print(x_train)
print(x_test)
print(y_train)
print(y_test)
###Output
[0 1]
###Markdown
Feature Scaling
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
x_train[:, 3:] = sc.fit_transform(x_train[:, 3:])
x_test[:, 3:] = sc.transform(x_test[:, 3:])
print(x_train)
print(x_test)
###Output
[[0.0 1.0 0.0 -1.4661817944830124 -0.9069571034860727]
[1.0 0.0 0.0 -0.44973664397484414 0.2056403393225306]]
###Markdown
Feature Generation and Dataset Preparation
###Code
# Provides a way for us to save cells that we execute & test so that we can
# use them later in our prediction code (avoids having to copy-paste code)
#
# The syntax is as follows:
#
# %%execute_and_save <filename>
from IPython.core import magic_arguments
from IPython.core.magic import (Magics, magics_class, line_magic,
cell_magic, line_cell_magic)
@magics_class
class SaveScripts(Magics):
@cell_magic
def execute_and_save(self, line, cell):
self.shell.run_cell(cell)
with open(line,'w') as file:
file.write(cell)
ip = get_ipython()
ip.register_magics(SaveScripts)
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg')
font = {'weight' : 'normal',
'size' : 9}
mpl.rc('font', **font)
mpl.rcParams['axes.titlesize'] = 'medium'
mpl.rcParams['axes.labelsize'] = 'medium'
%%execute_and_save tmp_imports
import numpy as np
import pandas as pd
import math
import os
from datetime import datetime
from datetime import timedelta
# Import our custom api package that retrieves financial market data from http://financialmodelingprep.com
import api
import api.tickers as tickers # Information about stock, ETF, etc. tickers
import api.stocks as stocks # Information about stocks and stock indices
from api.stocks import Stock # Information about a particular stock
from api.stocks import Index # Information about a particular index
from api.stocks import TIMEDELTA_QUARTER
from api.stocks import TIMEDELTA_MONTH
from api.stocks import TIMEDELTA_YEAR
from sklearn.preprocessing import MinMaxScaler
###Output
_____no_output_____
###Markdown
Data Retrieval and Caching
###Code
sp500_tickers = tickers.get_sp500_tickers()
print("Sample S&P 500 tickers: ",sp500_tickers[:10])
stock_data = { ticker : Stock(ticker) for ticker in sp500_tickers }
from IPython.display import display
def cache_stocks(start,end):
progress = display('',display_id=True)
success_tickers = list()
problem_tickers = list()
stock_list = list(stock_data.keys())
for i, ticker in enumerate(stock_list):
try:
progress.update(f'Caching {ticker} ({i+1}/{len(stock_list)})')
stock_data[ticker].cache_data(start,end)
success_tickers.append(ticker)
except:
problem_tickers.append(ticker)
continue
progress.update('Caching complete!')
print(f'Cached {len(success_tickers)} tickers: {", ".join(success_tickers)}.')
if len(problem_tickers) > 0:
print(f'The following tickers did not have complete data and were not cached: {", ".join(problem_tickers)}.')
# Cache data for the last 15 quarters (90 calendar days / quarter)
end_date = datetime.today().date()
start_date = (end_date - TIMEDELTA_QUARTER*15)
# This is a potentially time-intensive operation
cache_stocks(start_date,end_date)
###Output
_____no_output_____
###Markdown
Feature and Label Preparation We will want to smooth out our time-series data so that our algorithms (our neural network and calculations we perform on the time series data) are not susceptible to noise. For instance, we do not want the neural network to learn based upon a single jump in stock price over one or just a few days in the time-series data, as what is of more interest to us is the longer-term performance of a stock (e.g., not on a day-by-day basis, but rather the performance over a quarter). Further, simple moving averages (SMA) introduce a lag in the data where the smoothed data appears to be delayed relative to the actual trends in the time-series price data. For this reason, we will use an exponentially-weighted moving average. We can experiment with different window sizes to get some smoothing with negligible lag.
###Code
# Let's use Apple as an example and see what kind of window size smooths out
# data for the last quarter while generally retaining features.
aapl_data = stock_data['AAPL'].get_historical_prices(start_date,end_date)
aapl_series = aapl_data['close']
aapl_smooth = aapl_series.ewm(span=5).mean()
plt.plot(aapl_series.values,'.',label='Close',color=plt.cm.viridis(.4),markerfacecolor=plt.cm.viridis(.9),markersize=12)
plt.plot(aapl_smooth.values,'-',label='5-day EWMA',color=plt.cm.viridis(0.7))
plt.title('Apple, Inc.')
plt.xlabel('Trading days')
plt.ylabel('Close Price')
plt.legend();
###Output
_____no_output_____
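###Markdown
To make the lag argument concrete, a small sketch comparing a 5-day simple moving average with the 5-day EWMA used above on the same AAPL series (the SMA visibly trails the turning points):
###Code
aapl_sma = aapl_series.rolling(window=5).mean()
plt.plot(aapl_series.values,'.',label='Close',color=plt.cm.viridis(.4),markerfacecolor=plt.cm.viridis(.9),markersize=12)
plt.plot(aapl_sma.values,'-',label='5-day SMA',color=plt.cm.viridis(0.2))
plt.plot(aapl_smooth.values,'-',label='5-day EWMA',color=plt.cm.viridis(0.7))
plt.title('Apple, Inc.: SMA vs EWMA smoothing')
plt.xlabel('Trading days')
plt.ylabel('Close Price')
plt.legend();
###Output
_____no_output_____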
###Markdown
Stock Labels
###Code
%%execute_and_save tmp_benchmark_sp500_gain
# We can use a window size of 5 to smooth out some of the noise
# while retaining fidelity with respect to the general trends
# observed in the data.
# Let's compute the gain of the S&P 500 index over the last
# quarter. We will compare each stock's performance over the
# last quarter to this value.
# Note that the S&P 500 index is a capitalization-weighted index, so
# larger-cap stocks make up a larger portion of the index.
# Essentially the question we are asking is whether any given
# stock will outperform the index or "market". Investors can
# choose to invest in index-tracking ETFs instead of a given stock.
def get_sp500_gain_for_interval(interval,offset,output=False):
""" Get the gain for the S&P 500 over the specified interval
Args:
interval: The time interval for gain calculation as a datetime.timedelta
offset: The offset of interval relative to today as a datetime.timedelta
Returns:
The fractional gain or loss over the interval.
"""
end_date = datetime.today().date()
if offset is not None:
end_date -= offset
start_date = end_date - interval
sp500_index = Index('^GSPC')
sp500_time_series = sp500_index.get_historical_data(start_date,end_date)
sp500_close = sp500_time_series['close']
sp500_close_smooth = sp500_close.ewm(span=5).mean()
sp500_end_of_interval = round(sp500_close_smooth.values[-1],2)
sp500_start_of_interval = round(sp500_close_smooth.values[0],2)
sp500_gain_during_interval = round(sp500_end_of_interval / sp500_start_of_interval,4)
if output:
print("Value start of interval: ",sp500_start_of_interval)
print("Value end of interval: ",sp500_end_of_interval)
print("Approximate gain: ",sp500_gain_during_interval)
print("")
plt.plot(sp500_close.values,'.',label='Close',color=plt.cm.viridis(.4),markerfacecolor=plt.cm.viridis(.9),markersize=12)
plt.plot(sp500_close_smooth.values,'-',label='5-day EWMA',color=plt.cm.viridis(0.3))
plt.title('S&P 500 Index')
plt.xlabel('Trading days')
plt.ylabel('Close Price')
plt.legend()
return sp500_gain_during_interval
get_sp500_gain_for_interval(TIMEDELTA_QUARTER,offset=None,output=True);
###Output
Value start of interval: 3141.63
Value end of interval: 3031.2
Approximate gain: 0.9648
###Markdown
We will need to label our data so that we can provide the labels along with training data to our neural network. The labels in this case are generated by looking at the performance of a particular stock against the "market". Since the S&P 500 is a good representation of the US market, we will compare last quarter's performance of each stock that we will use to train the network with that of the S&P 500 index. For example, a stock that gained 4% over a quarter in which the index gained 2% has a relative gain of roughly 1.02 and is labeled 1, while a stock that underperformed the index is labeled 0.
###Code
%%execute_and_save tmp_benchmark_label
def get_stock_label_func(p_interval,p_offset):
""" Generates a function that returns a stock label
Args:
p_interval: The prediction interval as a datetime.timedelta
        p_offset: The offset of p_interval relative to today as a datetime.timedelta
Returns:
A function that can be called (for a specified stock) to get the stock label
"""
ref_value = get_sp500_gain_for_interval(p_interval,p_offset,output=False)
def get_stock_label(symbol,output=False):
""" Generates a stock label for training and/or validation dataset
Raises:
LookupError: If the stock could not be found
Returns:
An integer value (0 or 1) indicating the stock label
"""
end_date = datetime.today().date()
if p_offset is not None:
end_date -= p_offset
start_date = end_date - p_interval
try:
close_price = stock_data[symbol].get_historical_prices(start_date,end_date)['close']
except:
close_price = Stock(symbol).get_historical_prices(start_date,end_date)['close']
close_price = close_price.ewm(span=3).mean()
stock_gain = close_price.values[-1] / close_price.values[0]
stock_relative_gain = round( (stock_gain) / ref_value,4)
stock_label = 0 if stock_relative_gain < 1 else 1
if output:
print("Gain during interval: ",round(stock_gain,4))
print("Reference value: ",ref_value)
print("Gain relative to reference value: ",stock_relative_gain)
print("Label: ",stock_label)
return stock_label
return get_stock_label
test_get_stock_label = get_stock_label_func(p_interval=TIMEDELTA_QUARTER,p_offset=None)
print('Label for AAPL: ',test_get_stock_label('AAPL'))
print('Label for KSS: ',test_get_stock_label('KSS'))
print('Label for MSFT: ',test_get_stock_label('MSFT'))
print('Label for WELL: ',test_get_stock_label('WELL'))
###Output
Label for AAPL: 1
Label for KSS: 0
Label for MSFT: 1
Label for WELL: 0
###Markdown
Stock Features: Categorical
###Code
%%execute_and_save tmp_predictor_categorical
def get_stock_cat_features_func(d_interval,d_offset):
""" Generates a function that returns categorical features for a stock
Args:
d_interval: The data interval as a datetime.timedelta (e.g., 6*TIMEDELTA_QUARTER for 6 quarters of data)
d_offset: The offset of d_interval relative to today as a datetime.timedelta
Returns:
A tuple consisting of array that specifies which categorical feature are to be embedded (as opposed to
stand-alone features) and a function that can be called to get categorical features for a stock. The
array should include the embedding dimension for the feature, or 0 if it is not to be embedded.
"""
# Get list of sectors and map each sector to an index (normalized)
sector_list = np.array(['Energy',
'Consumer Cyclical',
'Real Estate',
'Utilities',
'Industrials',
'Basic Materials',
'Technology',
'Healthcare',
'Financial Services',
'Consumer Defensive'])
industry_list = np.array(['Agriculture',
'Insurance - Life',
'Medical Diagnostics & Research',
'Online Media',
'Oil & Gas - E&P',
'Homebuilding & Construction',
'Oil & Gas - Drilling',
'Oil & Gas - Refining & Marketing',
'Advertising & Marketing Services',
'Utilities - Regulated',
'Consulting & Outsourcing',
'Autos',
'Travel & Leisure',
'Oil & Gas - Integrated',
'Brokers & Exchanges',
'Application Software',
'Manufacturing - Apparel & Furniture',
'Medical Devices',
'Retail - Apparel & Specialty',
'Oil & Gas - Services',
'Consumer Packaged Goods',
'Insurance - Property & Casualty',
'Drug Manufacturers',
'Real Estate Services',
'Airlines',
'Insurance',
'Farm & Construction Machinery',
'Semiconductors',
'Medical Distribution',
'Steel',
'Restaurants',
'Waste Management',
'Entertainment',
'Chemicals',
'REITs',
'Insurance - Specialty',
'Metals & Mining',
'Retail - Defensive',
'Biotechnology',
'Conglomerates',
'Utilities - Independent Power Producers',
'Building Materials',
'Health Care Plans',
'Tobacco Products',
'Oil & Gas - Midstream',
'Transportation & Logistics',
'Business Services',
'Truck Manufacturing',
'Beverages - Non-Alcoholic',
'Personal Services',
'Banks',
'Medical Instruments & Equipment',
'Industrial Distribution',
'Asset Management',
'Forest Products',
'Industrial Products',
'Communication Equipment',
'Packaging & Containers',
'Credit Services',
'Engineering & Construction',
'Computer Hardware',
'Aerospace & Defense',
'Beverages - Alcoholic',
'Health Care Providers',
'Communication Services',
'Employment Services'])
sector_dict = { sector : i for i, sector in enumerate(sector_list)}
industry_dict = { industry : i for i, industry in enumerate(industry_list)}
# SP500 range is on the order of USD 1B to USD 1T, scale accordingly
MIN_MARKET_CAP = 1.0e9
MAX_MARKET_CAP = 1.0e12
# For the specified d_offset we will make a cyclic label corresponding
# to the month of the year (1-12) using sine and cosine functions
end_date = datetime.today().date()
if d_offset is not None:
end_date -= d_offset
# Encoding which month (fractional) the data ends. This is universal
# in that it will work for any intervals and offsets of interest.
month_decimal = end_date.month + end_date.day/30.0;
month_angle = 2*math.pi*month_decimal/12.0
month_x = math.cos(month_angle)
month_y = math.sin(month_angle)
# The feature structure (# of embeddings for each feature or 0 if not to be embedded)
cat_feature_embeddings = [len(sector_list)+1, len(industry_list)+1, 0, 0]
def get_stock_cat_features(symbol):
""" Gets categorical features associated with a paticular stock
Args:
symbol: A stock ticker symbol such as 'AAPL' or 'T'
Raises:
            LookupError: If any categorical feature is unavailable or NaN for the stock.
Returns:
            Categorical stock features as an array of M x 1 values (for M features). Categorical
            features to be embedded appear first in the returned array
"""
try:
profile = stock_data[symbol].get_company_profile()
except:
profile = Stock(symbol).get_company_profile()
sector = profile.loc[symbol,'sector']
industry = profile.loc[symbol,'industry']
try:
sector_feature = sector_dict[sector]
except:
sector_feature = len(sector_list)
try:
industry_feature = industry_dict[industry]
except:
industry_feature = len(industry_list)
# Get market capitalization corresponding to d_offset
if d_offset is None:
quarter_offset = 0
else:
quarter_offset = int(d_offset / TIMEDELTA_QUARTER)
# Get the "latest" key metrics as of the data interval
try:
key_metrics = stock_data[symbol].get_key_metrics(quarters=1,offset=quarter_offset)
except:
key_metrics = Stock(symbol).get_key_metrics(quarters=1,offset=quarter_offset)
market_cap = key_metrics['Market Cap'][0]
# Scalar value (approx 0-1) corresponding to market capitalization
market_cap_feature = math.log(float(market_cap)/MIN_MARKET_CAP,MAX_MARKET_CAP/MIN_MARKET_CAP)
features = np.array( [sector_feature, industry_feature, market_cap_feature, month_x, month_y],dtype='float32')
if np.isnan(features).any():
raise LookupError
return features
return cat_feature_embeddings, get_stock_cat_features
_, test_get_stock_cat_features = get_stock_cat_features_func(d_interval=4*TIMEDELTA_QUARTER,d_offset=TIMEDELTA_QUARTER)
test_get_stock_cat_features('AMZN')
test_get_stock_cat_features('WELL')
###Output
_____no_output_____
###Markdown
Stock Features: Daily Data
###Code
%%execute_and_save tmp_predictor_daily
def get_stock_daily_features_func(d_interval,d_offset):
""" Generates a function that returns daily features for a stock
Args:
d_interval: The data interval as a datetime.timedelta (e.g., 6*TIMEDELTA_QUARTER for 6 quarters of data)
d_offset: The offset of d_interval relative to today as a datetime.timedelta
Returns:
A function that can be called to get daily features for a stock
"""
end_date = datetime.today().date()
if d_offset is not None:
end_date -= d_offset
start_date = end_date - d_interval
    # The S&P 500 index will have a closing value for every trading day. Each of the stocks
# should also have the same number of values unless they were suspended and didn't trade or
# recently became public.
trading_day_count = len(Index('^GSPC').get_historical_data(start_date,end_date))
def get_stock_daily_features(symbol,output=False):
""" Gets daily features associated with a paticular stock
Args:
symbol: A stock ticker symbol such as 'AAPL' or 'T'
Raises:
            LookupError: If any daily feature is unavailable or NaN for the stock.
Returns:
Daily stock features as an array of M x N values (for M features with N values)
"""
try:
historical_data = stock_data[symbol].get_historical_prices(start_date,end_date)
except:
historical_data = Stock(symbol).get_historical_prices(start_date,end_date)
# Smooth and normalize closing price relative to initial price for data set
close_price = historical_data['close'].ewm(span=5).mean()
close_price = close_price / close_price.iat[0]
close_price = np.log10(close_price)
# Smooth and normalize volume relative to average volume
average_volume = historical_data['volume'].mean()
volume = historical_data['volume'].ewm(span=5).mean()
volume = volume / average_volume
volume = np.log10(volume+1e-6)
# Ensure equal lengths of data (nothing missing)
if len(volume) != len(close_price):
raise LookupError
# Ensure we have the right number of data points for the period
if len(close_price) != trading_day_count:
raise LookupError
features = np.array([close_price, volume],dtype='float32')
if np.isnan(features).any():
raise LookupError
return features
return get_stock_daily_features
test_get_stock_daily_features = get_stock_daily_features_func(1*TIMEDELTA_QUARTER,TIMEDELTA_QUARTER)
test_get_stock_daily_features('AAPL')
###Output
_____no_output_____
###Markdown
Stock Features: Quarterly Data
###Code
%%execute_and_save tmp_predictor_quarterly
def get_stock_quarterly_features_func(d_interval,d_offset):
""" Generates a function that returns quarterly features for a stock
Args:
d_interval: The data interval as a datetime.timedelta (e.g., 6*TIMEDELTA_QUARTER for 6 quarters of data)
d_offset: The offset of d_interval relative to today
Returns:
A function that can be called to get quarterly features for a stock
"""
    # Quarterly features can only be used if the data interval spans at least one quarter
if d_interval < TIMEDELTA_QUARTER:
raise ValueError("The specified data interval is less than one quarter")
end_date = datetime.today().date()
if d_offset is not None:
end_date -= d_offset
start_date = end_date - d_interval
quarter_count = int(d_interval / TIMEDELTA_QUARTER)
if d_offset is None:
quarter_offset = 0
else:
quarter_offset = int(d_offset / TIMEDELTA_QUARTER)
price_to_earnings_scaler = MinMaxScaler()
price_to_sales_scaler = MinMaxScaler()
price_to_free_cash_flow_scaler = MinMaxScaler()
dividend_yield_scaler = MinMaxScaler()
price_to_earnings_scaler.fit_transform(np.array([0,200]).reshape(-1, 1))
price_to_sales_scaler.fit_transform(np.array([0,200]).reshape(-1, 1))
price_to_free_cash_flow_scaler.fit_transform(np.array([0,200]).reshape(-1, 1))
dividend_yield_scaler.fit_transform(np.array([0,1]).reshape(-1, 1))
def get_stock_quarterly_features(symbol):
""" Gets quarterly features associated with a paticular stock
Args:
symbol: A stock ticker symbol such as 'AAPL' or 'T'
Raises:
LookupError: If any quarterly feature is unavailable or NaN for the stock.
Returns:
Quarterly stock features as an array of M x N values (for M features and N values)
"""
try:
key_metrics = stock_data[symbol].get_key_metrics(quarter_count,quarter_offset)
except:
key_metrics = Stock(symbol).get_key_metrics(quarter_count,quarter_offset)
key_metrics['PE ratio'] = price_to_earnings_scaler.transform(key_metrics['PE ratio'].values.reshape(-1,1))
key_metrics['Price to Sales Ratio'] = price_to_sales_scaler.transform(key_metrics['Price to Sales Ratio'].values.reshape(-1,1))
key_metrics['PFCF ratio'] = price_to_free_cash_flow_scaler.transform(key_metrics['PFCF ratio'].values.reshape(-1,1))
key_metrics['Dividend Yield'] = dividend_yield_scaler.transform(key_metrics['Dividend Yield'].values.reshape(-1,1))
try:
financials = stock_data[symbol].get_income_statement(quarter_count,quarter_offset)
except:
financials = Stock(symbol).get_income_statement(quarter_count,quarter_offset)
# Apply scaling for diluted EPS (we want growth relative to t=0)
financials['EPS Diluted'] = ( financials['EPS Diluted'].astype(dtype='float32') / float(financials['EPS Diluted'].iat[0]) )
features = np.array([
key_metrics['PE ratio'],
key_metrics['Price to Sales Ratio'],
key_metrics['PFCF ratio'],
key_metrics['Dividend Yield'],
financials['EPS Diluted'],
financials['Revenue Growth'],
],dtype='float32')
if np.isnan(features).any():
raise LookupError
return features
return get_stock_quarterly_features
test_get_stock_quarterly_features = get_stock_quarterly_features_func(4*TIMEDELTA_QUARTER,None)
test_get_stock_quarterly_features('AAPL')
test_get_stock_quarterly_features('T')
###Output
_____no_output_____
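###Markdown
Note how the valuation ratios are scaled above: each MinMaxScaler is fitted on a fixed range ([0, 200] for the ratios, [0, 1] for the dividend yield), so the transform is simply a division by the range and values outside the fitted range are not clipped. A minimal illustration of that behavior (a sketch, not part of the original run):
###Code
from sklearn.preprocessing import MinMaxScaler
import numpy as np
example_scaler = MinMaxScaler()
example_scaler.fit(np.array([0, 200]).reshape(-1, 1))
# A P/E of 50 maps to 0.25, 100 to 0.5, and 400 scales past 1.0 (no clipping)
print(example_scaler.transform(np.array([[50], [100], [400]])))
###Output
_____no_output_____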
###Markdown
Generate Training and Testing Data Sets We have created a custom dataset class that will accept references to the functions we created earlier for extracting categorical, daily and quarterly features and generating a label for a particular stock. To use the dataset, we specify a list of stocks and references to the functions. We are going to create multiple datasets: training and testing datasets for a number of folds.
###Code
!pygmentize model/dataset.py
###Output
import torch
import torch.utils.data.dataset as dataset
import torch.utils.data.dataloader as dataLoader
from torch.nn.utils.rnn import pack_padded_sequence
from torch.nn.utils.rnn import pad_sequence
import numpy as np
import os
from abc import ABC


class StockDataset(dataset.Dataset, ABC):
    """Stock dataset."""

    def __init__(self, p_interval, d_interval, offsets, features, labels=None):
        try:
            self.c_features_embedding_dims = features[0]
            self.c_features = features[1]
            self.d_features = features[2]
            self.q_features = features[3]
            self.labels = labels
            self.p_interval = p_interval
            self.d_interval = d_interval
            self.offsets = offsets
        except:
            raise ValueError

    @classmethod
    def concat(cls, datasets):
        """ Concatenates datasets to make a new dataset (for use with K-folding)
        Args:
            datasets: An iterable of StockDatasets

        Returns:
            The concatenated dataset
        """
        baseline_ds = datasets[0]
        for ds in datasets:
            if ds.get_prediction_interval() != baseline_ds.get_prediction_interval():
                raise ValueError("Mismatch in prediction interval")
            if ds.get_data_interval() != baseline_ds.get_data_interval():
                raise ValueError("Mismatch in data interval")
            if not np.array_equal(ds.get_offsets(), baseline_ds.get_offsets()):
                raise ValueError("Mismatch in data offsets")
            if ds.get_categorical_feature_count() != baseline_ds.get_categorical_feature_count():
                raise ValueError("Mismatch in categorical features")
            if not np.array_equal(ds.get_categorical_feature_embedding_dims(), baseline_ds.get_categorical_feature_embedding_dims()):
                raise ValueError("Mismatch in categorical feature embedding dimensions")
            if ds.get_daily_feature_count() != baseline_ds.get_daily_feature_count():
                raise ValueError("Mismatch in daily features")
            if ds.get_quarterly_feature_count() != baseline_ds.get_quarterly_feature_count():
                raise ValueError("Mismatch in quarterly features")
        c_features_embedding_dims = ds.get_categorical_feature_embedding_dims()
        c_features = np.concatenate([ds.c_features for ds in datasets])
        d_features = np.concatenate([ds.d_features for ds in datasets])
        q_features = np.concatenate([ds.q_features for ds in datasets])
        labels = np.concatenate([ds.labels for ds in datasets])
        return cls(baseline_ds.get_prediction_interval(),
                   baseline_ds.get_data_interval(),
                   baseline_ds.get_offsets(),
                   [c_features_embedding_dims, c_features, d_features, q_features],
                   labels)

    @classmethod
    def from_data(cls,
                  stocks,
                  p_interval,
                  d_interval,
                  offsets,
                  c_features_func_gen,
                  d_features_func_gen,
                  q_features_func_gen,
                  label_func_gen=None,
                  output=False):
        """ Creates a dataset using the specified data generator functions
        Args:
            stocks: An iterable of stock ticker symbols to include in the dataset
            p_interval: The prediction interval, as a datetime.timedelta object
            d_interval: The data interval, as a datetime.timedelta object
            offsets: An iterable of offsets to use for prediction and data relative to today, as a datetime.timedelta object
            c_features_func_gen: A function that accepts d_interval and offset arguments and returns a
                function that provides categorical features data for a specified stock
            d_features_func_gen: A function that accepts d_interval and offset arguments and returns a
                function that provides daily features data for a specified stock
            q_features_func_gen: A function that accepts d_interval and offset arguments and returns a
                function that provides quarterly features data for a specified stock
            label_func_gen: A function that accepts p_interval and offset arguments and returns a function
                that provides labels for a specified stock

        Returns:
            A Dataset object that includes feature and label data for the specified stocks over the specified interval
        """
        success_stocks = list()
        problem_stocks = list()
        c_features = list()
        d_features = list()
        q_features = list()
        labels = list()
        for offset in offsets:
            # For each specified data offset, prepare functions that will
            # be used to generate data for the specified intervals
            c_features_embedding_dims, c_features_func = c_features_func_gen(d_interval, offset + p_interval)
            d_features_func = d_features_func_gen(d_interval, offset + p_interval)
            q_features_func = q_features_func_gen(d_interval, offset + p_interval)
            if label_func_gen is not None:
                label_func = label_func_gen(p_interval, offset)
            else:
                label_func = None
            for stock in stocks:
                try:
                    # Attempt to get all data first, if not available exception will be thrown
                    c = c_features_func(stock)
                    d = d_features_func(stock)
                    q = q_features_func(stock)
                    if label_func:
                        l = label_func(stock)
                    # Time-series features will need to be transposed for our LSTM input
                    c_features.append(c.transpose().astype(dtype='float32', copy=False))
                    d_features.append(d.transpose().astype(dtype='float32', copy=False))
                    q_features.append(q.transpose().astype(dtype='float32', copy=False))
                    if label_func:
                        labels.append(l)
                    success_stocks.append(stock)
                except:
                    problem_stocks.append(stock)
                    continue
                if output:
                    print(".", end='')
        if output:
            print('')
            print(f'The following stocks were successfully processed: {", ".join(success_stocks)}')
            print('')
            print(f'The following tickers did not have complete data and were not processed: {", ".join(problem_stocks)}.')
        features = [c_features_embedding_dims, np.stack(c_features, axis=0), np.array(d_features), np.stack(q_features, axis=0)]
        labels = np.stack(labels, axis=0) if label_func is not None else None
        return cls(p_interval, d_interval, offsets, features, labels)

    @classmethod
    def from_file(cls, path):
        data = np.load(path, allow_pickle=True)['arr_0']
        meta, features, labels = data[0], data[1], data[2]
        return cls(meta[0], meta[1], meta[2], features, labels)

    def to_file(self, path, output=False):
        directory = os.path.dirname(path)
        if not os.path.exists(directory):
            os.makedirs(directory)
        meta = [self.p_interval, self.d_interval, self.offsets]
        features = [self.c_features_embedding_dims,
                    self.c_features,
                    self.d_features,
                    self.q_features]
        np.savez(path, [meta, features, self.labels])
        if output:
            print(f'Successfully wrote data to {path}')

    def __len__(self):
        return len(self.c_features)

    def __getitem__(self, index):
        features = [self.c_features[index],
                    self.d_features[index],
                    self.q_features[index]]
        if self.labels is None:
            return (features, None)
        return (features, self.labels[index])

    @staticmethod
    def collate_data(batch):
        # Features is indexed as features[stock][frequency][sequence][feature]
        (features, labels) = zip(*batch)
        batch_size = len(features)
        # Concatenate (stack) categorical and quarterly features, as those will
        # have the same sequence length across all samples
        categorical_features = torch.stack([torch.from_numpy(features[i][0]) for i in range(batch_size)], axis=0)
        quarterly_features = torch.stack([torch.from_numpy(features[i][2]) for i in range(batch_size)], axis=0)
        # Daily features: the sequence lengths may vary depending on the
        # absolute time interval of the data (over some intervals there
        # are market holidays and hence fewer data points). We will need to pad
        # and pack data using PyTorch.
        # Get length of daily features (e.g., sequence length)
        daily_features_lengths = [len(features[i][1]) for i in range(batch_size)]
        # Generate array of torch tensors for padding; tensors will have incompatible sizes
        daily_features = [torch.from_numpy(features[i][1]) for i in range(batch_size)]
        # Pad tensors to the longest size
        daily_features_padded = pad_sequence(daily_features, batch_first=True, padding_value=-10)
        # Pack the batch of daily features
        daily_features_packed = pack_padded_sequence(daily_features_padded, daily_features_lengths, batch_first=True, enforce_sorted=False)
        features = [categorical_features, daily_features_packed, quarterly_features]
        labels = torch.from_numpy(np.array(labels)) if labels[0] is not None else None
        return features, labels

    def get_prediction_interval(self):
        return self.p_interval

    def get_data_interval(self):
        return self.d_interval

    def get_offsets(self):
        return self.offsets

    def get_categorical_feature_count(self):
        return len(self.c_features[0])

    def get_categorical_feature_embedding_dims(self):
        return self.c_features_embedding_dims

    def get_daily_feature_count(self):
        return self.d_features[0].shape[1]

    def get_quarterly_feature_count(self):
        return self.q_features[0].shape[1]
###Markdown
Generating Datasets: Offsets to Incorporate Seasonality We are going to need to specify offsets for our dataset. That is, for a given stock, we will want to generate training data with offsets corresponding to predictions for each of the 4 quarters in order to address seasonality. The total number of training samples will therefore be the number of training stocks multiplied by the number of offsets. Here we pick 12 offsets, one for each month of the year. This will allow for seasonality but will also allow us to generate predictions between fiscal quarter cutoffs.
###Code
# An offset of zero corresponds to using the most recent p_interval of data
# for labeling and the preceding d_interval worth of data for features.
# Snap to nearest (last) month (not a strict requirement, but gives us
# some reproducibility for training purposes)
last_offset = timedelta(days=datetime.today().day)
# Produce 12 offsets one month apart from each other
one_month = timedelta(days=30)
offsets = [ last_offset + one_month*i for i in range(12) ]
from model.dataset import StockDataset
###Output
_____no_output_____
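###Markdown
The month associated with each offset reaches the model through the month_x and month_y categorical features seen earlier, which (as the "Cyclical Offset Encoding" plot later in this section suggests) place each month on a unit circle so that December and January stay adjacent. The snippet below is a minimal sketch of that kind of encoding; the notebook's actual implementation lives inside the categorical-features function.
###Code
import numpy as np
months = np.arange(1, 13)
month_x = np.cos(2 * np.pi * months / 12)
month_y = np.sin(2 * np.pi * months / 12)
# Twelve equally spaced points on the unit circle; adjacent months remain close
print(np.round(np.stack([month_x, month_y], axis=1), 3))
###Output
_____no_output_____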
###Markdown
Generating Datasets: K-fold Sets for Cross-Validation We will split our stock list into K sublists to generate K training and testing datasets, each of which will include offsets as discussed above.
###Code
# Prepare list of available stock tickers
stock_list = np.array(list(stock_data.keys()))
stock_count = len(stock_list)
# Split tickers into k-folds (here we use 3 folds) for cross validation
k = 3
k_fold_size = math.ceil(stock_count / k)
k_fold_indices = [ [i, i+k_fold_size] for i in range(0,stock_count,k_fold_size) ]
# List of dataset objects associated with each fold
k_fold_ds_list = list()
for i, fold in enumerate(k_fold_indices):
print(f'Generating dataset for fold {i+1}/{k}')
start, end = k_fold_indices[i][0], k_fold_indices[i][1]
k_fold_ds = StockDataset.from_data(stock_list[start:end],
p_interval=TIMEDELTA_QUARTER,
d_interval=4*TIMEDELTA_QUARTER,
offsets=offsets,
c_features_func_gen=get_stock_cat_features_func,
d_features_func_gen=get_stock_daily_features_func,
q_features_func_gen=get_stock_quarterly_features_func,
label_func_gen=get_stock_label_func,
output=True)
k_fold_ds_list.append(k_fold_ds)
# Concatenate datasets to form new K new train/test sets
for i in range(k):
# Each fold becomes the test dataset
test_ds = k_fold_ds_list[i]
test_ds.to_file(f'data/test-{i}.npz')
# The other folds form the training dataset
train_ds = StockDataset.concat(k_fold_ds_list[:i] + k_fold_ds_list[i+1:])
train_ds.to_file(f'data/train-{i}.npz')
# Generate a separate full dataset for final model training
train_data_full = StockDataset.concat(k_fold_ds_list)
train_data_full.to_file(f'data/train-full.npz')
# Read our training datasets from disk and check for consistency
datasets = np.append(np.arange(0,k),['full'])
print('Fold Type Samples Features Pos/Neg')
for i in datasets:
for data_type in ['train','test']:
try:
ds = StockDataset.from_file(f'data/{data_type}-{i}.npz')
except:
continue
sample_count = len(ds)
categorical_count = ds.get_categorical_feature_count()
daily_count = ds.get_daily_feature_count()
quarterly_count = ds.get_quarterly_feature_count()
pos_count = ds[:][1].sum()
neg_count = sample_count - pos_count
features = f'{categorical_count}/{daily_count}/{quarterly_count}'
print(f'%4s' % f'{i}' + ' %-5s' % data_type + '%8s' % f'{sample_count}' + '%10s' % f'{features}' + '%11s' % f'{pos_count}/{neg_count}')
# Plot the month encoding to ensure that it was correctly
# captured in the aggregate dataset. For our chosen offsets,
# we should see 12 equally spaced points on a unit circle
ds = StockDataset.from_file(f'data/train-full.npz')
x = [ds[i][0][0][-2] for i in range(len(ds)) ]
y = [ds[i][0][0][-1] for i in range(len(ds)) ]
plt.figure(figsize=(5,5))
plt.title('Cyclical Offset Encoding')
plt.xlabel('Month X')
plt.ylabel('Month Y')
plt.plot(x,y,marker='.',linestyle='');
###Output
_____no_output_____
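###Markdown
For reference, a minimal sketch (not part of the original notebook) of how the dataset class and its custom collate function might be wired into a PyTorch DataLoader once the .npz files above have been generated. The file name data/train-0.npz is assumed to exist from the K-fold step.
###Code
import torch.utils.data.dataloader as dataLoader
from model.dataset import StockDataset

sketch_ds = StockDataset.from_file('data/train-0.npz')
sketch_loader = dataLoader.DataLoader(sketch_ds,
                                      batch_size=32,
                                      shuffle=True,
                                      collate_fn=StockDataset.collate_data)
for (categorical, daily_packed, quarterly), labels in sketch_loader:
    # categorical and quarterly are dense tensors stacked along the batch axis;
    # daily_packed is a PackedSequence ready to feed an LSTM
    print(categorical.shape, quarterly.shape, labels.shape)
    break
###Output
_____no_output_____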
###Markdown
AutomationConcatenate the code associated with the data generator functions (4 functions) that generate labels and categorical, daily and quarterly data for each stock. This will allow us to use the code later for our prediction engine without having to copy-paste from the notebook.
###Code
# Generate a file for the categorical, daily and quarterly functions to be used by the prediction code
!cat tmp_imports tmp_predictor_categorical tmp_predictor_daily tmp_predictor_quarterly > ./model/predictor_helper.py
# Generate a file for benchmarking (gets S&P 500 gains)
!cat tmp_imports tmp_benchmark_sp500_gain tmp_benchmark_label > ./model/benchmark_helper.py
!rm tmp_imports tmp_predictor_categorical tmp_predictor_daily tmp_predictor_quarterly
!rm tmp_benchmark_sp500_gain tmp_benchmark_label
###Output
_____no_output_____
###Markdown
###Code
data['date']=pd.to_datetime(data['date'])
data
#We want to accurately predict the growth rate of the plants/crops in order to manage water consumption. So for a given water
#consumption and certain external characteristics we need to know the growth rate (hour by hour)
#Growth rate computing in %
data['var_height']=data['height(cm)'].diff()/data['height(cm)'].shift(1)*100
#These are the characteristics (water provided) at time t that lead to the growth rate observed at time t+1
data['var_height']=data['var_height'].shift(-1)
#We can't evaluate the last row because we don't know the next plant height, so we don't know the influence
#of the different parameters
data=data.drop([data.shape[0]-1])
data
from sklearn.model_selection import train_test_split
Y=data['var_height']
X=data[['rainfall_24h(mm)', 'rainfall_5d(mm)', 'luminosity(lux)', 'humidity(%)', 'water_consumption(L)', 'height(cm)', 'period', 'date']]
X
Y
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=1/5)
X_train
y_train
###Output
_____no_output_____
###Markdown
---- Random Forest
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
columns=['age_approximate', 'sex']
X= df[columns] ##this makes a copy
y=df[['melanoma']]
X.head(2)
#X.sex = X.sex.map({'female': 0, 'male': 1,'unknown':2}) #need to change all the values for this
#X.age_approximate = X.age_approximate.map({'unknown': '0'})
#X.loc[X.age_approximate == 'unknown', 'age_approximate'] = 0
for v in df.melanoma:
print(v)
dict_df
X, y = make_classification(n_samples=1000, n_features=2,
n_informative=2, n_redundant=0,
random_state=0, shuffle=False)
clf = RandomForestClassifier(n_estimators=100, max_depth=10,
random_state=0)
#clf.fit(X, y)
#clf = RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
# max_depth=25, max_features='auto', max_leaf_nodes=None,
# min_impurity_decrease=0.0, min_impurity_split=None,
# min_samples_leaf=1, min_samples_split=2,
# min_weight_fraction_leaf=0.0, n_estimators=100, n_jobs=None,
# oob_score=False, random_state=0, verbose=0, warm_start=False)
clf.fit(X,y)
print(clf.feature_importances_)
#print(clf.predict([[85, 4]]))
print(clf.predict([[85, 1],[5, 0],[15, 0], [85, 2], [65, 1], [25, 0], [45, 1]]))
clf2 = tree.DecisionTreeClassifier(criterion='entropy', max_depth=3)
clf2=clf2.fit(X,y)
dot=StringIO()
tree.export_graphviz(clf2, out_file=dot, #feature_names = columns, class_names = ['0','1'],
filled = True, rounded = True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot.getvalue())
#Image(graph.create_png())
p=graph.create_png
#plt.imread(p)
plt.show()
from sklearn.metrics import classification_report
predicted = clf2.predict(X)
report = classification_report(y, predicted)
print(report)
###Output
precision recall f1-score support
0 0.98 0.95 0.96 497
1 0.95 0.98 0.96 503
avg / total 0.96 0.96 0.96 1000
###Markdown
Image Vectorization using Pretrained Networks In this notebook, we compute image vectors for images in the Holidays dataset against the following pretrained Keras Networks available from the [Keras model zoo](https://keras.io/applications/).
###Code
from __future__ import division, print_function
from scipy.misc import imresize
from keras.applications import vgg16, vgg19, inception_v3, resnet50, xception
from keras.models import Model
import matplotlib.pyplot as plt
import numpy as np
import os
%matplotlib inline
DATA_DIR = "/"
IMAGE_DIR = os.path.join(DATA_DIR, "pigtest_a")
filelist = os.listdir(IMAGE_DIR)
print(len(filelist))
image_names = [x for x in filelist if not (x.startswith('.'))]
print(len(image_names))
def image_batch_generator(image_names, batch_size):
    num_batches = len(image_names) // batch_size
    for i in range(num_batches):
        batch = image_names[i * batch_size : (i + 1) * batch_size]
        yield batch
    # Yield the remaining images (if any) that did not fill a complete batch
    remainder = image_names[num_batches * batch_size:]
    if remainder:
        yield remainder
def vectorize_images(image_dir, image_size, preprocessor,
model, vector_file, batch_size=32):
filelist = os.listdir(image_dir)
image_names = [x for x in filelist if not (x.startswith('.'))]
num_vecs = 0
fvec = open(vector_file, "wb")
for image_batch in image_batch_generator(image_names, batch_size):
batched_images = []
for image_name in image_batch:
image = plt.imread(os.path.join(image_dir, image_name))
image = imresize(image, (image_size, image_size))
batched_images.append(image)
X = preprocessor(np.array(batched_images, dtype="float32"))
vectors = model.predict(X)
for i in range(vectors.shape[0]):
if num_vecs % 100 == 0:
print("{:d} vectors generated".format(num_vecs))
image_vector = ",".join(["{:.5e}".format(v) for v in vectors[i].tolist()])
fvec.write("{:s}\t{:s}\n".format(image_batch[i], image_vector))
num_vecs += 1
print("{:d} vectors generated".format(num_vecs))
fvec.close()
###Output
_____no_output_____
###Markdown
Generate Vectors using Resnet 50
###Code
IMAGE_SIZE = 224
VECTOR_FILE = os.path.join("/output/", "resnet-vectors-test-a.tsv")
#resnet_model = load_model('resnet50_weights_tf_dim_ordering_tf_kernels.h5')
resnet_model = resnet50.ResNet50(weights="imagenet", include_top=True)
resnet_model.summary()
model = Model(input=resnet_model.input,
output=resnet_model.get_layer("flatten_1").output)
preprocessor = resnet50.preprocess_input
vectorize_images(IMAGE_DIR, IMAGE_SIZE, preprocessor, model, VECTOR_FILE)
###Output
/usr/local/lib/python2.7/site-packages/ipykernel_launcher.py:2: UserWarning: Update your `Model` call to the Keras 2 API: `Model(outputs=Tensor("fl..., inputs=Tensor("in...)`
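###Markdown
The vector file written above is a plain TSV with one image name and a comma-separated feature vector per line, so it can be read back later for similarity search. A minimal read-back sketch (not part of the original notebook), reusing the VECTOR_FILE path defined above:
###Code
import numpy as np
names, vecs = [], []
with open(VECTOR_FILE) as fvec:
    for line in fvec:
        image_name, image_vec = line.strip().split("\t")
        names.append(image_name)
        vecs.append([float(v) for v in image_vec.split(",")])
vectors = np.array(vecs, dtype="float32")
print(len(names), vectors.shape)
###Output
_____no_output_____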
###Markdown
Read data
###Code
df1 = pd.read_csv('./data/LoanStats_securev1_2017Q1.csv', skiprows=[0])
df2 = pd.read_csv('./data/LoanStats_securev1_2017Q2.csv', skiprows=[0])
df3 = pd.read_csv('./data/LoanStats_securev1_2017Q3.csv', skiprows=[0])
df4 = pd.read_csv('./data/LoanStats3c_securev1_2014.csv', skiprows=[0])
df5 = pd.read_csv('./data/LoanStats3d_securev1_2015.csv', skiprows=[0])
###Output
_____no_output_____
###Markdown
Check if all the datasets have the same columns
###Code
columns = np.dstack((list(df1.columns), list(df2.columns), list(df3.columns), list(df4.columns), list(df5.columns)))
coldf = pd.DataFrame(columns[0])
# coldf.head()
df = pd.concat([df1, df2, df3, df4, df5])
###Output
_____no_output_____
###Markdown
Get familiar with data
###Code
df.shape
print(list(df.columns))
df.head(5)
df.dtypes.sort_values().to_frame('feature_type').groupby(by = 'feature_type').size().to_frame('count').reset_index()
###Output
_____no_output_____
###Markdown
Select data with loan_status either Fully Paid or Charged Off
###Code
df.loan_status.value_counts()
df = df.loc[(df['loan_status'].isin(['Fully Paid', 'Charged Off']))]
df.shape
###Output
_____no_output_____
###Markdown
Feature selection and cleaning Find the missing columns and their types
###Code
df_dtypes = pd.merge(df.isnull().sum(axis = 0).sort_values().to_frame('missing_value').reset_index(),
df.dtypes.to_frame('feature_type').reset_index(),
on = 'index',
how = 'inner')
df_dtypes.sort_values(['missing_value', 'feature_type'])
###Output
_____no_output_____
###Markdown
1. Check columns that have more than $400000$ missing values ($\approx90\%$)
###Code
missing_df = df.isnull().sum(axis = 0).sort_values().to_frame('missing_value').reset_index()
miss_4000 = list(missing_df[missing_df.missing_value >= 400000]['index'])
print(len(miss_4000))
print(sorted(miss_4000))
df.drop(miss_4000, axis = 1, inplace = True)
###Output
_____no_output_____
###Markdown
2. Remove constant features
###Code
def find_constant_features(dataFrame):
const_features = []
for column in list(dataFrame.columns):
if dataFrame[column].unique().size < 2:
const_features.append(column)
return const_features
const_features = find_constant_features(df)
const_features
df.hardship_flag.value_counts()
df.drop(const_features, axis = 1, inplace = True)
###Output
_____no_output_____
###Markdown
3. Remove Duplicate rows
###Code
df.shape
df.drop_duplicates(inplace= True)
df.shape
###Output
_____no_output_____
###Markdown
4. Remove duplicate columns
###Code
def duplicate_columns(frame):
groups = frame.columns.to_series().groupby(frame.dtypes).groups
dups = []
for t, v in groups.items():
cs = frame[v].columns
vs = frame[v]
lcs = len(cs)
for i in range(lcs):
ia = vs.iloc[:,i].values
for j in range(i+1, lcs):
ja = vs.iloc[:,j].values
if np.array_equal(ia, ja):
dups.append(cs[i])
break
return dups
duplicate_cols = duplicate_columns(df)
duplicate_cols
df.shape
###Output
_____no_output_____
###Markdown
5. Remove/process features manually
###Code
features_to_be_removed = []
def plot_feature(col_name, isContinuous):
"""
Visualize a variable with and without faceting on the loan status.
- col_name is the variable name in the dataframe
- full_name is the full variable name
- continuous is True if the variable is continuous, False otherwise
"""
f, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(12,3), dpi=90)
# Plot without loan status
if isContinuous:
sns.distplot(df.loc[df[col_name].notnull(), col_name], kde=False, ax=ax1)
else:
sns.countplot(df[col_name], order=sorted(df[col_name].unique()), color='#5975A4', saturation=1, ax=ax1)
ax1.set_xlabel(col_name)
ax1.set_ylabel('Count')
ax1.set_title(col_name)
plt.xticks(rotation = 90)
# Plot with loan status
if isContinuous:
sns.boxplot(x=col_name, y='loan_status', data=df, ax=ax2)
ax2.set_ylabel('')
ax2.set_title(col_name + ' by Loan Status')
else:
data = df.groupby(col_name)['loan_status'].value_counts(normalize=True).to_frame('proportion').reset_index()
sns.barplot(x = col_name, y = 'proportion', hue= "loan_status", data = data, saturation=1, ax=ax2)
ax2.set_ylabel('Loan fraction')
ax2.set_title('Loan status')
plt.xticks(rotation = 90)
ax2.set_xlabel(col_name)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
0-10 features
###Code
df.iloc[0:5, 0: 10]
len(df.loan_amnt.value_counts())
plot_feature('loan_amnt', True)
###Output
_____no_output_____
###Markdown
It looks like loan amounts are not unique; certain amounts appear several times. The likely reason is that the company lends within a set range or in standard amounts. Term feature
###Code
df.term = df.term.str.replace('months', '').astype(np.int)
df.term.value_counts()
plot_feature('term', False)
###Output
_____no_output_____
###Markdown
interest rate
###Code
df.int_rate = df.int_rate.str.replace('%', '').astype(np.float32)
len(df.int_rate.value_counts())
plot_feature('int_rate', True)
###Output
_____no_output_____
###Markdown
It looks like applicants who could not pay back their loans, and were charged off, had higher interest rates. grade and subgrade
###Code
df.grade.value_counts()
df.sub_grade.value_counts()
plot_feature('grade', False)
plot_feature('sub_grade', False)
###Output
_____no_output_____
###Markdown
It seems that grade and sub_grade have the same shape and relation with loan status. In this case I would keep sub_grade, because it carries more information than grade. emp_title
###Code
len(df.emp_title.value_counts())
###Output
_____no_output_____
###Markdown
It looks like emp_title has lots of unique values, which may not be strongly associated with the loan outcome we want to predict
###Code
features_to_be_removed.extend(['emp_title', 'id'])
###Output
_____no_output_____
###Markdown
11-20 features
###Code
df.iloc[0:5, 6: 20]
###Output
_____no_output_____
###Markdown
emp_length
###Code
df.emp_length.value_counts()
df.emp_length.fillna(value=0,inplace=True)
df['emp_length'].replace(to_replace='[^0-9]+', value='', inplace=True, regex=True)
df['emp_length'] = df['emp_length'].astype(int)
plot_feature('emp_length', False)
###Output
_____no_output_____
###Markdown
It looks like emp_length is not a good predictor of loan status, since the distribution of loanees remains much the same across employment lengths. home_ownership
###Code
df.home_ownership.value_counts()
plot_feature('home_ownership', False)
###Output
_____no_output_____
###Markdown
home_ownership is also not that much discreminatory verification_status
###Code
df.verification_status.value_counts()
df.verification_status = df.verification_status.map(lambda x: 1 if x == 'Not Verified' else 0)
plot_feature('verification_status', False)
###Output
_____no_output_____
###Markdown
verification_status is somewhat discriminative in the sense that loanees whose income source was verified are charged off more often, which is a bit weird. issue_d
###Code
df.issue_d.value_counts()
df['issue_month'] = pd.Series(df.issue_d).str.replace(r'-\d+', '')
plot_feature('issue_month', False)
###Output
_____no_output_____
###Markdown
It looks like people who borrowed in December are charged off more often than those who borrowed in other months.
###Code
df.issue_month = df.issue_month.astype("category", categories=np.unique(df.issue_month)).cat.codes
df.issue_month.value_counts()
df['issue_year'] = pd.Series(df.issue_d).str.replace(r'\w+-', '').astype(np.int)
df.issue_year.value_counts()
###Output
_____no_output_____
###Markdown
loan status
###Code
df.loan_status.value_counts()
df.loan_status = df.loan_status.map(lambda x: 1 if x == 'Charged Off' else 0)
###Output
_____no_output_____
###Markdown
url
###Code
features_to_be_removed.append('url')
###Output
_____no_output_____
###Markdown
purpose
###Code
df.purpose.value_counts()
plot_feature('purpose', False)
###Output
_____no_output_____
###Markdown
It looks like purpose can be a good discriminator. For example, people who borrowed for renewable energy are charged off more often, while people who borrowed for a car or for educational purposes are charged off less often. title
###Code
len(df.title.value_counts())
features_to_be_removed.append('title')
###Output
_____no_output_____
###Markdown
zip_code
###Code
len(df.zip_code.value_counts())
features_to_be_removed.append('zip_code')
###Output
_____no_output_____
###Markdown
addr_state
###Code
df.addr_state.value_counts()
plot_feature('addr_state', False)
###Output
_____no_output_____
###Markdown
addr_state can be a good discriminative feature. dti
###Code
# plot_feature('dti', True)
###Output
_____no_output_____
###Markdown
21 - 30 features
###Code
df.iloc[0:5, 15: 30]
###Output
_____no_output_____
###Markdown
earliest_cr_line
###Code
df['earliest_cr_year'] = df.earliest_cr_line.str.replace(r'\w+-', '').astype(np.int)
df['credit_history'] = np.absolute(df['issue_year']- df['earliest_cr_year'])
df.credit_history.value_counts()
features_to_be_removed.extend(['issue_d', 'mths_since_last_delinq', 'mths_since_last_record', 'inq_last_6mths', 'mths_since_last_delinq', 'mths_since_last_record'])
###Output
_____no_output_____
###Markdown
31 - 40 features
###Code
df.iloc[0:5, 25: 40]
df.revol_util = df.revol_util.str.replace('%', '').astype(np.float32)
df.initial_list_status.value_counts()
df.initial_list_status = df.initial_list_status.map(lambda x: 1 if x== 'w' else 0)
features_to_be_removed.extend(['total_pymnt', 'total_pymnt_inv', 'total_rec_prncp', 'total_rec_int', 'total_rec_late_fee'])
###Output
_____no_output_____
###Markdown
41 - 50 features
###Code
df.iloc[0:5, 35: 50]
df.application_type.value_counts()
df.application_type = df.application_type.map(lambda x: 0 if x == 'Individual' else 1)
features_to_be_removed.extend(['recoveries', 'collection_recovery_fee', 'last_pymnt_d', 'last_pymnt_amnt', 'last_credit_pull_d', 'last_fico_range_high', 'last_fico_range_low', 'collections_12_mths_ex_med', 'mths_since_last_major_derog'])
###Output
_____no_output_____
###Markdown
51 - 60 features
###Code
df.iloc[0:5, 45: 60]
features_to_be_removed.extend([ 'acc_now_delinq', 'tot_coll_amt', 'tot_cur_bal', 'total_rev_hi_lim', 'avg_cur_bal', 'bc_open_to_buy', 'bc_util', 'chargeoff_within_12_mths', 'delinq_amnt'])
###Output
_____no_output_____
###Markdown
61 - 70 features
###Code
df.iloc[0:5, 55: 70]
features_to_be_removed.extend(['mo_sin_old_il_acct', 'mo_sin_old_rev_tl_op', 'mo_sin_rcnt_rev_tl_op', 'mo_sin_rcnt_tl', 'mths_since_recent_bc', 'mths_since_recent_bc_dlq', 'mths_since_recent_inq', 'mths_since_recent_revol_delinq', 'num_accts_ever_120_pd'])
###Output
_____no_output_____
###Markdown
71 - 80 features
###Code
df.iloc[0:5, 65: 80]
features_to_be_removed.extend(['num_actv_bc_tl', 'num_actv_rev_tl', 'num_bc_sats', 'num_bc_tl', 'num_il_tl', 'num_op_rev_tl', 'num_rev_accts', 'num_rev_tl_bal_gt_0', 'num_sats', 'num_tl_120dpd_2m'])
###Output
_____no_output_____
###Markdown
81 - 90 features
###Code
df.iloc[0:5, 75: 90]
features_to_be_removed.extend(['num_tl_30dpd', 'num_tl_90g_dpd_24m', 'num_tl_op_past_12m', 'pct_tl_nvr_dlq', 'percent_bc_gt_75', 'tot_hi_cred_lim', 'total_bal_ex_mort', 'total_bc_limit'])
###Output
_____no_output_____
###Markdown
91 to rest of the features
###Code
df.iloc[0:5, 85:]
df.disbursement_method.value_counts()
df.disbursement_method = df.disbursement_method.map(lambda x: 0 if x == 'Cash' else 1)
df.debt_settlement_flag.value_counts()
df.debt_settlement_flag = df.debt_settlement_flag.map(lambda x: 0 if x == 'N' else 1)
features_to_be_removed.extend(['debt_settlement_flag', 'total_il_high_credit_limit'])
###Output
_____no_output_____
###Markdown
Removed _ features
###Code
print(features_to_be_removed)
len(set(features_to_be_removed))
###Output
_____no_output_____
###Markdown
Drop selected features
###Code
df_selected = df.drop(list(set(features_to_be_removed)), axis = 1)
df_selected.shape
df_dtypes = pd.merge(df_selected.isnull().sum(axis = 0).sort_values().to_frame('missing_value').reset_index(),
df_selected.dtypes.to_frame('feature_type').reset_index(),
on = 'index',
how = 'inner')
df_dtypes.sort_values(['missing_value', 'feature_type'])
df_selected.dropna(inplace=True)
df_selected.shape
df_selected.drop('earliest_cr_line', axis = True, inplace=True)
df_selected.purpose.value_counts()
df_selected.purpose = df_selected.purpose.astype("category", categories=np.unique(df_selected.purpose)).cat.codes
df_selected.purpose.value_counts()
df_selected.home_ownership = df_selected.home_ownership.astype("category", categories = np.unique(df_selected.home_ownership)).cat.codes
df_selected.home_ownership.value_counts()
df_selected.grade = df_selected.grade.astype("category", categories = np.unique(df_selected.grade)).cat.codes
df_selected.grade.value_counts()
df_selected.sub_grade = df_selected.sub_grade.astype("category", categories = np.unique(df_selected.sub_grade)).cat.codes
df_selected.sub_grade.value_counts()
df_selected.addr_state = df_selected.addr_state.astype("category", categories = np.unique(df_selected.addr_state)).cat.codes
df_selected.sub_grade.value_counts()
df_selected.columns
###Output
_____no_output_____
###Markdown
Save selected features
###Code
df_selected.to_csv('./data/df_selected.csv', index = False)
###Output
_____no_output_____
###Markdown
Data Preprocessing
###Code
import pandas as pd
import numpy as np
import re
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
df = pd.read_csv('flat_research_minsk.csv')
df = df.drop_duplicates()
df.reset_index(drop=True, inplace=True)
print(df.shape)
df.head(3)
df.info()
plt.figure(figsize=(15,8))
sns.heatmap(df.isnull());
plt.figure(figsize=(15,8))
(100*df.isnull().sum()/df.shape[0]).sort_values(ascending=False).plot(kind='bar');
df['Населенный пункт'].unique()
###Output
_____no_output_____
###Markdown
Let's drop the uninformative features and the features with many missing values from our data, as well as 'Область' (region) and 'Населенный пункт' (locality), since all of our data is from Minsk
###Code
df.drop(['Число уровней', 'Unnamed: 24', 'Год кап.ремонта',
'Площадь по СНБ', 'Площадь балконов, лоджий, террас',
'Вид этажа', 'Телефон', 'Населенный пункт', 'Область'], axis=1, inplace=True)
df.head(3)
###Output
_____no_output_____
###Markdown
Let's create columns with the data split out into separate fields
###Code
df['Комнат всего/разд.'].unique()
df['Все комнаты'] = df['Комнат всего/разд.'].apply(lambda x : x[0] if re.search('[а-я-А-Я]', x) is None else np.nan)
df['Все комнаты'] = pd.to_numeric(df['Все комнаты'], errors='coerce')
df['Раздельных комнат'] = df['Комнат всего/разд.'].apply(lambda x : x[-1] if re.search('[а-я-А-Я]', x) is None else int(re.search('[0-9]-', x)[0][0]))
df['Раздельных комнат'] = pd.to_numeric(df['Раздельных комнат'], errors='coerce')
df['Этаж / этажность'].fillna(0, inplace=True)
df['Этаж / этажность'] = df['Этаж / этажность'].apply(str)
df['Этаж / этажность'].unique()
df['Этаж'] = df['Этаж / этажность'].apply(lambda x : int(re.search('\d\d-|\d-', x)[0][:-1])
if re.search('-[а-яА-Я]', x) is not None and x!=0
else( int(re.search('\d\d|\d', x)[0][0]
if x != 0 and re.search('[а-яА-Я]', x) is not None
else(int(x[0:2]) if x!=0 else np.nan))))
df['Этаж'] = pd.to_numeric(df['Этаж'], errors='coerce')
df['Этажность'] = df['Этаж / этажность'].apply(lambda x : int(re.search('\d\d-|\d-', x)[0][:-1])
if re.search('-[а-яА-Я]', x) is not None and x!=0
else( int(re.search('\d\d|\d', x)[0][0]
if x != 0 and re.search('[а-яА-Я]', x) is not None
else(int(x[-2:]) if x!=0 else np.nan))))
df['Этажность'] = pd.to_numeric(df['Этажность'], errors='coerce')
df['Площадь общая/жилая/кухня'] = df['Площадь общая/жилая/кухня'].apply(str)
df['Площадь общая/жилая/кухня'].unique()
df['Площадь общая'] = df['Площадь общая/жилая/кухня'].apply(lambda x: x.split('/')[0])
df['Площадь общая'] = pd.to_numeric(df['Площадь общая'], errors='coerce')
df['Площадь жилая'] = df['Площадь общая/жилая/кухня'].apply(lambda x: x.split('/')[1])
df['Площадь жилая'] = pd.to_numeric(df['Площадь жилая'], errors='coerce')
df['Площадь кухни'] = df['Площадь общая/жилая/кухня'].apply(lambda x: x.split('/')[2][:-2])
df['Площадь кухни'] = pd.to_numeric(df['Площадь кухни'], errors='coerce')
df.drop(['Комнат всего/разд.', 'Этаж / этажность', 'Площадь общая/жилая/кухня'], axis=1, inplace=True)
df.info()
df['Метро'].fillna(-1, inplace=True)
df['Метро'].isnull().sum()
df['Метро'] = df['Метро'].apply(str)
###Output
_____no_output_____
###Markdown
Let's create a column with the distance to the metro. If no distance was specified in the data, we take the mean; if the metro field itself was missing, we set it to -1
###Code
df['Расстояние до метро'] = df['Метро'].apply(lambda x: re.search('\≈\w+', x)[0][1:-1] if x !=-1 and
re.search('\≈\w+', x) is not None
else (-1 if x == '-1' else 0))
df['Расстояние до метро'] = pd.to_numeric(df['Расстояние до метро'], errors='coerce')
df['Расстояние до метро'] = df['Расстояние до метро'].apply(lambda x: df['Расстояние до метро'].mean() if x == 0 else x)
df.drop('Метро',axis=1,inplace=True)
###Output
_____no_output_____
###Markdown
Let's extract the city districts
###Code
df['Район города'].isnull().sum()
###Output
_____no_output_____
###Markdown
If there was no data for the district, we simply drop the observation, since there are very few such cases
###Code
df['Район города'].unique()
df['Район города'].dropna(inplace=True)
df['Район города'] = df['Район города'].apply(str)
df['Район города'] = df['Район города'].apply(lambda x: re.search('\w+ район', x)[0]
if re.search('\w+ район', x) is not None
else x[0])
df.head(3)
###Output
_____no_output_____
###Markdown
Let's look at the 'Тип дома' (house type) column
###Code
df['Тип дома'].unique()
df[df['Тип дома']=='кар']
###Output
_____no_output_____
###Markdown
Let's remove this type that we don't understand, because it occurs only once
###Code
df.drop(df[df['Тип дома']=='кар'].index, inplace=True)
###Output
_____no_output_____
###Markdown
Let's convert the numeric values to the correct types
###Code
df['Высота потолков'].fillna(-1, inplace=True)
df['Высота потолков'] = df['Высота потолков'].apply(str)
df['Высота потолков'] = df['Высота потолков'].apply(lambda x: x[:-2] if x != '-1' else -1)
df['Цена'] = df['Цена'].apply(lambda x: x[:-4])
df['Цена'] = df['Цена'].apply(lambda x: x.replace(" ", ""))
df['Цена'] = pd.to_numeric(df['Цена'])
df.info()
df.to_csv('processed_data.csv', index=False)
###Output
_____no_output_____
###Markdown
Starbucks Capstone Project solution for ML Engineer Nanodegree Data Processing
###Code
# Importing the required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from datetime import datetime
# Read the data
portfolio_df = pd.read_json('data/portfolio.json', orient='records', lines=True)
profile_df = pd.read_json('data/profile.json', orient='records', lines=True)
transcript_df = pd.read_json('data/transcript.json', orient='records', lines=True)
###Output
_____no_output_____
###Markdown
Data Exploration Portfolio DataFrame
###Code
# Describe the Portfolio dataframe
print('The shape of Portfolio dataframe is {}'.format(portfolio_df.shape))
###Output
The shape of Portfolio dataframe is (10, 6)
###Markdown
This dataframe contains the information about different offers with details about each of them
###Code
# Show the Portfolio dataframe
display(portfolio_df)
###Output
_____no_output_____
###Markdown
Profile DataFrame
###Code
# Describe the Profile dataframe
print('The shape of Profile dataframe is {}'.format(profile_df.shape))
###Output
The shape of Profile dataframe is (17000, 5)
###Markdown
This dataframe contains the information about different customers with their demographic data
###Code
# Show the Profile dataframe
display(profile_df)
###Output
_____no_output_____
###Markdown
We see the missing values in gender and income, so there is a reason to process this dataframe. In addition, it is useful to convert the string dates into datetime values.
###Code
# There are no duplicated customers in dataframe
set(profile_df.duplicated(subset=['id']))
# We see that the NaN values for Income and Gender intersect, so we can drop them
display(profile_df.loc[profile_df['income'].isnull()].describe())
display(profile_df.loc[profile_df['gender'].isnull()].describe())
profile_df = profile_df.loc[~profile_df['income'].isnull()]
print('After that, the shape of Profile dataframe is {}'.format(profile_df.shape))
display(profile_df)
# Let's change string date to datetime
profile_df['became_member_on'] = pd.to_datetime(profile_df['became_member_on'].astype(str)).dt.date
# # We see that the Other gender is not so frequent in the data
# pd.DataFrame(profile_df.groupby('gender').describe()['age']['count'])
# We can see the age distribution looks bell-shaped
sns.distplot(profile_df['age'])
plt.title('Age distribution')
plt.show()
# While income distribution is not bell-shaped
sns.distplot(profile_df['income'])
plt.title('Income distribution')
plt.show()
# The major share of the customers arrived after the 2017
profile_df['became_member_on'].hist()
plt.show()
###Output
_____no_output_____
###Markdown
Transcript DataFrame This dataframe contains the information about different transactions with details.
###Code
# Describe the Transcript dataframe
print('The shape of Transcript dataframe is {}'.format(transcript_df.shape))
# Show the Transcript dataframe
display(transcript_df)
# Here are the descriptive statistics for the count of each event
pd.DataFrame(transcript_df.groupby('event').describe()['time']['count'])
# Let's delve more into the Value feature
# and check the cross-intersection between the event and value
values_parsed = transcript_df['value'].apply(lambda x: str(list(x.keys())))
pd.crosstab(values_parsed, transcript_df['event'])
# We can parse these values and replace value feature with the more
# detailed ones
transcript_df['offer_id'] = transcript_df['value'].apply(lambda x: \
x['offer_id'] if 'offer_id' in x \
else (x['offer id'] if 'offer id' \
in x else None))
for key in ['amount', 'reward']:
transcript_df[key] = transcript_df['value'].apply(lambda x: \
x[key] if key in x else None)
# Therefore, we can drop the old feature
transcript_df = transcript_df.drop('value', axis=1)
# Let's analyze the behavior of the particular client and check
# the maximum number of purchases for specific customer
purchases_per_client = transcript_df.groupby('person')['time'].count().sort_values(ascending=False)
# Here is Top-5
purchases_per_client.head(5)
# Let's check the first client
transcript_df.loc[transcript_df['person'] == \
purchases_per_client.index[0]].sort_values('time')
###Output
_____no_output_____
###Markdown
We see that there is a connection between a transaction and an offer completion recorded at the same time. Let's check whether this is true
###Code
print('There are {} matches'.format(\
len(pd.merge(transcript_df.loc[transcript_df['event'] == \
'offer completed'],
transcript_df.loc[transcript_df['event'] == 'transaction'],
on=['person', 'time']))))
# Let's also check the connection between offer received and offer viewed
print('There are {} matches'.format(\
len(pd.merge(transcript_df.loc[transcript_df['event'] == \
'offer received'],
transcript_df.loc[transcript_df['event'] == 'offer viewed'],
on=['person', 'offer_id']))))
###Output
There are 79329 matches
###Markdown
Customer's Journey In order to analyze the conversion, we have to recreate the customer's journey using the data. We have to:
- Analyze the data about the offer view
- Check the conversion into the purchase
- Analyze the data about the transaction
###Code
# Merge the offer receives and offer views
offer_view_df = pd.merge(\
transcript_df.loc[transcript_df['event'] == 'offer received', \
['person', 'offer_id', 'time']],
transcript_df.loc[transcript_df['event'] == 'offer viewed', \
['person', 'offer_id', 'time']],
on=['person', 'offer_id'], how='left', \
suffixes=['_received', '_viewed'])
# Remove the broken data: view have to be later than receive and remove null values
offer_view_df = offer_view_df.loc[(offer_view_df['time_viewed'] >= \
offer_view_df['time_received']) | \
~(offer_view_df['time_viewed'].isnull())]
# Take the nearest receive before the view
offer_view_df = pd.concat((offer_view_df.groupby(['person', 'offer_id',
'time_viewed']).agg({'time_received': 'max'}).reset_index(),
offer_view_df.loc[offer_view_df['time_viewed'].isnull()]))
offer_view_df.head()
###Output
_____no_output_____
###Markdown
Let's apply the same reasoning to the offer completion
###Code
# Merge the DataFrames
offer_complete_df = pd.merge(offer_view_df,
transcript_df.loc[transcript_df['event'] == 'offer completed', \
['person', 'offer_id', 'time', 'reward']],
on=['person', 'offer_id'], how='left')
# Rename the column
offer_complete_df.rename(columns={'time': 'time_completed'}, inplace=True)
# Keep a completion only if the offer was viewed before it was completed
offer_complete_df.loc[(offer_complete_df['time_viewed'].isnull()) | \
(offer_complete_df['time_viewed'] > \
offer_complete_df['time_completed']), \
['reward', 'time_completed']] = (np.nan, np.nan)
offer_complete_df = offer_complete_df.drop_duplicates()
# Concatenate the nearest completion to the view and receive
offer_complete_df = pd.concat(
(offer_complete_df.groupby(['person', 'offer_id',
'time_completed', 'reward']).agg({'time_viewed': 'max',
'time_received': 'max'}).reset_index(),
offer_complete_df.loc[offer_complete_df['time_completed'].isnull()]))
offer_complete_df.head()
###Output
_____no_output_____
###Markdown
Now let's add the information about the transactions
###Code
# Merge the DataFrames
offer_transaction_df = pd.merge(offer_complete_df,
transcript_df.loc[transcript_df['event'] == 'transaction', \
['person', 'time', 'amount']],
left_on=['person', 'time_completed'],
right_on=['person', 'time'], how='outer')
# Rename the column
offer_transaction_df.rename(columns={'time': 'time_transaction'}, inplace=True)
# Add a column with time equal to received offer,
# and transaction time otherwise
offer_transaction_df['time'] = offer_transaction_df['time_received']
offer_transaction_df.loc[offer_transaction_df['time'].isnull(),
'time'] = offer_transaction_df['time_transaction']
# Drop the duplicates
offer_transaction_df.sort_values(['person', 'offer_id', 'time',
'time_completed'], inplace=True)
offer_transaction_df = offer_transaction_df.drop_duplicates(['person',
'offer_id', 'time'])
print("The final data size is ", offer_transaction_df.shape)
###Output
The final data size is (164558, 9)
###Markdown
Let's finally merge all the data into the single DataFrame.
###Code
# Add offer type information
offer_type_df = pd.merge(offer_transaction_df,
portfolio_df.rename(columns={'id': 'offer_id',
'reward': 'portfolio_reward'}),
on='offer_id', how='left')
offer_type_df.head()
# Add demographic information
offer_all_df = pd.merge(offer_type_df,
profile_df.rename(columns={'id': 'person'}),
how='inner', on='person')
offer_all_df.head()
# Sort the data
offer_all_df.sort_values(['person', 'time', 'offer_id'], inplace=True)
# Let's fill the values for transactions' offer type
offer_all_df['offer_type'].fillna('transaction', inplace=True)
offer_all_df.head()
print('The final shape of the data is ', offer_all_df.shape)
# Save the data
offer_all_df.to_csv('./data/customer_journey.csv', index=False)
###Output
_____no_output_____
###Markdown
New Features Creation
###Code
# Let's test that the file we saved is loading correctly
customer_journey_df = pd.read_csv('./data/customer_journey.csv',
parse_dates=['became_member_on'])
# Let's drop the data when the offer was never viewed
customer_journey_df = customer_journey_df.loc[\
(customer_journey_df['offer_type'] == 'transaction') \
|(customer_journey_df['time_viewed'].isnull() == False)]
# Keep the time variable equal to time viewed, transaction time otherwise
customer_journey_df['time'] = customer_journey_df['time_viewed']
customer_journey_df.loc[customer_journey_df['offer_type'] == \
'transaction', 'time'] = customer_journey_df['time_transaction']
print('The current shape of data is {}'.format(customer_journey_df.shape))
customer_journey_df.head()
###Output
_____no_output_____
###Markdown
We set the aim of maximizing the conversion rate for each offer type. In order to evaluate the model, we have to calculate a benchmark based on the historical data.
###Code
# Keep only relevant features
conversion_df = customer_journey_df.loc[:, ['offer_type',
'time_viewed', 'time_completed']]
# Mark the offers viewed if they are non-informational and viewed
conversion_df['viewed'] = 0
conversion_df.loc[(conversion_df['offer_type'].isin(['bogo', 'discount'])) & \
(conversion_df['time_viewed'].isnull() == False),
'viewed'] = 1
# Mark conversion
conversion_df['conversion'] = 0
conversion_df.loc[(conversion_df['viewed'] == 1.0) & \
(conversion_df['time_completed'].isnull() == False),
'conversion'] = 1
viewed_num = np.sum(conversion_df['viewed'])
conversion_num = np.sum(conversion_df['conversion'])
print('{} users viewed the offer and {} completed it'.format(
viewed_num, conversion_num))
print('Therefore, the conversion is {} %'.format(\
round(conversion_num/viewed_num*100, 2)))
# We can also divide it by the offer type
conversion_df.loc[conversion_df['viewed'] == 1\
].groupby('offer_type').agg({'conversion': 'mean'})
###Output
_____no_output_____
###Markdown
Furthermore, we can analyze the conversion for the informational offers. This can be evaluated as a transaction occurring shortly after the informational offer is viewed.
###Code
# Copy the dataset and take viewed offers with non-empty transaction
informational_offer_df = customer_journey_df.loc[
(customer_journey_df['time_viewed'].isnull() == False) | \
(customer_journey_df['time_transaction'].isnull() == False),
['person', 'offer_id', 'offer_type', 'time_viewed', 'time_transaction']]
# Replace time with time viewed. Otherwise - transaction time
informational_offer_df['time'] = informational_offer_df['time_viewed']
informational_offer_df.loc[informational_offer_df['time'].isnull(),
'time'] = informational_offer_df['time_transaction']
# In order to analyze it, we have to check the subsequent offer for the user
informational_offer_df['next_offer_type'] = \
informational_offer_df['offer_type'].shift(-1)
informational_offer_df['next_time'] = informational_offer_df['time'].shift(-1)
# If the offer relates to other person, we skip it
informational_offer_df.loc[
informational_offer_df['person'].shift(-1) != \
informational_offer_df['person'],
['next_offer_type', 'next_time']] = ['', np.nan]
# Get the information about the difference in time for the offer types
informational_offer_df['time_diff'] = \
informational_offer_df['next_time'] - informational_offer_df['time_viewed']
# Let's check the time distribution between informational offer and transaction
informational_offer_df.loc[
(informational_offer_df['offer_type'] == 'informational') & \
(informational_offer_df['next_offer_type'] == 'transaction') &
(informational_offer_df['time_diff'] >=0),
'time_diff'].describe()
###Output
_____no_output_____
###Markdown
We see that the median difference is 24 hours.
###Code
informational_offer_df.loc[
(informational_offer_df['offer_type'] == 'informational') & \
(informational_offer_df['next_offer_type'] == 'transaction')&
(informational_offer_df['time_diff'] >=0),
'time_diff'].hist()
# Let's check the conversion if we check the transaction within 24 hours
# after the informational offer
time_diff_threshold = 24.0
viewed_info_num = np.sum(informational_offer_df['offer_type'] == \
'informational')
conversion_info_num = np.sum((informational_offer_df['offer_type'] == \
'informational') \
& (informational_offer_df['next_offer_type'] == 'transaction') & \
(informational_offer_df['time_diff'] <= time_diff_threshold))
print('{} users viewed the offer and {} completed it'.format(
viewed_info_num, conversion_info_num))
print('Therefore, the conversion is {} %'.format(\
round(conversion_info_num/viewed_info_num*100, 2)))
###Output
8042 users viewed the offer and 3367 completed it
Therefore, the conversion is 41.87 %
###Markdown
Now let's create features for each offer type
###Code
# If the offer was viewed and it is BOGO and there was transaction, fill it
customer_journey_df.loc[
(customer_journey_df['time_viewed'].isnull() == False) & \
(customer_journey_df['offer_type'] == 'bogo'),
'bogo'] = 0
customer_journey_df.loc[
(customer_journey_df['time_viewed'].isnull() == False) & \
(customer_journey_df['offer_type'] == 'bogo') & \
(customer_journey_df['time_completed'].isnull() == False), 'bogo'] = 1
# If the offer was viewed and it is Discount and there was transaction, fill it
customer_journey_df.loc[
(customer_journey_df['time_viewed'].isnull() == False) & \
(customer_journey_df['offer_type'] == 'discount'),
'discount'] = 0
customer_journey_df.loc[
(customer_journey_df['time_viewed'].isnull() == False) & \
(customer_journey_df['offer_type'] == 'discount') & \
(customer_journey_df['time_completed'].isnull() == False), 'discount'] = 1
###Output
_____no_output_____
###Markdown
Now let's work a bit on the informational offer DataFrame
###Code
informational_offer_df.loc[
informational_offer_df['offer_type'] == 'informational', 'info'] = 0
informational_offer_df.loc[
(informational_offer_df['offer_type'] == 'informational') & \
(informational_offer_df['next_offer_type'] == 'transaction') & \
(informational_offer_df['time_diff'] <= time_diff_threshold), 'info'] = 1
customer_journey_df = pd.merge(customer_journey_df,
informational_offer_df.loc[
informational_offer_df['info'].isnull() == False,
['person', 'offer_id', 'time_viewed', 'info', 'next_time']],
how='left', on=['person', 'offer_id', 'time_viewed'])
# Override time completed with the following time of transaction
customer_journey_df.loc[customer_journey_df['info'] == 1,
'time_completed'] = customer_journey_df['next_time']
customer_journey_df.loc[customer_journey_df['info'] == 1,
'time_transaction'] = customer_journey_df['next_time']
customer_journey_df = customer_journey_df.drop('next_time', axis=1)
bogo_num = np.sum(customer_journey_df['bogo'].isnull() == False)
disc_num = np.sum(customer_journey_df['discount'].isnull() == False)
info_num = np.sum(customer_journey_df['info'].isnull() == False)
print('The current DataFrame contains: {} BOGO, {} Discount and {} \
Informational events of conversion.'.format(bogo_num, disc_num, info_num))
###Output
The current DataFrame contains: 18690 BOGO, 15761 Discount and 8042 Informational events of conversion.
###Markdown
Now we can work more on the features for the customers
###Code
customer_df = customer_journey_df[['person', 'gender',
'age', 'income', 'became_member_on']].drop_duplicates()
customer_df.describe(include='all').T
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/ipykernel/__main__.py:4: FutureWarning: Treating datetime data as categorical rather than numeric in `.describe` is deprecated and will be removed in a future version of pandas. Specify `datetime_is_numeric=True` to silence this warning and adopt the future behavior now.
###Markdown
Now let's create a feature to analyze the retention of the customers to the service.
###Code
def months_difference(date_start, date_end):
''' This function is used to calculate the difference
in months between two dates
Args:
date_start (timestamp/datetime) - start date of the period
date_end (timestamp/datetime) - end date of the period
Outputs:
difference(int) - difference in months between the dates
'''
difference = (date_end.year - date_start.year) * 12 + \
(date_end.month - date_start.month)
return difference
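# Illustrative check of the helper above (the dates here are made up and not
# taken from the dataset; `datetime` is assumed to be imported earlier in the
# notebook, as it is used just below): a member who joined in May 2017 has
# been a member for (2018 - 2017) * 12 + (8 - 5) = 15 months as of August 2018.
assert months_difference(datetime(2017, 5, 15), datetime(2018, 8, 1)) == 15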
customer_journey_df['day'] = np.floor(
customer_journey_df['time_viewed'] / 24.0)
customer_journey_df['weekday'] = customer_journey_df['day'] % 7.0
customer_journey_df['became_member_from'] = customer_journey_df.apply(
lambda x: months_difference(
x['became_member_on'], datetime(2018, 8, 1)), 1)
customer_journey_df.head()
###Output
_____no_output_____
###Markdown
Let's check the distribution of these values
###Code
sns.distplot(customer_journey_df['day'].dropna())
plt.title('Offer Day Distribution')
plt.show()
sns.distplot(customer_journey_df['weekday'].dropna())
plt.title('Offer Weekday Distribution')
plt.show()
sns.distplot(customer_journey_df['became_member_from'].dropna())
plt.title('Months from the initial Membership')
plt.show()
###Output
/home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
In order to analyze the data correctly, it is important to look at the data in the past. I propose to create new features to analyze the particular clients' behavior:
- Particular transactions
- Average number of transactions per client
- Number of rewards sent
- Number of offers which were completed or viewed
- The time from offer receival to completion or view
###Code
# Check whether there was a transaction
customer_journey_df['transaction'] = 0
customer_journey_df.loc[
customer_journey_df['time_transaction'].isnull() == False,
'transaction'] = 1
# Check whether the offer was completed
customer_journey_df['completed'] = 0
customer_journey_df.loc[
customer_journey_df['time_completed'].isnull() == False,
'completed'] = 1
# Create new features
customer_journey_df['number_of_offers_viewed'] = 0
customer_journey_df['number_of_offers_completed'] = 0
customer_journey_df['receival_to_view_avg'] = 0
customer_journey_df['view_to_completion_avg'] = 0
customer_journey_df['number_of_transactions'] = 0
customer_journey_df['avg_number_of_transctions'] = 0
customer_journey_df['avg_reward'] = 0
customer_journey_df['receival_to_view'] = \
customer_journey_df['time_viewed'] - customer_journey_df['time_received']
customer_journey_df['time_from_view_to_completion'] = \
customer_journey_df['time_completed'] - customer_journey_df['time_viewed']
# Check if the same person is between the transactions
customer_journey_df['prev_person'] = customer_journey_df['person'].shift(1)
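# (Optional sketch, not part of the original pipeline.) The running count of
# previously viewed offers could also be obtained without the loop below,
# for example:
#   is_offer = (customer_journey_df['offer_type'] != 'transaction').astype(int)
#   customer_journey_df['number_of_offers_viewed'] = (
#       is_offer.groupby(customer_journey_df['person']).cumsum() - is_offer)
# The explicit loop is kept because it fills several related features in one pass.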
# Fill the features via loop
for i, row in customer_journey_df.iterrows():
# Check the progress
print(str(i)+' / '+str(len(customer_journey_df)), end='\r')
# We fill the features if rows are attributed to the same person
if row['person'] == row['prev_person']:
# If the previous offer was viewed
customer_journey_df.loc[i, 'number_of_offers_viewed'] = \
customer_journey_df.loc[i-1, 'number_of_offers_viewed'] + \
(0 if customer_journey_df.loc[i-1, 'offer_type'] == \
'transaction' else 1)
# If the previous offer was completed
customer_journey_df.loc[i, 'number_of_offers_completed'] = \
customer_journey_df.loc[i-1, 'number_of_offers_completed'] + \
customer_journey_df.loc[i-1, 'completed']
# Previous time from Receival to View
        customer_journey_df.loc[i, 'receival_to_view_avg'] = \
            np.nansum((customer_journey_df.loc[i-1,
                                               'receival_to_view_avg'],
                       customer_journey_df.loc[i-1, 'receival_to_view']))
# Previous time from View to Completion
customer_journey_df.loc[i, 'view_to_completion_avg'] = \
np.nansum((customer_journey_df.loc[i-1,
'view_to_completion_avg'],
customer_journey_df.loc[i-1,
'time_from_view_to_completion']))
# If the previous row was a Transaction
customer_journey_df.loc[i, 'number_of_transactions'] = \
customer_journey_df.loc[i-1, 'number_of_transactions'] + \
customer_journey_df.loc[i-1, 'transaction']
# If the previous row was a Transaction, add amount
customer_journey_df.loc[i, 'avg_number_of_transctions'] = \
customer_journey_df.loc[i-1, 'avg_number_of_transctions'] + \
(0 if customer_journey_df.loc[i-1, 'transaction'] == \
0 else customer_journey_df.loc[i-1, 'amount'])
# If the previous row was a Reward, add reward
customer_journey_df.loc[i, 'avg_reward'] = \
np.nansum((customer_journey_df.loc[i-1, 'avg_reward'],
customer_journey_df.loc[i-1, 'reward']))
# Get the average values
customer_journey_df['receival_to_view_avg'] = \
customer_journey_df['receival_to_view_avg'] / \
customer_journey_df['number_of_offers_viewed']
customer_journey_df['view_to_completion_avg'] = \
customer_journey_df['view_to_completion_avg'] / \
customer_journey_df['number_of_offers_completed']
customer_journey_df['avg_number_of_transctions'] = \
customer_journey_df['avg_number_of_transctions'] / \
customer_journey_df['number_of_transactions']
customer_journey_df['receival_to_view_avg'].fillna(0, inplace=True)
customer_journey_df['view_to_completion_avg'].fillna(0, inplace=True)
customer_journey_df['avg_number_of_transctions'].fillna(0, inplace=True)
customer_journey_df.tail()
# Save the data to CSV so it can later be uploaded to SageMaker
customer_journey_df.to_csv('customer_journey_updated.csv')
###Output
_____no_output_____
###Markdown
Now let's upload the data to SageMaker
###Code
import boto3
import sagemaker
from sagemaker import get_execution_role
session = sagemaker.Session()
role = get_execution_role()
variables = ['gender', 'weekday', 'age', 'income', 'day','became_member_from',
'number_of_transactions', 'avg_number_of_transctions',
'number_of_offers_completed', 'number_of_offers_viewed',
'avg_reward', 'receival_to_view_avg', 'view_to_completion_avg']
# Create dictionary for each type of offer
df_target = {}
prefix = 'CapstoneProjectStarbucks'
data_location_dict = {}
for tgt in ['bogo', 'discount', 'info']:
df_target[tgt] = customer_journey_df.loc[
customer_journey_df[tgt].isnull() == False, [tgt] + variables]
df_target[tgt].to_csv(f'./data/{tgt}.csv', index=False, header=False)
data_location_dict[tgt] = \
session.upload_data(f'./data/{tgt}.csv', key_prefix=prefix)
# Check the location
data_location_dict
# Read the data to check
df_target = {}
for tgt in ['bogo', 'discount', 'info']:
df_target[tgt] = pd.read_csv(f'./data/{tgt}.csv',
header=None, names=[tgt] + variables)
###Output
_____no_output_____
###Markdown
Now we are ready to compare the current analysis of features versus the baseline values. In order to do this, we have to introduce a couple of new functions to analyze the charts.
###Code
def analyse_categorical_vars(df, feature_name, threshold=0.01,
target_name='tgt', plot_title=None, target_mean_color='black'):
'''
Function charts the mean Target value versus a categorical
variable.
Args:
df (DataFrame) - input data
feature_name (str) - feature to analyze versus the target.
        threshold(float) - categories with frequency below the threshold are
                           labeled as Other
target_name (str) - name of the target variable
plot_title (str) - plot title
target_mean_color (str) - color for the average target value
Outputs:
chart (matplotlib) - chart plotting the analyzed variable versus target
'''
# Select only used features
df_copy = df[[feature_name, target_name]].copy()
df_copy[feature_name] = df_copy[feature_name].fillna('NULL')
# Replace categories with distribution less than threshold with Other
df_temp = df_copy[feature_name].value_counts(1)
others_list = df_temp[df_temp < threshold].index.tolist()
if len(others_list) > 1:
        df_copy[feature_name] = \
            df_copy[feature_name].replace(others_list, 'Other')
# Compute the target mean
target_mean = df_copy[target_name].mean()
plt.title(plot_title)
plt.xticks(rotation='vertical')
df_barplot = df_copy.groupby(feature_name).agg(
{target_name: 'mean'}).reset_index()
plot = sns.barplot(x = feature_name, y = target_name,
data = df_barplot, ci = None)
plot.axhline(target_mean, ls = '--', color = target_mean_color)
def analyse_numerical_vars(df, feature_name, q=(0, 0.25, 0.5, 0.75, 1),
target_name='tgt', plot_title=None, target_mean_color='black'):
'''
Function charts the mean Target value versus a numerical
variable and the list of its quantiles.
Args:
df (DataFrame) - input data
feature_name (str) - feature to analyze versus the target.
q (tuple of floats) - quantiles for bucketing the values
target_name (str) - name of the target variable
plot_title (str) - plot title
target_mean_color (str) - color for the average target value
Outputs:
chart (matplotlib) - chart plotting the analyzed variable versus target
'''
# Compute the overall target mean
target_mean = df[target_name].mean()
plt.title(plot_title)
plt.xticks(rotation='vertical')
df_temp = df[[feature_name, target_name]].copy()
cuts = np.quantile(df[feature_name].dropna(), q)
df_temp['agg'] = pd.cut(df_temp[feature_name], bins=cuts,
duplicates='drop', include_lowest=True).astype(str)
df_agg = df_temp.groupby('agg').agg({target_name: 'mean'}).reset_index()
df_agg['ord'] = df_agg['agg'].apply(
lambda x: float(x[1:].split(',')[0]) if x != 'nan' else np.nan)
df_agg.sort_values('ord', inplace=True)
plot = sns.barplot(x='agg', y=target_name, data=df_agg, ci=None)
plot.set_xlabel(feature_name)
plot.axhline(target_mean, ls='--', color=target_mean_color)
###Output
_____no_output_____
###Markdown
Now let's plot all the values compared with the target ones
###Code
def analyse_var(df_target, feature):
'''
This function creates a chart for all required variables for all
types of offers.
Args:
df_target (dict) - dictionary with DataFrames with input data
feature (str) - the required feature to plot the charts
Outputs:
chart (matplotlib) - chart comparing the analyzed features
versus target for each offer type
'''
plt.subplots(1, 3, figsize=(15, 5))
if df_target['bogo'][feature].dtype == 'O' or feature == 'weekday':
func = analyse_categorical_vars
else:
func = analyse_numerical_vars
plt.suptitle(feature, fontsize=20)
plt.subplot(131)
func(df_target['bogo'], feature, target_name='bogo',
plot_title='Bogo')
plt.subplot(132)
func(df_target['discount'], feature, target_name='discount',
plot_title='Discount')
plt.subplot(133)
func(df_target['info'], feature, target_name='info',
plot_title='Informational')
plt.show()
print('\n')
for feature in variables:
analyse_var(df_target, feature)
###Output
_____no_output_____ |
diploma/Progression.ipynb | ###Markdown
Outlier Factors for Device Profiling
For this first part of the presentation, we will be using data in the form of (timestamp, source computer, number of bytes), where each data point represents a data flow. The data will be binned, and two different features will be extracted per source computer, per bin: the count of flows and the average byte count. For testing purposes a generated dataset will be used. From these flows per user, we can generate the features count and average byte count. Next we will display the generated points in a scatter plot.
###Code
import pandas as pd
import numpy as np
%matplotlib inline
number_of_hosts = 5
time_limits = [1,100]
df = pd.read_csv('../../../diploma/generated_data/flows_test.txt', header=None)
df.columns = ['time', 'source computer', 'byte count']
df.index = df['time']
df.drop(columns=['time'],inplace=True)
df.sort_index(inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
For better illustration we will plot these points in a two-dimensional grid. In the following we plot, for each host, the flows per bin, and afterwards the features obtained from the temporal bins.
###Code
# keep track of the different hosts
hosts = np.array(list(set(df['source computer'].values)))
hosts.sort()
# Create buckets based on the time of the events, a bucket for every size_of_bin_seconds seconds
size_of_bin_seconds = 10
bins = np.arange(df.index.min(), df.index.max() + size_of_bin_seconds + 1, size_of_bin_seconds)
print('The borders of the bins created: ', bins)
###Output
The borders of the bins created: [ 1 11 21 31 41 51 61 71 81 91 101]
###Markdown
In this toy example we create a bin for each time index. In the final dataset this will correspond to a bin every second.
###Code
# group by the correct bin and the source computer
groups = df[['byte count','source computer']].groupby([np.digitize(df.index, bins),'source computer'])
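# np.digitize assigns each timestamp to the bin it falls into, e.g. with
# bins = [1, 11, 21] a timestamp of 5 gets index 1 and a timestamp of 15 gets
# index 2 (values in [bins[i-1], bins[i]) map to index i).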
mean_values = groups.mean().values
count_values = groups.count().values
import matplotlib.pyplot as plt
from pylab import rcParams
rcParams['figure.figsize'] = 16, 9
for i in range(number_of_hosts):
if number_of_hosts % 2 == 0:
plt.subplot(number_of_hosts/2, 2, i + 1)
else:
if i < number_of_hosts - 1:
plt.subplot(int(number_of_hosts/2) + 1, 2, i + 1)
else:
plt.subplot(int(number_of_hosts/2) + 1, 1, int(number_of_hosts/2) + 1)
df_for_host = df[df['source computer'].isin([hosts[i]])]
plt.bar(df_for_host.index - 1, df_for_host['byte count'], label=hosts[i], color='blue')
plt.title('Flows for host ' + hosts[i])
plt.ylabel('number of bytes')
plt.xlabel('timestamp')
plt.xticks(bins - 1);
plt.gca().xaxis.grid(linestyle='--')
#plt.grid(color='r', linestyle='--')
plt.xlim([- 1, time_limits[1] + 1])
plt.legend()
plt.tight_layout()
plt.show()
markers = ['v', 'x', '.', 's', 'p']
for i in range(number_of_hosts):
filter_list = [x for x in groups.apply(lambda x: (x['source computer'] == hosts[i]).values[0])]
plt.scatter(count_values[filter_list], mean_values[filter_list], s=100,
marker=markers[i % len(markers)], label=hosts[i])
plt.title('Average bytes by count of flows in bins', fontsize=18)
plt.legend(fontsize=22)
plt.grid()
plt.ylabel('average bytes', fontsize = 20)
plt.xlabel('count of flows', fontsize = 20)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Global and host-specific means
As a first naive approach we calculate the mean for each individual host and the global mean generated from these measurements. First we preprocess the data. As all data will have positive values, we could just scale them using a simple approach: $$log(x + 1)$$
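The shrinkage step in the code below pulls each host mean towards the global mean, $$\tilde{\mu}_i = (1 - \alpha)\,\mu_i + \alpha\,\mu_{global},$$ where $\alpha$ is the shrinkage factor (set to 0.5 here).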
###Code
data = groups.count()
data.columns = ['number of flows']
data['mean(byte count)'] = groups.mean().values
data.head()
def scale(x):
return np.log(x + 1)
data_by_host = {}
for host in hosts:
for i in range(len(bins) - 1):
try:
values = scale(data.loc[(i + 1, host)].values)
except:
values = scale(np.array([0, 0]))
if i == 0:
data_by_host[host] = np.array([values])
else:
data_by_host[host] = np.append(data_by_host[host], np.array([values]), axis=0)
###Output
_____no_output_____
###Markdown
Perhaps the log function will "hide" potential outliers and will not be a good match for the distance metric used later. Instead we will use a standard min-max scaling.
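For reference, the min-max scaling used below maps each feature to $$x' = \frac{x - x_{min}}{x_{max} - x_{min}},$$ so that the appended $[0, 0]$ point (a bin with no flows) maps to 0 in both features and the largest observed value maps to 1.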
###Code
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
# append [0, 0] to the data so that [0, 0] is mapped to [0, 0] by the new scaler
scaler.fit(np.append(data.values, np.array([[0, 0]]), axis=0))
def process_data(data, doScale=False):
data_by_host = {}
for host in hosts:
for i in range(len(bins) - 1):
try:
if doScale:
values = scaler.transform([data.loc[(i + 1, host)].values])
else:
values = [data.loc[(i + 1, host)].values]
except:
if doScale:
values = scaler.transform([np.array([0, 0])])
else:
values = [np.array([0, 0])]
if i == 0:
data_by_host[host] = np.array(values)
else:
data_by_host[host] = np.append(data_by_host[host], np.array(values), axis=0)
return data_by_host
data_by_host = process_data(data)
# the shrinkage
shrinkage = 0.5
i = 0
means = []
for host, data_for_host in data_by_host.items():
# two features used
x = data_for_host[:,0]
y = data_for_host[:,1]
plt.scatter(x, y, marker=markers[i % len(markers)], s=150, color='black', label='C' + str(i))
mean_x = sum(x)/len(x)
mean_y = sum(y)/len(y)
means.append([mean_x, mean_y])
plt.scatter(mean_x, mean_y, marker=markers[i % len(markers)], s=150, color='red', label='Avg' + str(i))
i += 1
if i == number_of_hosts:
break
global_mean = [float(sum(col))/len(col) for col in zip(*means)]
plt.scatter(global_mean[0], global_mean[1], marker='+', s=150, color='blue', label='Avg total')
i = 0
for mean in means:
if i == 0:
plt.plot([mean[0], global_mean[0]], [mean[1], global_mean[1]], '--', label='Shrinking', color='pink')
else:
plt.plot([mean[0], global_mean[0]], [mean[1], global_mean[1]], '--', color='pink')
i += 1
i = 0
for mean in means:
shrinked = np.array(mean) * (1 - shrinkage) + np.array(global_mean) * shrinkage
plt.scatter(shrinked[0], shrinked[1], marker=markers[i % len(markers)], s=150, color='pink', label='C' + str(i) + ' after shrinking')
i += 1
plt.legend()
# this is an inset axes over the main axes
plt.axes([.45, .55, .33, .3])
for i, mean in enumerate(means):
plt.scatter(mean[0], mean[1], marker=markers[i % len(markers)], s=100)
plt.plot([mean[0], global_mean[0]], [mean[1], global_mean[1]], '--', color='pink')
shrinked = np.array(mean) * (1 - shrinkage) + np.array(global_mean) * shrinkage
plt.scatter(shrinked[0], shrinked[1], marker=markers[i % len(markers)], color='pink', s=100)
plt.scatter(global_mean[0], global_mean[1], marker='+', s=150, color='blue')
plt.title('Average means')
plt.xticks([])
plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
First we will attempt a naive clustering of these points
###Code
import itertools
all_data = list(itertools.chain(*list(data_by_host.values())))
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2, random_state=0).fit(all_data)
print(kmeans.cluster_centers_)
###Output
[[ 1.79591837e+00 2.97278912e+02]
[ 4.00000000e+00 1.86675000e+03]]
###Markdown
A possible error that can occur is that, if the number of clusters is high and the anomalies are similar to each other, clusters may be formed around the anomalies. This can probably be accepted.
###Code
def distance_to_closest_cluster(X, kmeans):
distances = kmeans.transform(X)
return np.min(distances, axis=1)
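# Note: KMeans.transform(X) returns, for each sample, its distance to every
# cluster centre (an array of shape (n_samples, n_clusters)), so the row-wise
# minimum taken above is the distance to the closest centre.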
# plot the level sets of the decision function
xx, yy = np.meshgrid(np.linspace(-0.2, 1.2, 50), np.linspace(-0.2, 1.2, 50))
#Z = clf._decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = distance_to_closest_cluster(np.c_[xx.ravel(), yy.ravel()], kmeans)
Z = Z.reshape(xx.shape)
plt.title("Clustering distances", fontsize=18)
plt.contourf(xx, yy, Z, cmap=plt.cm.Blues_r)
for center in kmeans.cluster_centers_:
a = plt.scatter(center[0], center[1], color='red', marker='x', s=150, linewidth=5)
for point in all_data:
b = plt.scatter(point[0], point[1], color='green', marker='o')
plt.axis('tight')
plt.xlim(-0.2,1.2)
plt.ylim(-0.2,1.2)
plt.legend([a, b], ["cluster centers","flow points"], fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
For the rest of this exercise we will be using the dataset provided at https://csr.lanl.gov/data/cyber1/. From the file flows.txt, the first `N` lines (50,000 here) will initially be used.
###Code
N = 50000
df_N = pd.read_csv('../../../diploma/multi-source-syber-security-events/flows.txt', header=None, nrows=N)
df_N.columns = ['time', 'duration', 'source computer', 'source port', 'destination computer',
'destination port', 'protocol', 'packet count', 'byte count']
df_N.index = df_N['time']
df_N.drop(columns=['time'],inplace=True)
df_N.head()
from sklearn.preprocessing import MinMaxScaler
def scale(x):
return np.log(x + 1)
def get_data_by_dataframe(df, size_of_bin_seconds=50, doScale=True):
"""
:param size_of_bin_seconds: the time period of each bin,
    assumes the dataframe has a column named 'source computer' and one named 'byte count'
:return: a dictionary containing for each host the features, the hosts
"""
hosts = np.array(list(set(df['source computer'].values)))
bins = np.arange(df.index.min(), df.index.max() + size_of_bin_seconds + 1, size_of_bin_seconds)
groups = df[['byte count','source computer']].groupby([np.digitize(df.index, bins),'source computer'])
data = groups.count()
data.columns = ['number of flows']
data['mean(byte count)'] = groups.mean().values
"""
scaler = MinMaxScaler()
# to all data append [0, 0] so that [0, 0] is the mapped to [0, 0] with our new scaler
scaler.fit(np.append(data.values, np.array([[0, 0]]), axis=0))
"""
data_by_host = {}
for host in hosts:
for i in range(len(bins) - 1):
try:
if doScale == True:
values = scale(data.loc[(i + 1, host)].values)
else:
values = data.loc[(i + 1, host)].values
except:
if doScale == True:
values = scale(np.array([0, 0]))
else:
values = np.array([0, 0])
if i == 0:
data_by_host[host] = np.array([values])
else:
data_by_host[host] = np.append(data_by_host[host], np.array([values]), axis=0)
return data_by_host, hosts
data_by_host_N, hosts_N = get_data_by_dataframe(df_N)
def plot_host_behavior(data_by_host, hosts):
number_of_hosts = len(hosts)
for i in range(number_of_hosts):
if number_of_hosts % 2 == 0:
plt.subplot(number_of_hosts/2, 2, i + 1)
else:
if i < number_of_hosts - 1:
plt.subplot(int(number_of_hosts/2) + 1, 2, i + 1)
else:
plt.subplot(int(number_of_hosts/2) + 1, 1, int(number_of_hosts/2) + 1)
data_for_host = data_by_host[hosts[i]]
plt.bar(np.arange(len(data_for_host)), data_for_host[:,0] * data_for_host[:,1], label=hosts[i], color='blue')
plt.title('Flows for host ' + hosts[i])
plt.legend()
plt.tight_layout()
plt.show()
plot_host_behavior(data_by_host_N, hosts_N[40:50])
###Output
_____no_output_____
###Markdown
We must now consider the number of clusters into which to divide our data [1](https://datasciencelab.wordpress.com/2013/12/27/finding-the-k-in-k-means-clustering/), [2](http://www.sthda.com/english/articles/29-cluster-validation-essentials/96-determining-the-optimal-number-of-clusters-3-must-know-methods/). Elbow method
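The elbow curve below plots, for each $k$, the k-means inertia $$W_k = \sum_{i=1}^{n} \min_{j} \lVert x_i - c_j \rVert^2,$$ i.e. the total squared distance of every point to its closest cluster centre.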
###Code
import itertools
from sklearn.cluster import KMeans
all_data_N = np.vstack(list(itertools.chain(*list(data_by_host_N.values()))))
from sklearn.metrics import silhouette_score
cluster_sizes = range(3, 10)
cluster_scores = []
silhouette_scores = []
for k in cluster_sizes:
km = KMeans(k, random_state=77)
km.fit(all_data_N)
cluster_scores.append(km.inertia_)
plt.plot(cluster_sizes, cluster_scores)
plt.xlabel('Number of clusters')
plt.show()
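# The silhouette coefficient (imported above but not used) is an alternative
# criterion; scoring the full dataset can run out of memory, so a sketch on a
# random subsample (an optional addition, not part of the original analysis)
# could look like this:
# for k in cluster_sizes:
#     labels = KMeans(k, random_state=77).fit_predict(all_data_N)
#     silhouette_scores.append(silhouette_score(
#         all_data_N, labels, sample_size=10000, random_state=77))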
###Output
_____no_output_____
###Markdown
From the above example we could probably conclude that the best number of clusters for this dataset is k=4. The silhouette coefficient could also be used, although it can very often lead to memory errors because of the memory required. For completeness, we display the two-dimensional space created for this real dataset.
###Code
n_clusters = 4
kmeans_N = KMeans(n_clusters=n_clusters, random_state=0).fit(all_data_N)
print('In the following statistics (0, 0) values have been included if no flows have been measured')
print('Cluster centers')
print(kmeans_N.cluster_centers_)
closest_cluster = kmeans_N.predict(all_data_N)
for i in range(n_clusters):
cluster_i = np.where(closest_cluster == i)
print('A total of', len(cluster_i[0]), '\tpoints have a closest cluster', i)
all_data_N_min = np.min(all_data_N, axis=0)
all_data_N_max = np.max(all_data_N, axis=0)
len_x = all_data_N_max[0] - all_data_N_min[0]
len_y = all_data_N_max[1] - all_data_N_min[1]
limits_x = [all_data_N_min[0] - 0.1 * len_x, all_data_N_max[0] + 0.1 * len_x]
limits_y = [all_data_N_min[1] - 0.1 * len_y, all_data_N_max[1] + 0.1 * len_y]
# plot the level sets of the decision function
xx, yy = np.meshgrid(np.linspace(limits_x[0], limits_x[1], 50),
np.linspace(limits_y[0], limits_y[1], 50))
#Z = clf._decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = distance_to_closest_cluster(np.c_[xx.ravel(), yy.ravel()], kmeans_N)
Z = Z.reshape(xx.shape)
plt.title("Clustering distances", fontsize=18)
plt.contourf(xx, yy, Z, cmap=plt.cm.Blues_r)
# randomly scatter only some of the points
choices = np.random.choice(len(all_data_N), 5000)
for choice in choices:
point = all_data_N[choice]
b = plt.scatter(point[0], point[1], color='green', marker='o')
for center in kmeans_N.cluster_centers_:
a = plt.scatter(center[0], center[1], color='red', marker='x', s=150, linewidth=5)
plt.axis('tight')
plt.xlim(limits_x)
plt.ylim(limits_y)
plt.legend([a, b], ["cluster centers","flow points"], fontsize=14)
plt.show()
###Output
In the following statistics (0, 0) values have been included if no flows have been measured
Cluster centers
[[ 1.22901689e-13 -7.76489983e-13]
[ 8.67925421e-01 4.30436142e+00]
[ 1.23548981e+00 7.46372927e+00]
[ 1.69638071e+00 1.22511758e+01]]
A total of 54096 points have a closest cluster 0
A total of 6870 points have a closest cluster 1
A total of 6808 points have a closest cluster 2
A total of 515 points have a closest cluster 3
###Markdown
To detect outliers we will be using a method similar to "A Computer Host-Based User Anomaly Detection System Using the Self-Organizing Map" by Albert J. Hoglund, Kimmo Hatonen and Antti S. Sorvari. For each data point we calculate the distance to its closest cluster center, then calculate the percentage of points having a smaller distance to their own closest cluster center.
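Concretely, a point $x$ with distance $d(x)$ to its closest centre is flagged as an outlier when the fraction of points $y$ with $d(y) < d(x)$ exceeds a threshold, i.e. $$\frac{\lvert \{ y : d(y) < d(x) \} \rvert}{N} > \tau,$$ with $\tau = 0.999$ in the code below.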
###Code
plt.title("Outliers from above clusters", fontsize=18)
plt.contourf(xx, yy, Z, cmap=plt.cm.Blues_r)
for center in kmeans_N.cluster_centers_:
a = plt.scatter(center[0], center[1], color='red', marker='x', s=150, linewidth=5)
distances_to_closest_cluster = distance_to_closest_cluster(all_data_N, kmeans_N)
total_points = len(distances_to_closest_cluster)
Copper = plt.get_cmap('copper')
for point in all_data_N:
test = np.where(distance_to_closest_cluster([point], kmeans_N)[0] > distances_to_closest_cluster)
# consider an outlier if the distance to its closest cluster is bigger than a percentage compared to other points
threshold = 0.999
if len(test[0]) > total_points * threshold:
b = plt.scatter(point[0], point[1], marker='o',
color=Copper((total_points - len(test[0])) / ((1 - threshold)*total_points)))
plt.axis('tight')
plt.xlim(limits_x)
plt.ylim(limits_y)
plt.legend([a, b], ["cluster centers","outliers"], fontsize=14)
plt.show()
###Output
_____no_output_____
###Markdown
For the above examples a simple k-means clustering is implemented. Now we will use a mixture of Poisson distributions as shown in "Online EM algorithm for mixture with application to internet traffic modeling" by Z. Liu, J. Almhana, V. Choulakian and R. McGorman.
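For a $K$-component Poisson mixture (treating the two binned features as independent within each component, an assumption suggested by the per-dimension $\lambda$ initialisation below), the density of an observation $x = (x_1, x_2)$ is $$p(x) = \sum_{j=1}^{K} \pi_j \prod_{d=1}^{2} \frac{\lambda_{jd}^{x_d} e^{-\lambda_{jd}}}{x_d!},$$ where $\pi_j$ are the mixing weights and $\lambda_{jd}$ the component rates; the online EM algorithm updates both as data arrive.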
###Code
from onlineEM import OnlineEM
# build the unscaled counterpart of the binned features (raw counts and mean byte counts)
data_by_host_N_no_scale, _ = get_data_by_dataframe(df_N, doScale=False)
all_data_N_no_scale = np.vstack(np.array(list(itertools.chain(*list(data_by_host_N_no_scale.values()))), dtype=np.int64))
from random import randint
def get_random_initialize_lamdas(data, number_of_mixtures=4):
mins = np.min(data, axis=0)
maxs = np.max(data, axis=0)
dims = len(mins)
lambdas = [[] for _ in range(number_of_mixtures)]
for i in range(dims):
for j in range(number_of_mixtures):
lambdas[j].append(randint(int(mins[i]), int(maxs[i])))
return np.vstack(lambdas)
from sklearn.preprocessing import MinMaxScaler
def scale_data(data, feature_range=(1,100)):
scaler = MinMaxScaler(feature_range=feature_range)
scaler.fit(data)
transformed = scaler.transform(data).astype(int)
return np.array(transformed, dtype=np.int64)
# rescale the features to integer values in the range [1, 100] so they can be treated as Poisson counts
all_data_N_rescaled = scale_data(all_data_N)
mixtures = 10
# random initialization
onlineEM = OnlineEM([1/mixtures]*mixtures, get_random_initialize_lamdas(all_data_N_rescaled, number_of_mixtures=10), 500)
onlineEM.train(all_data_N_rescaled)
from plots import plot_results, plot_points
plot_results(onlineEM)
plot_points(all_data_N_rescaled, onlineEM)
###Output
_____no_output_____
###Markdown
Not all points can be represented adequatelyThis could be a proper issueSome of the poissons from the mixture have a very low probability. A smaller number could be used perhaps.
###Code
onlineEM.gammas
mixtures = 50
# random initialization
onlineEM_50 = OnlineEM([1/mixtures]*mixtures, get_random_initialize_lamdas(all_data_N_rescaled, number_of_mixtures=50), 500)
onlineEM_50.train(all_data_N_rescaled)
plot_points(all_data_N_rescaled, onlineEM_50)
###Output
_____no_output_____
###Markdown
As we can see, adding more centers to our Poisson mixture can be considered a failure in terms of representing more data points.
###Code
from sklearn.externals import joblib
joblib.dump(onlineEM_50, 'onlineEM_50.pkl')
joblib.dump(all_data_N, 'all_data_N.pkl')
joblib.dump(all_data_N_rescaled, 'all_data_N_rescaled.pkl')
onlineEM_50 = joblib.load('onlineEM_50.pkl')
###Output
_____no_output_____
###Markdown
LOF doesn't work very well
###Code
all_data_unique = [np.array(i) for i in set(tuple(i) for i in all_data_N)]
from sklearn.neighbors import LocalOutlierFactor
np.random.seed(42)
# fit the model
clf = LocalOutlierFactor(n_neighbors=25, contamination=0.01)
y_pred = clf.fit_predict(all_data_N)
len(np.where(y_pred == 1)[0])
# plot the level sets of the decision function
for data_index in np.where(y_pred == -1)[0]:
data_point = all_data_N[data_index]
plt.scatter(data_point[0], data_point[1], c='red')
for data_index in np.where(y_pred == 1)[0][:500]:
data_point = all_data_N[data_index]
plt.scatter(data_point[0], data_point[1], c='blue')
"""
a = plt.scatter(X[:200, 0], X[:200, 1], c='white',
edgecolor='k', s=20)
b = plt.scatter(X[200:, 0], X[200:, 1], c='red',
edgecolor='k', s=20)"""
plt.title("Local Outlier Factor (LOF)")
plt.axis('tight')
plt.xlim(limits_x)
plt.ylim(limits_y)
plt.show()
###Output
_____no_output_____ |
tensorflow_examples/lite/model_customization/demo/image_classification.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification with TensorFlow Lite model customization with TensorFlow 2.0 Run in Google Colab View source on GitHub The model customization library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes this model customization library to illustrate the adaptation and conversion of a commonly-used image classification model to classify flowers on a mobile device. PrerequisitesTo run this example, we first need to install several required packages, including the model customization package from the GitHub [repo](https://github.com/tensorflow/examples).
###Code
%tensorflow_version 2.x
!pip install -q tf-hub-nightly==0.8.0.dev201911110007
!pip install -q git+https://github.com/tensorflow/examples
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_customization.core.data_util.image_dataloader import ImageClassifierDataLoader
from tensorflow_examples.lite.model_customization.core.task import image_classifier
from tensorflow_examples.lite.model_customization.core.task.model_spec import efficientnet_b0_spec
from tensorflow_examples.lite.model_customization.core.task.model_spec import ImageModelSpec
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Simple End-to-End ExampleLet's get some images to play with this simple end-to-end example. You could replace it with your own image folders. Hundreds of images is a good start for model customization while more data could achieve better accuracy.
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
The example just consists of 4 lines of code as shown below, each of which representing one step of the overall process. 1. Load input data specific to an on-device ML app.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
###Output
_____no_output_____
###Markdown
2. Customize the TensorFlow model.
###Code
model = image_classifier.create(data)
###Output
_____no_output_____
###Markdown
3. Evaluate the model.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
4. Export to TensorFlow Lite model.
###Code
model.export('image_classifier.tflite', 'image_labels.txt')
###Output
_____no_output_____
###Markdown
After this simple 4 steps, we could further use TensorFlow Lite model file and label file in on-device applications like in [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Detailed ProcessCurrently, we only include MobileNetV2 and EfficientNetB0 models as pre-trained models for image classification. But it is very flexible to add new pre-trained models to this library with just a few lines of code.The following walks through this end-to-end example step by step to show more detail. Step 1: Load Input Data Specific to an On-device ML AppThe flower dataset contains 3670 images belonging to 5 classes. Download the archive version of the dataset and untar it.The dataset has the following directory structure:flower_photos|__ daisy |______ 100080576_f52e8ee070_n.jpg |______ 14167534527_781ceb1b7a_n.jpg |______ ...|__ dandelion |______ 10043234166_e6dd915111_n.jpg |______ 1426682852_e62169221f_m.jpg |______ ...|__ roses |______ 102501987_3cdb8e5394_n.jpg |______ 14982802401_a3dfb22afb.jpg |______ ...|__ sunflowers |______ 12471791574_bb1be83df4.jpg |______ 15122112402_cafa41934f.jpg |______ ...|__ tulips |______ 13976522214_ccec508fe7.jpg |______ 14487943607_651e8062a1_m.jpg |______ ...
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
Use `ImageClassifierDataLoader` class to load data.As for `from_folder()` method, it could load data from the folder. It assumes that the image data of the same class are in the same subdirectory and the subfolder name is the class name. Currently, JPEG-encoded images and PNG-encoded images are supported.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
###Output
_____no_output_____
###Markdown
Show 25 image examples with labels.
###Code
plt.figure(figsize=(10,10))
for i, (image, label) in enumerate(data.dataset.take(25)):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
plt.xlabel(data.index_to_label[label.numpy()])
plt.show()
###Output
_____no_output_____
###Markdown
Step 2: Customize the TensorFlow ModelCreate a custom image classifier model based on the loaded data. The default model is MobileNetV2.
###Code
model = image_classifier.create(data)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Step 3: Evaluate the Customized ModelEvaluate the result of the model, get the loss and accuracy of the model.By default, the results are evaluated on the test data that's splitted in `create` method. Other test data could also be evaluated if served as a parameter.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
We could plot the predicted results in 100 test images. Predicted labels with red color are the wrong predicted results while others are correct.
###Code
# A helper function that returns 'red'/'black' depending on if its two input
# parameter matches or not.
def get_label_color(val1, val2):
if val1 == val2:
return 'black'
else:
return 'red'
# Then plot 100 test images and their predicted labels.
# If a prediction result is different from the label provided label in "test"
# dataset, we will highlight it in red color.
plt.figure(figsize=(20, 20))
predicts = model.predict_topk(model.test_data)
for i, (image, label) in enumerate(model.test_data.dataset.take(100)):
ax = plt.subplot(10, 10, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
predict_label = predicts[i][0][0]
color = get_label_color(predict_label,
model.test_data.index_to_label[label.numpy()])
ax.xaxis.label.set_color(color)
plt.xlabel('Predicted: %s' % predict_label)
plt.show()
###Output
_____no_output_____
###Markdown
If the accuracy doesn't meet the app requirement, one could refer to [Advanced Usage](scrollTo=zNDBP2qA54aK) to explore alternatives such as changing to a larger model, adjusting re-training parameters etc. Step 4: Export to TensorFlow Lite ModelConvert the existing model to TensorFlow Lite model format and save the image labels in label file.
###Code
model.export('flower_classifier.tflite', 'flower_labels.txt')
###Output
_____no_output_____
###Markdown
The TensorFlow Lite model file and label file could be used in [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app.As for android reference app as an example, we could add `flower_classifier.tflite` and `flower_label.txt` in [assets](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/android/app/src/main/assets) folder. Meanwhile, change label filename in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.javaL65) and TensorFlow Lite file name in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.javaL60). Thus, we could run the retrained float TensorFlow Lite model on the android app. Here, we also demonstrate how to use the above files to run and evaluate the TensorFlow Lite model.
###Code
# Read TensorFlow Lite model from TensorFlow Lite file.
with tf.io.gfile.GFile('flower_classifier.tflite', 'rb') as f:
model_content = f.read()
# Read label names from label file.
with tf.io.gfile.GFile('flower_labels.txt', 'r') as f:
label_names = f.read().split('\n')
# Initialize the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=model_content)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
# Run predictions on each test image data and calculate accuracy.
accurate_count = 0
for i, (image, label) in enumerate(model.test_data.dataset):
# Pre-processing should remain the same. Currently, just normalize each pixel value and resize image according to the model's specification.
image, _ = model.preprocess(image, label)
# Add batch dimension and convert to float32 to match with the model's input
# data format.
image = tf.expand_dims(image, 0).numpy()
# Run inference.
interpreter.set_tensor(input_index, image)
interpreter.invoke()
# Post-processing: remove batch dimension and find the label with highest
# probability.
predict_label = np.argmax(output()[0])
# Get label name with label index.
predict_label_name = label_names[predict_label]
accurate_count += (predict_label == label.numpy())
accuracy = accurate_count * 1.0 / model.test_data.size
print('TensorFlow Lite model accuracy = %.4f' % accuracy)
###Output
_____no_output_____
###Markdown
Note that preprocessing for inference should be the same as training. Currently, preprocessing contains normalizing each pixel value and resizing the image according to the model's specification. For MobileNetV2, input image should be normalized to `[0, 1]` and resized to `[224, 224, 3]`. Advanced UsageThe `create` function is the critical part of this library. It uses transfer learning with a pretrained model similar to the [tutorial](https://www.tensorflow.org/tutorials/images/transfer_learning). The `create` function contains the following steps:1. Split the data into training, validation, testing data according to parameter `validation_ratio` and `test_ratio`. The default values of `validation_ratio` and `test_ratio` are `0.1` and `0.1`.2. Download an [Image Feature Vector](https://www.tensorflow.org/hub/common_signatures/images#image_feature_vector) as the base model from TensorFlow Hub. The default pre-trained model is MobileNetV2.3. Add a classifier head with a Dropout Layer with `dropout_rate` between head layer and pre-trained model. The default `dropout_rate` is the default `dropout_rate` value from [make_image_classifier_lib](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L55) by TensorFlow Hub.4. Preprocess the raw input data. Currently, preprocessing steps include normalizing the value of each image pixel to the model input scale and resizing it to the model input size. MobileNetV2 has the input scale `[0, 1]` and the input image size `[224, 224, 3]`.5. Feed the data into the classifier model. By default, the training parameters such as training epochs, batch size, learning rate, momentum are the default values from [make_image_classifier_lib](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L55) by TensorFlow Hub. Only the classifier head is trained. In this section, we describe several advanced topics, including switching to a different image classification model, changing the training hyperparameters etc. Change the model Change to the model that's supported in this library. This library supports MobileNetV2 and EfficientNetB0 models for now. The default model is MobileNetV2. [EfficientNets](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet) are a family of image classification models that could achieve state-of-the-art accuracy. EfficientNetB0 is one of the EfficientNet models that's small and suitable for on-device applications. It's larger than MobileNetV2 but might achieve better performance. We could switch the model to EfficientNetB0 by just setting parameter `model_spec` to `efficientnet_b0_spec` in the `create` method.
###Code
model = image_classifier.create(data, model_spec=efficientnet_b0_spec)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained EfficientNetB0 model to see the accuracy and loss in testing data.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
Change to the model in TensorFlow HubMoreover, we could also switch to other new models that inputs an image and outputs a feature vector with TensorFlow Hub format.As [Inception V3](https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1) model as an example, we could define `inception_v3_spec` which is an object of `ImageModelSpec` and contains the specification of the Inception V3 model.We need to specify the model name `name`, the url of the TensorFlow Hub model `uri`. Meanwhile, the default value of `input_image_shape` is `[224, 224]`. We need to change it to `[299, 299]` for Inception V3 model.
###Code
inception_v3_spec = ImageModelSpec(
uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
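# The retraining call itself would then look like the following sketch,
# reusing the `data` object loaded earlier (the remaining steps are unchanged):
# model = image_classifier.create(data, model_spec=inception_v3_spec)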
###Output
_____no_output_____
###Markdown
Then, by setting parameter `model_spec` to `inception_v3_spec` in the `create` method, we could retrain the Inception V3 model. The remaining steps are exactly the same and we could get a customized Inception V3 TensorFlow Lite model in the end. Change your own custom model If we'd like to use a custom model that's not in TensorFlow Hub, we should create and export [ModelSpec](https://www.tensorflow.org/hub/api_docs/python/hub/ModuleSpec) in TensorFlow Hub. Then start to define an `ImageModelSpec` object like the process above. Change the training hyperparametersWe could also change the training hyperparameters like `epochs`, `dropout_rate` and `batch_size` that could affect the model accuracy. For instance,* `epochs`: more epochs could achieve better accuracy until convergence, but training for too many epochs may lead to overfitting.* `dropout_rate`: avoid overfitting.* `batch_size`: number of samples to use in one training step. For example, we could train with more epochs.
###Code
model = image_classifier.create(data, epochs=10)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained model with 10 training epochs.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification with TensorFlow Lite model customization with TensorFlow 2.0 Run in Google Colab View source on GitHub The model customization library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes this model customization library to illustrate the adaptation and conversion of a commonly-used image classification model to classify flowers on a mobile device. PrerequisitesTo run this example, we first need to install several required packages, including the model customization package from the GitHub [repo](https://github.com/tensorflow/examples).
###Code
%tensorflow_version 2.x
!pip install -q tf-hub-nightly==0.8.0.dev201911110007
!pip install -q git+https://github.com/tensorflow/examples
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_customization.core.data_util.image_dataloader import ImageClassifierDataLoader
from tensorflow_examples.lite.model_customization.core.task import image_classifier
from tensorflow_examples.lite.model_customization.core.task.model_spec import efficientnet_b0_spec
from tensorflow_examples.lite.model_customization.core.task.model_spec import ImageModelSpec
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Simple End-to-End ExampleLet's get some images to play with this simple end-to-end example. You could replace it with your own image folders. Hundreds of images is a good start for model customization while more data could achieve better accuracy.
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
The example just consists of 4 lines of code as shown below, each of which representing one step of the overall process. 1. Load input data specific to an on-device ML app.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
###Output
_____no_output_____
###Markdown
2. Customize the TensorFlow model.
###Code
model = image_classifier.create(data)
###Output
_____no_output_____
###Markdown
3. Evaluate the model.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
4. Export to TensorFlow Lite model.
###Code
model.export('image_classifier.tflite', 'image_labels.txt')
###Output
_____no_output_____
###Markdown
After this simple 4 steps, we could further use TensorFlow Lite model file and label file in on-device applications like in [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Detailed ProcessCurrently, we only include MobileNetV2 and EfficientNetB0 models as pre-trained models for image classification. But it is very flexible to add new pre-trained models to this library with just a few lines of code.The following walks through this end-to-end example step by step to show more detail. Step 1: Load Input Data Specific to an On-device ML AppThe flower dataset contains 3670 images belonging to 5 classes. Download the archive version of the dataset and untar it.The dataset has the following directory structure:flower_photos|__ daisy |______ 100080576_f52e8ee070_n.jpg |______ 14167534527_781ceb1b7a_n.jpg |______ ...|__ dandelion |______ 10043234166_e6dd915111_n.jpg |______ 1426682852_e62169221f_m.jpg |______ ...|__ roses |______ 102501987_3cdb8e5394_n.jpg |______ 14982802401_a3dfb22afb.jpg |______ ...|__ sunflowers |______ 12471791574_bb1be83df4.jpg |______ 15122112402_cafa41934f.jpg |______ ...|__ tulips |______ 13976522214_ccec508fe7.jpg |______ 14487943607_651e8062a1_m.jpg |______ ...
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
Use `ImageClassifierDataLoader` class to load data.As for `from_folder()` method, it could load data from the folder. It assumes that the image data of the same class are in the same subdirectory and the subfolder name is the class name. Currently, JPEG-encoded images and PNG-encoded images are supported.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
###Output
_____no_output_____
###Markdown
Show 25 image examples with labels.
###Code
plt.figure(figsize=(10,10))
for i, (image, label) in enumerate(data.dataset.take(25)):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
plt.xlabel(data.index_to_label[label.numpy()])
plt.show()
###Output
_____no_output_____
###Markdown
Step 2: Customize the TensorFlow ModelCreate a custom image classifier model based on the loaded data. The default model is MobileNetV2.
###Code
model = image_classifier.create(data)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Step 3: Evaluate the Customized ModelEvaluate the result of the model, get the loss and accuracy of the model.By default, the results are evaluated on the test data that's splitted in `create` method. Other test data could also be evaluated if served as a parameter.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
We could plot the predicted results in 100 test images. Predicted labels with red color are the wrong predicted results while others are correct.
###Code
# A helper function that returns 'red'/'black' depending on if its two input
# parameter matches or not.
def get_label_color(val1, val2):
if val1 == val2:
return 'black'
else:
return 'red'
# Then plot 100 test images and their predicted labels.
# If a prediction result is different from the label provided label in "test"
# dataset, we will highlight it in red color.
plt.figure(figsize=(20, 20))
predicts = model.predict_topk(model.test_data)
for i, (image, label) in enumerate(model.test_data.dataset.take(100)):
ax = plt.subplot(10, 10, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
predict_label = predicts[i][0][0]
color = get_label_color(predict_label,
model.test_data.index_to_label[label.numpy()])
ax.xaxis.label.set_color(color)
plt.xlabel('Predicted: %s' % predict_label)
plt.show()
###Output
_____no_output_____
###Markdown
If the accuracy doesn't meet the app requirement, one could refer to [Advanced Usage](scrollTo=zNDBP2qA54aK) to explore alternatives such as changing to a larger model, adjusting re-training parameters, etc. Step 4: Export to TensorFlow Lite Model Convert the existing model to the TensorFlow Lite model format and save the image labels in a label file.
###Code
model.export('flower_classifier.tflite', 'flower_labels.txt')
###Output
_____no_output_____
###Markdown
The TensorFlow Lite model file and label file could be used in the [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Taking the Android reference app as an example, we could add `flower_classifier.tflite` and `flower_labels.txt` to the [assets](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/android/app/src/main/assets) folder. Meanwhile, change the label filename in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L65) and the TensorFlow Lite file name in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L60). Thus, we could run the retrained float TensorFlow Lite model on the Android app. Here, we also demonstrate how to use the above files to run and evaluate the TensorFlow Lite model.
###Code
# Read TensorFlow Lite model from TensorFlow Lite file.
with tf.io.gfile.GFile('flower_classifier.tflite', 'rb') as f:
model_content = f.read()
# Read label names from label file.
with tf.io.gfile.GFile('flower_labels.txt', 'r') as f:
label_names = f.read().split('\n')
# Initialize the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=model_content)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
# Run predictions on each test image data and calculate accuracy.
accurate_count = 0
for i, (image, label) in enumerate(model.test_data.dataset):
# Pre-processing should remain the same. Currently, just normalize each pixel value and resize image according to the model's specification.
image, _ = model.preprocess_image(image, label)
# Add batch dimension and convert to float32 to match with the model's input
# data format.
image = tf.expand_dims(image, 0).numpy()
# Run inference.
interpreter.set_tensor(input_index, image)
interpreter.invoke()
# Post-processing: remove batch dimension and find the label with highest
# probability.
predict_label = np.argmax(output()[0])
# Get label name with label index.
predict_label_name = label_names[predict_label]
accurate_count += (predict_label == label.numpy())
accuracy = accurate_count * 1.0 / model.test_data.size
print('TensorFlow Lite model accuracy = %.4f' % accuracy)
###Output
_____no_output_____
###Markdown
Note that preprocessing for inference should be the same as training. Currently, preprocessing contains normalizing each pixel value and resizing the image according to the model's specification. For MobileNetV2, the input image should be normalized to `[0, 1]` and resized to `[224, 224, 3]`. Advanced Usage The `create` function is the critical part of this library. It uses transfer learning with a pretrained model, similar to the [tutorial](https://www.tensorflow.org/tutorials/images/transfer_learning). The `create` function contains the following steps:
1. Split the data into training, validation and testing data according to the parameters `validation_ratio` and `test_ratio`. The default values of `validation_ratio` and `test_ratio` are `0.1` and `0.1`.
2. Download an [Image Feature Vector](https://www.tensorflow.org/hub/common_signatures/images#image_feature_vector) as the base model from TensorFlow Hub. The default pre-trained model is MobileNetV2.
3. Add a classifier head with a Dropout layer with `dropout_rate` between the head layer and the pre-trained model. The default `dropout_rate` is the default `dropout_rate` value from [make_image_classifier_lib](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L55) by TensorFlow Hub.
4. Preprocess the raw input data. Currently, preprocessing steps include normalizing the value of each image pixel to the model's input scale and resizing it to the model's input size. MobileNetV2 has an input scale of `[0, 1]` and an input image size of `[224, 224, 3]`.
5. Feed the data into the classifier model. By default, training parameters such as the number of training epochs, batch size, learning rate and momentum are the default values from [make_image_classifier_lib](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L55) by TensorFlow Hub. Only the classifier head is trained.
In this section, we describe several advanced topics, including switching to a different image classification model, changing the training hyperparameters, etc. Change the model Change to a model that's supported in this library. This library supports the MobileNetV2 and EfficientNetB0 models for now. The default model is MobileNetV2. [EfficientNets](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet) are a family of image classification models that could achieve state-of-the-art accuracy. EfficientNetB0 is one of the EfficientNet models that is small and suitable for on-device applications. It's larger than MobileNetV2 but might achieve better performance. We could switch the model to EfficientNetB0 by just setting the parameter `model_spec` to `efficientnet_b0_spec` in the `create` method.
###Code
model = image_classifier.create(data, model_spec=efficientnet_b0_spec)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained EfficientNetB0 model to see the accuracy and loss in testing data.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
Change to a model in TensorFlow Hub Moreover, we could also switch to other new models that input an image and output a feature vector in TensorFlow Hub format. Taking the [Inception V3](https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1) model as an example, we could define `inception_v3_spec`, which is an object of `ImageModelSpec` and contains the specification of the Inception V3 model. We need to specify the model name `name` and the URL of the TensorFlow Hub model `uri`. Meanwhile, the default value of `input_image_shape` is `[224, 224]`; we need to change it to `[299, 299]` for the Inception V3 model.
###Code
inception_v3_spec = ImageModelSpec(
name='inception_v3',
uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
###Output
_____no_output_____
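###Markdown
As a minimal sketch of how this spec plugs in (assuming `create` accepts it through the same `model_spec` parameter used above with `efficientnet_b0_spec`; this cell is illustrative rather than a verified run):
###Code
# Hedged sketch: retrain using the Inception V3 spec defined above.
# Assumes the same `model_spec` keyword shown earlier for EfficientNetB0.
model = image_classifier.create(data, model_spec=inception_v3_spec)
###Output
_____no_output_____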
###Markdown
Then, by setting the parameter `model_spec` to `inception_v3_spec` in the `create` method, we could retrain the Inception V3 model. The remaining steps are exactly the same, and we could get a customized InceptionV3 TensorFlow Lite model in the end. Change to your own custom model If we'd like to use a custom model that's not in TensorFlow Hub, we should create and export a [ModelSpec](https://www.tensorflow.org/hub/api_docs/python/hub/ModuleSpec) in TensorFlow Hub format, and then define an `ImageModelSpec` object as in the process above. Change the training hyperparameters We could also change training hyperparameters like `epochs`, `dropout_rate` and `batch_size` that could affect the model accuracy. For instance,
* `epochs`: more epochs could achieve better accuracy until convergence, but training for too many epochs may lead to overfitting.
* `dropout_rate`: the rate of the dropout layer in the classifier head, used to avoid overfitting.
* `batch_size`: the number of samples to use in one training step.
For example, we could train with more epochs.
###Code
model = image_classifier.create(data, epochs=10)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained model with 10 training epochs.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
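###Markdown
As a further hedged sketch, the hyperparameters described above could be combined in a single `create` call. This assumes `create` exposes `dropout_rate` and `batch_size` as keyword arguments in the same way it exposes `epochs`; the values below are illustrative only.
###Code
# Hedged sketch: combine several training hyperparameters in one call.
# dropout_rate and batch_size are assumed keyword arguments; 0.2 and 32 are
# illustrative values, not verified defaults for this notebook's version.
model = image_classifier.create(data,
                                epochs=10,
                                dropout_rate=0.2,
                                batch_size=32)
loss, accuracy = model.evaluate()
###Output
_____no_output_____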
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification with TensorFlow Lite model customization with TensorFlow 2.0 The model customization library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes this model customization library to illustrate the adaptation and conversion of a commonly-used image classification model to classify flowers on a mobile device. Prerequisites To run this example, we first need to install several required packages, including the model customization package from the GitHub [repo](https://github.com/tensorflow/examples).
###Code
%tensorflow_version 2.x
!pip install -q git+https://github.com/tensorflow/examples
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_customization.core.data_util.image_dataloader import ImageClassifierDataLoader
from tensorflow_examples.lite.model_customization.core.task import image_classifier
from tensorflow_examples.lite.model_customization.core.task.model_spec import efficientnet_b0_spec
from tensorflow_examples.lite.model_customization.core.task.model_spec import ImageModelSpec
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Simple End-to-End Example Let's get some images to play with for this simple end-to-end example. You could replace them with your own image folders. Hundreds of images is a good start for model customization, while more data could achieve better accuracy.
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
The example consists of just 4 lines of code, as shown below, each of which represents one step of the overall process. 1. Load input data specific to an on-device ML app.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
###Output
_____no_output_____
###Markdown
2. Customize the TensorFlow model.
###Code
model = image_classifier.create(data)
###Output
_____no_output_____
###Markdown
3. Evaluate the model.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
4. Export to TensorFlow Lite model.
###Code
model.export('image_classifier.tflite', 'image_labels.txt')
###Output
_____no_output_____
###Markdown
After these simple 4 steps, we could further use the TensorFlow Lite model file and label file in on-device applications like the [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Detailed Process Currently, we only include MobileNetV2 and EfficientNetB0 models as pre-trained models for image classification, but it is very flexible to add new pre-trained models to this library with just a few lines of code. The following walks through this end-to-end example step by step to show more detail. Step 1: Load Input Data Specific to an On-device ML App The flower dataset contains 3670 images belonging to 5 classes. Download the archive version of the dataset and untar it. The dataset has the following directory structure:
flower_photos
|__ daisy
    |______ 100080576_f52e8ee070_n.jpg
    |______ 14167534527_781ceb1b7a_n.jpg
    |______ ...
|__ dandelion
    |______ 10043234166_e6dd915111_n.jpg
    |______ 1426682852_e62169221f_m.jpg
    |______ ...
|__ roses
    |______ 102501987_3cdb8e5394_n.jpg
    |______ 14982802401_a3dfb22afb.jpg
    |______ ...
|__ sunflowers
    |______ 12471791574_bb1be83df4.jpg
    |______ 15122112402_cafa41934f.jpg
    |______ ...
|__ tulips
    |______ 13976522214_ccec508fe7.jpg
    |______ 14487943607_651e8062a1_m.jpg
    |______ ...
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
Use the `ImageClassifierDataLoader` class to load data. The `from_folder()` method loads data from a folder: it assumes that images of the same class are in the same subdirectory and that the subfolder name is the class name. Currently, JPEG-encoded and PNG-encoded images are supported.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
###Output
_____no_output_____
###Markdown
Show 25 image examples with labels.
###Code
plt.figure(figsize=(10,10))
for i, (image, label) in enumerate(data.dataset.take(25)):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
plt.xlabel(data.index_to_label[label.numpy()])
plt.show()
###Output
_____no_output_____
###Markdown
Step 2: Customize the TensorFlow Model Create a custom image classifier model based on the loaded data. The default model is MobileNetV2.
###Code
model = image_classifier.create(data)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Step 3: Evaluate the Customized Model Evaluate the result of the model and get its loss and accuracy. By default, the results are evaluated on the test data that's split off in the `create` method. Other test data could also be evaluated if passed as a parameter.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
We could plot the predicted results for 100 test images. Predicted labels shown in red are incorrect predictions, while the others are correct.
###Code
# A helper function that returns 'red'/'black' depending on whether its two
# input parameters match or not.
def get_label_color(val1, val2):
  if val1 == val2:
    return 'black'
  else:
    return 'red'
# Then plot 100 test images and their predicted labels.
# If a prediction result is different from the provided label in the "test"
# dataset, we will highlight it in red.
plt.figure(figsize=(20, 20))
for i, (image, label) in enumerate(model.test_data.dataset.take(100)):
ax = plt.subplot(10, 10, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
# Pre-processing should remain the same. Currently, just normalize each pixel value to [0, 1] and resize image to [224, 224, 3].
image, label = model.preprocess_image(image, label)
# Add batch dimension and convert to float32 to match with the model's input
# data format.
image = tf.expand_dims(image, 0).numpy()
predict_prob = model.model.predict(image)
predict_label = np.argmax(predict_prob, axis=1)[0]
ax.xaxis.label.set_color(get_label_color(predict_label,\
label.numpy()))
plt.xlabel('Predicted: %s' % model.test_data.index_to_label[predict_label])
plt.show()
###Output
_____no_output_____
###Markdown
If the accuracy doesn't meet the app requirement, one could refer to [Advanced Usage](scrollTo=zNDBP2qA54aK) to explore alternatives such as changing to a larger model, adjusting re-training parameters, etc. Step 4: Export to TensorFlow Lite Model Convert the existing model to the TensorFlow Lite model format and save the image labels in a label file.
###Code
model.export('flower_classifier.tflite', 'flower_labels.txt')
###Output
_____no_output_____
###Markdown
The TensorFlow Lite model file and label file could be used in the [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Taking the Android reference app as an example, we could add `flower_classifier.tflite` and `flower_labels.txt` to the [assets](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/android/app/src/main/assets) folder. Meanwhile, change the label filename in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L65) and the TensorFlow Lite file name in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L60). Thus, we could run the retrained float TensorFlow Lite model on the Android app. Here, we also demonstrate how to use the above files to run and evaluate the TensorFlow Lite model.
###Code
# Read TensorFlow Lite model from TensorFlow Lite file.
with tf.io.gfile.GFile('flower_classifier.tflite', 'rb') as f:
model_content = f.read()
# Read label names from label file.
with tf.io.gfile.GFile('flower_labels.txt', 'r') as f:
label_names = f.read().split('\n')
# Initialize the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=model_content)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
# Run predictions on each test image data and calculate accuracy.
accurate_count = 0
for i, (image, label) in enumerate(model.test_data.dataset):
# Pre-processing should remain the same. Currently, just normalize each pixel value and resize image according to the model's specification.
image, label = model.preprocess_image(image, label)
# Add batch dimension and convert to float32 to match with the model's input
# data format.
image = tf.expand_dims(image, 0).numpy()
# Run inference.
interpreter.set_tensor(input_index, image)
interpreter.invoke()
# Post-processing: remove batch dimension and find the label with highest
# probability.
predict_label = np.argmax(output()[0])
# Get label name with label index.
predict_label_name = label_names[predict_label]
accurate_count += (predict_label == label.numpy())
accuracy = accurate_count * 1.0 / model.test_data.size
print('TensorFlow Lite model accuracy = %.4f' % accuracy)
###Output
_____no_output_____
###Markdown
Note that preprocessing for inference should be the same as training. Currently, preprocessing contains normalizing each pixel value and resizing the image according to the model's specification. For MobileNetV2, the input image should be normalized to `[0, 1]` and resized to `[224, 224, 3]`. Advanced Usage The `create` function is the critical part of this library. It uses transfer learning with a pretrained model, similar to the [tutorial](https://www.tensorflow.org/tutorials/images/transfer_learning). The `create` function contains the following steps:
1. Split the data into training, validation and testing data according to the parameters `validation_ratio` and `test_ratio`. The default values of `validation_ratio` and `test_ratio` are `0.1` and `0.1`.
2. Download an [Image Feature Vector](https://www.tensorflow.org/hub/common_signatures/images#image_feature_vector) as the base model from TensorFlow Hub. The default pre-trained model is MobileNetV2.
3. Add a classifier head with a Dropout layer with `dropout_rate` between the head layer and the pre-trained model. The default `dropout_rate` is `0.2`.
4. Preprocess the raw input data. Currently, preprocessing steps include normalizing the value of each image pixel to the model's input scale and resizing it to the model's input size. MobileNetV2 has an input scale of `[0, 1]` and an input image size of `[224, 224, 3]`.
5. Feed the data into the classifier model. By default, the number of training epochs is `2`, the batch size is `32`, and only the classifier head is trained.
In this section, we describe several advanced topics, including switching to a different image classification model, changing the training hyperparameters, etc. Change the model Change to a model that's supported in this library. This library supports the MobileNetV2 and EfficientNetB0 models for now. The default model is MobileNetV2. [EfficientNets](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet) are a family of image classification models that could achieve state-of-the-art accuracy. EfficientNetB0 is one of the EfficientNet models that is small and suitable for on-device applications. It's larger than MobileNetV2 but might achieve better performance. We could switch the model to EfficientNetB0 by just setting the parameter `model_spec` to `efficientnet_b0_spec` in the `create` method.
###Code
model = image_classifier.create(data, model_spec=efficientnet_b0_spec)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained EfficientNetB0 model to see the accuracy and loss in testing data.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
Change to a model in TensorFlow Hub Moreover, we could also switch to other new models that input an image and output a feature vector in TensorFlow Hub format. Taking the [Inception V3](https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1) model as an example, we could define `inception_v3_spec`, which is an object of `ImageModelSpec` and contains the specification of the Inception V3 model. We need to specify the model name `name`, the URL of the TensorFlow Hub model `uri`, and the TensorFlow version of the model `tf_version`. Meanwhile, the default value of `input_image_shape` is `[224, 224]`; we need to change it to `[299, 299]` for the Inception V3 model.
###Code
inception_v3_spec = ImageModelSpec(
name='inception_v3',
uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1',
tf_version=1)
inception_v3_spec.input_image_shape = [299, 299]
###Output
_____no_output_____
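###Markdown
As a minimal sketch of how this spec plugs in (assuming `create` accepts it through the same `model_spec` parameter used above with `efficientnet_b0_spec`; this cell is illustrative rather than a verified run):
###Code
# Hedged sketch: retrain using the Inception V3 spec defined above.
model = image_classifier.create(data, model_spec=inception_v3_spec)
###Output
_____no_output_____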
###Markdown
Then, by setting the parameter `model_spec` to `inception_v3_spec` in the `create` method, we could retrain the Inception V3 model. The remaining steps are exactly the same, and we could get a customized InceptionV3 TensorFlow Lite model in the end. Change to your own custom model If we'd like to use a custom model that's not in TensorFlow Hub, we should create and export a [ModelSpec](https://www.tensorflow.org/hub/api_docs/python/hub/ModuleSpec) in TensorFlow Hub format, and then define an `ImageModelSpec` object as in the process above. Change the training hyperparameters We could also change training hyperparameters like `epochs`, `dropout_rate` and `batch_size` that could affect the model accuracy. For instance,
* `epochs`: more epochs could achieve better accuracy until convergence, but training for too many epochs may lead to overfitting.
* `dropout_rate`: the rate of the dropout layer in the classifier head, used to avoid overfitting.
* `batch_size`: the number of samples to use in one training step.
For example, we could train with more epochs.
###Code
model = image_classifier.create(data, epochs=5)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained model with 5 training epochs.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
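###Markdown
As a further hedged sketch, the hyperparameters described above could be combined in a single `create` call. This assumes `create` exposes `dropout_rate` and `batch_size` as keyword arguments in the same way it exposes `epochs`; the values below simply echo the defaults mentioned earlier and are illustrative only.
###Code
# Hedged sketch: combine several training hyperparameters in one call.
# dropout_rate and batch_size are assumed keyword arguments; 0.2 and 32 echo
# the defaults described above.
model = image_classifier.create(data,
                                epochs=5,
                                dropout_rate=0.2,
                                batch_size=32)
loss, accuracy = model.evaluate()
###Output
_____no_output_____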
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification with TensorFlow Lite model customization with TensorFlow 2.0 The model customization library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes this model customization library to illustrate the adaptation and conversion of a commonly-used image classification model to classify flowers on a mobile device. Prerequisites To run this example, we first need to install several required packages, including the model customization package from the GitHub [repo](https://github.com/tensorflow/examples).
###Code
%tensorflow_version 2.x
!pip install -q tf-hub-nightly==0.8.0.dev201911110007
!pip install -q tf-models-official==2.1.0.dev1
!pip install -q git+https://github.com/tensorflow/examples
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_customization.core.data_util.image_dataloader import ImageClassifierDataLoader
from tensorflow_examples.lite.model_customization.core.task import image_classifier
from tensorflow_examples.lite.model_customization.core.task.model_spec import efficientnet_b0_spec
from tensorflow_examples.lite.model_customization.core.task.model_spec import ImageModelSpec
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Simple End-to-End Example Let's get some images to play with for this simple end-to-end example. You could replace them with your own image folders. Hundreds of images is a good start for model customization, while more data could achieve better accuracy.
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
The example consists of just 4 lines of code, as shown below, each of which represents one step of the overall process. 1. Load input data specific to an on-device ML app. Split it into training data and testing data.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
###Output
_____no_output_____
###Markdown
2. Customize the TensorFlow model.
###Code
model = image_classifier.create(train_data)
###Output
_____no_output_____
###Markdown
3. Evaluate the model.
###Code
loss, accuracy = model.evaluate(test_data)
###Output
_____no_output_____
###Markdown
4. Export to TensorFlow Lite model.
###Code
model.export('image_classifier.tflite', 'image_labels.txt')
###Output
_____no_output_____
###Markdown
After these simple 4 steps, we could further use the TensorFlow Lite model file and label file in on-device applications like the [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Detailed Process Currently, we only include MobileNetV2 and EfficientNetB0 models as pre-trained models for image classification, but it is very flexible to add new pre-trained models to this library with just a few lines of code. The following walks through this end-to-end example step by step to show more detail. Step 1: Load Input Data Specific to an On-device ML App The flower dataset contains 3670 images belonging to 5 classes. Download the archive version of the dataset and untar it. The dataset has the following directory structure:
flower_photos
|__ daisy
    |______ 100080576_f52e8ee070_n.jpg
    |______ 14167534527_781ceb1b7a_n.jpg
    |______ ...
|__ dandelion
    |______ 10043234166_e6dd915111_n.jpg
    |______ 1426682852_e62169221f_m.jpg
    |______ ...
|__ roses
    |______ 102501987_3cdb8e5394_n.jpg
    |______ 14982802401_a3dfb22afb.jpg
    |______ ...
|__ sunflowers
    |______ 12471791574_bb1be83df4.jpg
    |______ 15122112402_cafa41934f.jpg
    |______ ...
|__ tulips
    |______ 13976522214_ccec508fe7.jpg
    |______ 14487943607_651e8062a1_m.jpg
    |______ ...
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
Use the `ImageClassifierDataLoader` class to load data. The `from_folder()` method loads data from a folder: it assumes that images of the same class are in the same subdirectory and that the subfolder name is the class name. Currently, JPEG-encoded and PNG-encoded images are supported.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
###Output
_____no_output_____
###Markdown
Split it into training data (80%), validation data (10%, optional) and testing data (10%).
###Code
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)
###Output
_____no_output_____
###Markdown
Show 25 image examples with labels.
###Code
plt.figure(figsize=(10,10))
for i, (image, label) in enumerate(data.dataset.take(25)):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
plt.xlabel(data.index_to_label[label.numpy()])
plt.show()
###Output
_____no_output_____
###Markdown
Step 2: Customize the TensorFlow Model Create a custom image classifier model based on the loaded data. The default model is MobileNetV2.
###Code
model = image_classifier.create(train_data, validation_data=validation_data)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Step 3: Evaluate the Customized Model Evaluate the result of the model and get its loss and accuracy.
###Code
loss, accuracy = model.evaluate(test_data)
###Output
_____no_output_____
###Markdown
We could plot the predicted results for 100 test images. Predicted labels shown in red are incorrect predictions, while the others are correct.
###Code
# A helper function that returns 'red'/'black' depending on whether its two
# input parameters match or not.
def get_label_color(val1, val2):
  if val1 == val2:
    return 'black'
  else:
    return 'red'
# Then plot 100 test images and their predicted labels.
# If a prediction result is different from the provided label in the "test"
# dataset, we will highlight it in red.
plt.figure(figsize=(20, 20))
predicts = model.predict_top_k(test_data)
for i, (image, label) in enumerate(test_data.dataset.take(100)):
ax = plt.subplot(10, 10, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
predict_label = predicts[i][0][0]
color = get_label_color(predict_label,
test_data.index_to_label[label.numpy()])
ax.xaxis.label.set_color(color)
plt.xlabel('Predicted: %s' % predict_label)
plt.show()
###Output
_____no_output_____
###Markdown
If the accuracy doesn't meet the app requirement, one could refer to [Advanced Usage](scrollTo=zNDBP2qA54aK) to explore alternatives such as changing to a larger model, adjusting re-training parameters, etc. Step 4: Export to TensorFlow Lite Model Convert the existing model to the TensorFlow Lite model format and save the image labels in a label file.
###Code
model.export('flower_classifier.tflite', 'flower_labels.txt')
###Output
_____no_output_____
###Markdown
The TensorFlow Lite model file and label file could be used in the [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Taking the Android reference app as an example, we could add `flower_classifier.tflite` and `flower_labels.txt` to the [assets](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/android/app/src/main/assets) folder. Meanwhile, change the label filename in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L65) and the TensorFlow Lite file name in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L60). Thus, we could run the retrained float TensorFlow Lite model on the Android app. Here, we also demonstrate how to use the above files to run and evaluate the TensorFlow Lite model.
###Code
# Read TensorFlow Lite model from TensorFlow Lite file.
with tf.io.gfile.GFile('flower_classifier.tflite', 'rb') as f:
model_content = f.read()
# Read label names from label file.
with tf.io.gfile.GFile('flower_labels.txt', 'r') as f:
label_names = f.read().split('\n')
# Initialize the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=model_content)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
# Run predictions on each test image data and calculate accuracy.
accurate_count = 0
for i, (image, label) in enumerate(test_data.dataset):
# Pre-processing should remain the same. Currently, just normalize each pixel value and resize image according to the model's specification.
image, _ = model.preprocess(image, label)
# Add batch dimension and convert to float32 to match with the model's input
# data format.
image = tf.expand_dims(image, 0).numpy()
# Run inference.
interpreter.set_tensor(input_index, image)
interpreter.invoke()
# Post-processing: remove batch dimension and find the label with highest
# probability.
predict_label = np.argmax(output()[0])
# Get label name with label index.
predict_label_name = label_names[predict_label]
accurate_count += (predict_label == label.numpy())
accuracy = accurate_count * 1.0 / test_data.size
print('TensorFlow Lite model accuracy = %.4f' % accuracy)
###Output
_____no_output_____
###Markdown
Note that preprocessing for inference should be the same as training. Currently, preprocessing contains normalizing each pixel value and resizing the image according to the model's specification. For MobileNetV2, the input image should be normalized to `[0, 1]` and resized to `[224, 224, 3]`. Advanced Usage The `create` function is the critical part of this library. It uses transfer learning with a pretrained model, similar to the [tutorial](https://www.tensorflow.org/tutorials/images/transfer_learning). The `create` function contains the following steps:
1. Split the data into training, validation and testing data according to the parameters `validation_ratio` and `test_ratio`. The default values of `validation_ratio` and `test_ratio` are `0.1` and `0.1`.
2. Download an [Image Feature Vector](https://www.tensorflow.org/hub/common_signatures/images#image_feature_vector) as the base model from TensorFlow Hub. The default pre-trained model is MobileNetV2.
3. Add a classifier head with a Dropout layer with `dropout_rate` between the head layer and the pre-trained model. The default `dropout_rate` is the default `dropout_rate` value from [make_image_classifier_lib](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L55) by TensorFlow Hub.
4. Preprocess the raw input data. Currently, preprocessing steps include normalizing the value of each image pixel to the model's input scale and resizing it to the model's input size. MobileNetV2 has an input scale of `[0, 1]` and an input image size of `[224, 224, 3]`.
5. Feed the data into the classifier model. By default, training parameters such as the number of training epochs, batch size, learning rate and momentum are the default values from [make_image_classifier_lib](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L55) by TensorFlow Hub. Only the classifier head is trained.
In this section, we describe several advanced topics, including switching to a different image classification model, changing the training hyperparameters, etc. Change the model Change to a model that's supported in this library. This library supports the MobileNetV2 and EfficientNetB0 models for now. The default model is MobileNetV2. [EfficientNets](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet) are a family of image classification models that could achieve state-of-the-art accuracy. EfficientNetB0 is one of the EfficientNet models that is small and suitable for on-device applications. It's larger than MobileNetV2 but might achieve better performance. We could switch the model to EfficientNetB0 by just setting the parameter `model_spec` to `efficientnet_b0_spec` in the `create` method.
###Code
model = image_classifier.create(train_data, model_spec=efficientnet_b0_spec, validation_data=validation_data)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained EfficientNetB0 model to see the accuracy and loss in testing data.
###Code
loss, accuracy = model.evaluate(test_data)
###Output
_____no_output_____
###Markdown
Change to a model in TensorFlow Hub Moreover, we could also switch to other new models that input an image and output a feature vector in TensorFlow Hub format. Taking the [Inception V3](https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1) model as an example, we could define `inception_v3_spec`, which is an object of `ImageModelSpec` and contains the specification of the Inception V3 model. We need to specify the model name `name` and the URL of the TensorFlow Hub model `uri`. Meanwhile, the default value of `input_image_shape` is `[224, 224]`; we need to change it to `[299, 299]` for the Inception V3 model.
###Code
inception_v3_spec = ImageModelSpec(
uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
###Output
_____no_output_____
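###Markdown
As a minimal sketch of how this spec plugs in (assuming `create` accepts it through the same `model_spec` parameter used above with `efficientnet_b0_spec`; this cell is illustrative rather than a verified run):
###Code
# Hedged sketch: retrain using the Inception V3 spec defined above.
model = image_classifier.create(train_data,
                                model_spec=inception_v3_spec,
                                validation_data=validation_data)
###Output
_____no_output_____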
###Markdown
Then, by setting the parameter `model_spec` to `inception_v3_spec` in the `create` method, we could retrain the Inception V3 model. The remaining steps are exactly the same, and we could get a customized InceptionV3 TensorFlow Lite model in the end. Change to your own custom model If we'd like to use a custom model that's not in TensorFlow Hub, we should create and export a [ModelSpec](https://www.tensorflow.org/hub/api_docs/python/hub/ModuleSpec) in TensorFlow Hub format, and then define an `ImageModelSpec` object as in the process above. Change the training hyperparameters We could also change training hyperparameters like `epochs`, `dropout_rate` and `batch_size` that could affect the model accuracy. For instance,
* `epochs`: more epochs could achieve better accuracy until convergence, but training for too many epochs may lead to overfitting.
* `dropout_rate`: the rate of the dropout layer in the classifier head, used to avoid overfitting.
* `batch_size`: the number of samples to use in one training step.
* `validation_data`: the validation dataset used to check model quality during training.
For example, we could train with more epochs.
###Code
model = image_classifier.create(train_data, validation_data=validation_data, epochs=10)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained model with 10 training epochs.
###Code
loss, accuracy = model.evaluate(test_data)
###Output
_____no_output_____
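###Markdown
As a further hedged sketch, the hyperparameters described above could be combined in a single `create` call. This assumes `create` exposes `dropout_rate` and `batch_size` as keyword arguments in the same way it exposes `epochs`; the values below are illustrative only.
###Code
# Hedged sketch: combine several training hyperparameters in one call.
# dropout_rate and batch_size are assumed keyword arguments; 0.2 and 32 are
# illustrative values, not verified defaults for this notebook's version.
model = image_classifier.create(train_data,
                                validation_data=validation_data,
                                epochs=10,
                                dropout_rate=0.2,
                                batch_size=32)
loss, accuracy = model.evaluate(test_data)
###Output
_____no_output_____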
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification with TensorFlow Lite model customization with TensorFlow 2.0 The model customization library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes this model customization library to illustrate the adaptation and conversion of a commonly-used image classification model to classify flowers on a mobile device. Prerequisites To run this example, we first need to install several required packages, including the model customization package from the GitHub [repo](https://github.com/tensorflow/examples).
###Code
!pip uninstall -y -q tensorflow fancyimpute
!pip install -q git+git://github.com/tensorflow/examples.git#egg=tensorflow-examples[model_customization]
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_customization.core.data_util.image_dataloader import ImageClassifierDataLoader
from tensorflow_examples.lite.model_customization.core.task import image_classifier
from tensorflow_examples.lite.model_customization.core.task.model_spec import efficientnet_b0_spec
from tensorflow_examples.lite.model_customization.core.task.model_spec import ImageModelSpec
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Simple End-to-End Example Let's get some images to play with for this simple end-to-end example. You could replace them with your own image folders. Hundreds of images is a good start for model customization, while more data could achieve better accuracy.
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
The example consists of just 4 lines of code, as shown below, each of which represents one step of the overall process. 1. Load input data specific to an on-device ML app. Split it into training data and testing data.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
###Output
_____no_output_____
###Markdown
2. Customize the TensorFlow model.
###Code
model = image_classifier.create(train_data)
###Output
_____no_output_____
###Markdown
3. Evaluate the model.
###Code
loss, accuracy = model.evaluate(test_data)
###Output
_____no_output_____
###Markdown
4. Export to TensorFlow Lite model.
###Code
model.export('image_classifier.tflite', 'image_labels.txt')
###Output
_____no_output_____
###Markdown
After these simple 4 steps, we could further use the TensorFlow Lite model file and label file in on-device applications like the [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Detailed Process Currently, we only include MobileNetV2 and EfficientNetB0 models as pre-trained models for image classification, but it is very flexible to add new pre-trained models to this library with just a few lines of code. The following walks through this end-to-end example step by step to show more detail. Step 1: Load Input Data Specific to an On-device ML App The flower dataset contains 3670 images belonging to 5 classes. Download the archive version of the dataset and untar it. The dataset has the following directory structure:
flower_photos
|__ daisy
    |______ 100080576_f52e8ee070_n.jpg
    |______ 14167534527_781ceb1b7a_n.jpg
    |______ ...
|__ dandelion
    |______ 10043234166_e6dd915111_n.jpg
    |______ 1426682852_e62169221f_m.jpg
    |______ ...
|__ roses
    |______ 102501987_3cdb8e5394_n.jpg
    |______ 14982802401_a3dfb22afb.jpg
    |______ ...
|__ sunflowers
    |______ 12471791574_bb1be83df4.jpg
    |______ 15122112402_cafa41934f.jpg
    |______ ...
|__ tulips
    |______ 13976522214_ccec508fe7.jpg
    |______ 14487943607_651e8062a1_m.jpg
    |______ ...
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
Use the `ImageClassifierDataLoader` class to load data. The `from_folder()` method loads data from a folder: it assumes that images of the same class are in the same subdirectory and that the subfolder name is the class name. Currently, JPEG-encoded and PNG-encoded images are supported.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
###Output
_____no_output_____
###Markdown
Split it into training data (80%), validation data (10%, optional) and testing data (10%).
###Code
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)
###Output
_____no_output_____
###Markdown
Show 25 image examples with labels.
###Code
plt.figure(figsize=(10,10))
for i, (image, label) in enumerate(data.dataset.take(25)):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
plt.xlabel(data.index_to_label[label.numpy()])
plt.show()
###Output
_____no_output_____
###Markdown
Step 2: Customize the TensorFlow Model Create a custom image classifier model based on the loaded data. The default model is MobileNetV2.
###Code
model = image_classifier.create(train_data, validation_data=validation_data)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Step 3: Evaluate the Customized Model Evaluate the result of the model and get its loss and accuracy.
###Code
loss, accuracy = model.evaluate(test_data)
###Output
_____no_output_____
###Markdown
We could plot the predicted results for 100 test images. Predicted labels shown in red are incorrect predictions, while the others are correct.
###Code
# A helper function that returns 'red'/'black' depending on whether its two
# input parameters match or not.
def get_label_color(val1, val2):
  if val1 == val2:
    return 'black'
  else:
    return 'red'
# Then plot 100 test images and their predicted labels.
# If a prediction result is different from the provided label in the "test"
# dataset, we will highlight it in red.
plt.figure(figsize=(20, 20))
predicts = model.predict_top_k(test_data)
for i, (image, label) in enumerate(test_data.dataset.take(100)):
ax = plt.subplot(10, 10, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
predict_label = predicts[i][0][0]
color = get_label_color(predict_label,
test_data.index_to_label[label.numpy()])
ax.xaxis.label.set_color(color)
plt.xlabel('Predicted: %s' % predict_label)
plt.show()
###Output
_____no_output_____
###Markdown
If the accuracy doesn't meet the app requirement, one could refer to [Advanced Usage](scrollTo=zNDBP2qA54aK) to explore alternatives such as changing to a larger model, adjusting re-training parameters, etc. Step 4: Export to TensorFlow Lite Model Convert the existing model to the TensorFlow Lite model format and save the image labels in a label file.
###Code
model.export('flower_classifier.tflite', 'flower_labels.txt')
###Output
_____no_output_____
###Markdown
The TensorFlow Lite model file and label file could be used in the [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Taking the Android reference app as an example, we could add `flower_classifier.tflite` and `flower_labels.txt` to the [assets](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/android/app/src/main/assets) folder. Meanwhile, change the label filename in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L65) and the TensorFlow Lite file name in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L60). Thus, we could run the retrained float TensorFlow Lite model on the Android app. Here, we also demonstrate how to use the above files to run and evaluate the TensorFlow Lite model.
###Code
# Read TensorFlow Lite model from TensorFlow Lite file.
with tf.io.gfile.GFile('flower_classifier.tflite', 'rb') as f:
model_content = f.read()
# Read label names from label file.
with tf.io.gfile.GFile('flower_labels.txt', 'r') as f:
label_names = f.read().split('\n')
# Initialize the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=model_content)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
# Run predictions on each test image data and calculate accuracy.
accurate_count = 0
for i, (image, label) in enumerate(test_data.dataset):
# Pre-processing should remain the same. Currently, just normalize each pixel value and resize image according to the model's specification.
image, _ = model.preprocess(image, label)
# Add batch dimension and convert to float32 to match with the model's input
# data format.
image = tf.expand_dims(image, 0).numpy()
# Run inference.
interpreter.set_tensor(input_index, image)
interpreter.invoke()
# Post-processing: remove batch dimension and find the label with highest
# probability.
predict_label = np.argmax(output()[0])
# Get label name with label index.
predict_label_name = label_names[predict_label]
accurate_count += (predict_label == label.numpy())
accuracy = accurate_count * 1.0 / test_data.size
print('TensorFlow Lite model accuracy = %.4f' % accuracy)
###Output
_____no_output_____
###Markdown
Note that preprocessing for inference should be the same as training. Currently, preprocessing contains normalizing each pixel value and resizing the image according to the model's specification. For MobileNetV2, the input image should be normalized to `[0, 1]` and resized to `[224, 224, 3]`. Advanced Usage The `create` function is the critical part of this library. It uses transfer learning with a pretrained model, similar to the [tutorial](https://www.tensorflow.org/tutorials/images/transfer_learning). The `create` function contains the following steps:
1. Split the data into training, validation and testing data according to the parameters `validation_ratio` and `test_ratio`. The default values of `validation_ratio` and `test_ratio` are `0.1` and `0.1`.
2. Download an [Image Feature Vector](https://www.tensorflow.org/hub/common_signatures/images#image_feature_vector) as the base model from TensorFlow Hub. The default pre-trained model is MobileNetV2.
3. Add a classifier head with a Dropout layer with `dropout_rate` between the head layer and the pre-trained model. The default `dropout_rate` is the default `dropout_rate` value from [make_image_classifier_lib](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L55) by TensorFlow Hub.
4. Preprocess the raw input data. Currently, preprocessing steps include normalizing the value of each image pixel to the model's input scale and resizing it to the model's input size. MobileNetV2 has an input scale of `[0, 1]` and an input image size of `[224, 224, 3]`.
5. Feed the data into the classifier model. By default, training parameters such as the number of training epochs, batch size, learning rate and momentum are the default values from [make_image_classifier_lib](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L55) by TensorFlow Hub. Only the classifier head is trained.
In this section, we describe several advanced topics, including switching to a different image classification model, changing the training hyperparameters, etc. Change the model Change to a model that's supported in this library. This library supports the MobileNetV2 and EfficientNetB0 models for now. The default model is MobileNetV2. [EfficientNets](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet) are a family of image classification models that could achieve state-of-the-art accuracy. EfficientNetB0 is one of the EfficientNet models that is small and suitable for on-device applications. It's larger than MobileNetV2 but might achieve better performance. We could switch the model to EfficientNetB0 by just setting the parameter `model_spec` to `efficientnet_b0_spec` in the `create` method.
###Code
model = image_classifier.create(train_data, model_spec=efficientnet_b0_spec, validation_data=validation_data)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained EfficientNetB0 model to see the accuracy and loss in testing data.
###Code
loss, accuracy = model.evaluate(test_data)
###Output
_____no_output_____
###Markdown
Change to a model in TensorFlow Hub Moreover, we could also switch to other new models that input an image and output a feature vector in TensorFlow Hub format. Taking the [Inception V3](https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1) model as an example, we could define `inception_v3_spec`, which is an object of `ImageModelSpec` and contains the specification of the Inception V3 model. We need to specify the model name `name` and the URL of the TensorFlow Hub model `uri`. Meanwhile, the default value of `input_image_shape` is `[224, 224]`; we need to change it to `[299, 299]` for the Inception V3 model.
###Code
inception_v3_spec = ImageModelSpec(
uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
###Output
_____no_output_____
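###Markdown
As a minimal sketch of how this spec plugs in, and of how a spec could point at your own model exported in TensorFlow Hub format (the `model_spec` parameter is the same one used above with `efficientnet_b0_spec`; the custom path below is a placeholder, and the cell is illustrative rather than a verified run):
###Code
# Hedged sketch: retrain using the Inception V3 spec defined above.
model = image_classifier.create(train_data,
                                model_spec=inception_v3_spec,
                                validation_data=validation_data)

# Hedged sketch: a spec for your own exported TF Hub-format feature-vector
# module. The uri is a placeholder path; input_image_shape must match the
# model's expected input size.
custom_spec = ImageModelSpec(uri='/tmp/my_feature_vector_module')
custom_spec.input_image_shape = [224, 224]
###Output
_____no_output_____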
###Markdown
Then, by setting the parameter `model_spec` to `inception_v3_spec` in the `create` method, we could retrain the Inception V3 model. The remaining steps are exactly the same, and we could get a customized InceptionV3 TensorFlow Lite model in the end. Change to your own custom model If we'd like to use a custom model that's not in TensorFlow Hub, we should create and export a [ModelSpec](https://www.tensorflow.org/hub/api_docs/python/hub/ModuleSpec) in TensorFlow Hub format, and then define an `ImageModelSpec` object as in the process above. Change the training hyperparameters We could also change training hyperparameters like `epochs`, `dropout_rate` and `batch_size` that could affect the model accuracy. For instance,
* `epochs`: more epochs could achieve better accuracy until convergence, but training for too many epochs may lead to overfitting.
* `dropout_rate`: the rate of the dropout layer in the classifier head, used to avoid overfitting.
* `batch_size`: the number of samples to use in one training step.
* `validation_data`: the validation dataset used to check model quality during training.
For example, we could train with more epochs.
###Code
model = image_classifier.create(train_data, validation_data=validation_data, epochs=10)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained model with 10 training epochs.
###Code
loss, accuracy = model.evaluate(test_data)
###Output
_____no_output_____
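###Markdown
As a further hedged sketch, the hyperparameters described above could be combined in a single `create` call. This assumes `create` exposes `dropout_rate` and `batch_size` as keyword arguments in the same way it exposes `epochs`; the values below are illustrative only.
###Code
# Hedged sketch: combine several training hyperparameters in one call.
# dropout_rate and batch_size are assumed keyword arguments; 0.2 and 32 are
# illustrative values, not verified defaults for this notebook's version.
model = image_classifier.create(train_data,
                                validation_data=validation_data,
                                epochs=10,
                                dropout_rate=0.2,
                                batch_size=32)
loss, accuracy = model.evaluate(test_data)
###Output
_____no_output_____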
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification with TensorFlow Lite model customization with TensorFlow 2.0 The model customization library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes this model customization library to illustrate the adaptation and conversion of a commonly-used image classification model to classify flowers on a mobile device. Prerequisites To run this example, we first need to install several required packages, including the model customization package from the GitHub [repo](https://github.com/tensorflow/examples).
###Code
%tensorflow_version 2.x
!pip install -q tf-hub-nightly==0.8.0.dev201911110007
!pip install -q git+https://github.com/tensorflow/examples
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_customization.core.data_util.image_dataloader import ImageClassifierDataLoader
from tensorflow_examples.lite.model_customization.core.task import image_classifier
from tensorflow_examples.lite.model_customization.core.task.model_spec import efficientnet_b0_spec
from tensorflow_examples.lite.model_customization.core.task.model_spec import ImageModelSpec
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Simple End-to-End Example Let's get some images to play with in this simple end-to-end example. You could replace them with your own image folders. Hundreds of images are a good start for model customization, while more data could achieve better accuracy.
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
The example just consists of 4 lines of code, as shown below, each of which represents one step of the overall process. 1. Load input data specific to an on-device ML app.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
###Output
_____no_output_____
###Markdown
2. Customize the TensorFlow model.
###Code
model = image_classifier.create(data)
###Output
_____no_output_____
###Markdown
3. Evaluate the model.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
4. Export to TensorFlow Lite model.
###Code
model.export('image_classifier.tflite', 'image_labels.txt')
###Output
_____no_output_____
###Markdown
After this simple 4 steps, we could further use TensorFlow Lite model file and label file in on-device applications like in [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Detailed ProcessCurrently, we only include MobileNetV2 and EfficientNetB0 models as pre-trained models for image classification. But it is very flexible to add new pre-trained models to this library with just a few lines of code.The following walks through this end-to-end example step by step to show more detail. Step 1: Load Input Data Specific to an On-device ML AppThe flower dataset contains 3670 images belonging to 5 classes. Download the archive version of the dataset and untar it.The dataset has the following directory structure:flower_photos|__ daisy |______ 100080576_f52e8ee070_n.jpg |______ 14167534527_781ceb1b7a_n.jpg |______ ...|__ dandelion |______ 10043234166_e6dd915111_n.jpg |______ 1426682852_e62169221f_m.jpg |______ ...|__ roses |______ 102501987_3cdb8e5394_n.jpg |______ 14982802401_a3dfb22afb.jpg |______ ...|__ sunflowers |______ 12471791574_bb1be83df4.jpg |______ 15122112402_cafa41934f.jpg |______ ...|__ tulips |______ 13976522214_ccec508fe7.jpg |______ 14487943607_651e8062a1_m.jpg |______ ...
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
Use the `ImageClassifierDataLoader` class to load data. The `from_folder()` method can load data from a folder. It assumes that image data of the same class are in the same subdirectory and that the subfolder name is the class name. Currently, JPEG-encoded images and PNG-encoded images are supported.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
###Output
_____no_output_____
###Markdown
Show 25 image examples with labels.
###Code
plt.figure(figsize=(10,10))
for i, (image, label) in enumerate(data.dataset.take(25)):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
plt.xlabel(data.index_to_label[label.numpy()])
plt.show()
###Output
_____no_output_____
###Markdown
Step 2: Customize the TensorFlow ModelCreate a custom image classifier model based on the loaded data. The default model is MobileNetV2.
###Code
model = image_classifier.create(data)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Step 3: Evaluate the Customized Model Evaluate the result of the model and get its loss and accuracy. By default, the results are evaluated on the test data that's split off in the `create` method. Other test data could also be evaluated if passed as a parameter.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
We could plot the predicted results for 100 test images. Predicted labels shown in red are incorrect predictions; the others are correct.
###Code
# A helper function that returns 'red'/'black' depending on whether its two input
# parameters match or not.
def get_label_color(val1, val2):
if val1 == val2:
return 'black'
else:
return 'red'
# Then plot 100 test images and their predicted labels.
# If a prediction result is different from the provided label in the "test"
# dataset, we will highlight it in red.
plt.figure(figsize=(20, 20))
for i, (image, label) in enumerate(model.test_data.dataset.take(100)):
ax = plt.subplot(10, 10, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
# Pre-processing should remain the same. Currently, just normalize each pixel value to [0, 1] and resize image to [224, 224, 3].
image, _ = model.preprocess_image(image, label)
# Add batch dimension and convert to float32 to match with the model's input
# data format.
image = tf.expand_dims(image, 0).numpy()
predict_prob = model.model.predict(image)
predict_label = np.argmax(predict_prob, axis=1)[0]
ax.xaxis.label.set_color(get_label_color(predict_label,\
label.numpy()))
plt.xlabel('Predicted: %s' % model.test_data.index_to_label[predict_label])
plt.show()
###Output
_____no_output_____
###Markdown
If the accuracy doesn't meet the app requirement, one could refer to [Advanced Usage](#scrollTo=zNDBP2qA54aK) to explore alternatives such as changing to a larger model, adjusting re-training parameters, etc. Step 4: Export to TensorFlow Lite Model Convert the existing model to the TensorFlow Lite model format and save the image labels in a label file.
###Code
model.export('flower_classifier.tflite', 'flower_labels.txt')
###Output
_____no_output_____
###Markdown
The TensorFlow Lite model file and label file could be used in the [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Taking the Android reference app as an example, we could add `flower_classifier.tflite` and `flower_labels.txt` to the [assets](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/android/app/src/main/assets) folder. Meanwhile, change the label filename in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L65) and the TensorFlow Lite file name in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L60). Thus, we could run the retrained float TensorFlow Lite model on the Android app. Here, we also demonstrate how to use the above files to run and evaluate the TensorFlow Lite model.
###Code
# Read TensorFlow Lite model from TensorFlow Lite file.
with tf.io.gfile.GFile('flower_classifier.tflite', 'rb') as f:
model_content = f.read()
# Read label names from label file.
with tf.io.gfile.GFile('flower_labels.txt', 'r') as f:
label_names = f.read().split('\n')
# Initialize the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=model_content)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
# Run predictions on each test image data and calculate accuracy.
accurate_count = 0
for i, (image, label) in enumerate(model.test_data.dataset):
# Pre-processing should remain the same. Currently, just normalize each pixel value and resize image according to the model's specification.
image, _ = model.preprocess_image(image, label)
# Add batch dimension and convert to float32 to match with the model's input
# data format.
image = tf.expand_dims(image, 0).numpy()
# Run inference.
interpreter.set_tensor(input_index, image)
interpreter.invoke()
# Post-processing: remove batch dimension and find the label with highest
# probability.
predict_label = np.argmax(output()[0])
# Get label name with label index.
predict_label_name = label_names[predict_label]
accurate_count += (predict_label == label.numpy())
accuracy = accurate_count * 1.0 / model.test_data.size
print('TensorFlow Lite model accuracy = %.4f' % accuracy)
###Output
_____no_output_____
###Markdown
Note that preprocessing for inference should be the same as training. Currently, preprocessing consists of normalizing each pixel value and resizing the image according to the model's specification. For MobileNetV2, the input image should be normalized to `[0, 1]` and resized to `[224, 224, 3]`. Advanced Usage The `create` function is the critical part of this library. It uses transfer learning with a pretrained model, similar to the [tutorial](https://www.tensorflow.org/tutorials/images/transfer_learning). The `create` function contains the following steps: 1. Split the data into training, validation and testing data according to the parameters `validation_ratio` and `test_ratio`. The default values of `validation_ratio` and `test_ratio` are `0.1` and `0.1`. 2. Download an [Image Feature Vector](https://www.tensorflow.org/hub/common_signatures/images#image_feature_vector) as the base model from TensorFlow Hub. The default pre-trained model is MobileNetV2. 3. Add a classifier head with a Dropout layer with `dropout_rate` between the head layer and the pre-trained model. The default `dropout_rate` is the default `dropout_rate` value from [make_image_classifier_lib](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L55) by TensorFlow Hub. 4. Preprocess the raw input data. Currently, preprocessing steps include normalizing the value of each image pixel to the model's input scale and resizing it to the model's input size. MobileNetV2 has the input scale `[0, 1]` and the input image size `[224, 224, 3]`. 5. Feed the data into the classifier model. By default, training parameters such as training epochs, batch size, learning rate and momentum are the default values from [make_image_classifier_lib](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L55) by TensorFlow Hub. Only the classifier head is trained. In this section, we describe several advanced topics, including switching to a different image classification model, changing the training hyperparameters, etc. Change the model Change to a model that's supported in this library. This library supports the MobileNetV2 and EfficientNetB0 models for now. The default model is MobileNetV2. [EfficientNets](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet) are a family of image classification models that can achieve state-of-the-art accuracy. EfficientNetB0 is one of the EfficientNet models that's small and suitable for on-device applications. It's larger than MobileNetV2 but might achieve better performance. We could switch the model to EfficientNetB0 by simply setting the parameter `model_spec` to `efficientnet_b0_spec` in the `create` method.
###Code
model = image_classifier.create(data, model_spec=efficientnet_b0_spec)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained EfficientNetB0 model to see the accuracy and loss in testing data.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
Change to the model in TensorFlow Hub Moreover, we could also switch to other models that take an image as input and output a feature vector in TensorFlow Hub format. Taking the [Inception V3](https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1) model as an example, we could define `inception_v3_spec`, an object of `ImageModelSpec` that contains the specification of the Inception V3 model. We need to specify the model name `name` and the URL of the TensorFlow Hub model `uri`. Meanwhile, the default value of `input_image_shape` is `[224, 224]`; we need to change it to `[299, 299]` for the Inception V3 model.
###Code
inception_v3_spec = ImageModelSpec(
name='inception_v3',
uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
###Output
_____no_output_____
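###Markdown
As a minimal, hedged sketch of the retraining call described next (reusing the `data` loaded earlier in this notebook), passing this spec to `create` would look like:
###Code
# Sketch only: retrain using the Inception V3 spec defined above.
model = image_classifier.create(data, model_spec=inception_v3_spec)
###Output
_____no_output_____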
###Markdown
Then, by setting the parameter `model_spec` to `inception_v3_spec` in the `create` method (as sketched above), we could retrain the Inception V3 model. The remaining steps are exactly the same, and we would get a customized Inception V3 TensorFlow Lite model in the end. Change to your own custom model If we'd like to use a custom model that's not in TensorFlow Hub, we should create and export a [ModelSpec](https://www.tensorflow.org/hub/api_docs/python/hub/ModuleSpec) in TensorFlow Hub format, and then define an `ImageModelSpec` object as in the process above. Change the training hyperparameters We could also change training hyperparameters like `epochs`, `dropout_rate` and `batch_size` that affect the model accuracy. For instance, * `epochs`: more epochs could achieve better accuracy until convergence, but training for too many epochs may lead to overfitting. * `dropout_rate`: the dropout rate applied in the classifier head, used to avoid overfitting. * `batch_size`: the number of samples to use in one training step. For example, we could train with more epochs.
###Code
model = image_classifier.create(data, epochs=10)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained model with 10 training epochs.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification with TensorFlow Lite model customization with TensorFlow 2.0 The model customization library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes this model customization library to illustrate the adaptation and conversion of a commonly-used image classification model to classify flowers on a mobile device. Prerequisites To run this example, we first need to install several required packages, including the model customization package that is in the GitHub [repo](https://github.com/tensorflow/examples).
###Code
%tensorflow_version 2.x
!pip install -q tf-hub-nightly==0.8.0.dev201911110007
!pip install -q git+https://github.com/tensorflow/examples
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_customization.core.data_util.image_dataloader import ImageClassifierDataLoader
from tensorflow_examples.lite.model_customization.core.task import image_classifier
from tensorflow_examples.lite.model_customization.core.task.model_spec import efficientnet_b0_spec
from tensorflow_examples.lite.model_customization.core.task.model_spec import ImageModelSpec
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Simple End-to-End Example Let's get some images to play with in this simple end-to-end example. You could replace them with your own image folders. Hundreds of images are a good start for model customization, while more data could achieve better accuracy.
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
The example just consists of 4 lines of code, as shown below, each of which represents one step of the overall process. 1. Load input data specific to an on-device ML app.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
###Output
_____no_output_____
###Markdown
2. Customize the TensorFlow model.
###Code
model = image_classifier.create(data)
###Output
_____no_output_____
###Markdown
3. Evaluate the model.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
4. Export to TensorFlow Lite model.
###Code
model.export('image_classifier.tflite', 'image_labels.txt')
###Output
_____no_output_____
###Markdown
After this simple 4 steps, we could further use TensorFlow Lite model file and label file in on-device applications like in [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Detailed ProcessCurrently, we only include MobileNetV2 and EfficientNetB0 models as pre-trained models for image classification. But it is very flexible to add new pre-trained models to this library with just a few lines of code.The following walks through this end-to-end example step by step to show more detail. Step 1: Load Input Data Specific to an On-device ML AppThe flower dataset contains 3670 images belonging to 5 classes. Download the archive version of the dataset and untar it.The dataset has the following directory structure:flower_photos|__ daisy |______ 100080576_f52e8ee070_n.jpg |______ 14167534527_781ceb1b7a_n.jpg |______ ...|__ dandelion |______ 10043234166_e6dd915111_n.jpg |______ 1426682852_e62169221f_m.jpg |______ ...|__ roses |______ 102501987_3cdb8e5394_n.jpg |______ 14982802401_a3dfb22afb.jpg |______ ...|__ sunflowers |______ 12471791574_bb1be83df4.jpg |______ 15122112402_cafa41934f.jpg |______ ...|__ tulips |______ 13976522214_ccec508fe7.jpg |______ 14487943607_651e8062a1_m.jpg |______ ...
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
Use the `ImageClassifierDataLoader` class to load data. The `from_folder()` method can load data from a folder. It assumes that image data of the same class are in the same subdirectory and that the subfolder name is the class name. Currently, JPEG-encoded images and PNG-encoded images are supported.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
###Output
_____no_output_____
###Markdown
Show 25 image examples with labels.
###Code
plt.figure(figsize=(10,10))
for i, (image, label) in enumerate(data.dataset.take(25)):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
plt.xlabel(data.index_to_label[label.numpy()])
plt.show()
###Output
_____no_output_____
###Markdown
Step 2: Customize the TensorFlow ModelCreate a custom image classifier model based on the loaded data. The default model is MobileNetV2.
###Code
model = image_classifier.create(data)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Step 3: Evaluate the Customized Model Evaluate the result of the model and get its loss and accuracy. By default, the results are evaluated on the test data that's split off in the `create` method. Other test data could also be evaluated if passed as a parameter.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
We could plot the predicted results for 100 test images. Predicted labels shown in red are incorrect predictions; the others are correct.
###Code
# A helper function that returns 'red'/'black' depending on whether its two input
# parameters match or not.
def get_label_color(val1, val2):
if val1 == val2:
return 'black'
else:
return 'red'
# Then plot 100 test images and their predicted labels.
# If a prediction result is different from the provided label in the "test"
# dataset, we will highlight it in red.
plt.figure(figsize=(20, 20))
predicts = model.predict_topk(model.test_data)
for i, (image, label) in enumerate(model.test_data.dataset.take(100)):
ax = plt.subplot(10, 10, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
predict_label = predicts[i][0][0]
color = get_label_color(predict_label,
model.test_data.index_to_label[label.numpy()])
ax.xaxis.label.set_color(color)
plt.xlabel('Predicted: %s' % predict_label)
plt.show()
###Output
_____no_output_____
###Markdown
If the accuracy doesn't meet the app requirement, one could refer to [Advanced Usage](#scrollTo=zNDBP2qA54aK) to explore alternatives such as changing to a larger model, adjusting re-training parameters, etc. Step 4: Export to TensorFlow Lite Model Convert the existing model to the TensorFlow Lite model format and save the image labels in a label file.
###Code
model.export('flower_classifier.tflite', 'flower_labels.txt')
###Output
_____no_output_____
###Markdown
The TensorFlow Lite model file and label file could be used in the [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Taking the Android reference app as an example, we could add `flower_classifier.tflite` and `flower_labels.txt` to the [assets](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/android/app/src/main/assets) folder. Meanwhile, change the label filename in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L65) and the TensorFlow Lite file name in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L60). Thus, we could run the retrained float TensorFlow Lite model on the Android app. Here, we also demonstrate how to use the above files to run and evaluate the TensorFlow Lite model.
###Code
# Read TensorFlow Lite model from TensorFlow Lite file.
with tf.io.gfile.GFile('flower_classifier.tflite', 'rb') as f:
model_content = f.read()
# Read label names from label file.
with tf.io.gfile.GFile('flower_labels.txt', 'r') as f:
label_names = f.read().split('\n')
# Initialize the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=model_content)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
# Run predictions on each test image data and calculate accuracy.
accurate_count = 0
for i, (image, label) in enumerate(model.test_data.dataset):
# Pre-processing should remain the same. Currently, just normalize each pixel value and resize image according to the model's specification.
image, _ = model.preprocess(image, label)
# Add batch dimension and convert to float32 to match with the model's input
# data format.
image = tf.expand_dims(image, 0).numpy()
# Run inference.
interpreter.set_tensor(input_index, image)
interpreter.invoke()
# Post-processing: remove batch dimension and find the label with highest
# probability.
predict_label = np.argmax(output()[0])
# Get label name with label index.
predict_label_name = label_names[predict_label]
accurate_count += (predict_label == label.numpy())
accuracy = accurate_count * 1.0 / model.test_data.size
print('TensorFlow Lite model accuracy = %.4f' % accuracy)
###Output
_____no_output_____
###Markdown
Note that preprocessing for inference should be the same as training. Currently, preprocessing consists of normalizing each pixel value and resizing the image according to the model's specification. For MobileNetV2, the input image should be normalized to `[0, 1]` and resized to `[224, 224, 3]`. Advanced Usage The `create` function is the critical part of this library. It uses transfer learning with a pretrained model, similar to the [tutorial](https://www.tensorflow.org/tutorials/images/transfer_learning). The `create` function contains the following steps: 1. Split the data into training, validation and testing data according to the parameters `validation_ratio` and `test_ratio`. The default values of `validation_ratio` and `test_ratio` are `0.1` and `0.1`. 2. Download an [Image Feature Vector](https://www.tensorflow.org/hub/common_signatures/images#image_feature_vector) as the base model from TensorFlow Hub. The default pre-trained model is MobileNetV2. 3. Add a classifier head with a Dropout layer with `dropout_rate` between the head layer and the pre-trained model. The default `dropout_rate` is the default `dropout_rate` value from [make_image_classifier_lib](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L55) by TensorFlow Hub. 4. Preprocess the raw input data. Currently, preprocessing steps include normalizing the value of each image pixel to the model's input scale and resizing it to the model's input size. MobileNetV2 has the input scale `[0, 1]` and the input image size `[224, 224, 3]`. 5. Feed the data into the classifier model. By default, training parameters such as training epochs, batch size, learning rate and momentum are the default values from [make_image_classifier_lib](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L55) by TensorFlow Hub. Only the classifier head is trained. In this section, we describe several advanced topics, including switching to a different image classification model, changing the training hyperparameters, etc. Change the model Change to a model that's supported in this library. This library supports the MobileNetV2 and EfficientNetB0 models for now. The default model is MobileNetV2. [EfficientNets](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet) are a family of image classification models that can achieve state-of-the-art accuracy. EfficientNetB0 is one of the EfficientNet models that's small and suitable for on-device applications. It's larger than MobileNetV2 but might achieve better performance. We could switch the model to EfficientNetB0 by simply setting the parameter `model_spec` to `efficientnet_b0_spec` in the `create` method.
###Code
model = image_classifier.create(data, model_spec=efficientnet_b0_spec)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained EfficientNetB0 model to see the accuracy and loss in testing data.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
Change to the model in TensorFlow Hub Moreover, we could also switch to other models that take an image as input and output a feature vector in TensorFlow Hub format. Taking the [Inception V3](https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1) model as an example, we could define `inception_v3_spec`, an object of `ImageModelSpec` that contains the specification of the Inception V3 model. We need to specify the model name `name` and the URL of the TensorFlow Hub model `uri`. Meanwhile, the default value of `input_image_shape` is `[224, 224]`; we need to change it to `[299, 299]` for the Inception V3 model.
###Code
inception_v3_spec = ImageModelSpec(
name='inception_v3',
uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
###Output
_____no_output_____
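###Markdown
As a minimal, hedged sketch of the retraining call described next (reusing the `data` loaded earlier in this notebook), passing this spec to `create` would look like:
###Code
# Sketch only: retrain using the Inception V3 spec defined above.
model = image_classifier.create(data, model_spec=inception_v3_spec)
###Output
_____no_output_____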
###Markdown
Then, by setting the parameter `model_spec` to `inception_v3_spec` in the `create` method (as sketched above), we could retrain the Inception V3 model. The remaining steps are exactly the same, and we would get a customized Inception V3 TensorFlow Lite model in the end. Change to your own custom model If we'd like to use a custom model that's not in TensorFlow Hub, we should create and export a [ModelSpec](https://www.tensorflow.org/hub/api_docs/python/hub/ModuleSpec) in TensorFlow Hub format, and then define an `ImageModelSpec` object as in the process above. Change the training hyperparameters We could also change training hyperparameters like `epochs`, `dropout_rate` and `batch_size` that affect the model accuracy. For instance, * `epochs`: more epochs could achieve better accuracy until convergence, but training for too many epochs may lead to overfitting. * `dropout_rate`: the dropout rate applied in the classifier head, used to avoid overfitting. * `batch_size`: the number of samples to use in one training step. For example, we could train with more epochs.
###Code
model = image_classifier.create(data, epochs=10)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained model with 10 training epochs.
###Code
loss, accuracy = model.evaluate()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Image classification with TensorFlow Lite model customization with TensorFlow 2.0 The model customization library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications. This notebook shows an end-to-end example that utilizes this model customization library to illustrate the adaptation and conversion of a commonly-used image classification model to classify flowers on a mobile device. Prerequisites To run this example, we first need to install several required packages, including the model customization package that is in the GitHub [repo](https://github.com/tensorflow/examples).
###Code
%tensorflow_version 2.x
!pip install -q tf-hub-nightly==0.8.0.dev201911110007
!pip install -q git+https://github.com/tensorflow/examples
###Output
_____no_output_____
###Markdown
Import the required packages.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
assert tf.__version__.startswith('2')
from tensorflow_examples.lite.model_customization.core.data_util.image_dataloader import ImageClassifierDataLoader
from tensorflow_examples.lite.model_customization.core.task import image_classifier
from tensorflow_examples.lite.model_customization.core.task.model_spec import efficientnet_b0_spec
from tensorflow_examples.lite.model_customization.core.task.model_spec import ImageModelSpec
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Simple End-to-End Example Let's get some images to play with in this simple end-to-end example. You could replace them with your own image folders. Hundreds of images are a good start for model customization, while more data could achieve better accuracy.
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
The example just consists of 4 lines of code, as shown below, each of which represents one step of the overall process. 1. Load input data specific to an on-device ML app. Split it into training data and testing data.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
train_data, test_data = data.split(0.9)
###Output
_____no_output_____
###Markdown
2. Customize the TensorFlow model.
###Code
model = image_classifier.create(train_data)
###Output
_____no_output_____
###Markdown
3. Evaluate the model.
###Code
loss, accuracy = model.evaluate(test_data)
###Output
_____no_output_____
###Markdown
4. Export to TensorFlow Lite model.
###Code
model.export('image_classifier.tflite', 'image_labels.txt')
###Output
_____no_output_____
###Markdown
After this simple 4 steps, we could further use TensorFlow Lite model file and label file in on-device applications like in [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Detailed ProcessCurrently, we only include MobileNetV2 and EfficientNetB0 models as pre-trained models for image classification. But it is very flexible to add new pre-trained models to this library with just a few lines of code.The following walks through this end-to-end example step by step to show more detail. Step 1: Load Input Data Specific to an On-device ML AppThe flower dataset contains 3670 images belonging to 5 classes. Download the archive version of the dataset and untar it.The dataset has the following directory structure:flower_photos|__ daisy |______ 100080576_f52e8ee070_n.jpg |______ 14167534527_781ceb1b7a_n.jpg |______ ...|__ dandelion |______ 10043234166_e6dd915111_n.jpg |______ 1426682852_e62169221f_m.jpg |______ ...|__ roses |______ 102501987_3cdb8e5394_n.jpg |______ 14982802401_a3dfb22afb.jpg |______ ...|__ sunflowers |______ 12471791574_bb1be83df4.jpg |______ 15122112402_cafa41934f.jpg |______ ...|__ tulips |______ 13976522214_ccec508fe7.jpg |______ 14487943607_651e8062a1_m.jpg |______ ...
###Code
image_path = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
###Output
_____no_output_____
###Markdown
Use the `ImageClassifierDataLoader` class to load data. The `from_folder()` method can load data from a folder. It assumes that image data of the same class are in the same subdirectory and that the subfolder name is the class name. Currently, JPEG-encoded images and PNG-encoded images are supported.
###Code
data = ImageClassifierDataLoader.from_folder(image_path)
###Output
_____no_output_____
###Markdown
Split it into training data (80%), validation data (10%, optional) and testing data (10%).
###Code
train_data, rest_data = data.split(0.8)
validation_data, test_data = rest_data.split(0.5)
###Output
_____no_output_____
###Markdown
Show 25 image examples with labels.
###Code
plt.figure(figsize=(10,10))
for i, (image, label) in enumerate(data.dataset.take(25)):
plt.subplot(5,5,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
plt.xlabel(data.index_to_label[label.numpy()])
plt.show()
###Output
_____no_output_____
###Markdown
Step 2: Customize the TensorFlow ModelCreate a custom image classifier model based on the loaded data. The default model is MobileNetV2.
###Code
model = image_classifier.create(train_data, validation_data=validation_data)
###Output
_____no_output_____
###Markdown
Have a look at the detailed model structure.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Step 3: Evaluate the Customized Model Evaluate the result of the model and get its loss and accuracy.
###Code
loss, accuracy = model.evaluate(test_data)
###Output
_____no_output_____
###Markdown
We could plot the predicted results for 100 test images. Predicted labels shown in red are incorrect predictions; the others are correct.
###Code
# A helper function that returns 'red'/'black' depending on whether its two input
# parameters match or not.
def get_label_color(val1, val2):
if val1 == val2:
return 'black'
else:
return 'red'
# Then plot 100 test images and their predicted labels.
# If a prediction result is different from the provided label in the "test"
# dataset, we will highlight it in red.
plt.figure(figsize=(20, 20))
predicts = model.predict_top_k(test_data)
for i, (image, label) in enumerate(test_data.dataset.take(100)):
ax = plt.subplot(10, 10, i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(image.numpy(), cmap=plt.cm.gray)
predict_label = predicts[i][0][0]
color = get_label_color(predict_label,
test_data.index_to_label[label.numpy()])
ax.xaxis.label.set_color(color)
plt.xlabel('Predicted: %s' % predict_label)
plt.show()
###Output
_____no_output_____
###Markdown
If the accuracy doesn't meet the app requirement, one could refer to [Advanced Usage](#scrollTo=zNDBP2qA54aK) to explore alternatives such as changing to a larger model, adjusting re-training parameters, etc. Step 4: Export to TensorFlow Lite Model Convert the existing model to the TensorFlow Lite model format and save the image labels in a label file.
###Code
model.export('flower_classifier.tflite', 'flower_labels.txt')
###Output
_____no_output_____
###Markdown
The TensorFlow Lite model file and label file could be used in the [image classification](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification) reference app. Taking the Android reference app as an example, we could add `flower_classifier.tflite` and `flower_labels.txt` to the [assets](https://github.com/tensorflow/examples/tree/master/lite/examples/image_classification/android/app/src/main/assets) folder. Meanwhile, change the label filename in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L65) and the TensorFlow Lite file name in [code](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/tflite/ClassifierFloatMobileNet.java#L60). Thus, we could run the retrained float TensorFlow Lite model on the Android app. Here, we also demonstrate how to use the above files to run and evaluate the TensorFlow Lite model.
###Code
# Read TensorFlow Lite model from TensorFlow Lite file.
with tf.io.gfile.GFile('flower_classifier.tflite', 'rb') as f:
model_content = f.read()
# Read label names from label file.
with tf.io.gfile.GFile('flower_labels.txt', 'r') as f:
label_names = f.read().split('\n')
# Initialize the TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(model_content=model_content)
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]['index']
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
# Run predictions on each test image data and calculate accuracy.
accurate_count = 0
for i, (image, label) in enumerate(test_data.dataset):
# Pre-processing should remain the same. Currently, just normalize each pixel value and resize image according to the model's specification.
image, _ = model.preprocess(image, label)
# Add batch dimension and convert to float32 to match with the model's input
# data format.
image = tf.expand_dims(image, 0).numpy()
# Run inference.
interpreter.set_tensor(input_index, image)
interpreter.invoke()
# Post-processing: remove batch dimension and find the label with highest
# probability.
predict_label = np.argmax(output()[0])
# Get label name with label index.
predict_label_name = label_names[predict_label]
accurate_count += (predict_label == label.numpy())
accuracy = accurate_count * 1.0 / test_data.size
print('TensorFlow Lite model accuracy = %.4f' % accuracy)
###Output
_____no_output_____
###Markdown
Note that preprocessing for inference should be the same as training. Currently, preprocessing consists of normalizing each pixel value and resizing the image according to the model's specification. For MobileNetV2, the input image should be normalized to `[0, 1]` and resized to `[224, 224, 3]`. Advanced Usage The `create` function is the critical part of this library. It uses transfer learning with a pretrained model, similar to the [tutorial](https://www.tensorflow.org/tutorials/images/transfer_learning). The `create` function contains the following steps: 1. Split the data into training, validation and testing data according to the parameters `validation_ratio` and `test_ratio`. The default values of `validation_ratio` and `test_ratio` are `0.1` and `0.1`. 2. Download an [Image Feature Vector](https://www.tensorflow.org/hub/common_signatures/images#image_feature_vector) as the base model from TensorFlow Hub. The default pre-trained model is MobileNetV2. 3. Add a classifier head with a Dropout layer with `dropout_rate` between the head layer and the pre-trained model. The default `dropout_rate` is the default `dropout_rate` value from [make_image_classifier_lib](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L55) by TensorFlow Hub. 4. Preprocess the raw input data. Currently, preprocessing steps include normalizing the value of each image pixel to the model's input scale and resizing it to the model's input size. MobileNetV2 has the input scale `[0, 1]` and the input image size `[224, 224, 3]`. 5. Feed the data into the classifier model. By default, training parameters such as training epochs, batch size, learning rate and momentum are the default values from [make_image_classifier_lib](https://github.com/tensorflow/hub/blob/master/tensorflow_hub/tools/make_image_classifier/make_image_classifier_lib.py#L55) by TensorFlow Hub. Only the classifier head is trained. In this section, we describe several advanced topics, including switching to a different image classification model, changing the training hyperparameters, etc. Change the model Change to a model that's supported in this library. This library supports the MobileNetV2 and EfficientNetB0 models for now. The default model is MobileNetV2. [EfficientNets](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet) are a family of image classification models that can achieve state-of-the-art accuracy. EfficientNetB0 is one of the EfficientNet models that's small and suitable for on-device applications. It's larger than MobileNetV2 but might achieve better performance. We could switch the model to EfficientNetB0 by simply setting the parameter `model_spec` to `efficientnet_b0_spec` in the `create` method.
###Code
model = image_classifier.create(train_data, model_spec=efficientnet_b0_spec, validation_data=validation_data)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained EfficientNetB0 model to see the accuracy and loss in testing data.
###Code
loss, accuracy = model.evaluate(test_data)
###Output
_____no_output_____
###Markdown
Change to the model in TensorFlow Hub Moreover, we could also switch to other models that take an image as input and output a feature vector in TensorFlow Hub format. Taking the [Inception V3](https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1) model as an example, we could define `inception_v3_spec`, an object of `ImageModelSpec` that contains the specification of the Inception V3 model. We need to specify the URL of the TensorFlow Hub model `uri` (a model `name` can optionally be given as well). Meanwhile, the default value of `input_image_shape` is `[224, 224]`; we need to change it to `[299, 299]` for the Inception V3 model.
###Code
inception_v3_spec = ImageModelSpec(
uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')
inception_v3_spec.input_image_shape = [299, 299]
###Output
_____no_output_____
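###Markdown
As a minimal, hedged sketch of the retraining call described next (reusing the `train_data` and `validation_data` splits created earlier in this notebook), passing this spec to `create` would look like:
###Code
# Sketch only: retrain using the Inception V3 spec defined above.
model = image_classifier.create(train_data, model_spec=inception_v3_spec,
                                validation_data=validation_data)
###Output
_____no_output_____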
###Markdown
Then, by setting the parameter `model_spec` to `inception_v3_spec` in the `create` method (as sketched above), we could retrain the Inception V3 model. The remaining steps are exactly the same, and we would get a customized Inception V3 TensorFlow Lite model in the end. Change to your own custom model If we'd like to use a custom model that's not in TensorFlow Hub, we should create and export a [ModelSpec](https://www.tensorflow.org/hub/api_docs/python/hub/ModuleSpec) in TensorFlow Hub format, and then define an `ImageModelSpec` object as in the process above. Change the training hyperparameters We could also change training hyperparameters like `epochs`, `dropout_rate` and `batch_size` that affect the model accuracy. For instance, * `epochs`: more epochs could achieve better accuracy until convergence, but training for too many epochs may lead to overfitting. * `dropout_rate`: the dropout rate applied in the classifier head, used to avoid overfitting. * `batch_size`: the number of samples to use in one training step. * `validation_data`: the validation data used to evaluate the model during training. For example, we could train with more epochs.
###Code
model = image_classifier.create(train_data, validation_data=validation_data, epochs=10)
###Output
_____no_output_____
###Markdown
Evaluate the newly retrained model with 10 training epochs.
###Code
loss, accuracy = model.evaluate(test_data)
###Output
_____no_output_____ |
notebooks/test_new_scenario_classes.ipynb | ###Markdown
test out new RCP, SSP classes
###Code
pip install -e ../../FAIR #git+https://github.com/ClimateImpactLab/FAIR.git@cmip6_scenarios
%matplotlib inline
import sys
sys.path.append('../../FAIR')
import fair
fair.__version__
import numpy as np
from matplotlib import pyplot as plt
plt.style.use('seaborn-darkgrid')
plt.rcParams['figure.figsize'] = (20, 12)
from fair.Scenario import scenario
scenario.Emissions.get_available_scenarios()
CILcolors = ["#3393b0", "#4dc8d6", "#bae8e2", "#ffb35e", "#ff6553", "#ffed63", "#1a1a1a" ]
# - HEX codes for CIL colors
# - Deep blue - 3393b0
# - Medium blue - 4dc8d6
# - Light blue - bae8e2
# - Red - ff6553 --> error in the code. Use red for now.
# - Orange - ffb35e
# - Yellow - ffed63
# - Black - 1a1a1a
scenario.Emissions(scenario="ssp370")
# compile emissions from fair and save to disk
import xarray as xr
import pandas as pd
em=[]
for sii,scen in enumerate(["rcp45","rcp85","ssp119", "ssp126", "ssp434", "ssp245", "ssp460", "ssp370", "ssp585"]):
em.append(xr.DataArray(scenario.Emissions(scenario=scen).emissions[:,1:],dims=["year","gas"],
coords={"year":np.arange(1765,2501),
"gas":scenario.Emissions(scenario=scen).get_gas_species_names()}))
emds = xr.concat(em, dim=pd.Index(["rcp45","rcp85","ssp119", "ssp126", "ssp434",
"ssp245", "ssp460", "ssp370", "ssp585"], name="scenario")).to_dataset(name="emissions")
attrs = {"Description": ("Emissions from FaIR Scenario class for subset of available scenarios. "
"FaIR version is locally installed {} slightly updated from branch {}"
.format(fair.__version__,"git+https://github.com/ClimateImpactLab/FAIR.git@cmip6_scenarios")),
"Created by": "Kelly McCusker <[email protected]>",
"Date": "July 20 2021"}
emds.attrs.update(attrs)
emds
emds.to_netcdf("/gcs/impactlab-data/gcp/climate/probabilization/FAIR-joos-experiments-2021-06-03/rcp-montecarlo/"
"scenario_rcp45-rcp85-ssp245-ssp460-ssp370_baseline_FaIR_emissions.nc")
fig,axs = plt.subplots(2,2)
years = scenario.Emissions(scenario="ssp370").year
emdt = {}
for sii,scen in enumerate(["ssp119", "ssp126", "ssp434", "ssp245", "ssp460", "ssp370", "ssp585"]):
emdt[scen] = scenario.Emissions(scenario=scen).emissions
conc,forc,temp = fair.forward.fair_scm(emissions=emdt[scen])
axs[0,0].plot(years, scenario.Emissions(scenario=scen).co2_fossil, color=CILcolors[sii],label=scen)
axs[0,1].plot(years, conc[:,0], color=CILcolors[sii], label=scen)
axs[1,0].plot(years, np.sum(forc, axis=1), color=CILcolors[sii], label=scen)
axs[1,1].plot(years, temp, color=CILcolors[sii], label=scen)
for sii,scen in enumerate(["rcp26", "rcp45", "rcp60", "rcp85"]):
emdt[scen] = scenario.Emissions(scenario=scen).emissions
conc,forc,temp = fair.forward.fair_scm(emissions=emdt[scen])
axs[0,0].plot(years, scenario.Emissions(scenario=scen).co2_fossil, color=CILcolors[sii], linestyle='dashed', label=scen)
axs[0,1].plot(years, conc[:,0], color=CILcolors[sii], linestyle='dashed', label=scen)
axs[1,0].plot(years, np.sum(forc, axis=1), color=CILcolors[sii], linestyle='dashed', label=scen)
axs[1,1].plot(years, temp, color=CILcolors[sii], linestyle='dashed', label=scen)
axs[0,0].set_title("Fossil CO2 Emissions (GtC)")
axs[0,1].set_title("CO2 Concentration (ppm)")
axs[1,0].set_title("Total Radiative Forcing (W/m2)")
axs[1,1].set_title("Temperature Anomaly (C)")
axs[0,0].legend()
fig,axs = plt.subplots(2,2)
years = scenario.Emissions(scenario=scen).year
emdt = {}
for sii,scen in enumerate(["ssp126", "ssp245", "ssp460", "ssp370", "ssp585"]):
emdt[scen] = scenario.Emissions(scenario=scen).emissions
conc,forc,temp = fair.forward.fair_scm(emissions=emdt[scen])
axs[0,0].plot(years, scenario.Emissions(scenario=scen).co2_fossil, color=CILcolors[sii],label=scen)
axs[0,1].plot(years, conc[:,0], color=CILcolors[sii], label=scen)
axs[1,0].plot(years, np.sum(forc, axis=1), color=CILcolors[sii], label=scen)
axs[1,1].plot(years, temp, color=CILcolors[sii], label=scen)
for sii,scen in enumerate(["rcp26", "rcp45", "rcp60", "skip", "rcp85"]):
if scen == "skip":
continue
emdt[scen] = scenario.Emissions(scenario=scen).emissions
conc,forc,temp = fair.forward.fair_scm(emissions=emdt[scen])
axs[0,0].plot(years, scenario.Emissions(scenario=scen).co2_fossil, color=CILcolors[sii], linestyle='dashed', label=scen)
axs[0,1].plot(years, conc[:,0], color=CILcolors[sii], linestyle='dashed', label=scen)
axs[1,0].plot(years, np.sum(forc, axis=1), color=CILcolors[sii], linestyle='dashed', label=scen)
axs[1,1].plot(years, temp, color=CILcolors[sii], linestyle='dashed', label=scen)
axs[0,0].set_title("Fossil CO2 Emissions (GtC)")
axs[0,1].set_title("CO2 Concentration (ppm)")
axs[1,0].set_title("Total Radiative Forcing (W/m2)")
axs[1,1].set_title("Temperature Anomaly (C)")
axs[0,0].set_xlim((1950,2300))
axs[0,1].set_xlim((1950,2300))
axs[1,0].set_xlim((1950,2300))
axs[1,1].set_xlim((1950,2300))
axs[0,0].legend()
###Output
_____no_output_____ |
GradientBoostedTrees/GradientBoostedTrees_Exercise.ipynb | ###Markdown
MNIST Dataset http://yann.lecun.com/exdb/mnist/ MNIST ("Modified National Institute of Standards and Technology") is the de facto "hello world" dataset of computer vision. Since its release in 1999, this classic dataset of handwritten images has served as the basis for benchmarking classification algorithms. As new machine learning techniques emerge, MNIST remains a reliable resource for researchers and learners alike. The MNIST database contains 60,000 training images and 10,000 testing images.
###Code
import pandas as pd
import matplotlib.pyplot as plt
pd.options.display.max_columns = None
random_state = 42
import time
def timer_start():
global t0
t0 = time.time()
def timer_end():
t1 = time.time()
total = t1-t0
print('Time elapsed', total)
return total
###Output
_____no_output_____
###Markdown
Load Data The MNIST data can be loaded through sklearn. The first 60,000 images are training data and the next 10,000 are test data.
###Code
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
X, y = mnist['data'], mnist['target']
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
print('Training Shape {} Test Shape {}'.format(X_train.shape, X_test.shape))
###Output
Training Shape (60000, 784) Test Shape (10000, 784)
###Markdown
Create a Validation set In real-world ML scenarios we create separate train, validation and test sets. We train our model on the training set, optimize it using the validation set, and evaluate on the test set so that we don't induce bias. Since we already have a test set, we need to split the training set into separate training and validation sets. As we will see later, we can do K-fold cross-validation, which removes the necessity of creating a validation set.
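As an aside, a hedged, purely illustrative sketch of that K-fold alternative (using a small RandomForest only as a stand-in estimator; running full cross-validation on all of MNIST is slow):
###Code
# Illustrative only: 5-fold cross-validation accuracy on the training set.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(
    RandomForestClassifier(n_estimators=10, random_state=random_state),
    X_train, y_train, cv=5)
print('CV accuracy per fold:', cv_scores)
###Output
_____no_output_____
###Markdown
For this exercise we instead keep an explicit validation split: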
###Code
from sklearn.model_selection import train_test_split
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size = 0.2,
random_state = random_state, stratify= y_train )
print('Training Shape {} Validation Shape {}'.format(X_train.shape, X_valid.shape))
pd.DataFrame(X_train).head()
###Output
_____no_output_____
###Markdown
Display Sample Image
###Code
import matplotlib
def display_digit(digit):
digit_image = digit.reshape(28,28)
plt.imshow(digit_image, cmap = matplotlib.cm.binary, interpolation = 'nearest')
plt.axis('off')
plt.show()
digit = X_train[92]
display_digit(digit)
###Output
_____no_output_____
###Markdown
Each image consists of 28 × 28 pixels with pixel values from 0 to 255. The pixel values represent greyscale intensity increasing from 0 to 255. As we can see below, the digit 4 can be represented by pixel intensities of varying values, and the regions where pixel intensities have high values are associated with the shape of the 4.
###Code
pd.DataFrame(digit.reshape(28,28))
###Output
_____no_output_____
###Markdown
Target Value Counts
###Code
pd.DataFrame(y_train)[0].value_counts()
###Output
_____no_output_____
###Markdown
Train Model Using Gradient Boosted Machine Training a GBM is extremely slow for a dataset this large, so it is not feasible to use for practical purposes; hence the code is commented out. We will use a better-performing boosted-trees implementation instead. For small datasets this can still be used, hence the code is not deleted.
###Code
# timer_start()
# from sklearn.ensemble import GradientBoostingClassifier
# model = GradientBoostingClassifier(random_state = random_state,
# verbose = 1)
# model.fit(X_train, y_train)
# timer_end()
###Output
_____no_output_____
###Markdown
Validation Set Accuracy
###Code
# from sklearn.metrics import accuracy_score
# y_pred = model.predict(X_valid)
# test_acc = accuracy_score(y_valid, y_pred)
# print('Validation accuracy', test_acc)
###Output
_____no_output_____
###Markdown
Train Model Using LightGBM: Default LightGBM, developed by the Microsoft Research team, is a gradient boosting framework that uses tree-based learning algorithms. It is designed to be distributed and efficient, with the following advantages: faster training speed and higher efficiency, lower memory usage, better accuracy, parallel and GPU learning supported, and the ability to handle large-scale data. https://lightgbm.readthedocs.io/en/latest/ Validation set accuracy The default model gave an impressive accuracy of 97%, compared to the 94.5% accuracy of RandomForest. Train Model Using LightGBM: Tuned with Early Stopping The idea behind early stopping is that we train the model for a large number of iterations but stop when the validation score stops improving. This is a powerful mechanism to deal with overfitting. Validation Set Accuracy Test Set Accuracy The test accuracy of 98.21% for a tuned LightGBM model is better than the 97.06% of the tuned RandomForest. The increase of roughly 1.15 percentage points may not seem like much, but it means about 115 more correct predictions on a test set of 10,000 samples. Random Incorrect Predictions Let's display 10 random images in the test data which were incorrectly predicted by our model. We can notice that some of the images are difficult to identify even for humans.
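Before looking at those predictions, here is a hedged sketch of the LightGBM training described above (package `lightgbm`; fit parameters such as `early_stopping_rounds` vary across library versions, so treat this as an assumption rather than the exercise solution):
###Code
# Sketch only: default LightGBM model, then a refit with early stopping.
import lightgbm as lgb
from sklearn.metrics import accuracy_score
lgbm = lgb.LGBMClassifier(random_state=random_state)
lgbm.fit(X_train, y_train)
print('Validation accuracy (default):', accuracy_score(y_valid, lgbm.predict(X_valid)))
# Early stopping: train for many rounds, stop when the validation score stops improving.
lgbm_es = lgb.LGBMClassifier(n_estimators=1000, random_state=random_state)
lgbm_es.fit(X_train, y_train,
            eval_set=[(X_valid, y_valid)],
            early_stopping_rounds=50,
            verbose=False)
print('Test accuracy (early stopping):', accuracy_score(y_test, lgbm_es.predict(X_test)))
###Output
_____no_output_____
###Markdown
Returning to the incorrectly predicted images described above: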
###Code
def display_incorrect_preds(y_test, y_pred):
test_labels = pd.DataFrame()
test_labels['actual'] = y_test
test_labels['pred'] = y_pred
incorrect_pred = test_labels[test_labels['actual'] != test_labels['pred'] ]
random_incorrect_pred = incorrect_pred.sample(n= 10)
for i, row in random_incorrect_pred.iterrows():
print('Actual Value:', row['actual'], 'Predicted Value:', row['pred'])
display_digit(X_test[i])
###Output
_____no_output_____ |
Regression/Poisson Regression.ipynb | ###Markdown
Looking into Poisson regression, starting from https://docs.pymc.io/notebooks/GLM-linear.html
###Code
%matplotlib inline
from pymc3 import *
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
sns.set(font_scale=1.5)
###Output
_____no_output_____
###Markdown
Start with regular linear regression to understand the tools
###Code
size = 200
true_intercept = 1
true_slope = 2
x = np.linspace(0, 1, size)
# y = a + b*x
true_regression_line = true_intercept + true_slope * x
# add noise
y = true_regression_line + np.random.normal(scale=.5, size=size)
data = dict(x=x, y=y)
df = pd.DataFrame(data)
df.head()
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, xlabel='x', ylabel='y', title='Generated data and underlying model')
ax.plot(x, y, 'x', label='sampled data')
ax.plot(x, true_regression_line, label='true regression line', lw=2.)
plt.legend(loc=0);
sns.lmplot('x','y', data=df)
with Model() as model:
# specify glm and pass in data. The resulting linear model, its likelihood and
# and all its parameters are automatically added to our model.
glm.GLM.from_formula('y ~ x', data)
trace = sample(3000, cores=2) # draw 3000 posterior samples using NUTS sampling
plt.figure(figsize=(7, 7))
traceplot(trace[100:])
plt.tight_layout();
plt.figure(figsize=(7, 7))
plt.plot(x, y, 'x', label='data')
plot_posterior_predictive_glm(trace, samples=100,
label='posterior predictive regression lines')
plt.plot(x, true_regression_line, label='true regression line', lw=3., c='y')
plt.title('Posterior predictive regression lines')
plt.legend(loc=0)
plt.xlabel('x')
plt.ylabel('y');
###Output
_____no_output_____
###Markdown
and now look into this. Something is not quite right with my understanding
###Code
df = pd.read_csv('http://stats.idre.ucla.edu/stat/data/poisson_sim.csv', index_col=0)
df['x'] = df['math']
df['y'] = df['num_awards']
df.head()
df.plot(kind='scatter', x='math', y='num_awards')
with Model() as model:
# specify glm and pass in data. The resulting linear model, its likelihood and
# all its parameters are automatically added to our model.
glm.GLM.from_formula('y ~ x', df)
trace = sample(3000, cores=2) # draw 3000 posterior samples using NUTS sampling
plt.figure(figsize=(7, 7))
traceplot(trace[100:])
plt.tight_layout();
fig, ax = plt.subplots(figsize=(7, 7))
df.plot(kind='scatter', x='x', y='y', ax=ax)
plot_posterior_predictive_glm(trace, eval=np.linspace(0, 80, 100), samples=100)
with Model() as model:
# specify glm and pass in data. The resulting linear model, its likelihood and
# all its parameters are automatically added to our model.
glm.GLM.from_formula('y ~ x', df, family=glm.families.NegativeBinomial())
step = NUTS()
trace = sample(3000, cores=2, step=step) # draw 3000 posterior samples using NUTS sampling
plt.figure(figsize=(7, 7))
traceplot(trace[100:])
plt.tight_layout();
autocorrplot(trace);
fig, ax = plt.subplots(figsize=(7, 7))
df.plot(kind='scatter', x='x', y='y', ax=ax)
plot_posterior_predictive_glm(trace, eval=np.linspace(0, 80, 100), samples=100)
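# The response num_awards is a count, so a Poisson family is the natural GLM here.
# Hedged sketch (not part of the original run), for comparison with the Normal and
# NegativeBinomial fits above; it reuses the same pymc3 GLM interface.
with Model() as poisson_model:
    glm.GLM.from_formula('y ~ x', df, family=glm.families.Poisson())
    trace_poisson = sample(3000, cores=2)
traceplot(trace_poisson[100:])
plt.tight_layout();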
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
|
content/lessons/04/Watch-Me-Code/WMC2-The-Need-For-Exception-Handling.ipynb | ###Markdown
Watch Me Code 2: The Need For Exception HandlingThis demonstrates the need for exception handling.
###Code
# this generates a run-time error when you enter a non-number
# for example enter "heavy" and you get a ValueError
weight = float(input("Enter product weight in Kg: "))
# This example uses try..except to catch the ValueError
try:
weight = float(input("Enter product weight in Kg: "))
print ("Weight is:", weight)
except ValueError:
print("You did not enter a number! ")
###Output
Enter product weight in Kg: fsdgjsdfg
You did not enter a number!
|
docs/Tutorial/CoxRegression.ipynb | ###Markdown
Cox Regression Cox Proportional Hazards Regression Cox Proportional Hazards (CoxPH) regression describes the survival according to several covariates. The difference between CoxPH regression and Kaplan-Meier curves or the logrank tests is that the latter only focus on modeling the survival according to one factor (a categorical predictor is best), while the former is able to take into consideration any covariates simultaneously, regardless of whether they're quantitative or categorical. The model is as follows:$$h(t) = h_0(t)\exp(\eta).$$where,- $\eta = x\beta.$- $t$ is the survival time.- $h(t)$ is the hazard function which evaluates the risk of dying at time $t$.- $h_0(t)$ is called the baseline hazard. It describes the value of the hazard if all the predictors are zero.- $\beta$ measures the impact of covariates.Consider two cases $i$ and $i'$ that have different x values. Their hazard functions can simply be written as follows$$h_i(t) = h_0(t)\exp(\eta_i) = h_0(t)\exp(x_i\beta),$$and$$h_{i'}(t) = h_0(t)\exp(\eta_{i'}) = h_0(t)\exp(x_{i'}\beta).$$The hazard ratio for these two cases is$$\begin{aligned}\frac{h_i(t)}{h_{i'}(t)} & = \frac{h_0(t)\exp(\eta_i)}{h_0(t)\exp(\eta_{i'})} \\ & = \frac{\exp(\eta_i)}{\exp(\eta_{i'})},\end{aligned}$$which is independent of time. For example, if the two cases differ only in the $j$-th covariate by one unit, this ratio reduces to $\exp(\beta_j)$, so each coefficient can be read as a log hazard ratio per unit change of its covariate. Real Data Example Lung Cancer Dataset We are going to apply best subset selection to the NCCTG Lung Cancer Dataset from [https://www.kaggle.com/ukveteran/ncctg-lung-cancer-data](https://www.kaggle.com/ukveteran/ncctg-lung-cancer-data). This dataset consists of survival information of patients with advanced lung cancer from the North Central Cancer Treatment Group. The proportional hazards model allows the analysis of survival data by regression modeling. Linearity is assumed on the log scale of the hazard. The hazard ratio in the Cox proportional hazards model is assumed constant. First, we load the data.
###Code
import pandas as pd
data = pd.read_csv('./cancer.csv')
data = data.drop(data.columns[[0, 1]], axis = 1)
print(data.head())
###Output
time status age sex ph.ecog ph.karno pat.karno meal.cal wt.loss
0 306 2 74 1 1.0 90.0 100.0 1175.0 NaN
1 455 2 68 1 0.0 90.0 90.0 1225.0 15.0
2 1010 1 56 1 0.0 90.0 90.0 NaN 15.0
3 210 2 57 1 1.0 90.0 60.0 1150.0 11.0
4 883 2 60 1 0.0 100.0 90.0 NaN 0.0
###Markdown
Then we remove the rows containing any missing data. After that, we have a total of 168 observations.
###Code
data = data.dropna()
print(data.shape)
###Output
(168, 9)
###Markdown
Then we convert the factor `ph.ecog` into dummy variables:
###Code
data['ph.ecog'] = data['ph.ecog'].astype("category")
data = pd.get_dummies(data)
data = data.drop('ph.ecog_0.0', axis = 1)
print(data.head())
###Output
time status age sex ph.karno pat.karno meal.cal wt.loss \
1 455 2 68 1 90.0 90.0 1225.0 15.0
3 210 2 57 1 90.0 60.0 1150.0 11.0
5 1022 1 74 1 50.0 80.0 513.0 0.0
6 310 2 68 2 70.0 60.0 384.0 10.0
7 361 2 71 2 60.0 80.0 538.0 1.0
ph.ecog_1.0 ph.ecog_2.0 ph.ecog_3.0
1 0 0 0
3 1 0 0
5 1 0 0
6 0 1 0
7 0 1 0
###Markdown
We split the dataset into a training set and a test set. The model is going to be built on the training set and later we will test the model performance on the test set.
###Code
import numpy as np
np.random.seed(0)
ind = np.linspace(1, 168, 168) <= round(168*2/3)
train = np.array(data[ind])
test = np.array(data[~ind])
print('train size: ', train.shape[0])
print('test size:', test.shape[0])
###Output
train size: 112
test size: 56
###Markdown
Model Fitting The `CoxPHSurvivalAnalysis()` function in the `abess` package allows you to perform best subset selection in a highly efficient way. By default, the function implements the abess algorithm with the support size (sparsity level) changing from 0 to $\min\{p, n/\log(n)p\}$, and the best support size is determined by EBIC. You can change the tuning criterion by specifying the argument `ic_type` and the support size by `support_size`. The available tuning criteria are gic, aic, bic and ebic. Here we give an example.
###Code
from abess import CoxPHSurvivalAnalysis
model = CoxPHSurvivalAnalysis(ic_type = 'gic')
model.fit(train[:, 2:], train[:, :2])
###Output
_____no_output_____
###Markdown
After fitting, the coefficients are stored in `model.coef_`, and the non-zero values indicate the variables used in our model.
###Code
print(model.coef_)
###Output
[ 0. -0.379564 0.02248522 0. 0. 0.
0.43729712 1.42127851 2.42095755]
###Markdown
This result shows that 5 variables (the 2nd, 3rd, 7th, 8th and 9th) are chosen for the Cox model. A further analysis can then be based on them. More on the results Hold on, we aren't finished yet. After getting the estimator, we can do some further exploration. For example, you can use some generic steps to quickly draw some information from those estimators. Simply fixing the `support_size` at different levels, you can plot a path of coefficients like:
###Code
import matplotlib.pyplot as plt
coef = np.zeros((10, 9))
ic = np.zeros(10)
for s in range(10):
model = CoxPHSurvivalAnalysis(support_size = s, ic_type = 'gic')
model.fit(train[:, 2:], train[:, :2])
coef[s, :] = model.coef_
ic[s] = model.ic_
for i in range(9):
plt.plot(coef[:, i], label = i)
plt.xlabel('support_size')
plt.ylabel('coefficients')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Or a view of the decreasing information criterion:
###Code
plt.plot(ic, 'o-')
plt.xlabel('support_size')
plt.ylabel('GIC')
plt.show()
###Output
_____no_output_____
###Markdown
Prediction is available for all the estimated models. Just call the `predict()` function on the model you are interested in. The values it returns are $\exp(\eta)=\exp(x\beta)$, which is part of the Cox PH hazard function. Here we give the prediction on the `test` data.
###Code
pred = model.predict(test[:, 2:])
print(pred)
###Output
[11.0015887 11.97954111 8.11705612 3.32130081 2.9957487 3.23167938
5.88030263 8.83474265 6.94981468 2.79778448 4.80124013 8.32868839
6.18472356 7.36597245 2.79540785 7.07729092 3.57284073 6.95551265
3.59051464 8.73668805 3.51029827 4.28617052 5.21830511 5.11465146
2.92670651 2.31996184 7.04845409 4.30246362 7.14805341 3.83570919
6.27832924 6.54442227 8.39353611 5.41713824 4.17823079 4.01469621
8.99693705 3.98562593 3.9922459 2.79743549 3.47347931 4.40471703
6.77413094 4.33542254 6.62834299 9.99006885 8.1177072 20.28383502
14.67346807 2.27915833 5.78151822 4.31221688 3.25950636 6.99318596
7.4368521 3.86339324]
###Markdown
With these predictions, we can compute the hazard ratio between every two observations (by dividing their values). Or, we can also compute the C-Index for our model, i.e., the probability that, for a pair of randomly chosen comparable samples, the sample with the higher risk prediction will experience an event before the other sample or belong to a higher binary class.
###Code
from sksurv.metrics import concordance_index_censored
cindex = concordance_index_censored(test[:, 1] == 2, test[:, 0], pred)
print(cindex[0])
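# As noted above, the ratio of two predicted values is the estimated hazard ratio
# between those observations, e.g. between the first two test patients:
hazard_ratio_01 = pred[0] / pred[1]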
###Output
0.6839080459770115
###Markdown
Cox Regression Cox Proportional Hazards Regression Cox Proportional Hazards (CoxPH) regression describes the survival according to several covariates. The difference between CoxPH regression and Kaplan-Meier curves or the logrank tests is that the latter only focus on modeling the survival according to one factor (a categorical predictor is best), while the former is able to take into consideration any covariates simultaneously, regardless of whether they're quantitative or categorical. The model is as follows:$$h(t) = h_0(t)\exp(\eta).$$where,- $\eta = x\beta.$- $t$ is the survival time.- $h(t)$ is the hazard function which evaluates the risk of dying at time $t$.- $h_0(t)$ is called the baseline hazard. It describes the value of the hazard if all the predictors are zero.- $\beta$ measures the impact of covariates.Consider two cases $i$ and $i'$ that have different x values. Their hazard functions can simply be written as follows$$h_i(t) = h_0(t)\exp(\eta_i) = h_0(t)\exp(x_i\beta),$$and$$h_{i'}(t) = h_0(t)\exp(\eta_{i'}) = h_0(t)\exp(x_{i'}\beta).$$The hazard ratio for these two cases is$$\begin{aligned}\frac{h_i(t)}{h_{i'}(t)} & = \frac{h_0(t)\exp(\eta_i)}{h_0(t)\exp(\eta_{i'})} \\ & = \frac{\exp(\eta_i)}{\exp(\eta_{i'})},\end{aligned}$$which is independent of time. Real Data Example Lung Cancer Dataset We are going to apply best subset selection to the NCCTG Lung Cancer Dataset from [https://www.kaggle.com/ukveteran/ncctg-lung-cancer-data](https://www.kaggle.com/ukveteran/ncctg-lung-cancer-data). This dataset consists of survival information of patients with advanced lung cancer from the North Central Cancer Treatment Group. The proportional hazards model allows the analysis of survival data by regression modeling. Linearity is assumed on the log scale of the hazard. The hazard ratio in the Cox proportional hazards model is assumed constant. First, we load the data.
###Code
import pandas as pd
data = pd.read_csv('./cancer.csv')
data = data.drop(data.columns[[0, 1]], axis = 1)
print(data.head())
###Output
time status age sex ph.ecog ph.karno pat.karno meal.cal wt.loss
0 306 2 74 1 1.0 90.0 100.0 1175.0 NaN
1 455 2 68 1 0.0 90.0 90.0 1225.0 15.0
2 1010 1 56 1 0.0 90.0 90.0 NaN 15.0
3 210 2 57 1 1.0 90.0 60.0 1150.0 11.0
4 883 2 60 1 0.0 100.0 90.0 NaN 0.0
###Markdown
Then we remove the rows containing any missing data. After that, we have a total of 168 observations.
###Code
data = data.dropna()
print(data.shape)
###Output
(168, 9)
###Markdown
Then we convert the factor `ph.ecog` into dummy variables:
###Code
data['ph.ecog'] = data['ph.ecog'].astype("category")
data = pd.get_dummies(data)
data = data.drop('ph.ecog_0.0', axis = 1)
print(data.head())
###Output
time status age sex ph.karno pat.karno meal.cal wt.loss \
1 455 2 68 1 90.0 90.0 1225.0 15.0
3 210 2 57 1 90.0 60.0 1150.0 11.0
5 1022 1 74 1 50.0 80.0 513.0 0.0
6 310 2 68 2 70.0 60.0 384.0 10.0
7 361 2 71 2 60.0 80.0 538.0 1.0
ph.ecog_1.0 ph.ecog_2.0 ph.ecog_3.0
1 0 0 0
3 1 0 0
5 1 0 0
6 0 1 0
7 0 1 0
###Markdown
We split the dataset into a training set and a test set. The model is going to be built on the training set and later we will test the model performance on the test set.
###Code
import numpy as np
np.random.seed(0)
ind = np.linspace(1, 168, 168) <= round(168*2/3)
train = np.array(data[ind])
test = np.array(data[~ind])
print('train size: ', train.shape[0])
print('test size:', test.shape[0])
###Output
train size: 112
test size: 56
###Markdown
Model Fitting The `abessCox()` function in the `abess` package allows you to perform best subset selection in a highly efficient way. By default, the function implements the abess algorithm with the support size (sparsity level) changing from 0 to $\min\{p, n/\log(n)p\}$, and the best support size is determined by EBIC. You can change the tuning criterion by specifying the argument `ic_type` and the support size by `support_size`. The available tuning criteria are gic, aic, bic and ebic. Here we give an example.
###Code
from abess import abessCox
model = abessCox(ic_type = 'gic')
model.fit(train[:, 2:], train[:, :2])
###Output
_____no_output_____
###Markdown
After fitting, the coefficients are stored in `model.coef_`, and the non-zero values indicate the variables used in our model.
###Code
print(model.coef_)
###Output
[ 0. -0.379564 0.02248522 0. 0. 0.
0.43729712 1.42127851 2.42095755]
###Markdown
This result shows that 5 variables (the 2nd, 3rd, 7th, 8th and 9th) are chosen for the Cox model. A further analysis can then be based on them. More on the results Hold on, we aren't finished yet. After getting the estimator, we can do some further exploration. For example, you can use some generic steps to quickly draw some information from those estimators. Simply fixing the `support_size` at different levels, you can plot a path of coefficients like:
###Code
import matplotlib.pyplot as plt
coef = np.zeros((10, 9))
ic = np.zeros(10)
for s in range(10):
model = abessCox(support_size = s, ic_type = 'gic')
model.fit(train[:, 2:], train[:, :2])
coef[s, :] = model.coef_
ic[s] = model.ic_
for i in range(9):
plt.plot(coef[:, i], label = i)
plt.xlabel('support_size')
plt.ylabel('coefficients')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Or a view of the decreasing information criterion:
###Code
plt.plot(ic, 'o-')
plt.xlabel('support_size')
plt.ylabel('GIC')
plt.show()
###Output
_____no_output_____
###Markdown
Prediction is available for all the estimated models. Just call the `predict()` function on the model you are interested in. The values it returns are $\exp(\eta)=\exp(x\beta)$, which is part of the Cox PH hazard function. Here we give the prediction on the `test` data.
###Code
pred = model.predict(test[:, 2:])
print(pred)
###Output
[11.0015887 11.97954111 8.11705612 3.32130081 2.9957487 3.23167938
5.88030263 8.83474265 6.94981468 2.79778448 4.80124013 8.32868839
6.18472356 7.36597245 2.79540785 7.07729092 3.57284073 6.95551265
3.59051464 8.73668805 3.51029827 4.28617052 5.21830511 5.11465146
2.92670651 2.31996184 7.04845409 4.30246362 7.14805341 3.83570919
6.27832924 6.54442227 8.39353611 5.41713824 4.17823079 4.01469621
8.99693705 3.98562593 3.9922459 2.79743549 3.47347931 4.40471703
6.77413094 4.33542254 6.62834299 9.99006885 8.1177072 20.28383502
14.67346807 2.27915833 5.78151822 4.31221688 3.25950636 6.99318596
7.4368521 3.86339324]
###Markdown
With these predictions, we can compute the hazard ratio between every two observations (by dividing their values). Or, we can also compute the C-Index for our model, i.e., the probability that, for a pair of randomly chosen comparable samples, the sample with the higher risk prediction will experience an event before the other sample or belong to a higher binary class.
###Code
from sksurv.metrics import concordance_index_censored
cindex = concordance_index_censored(test[:, 1] == 2, test[:, 0], pred)
print(cindex[0])
###Output
0.6839080459770115
###Markdown
Cox Regression Cox Proportional Hazards Regression Cox Proportional Hazards (CoxPH) regression describes the survival according to several covariates. The difference between CoxPH regression and Kaplan-Meier curves or the logrank tests is that the latter only focus on modeling the survival according to one factor (a categorical predictor is best), while the former is able to take into consideration any covariates simultaneously, regardless of whether they're quantitative or categorical. The model is as follows:$$h(t) = h_0(t)\exp(\eta).$$where,- $\eta = x\beta.$- $t$ is the survival time.- $h(t)$ is the hazard function which evaluates the risk of dying at time $t$.- $h_0(t)$ is called the baseline hazard. It describes the value of the hazard if all the predictors are zero.- $\beta$ measures the impact of covariates.Consider two cases $i$ and $i'$ that have different x values. Their hazard functions can simply be written as follows$$h_i(t) = h_0(t)\exp(\eta_i) = h_0(t)\exp(x_i\beta),$$and$$h_{i'}(t) = h_0(t)\exp(\eta_{i'}) = h_0(t)\exp(x_{i'}\beta).$$The hazard ratio for these two cases is$$\begin{aligned}\frac{h_i(t)}{h_{i'}(t)} & = \frac{h_0(t)\exp(\eta_i)}{h_0(t)\exp(\eta_{i'})} \\ & = \frac{\exp(\eta_i)}{\exp(\eta_{i'})},\end{aligned}$$which is independent of time. Real Data Example Lung Cancer Dataset We are going to apply best subset selection to the NCCTG Lung Cancer Dataset from [https://www.kaggle.com/ukveteran/ncctg-lung-cancer-data](https://www.kaggle.com/ukveteran/ncctg-lung-cancer-data). This dataset consists of survival information of patients with advanced lung cancer from the North Central Cancer Treatment Group. The proportional hazards model allows the analysis of survival data by regression modeling. Linearity is assumed on the log scale of the hazard. The hazard ratio in the Cox proportional hazards model is assumed constant. First, we load the data.
###Code
import pandas as pd
data = pd.read_csv('./cancer.csv')
data = data.drop(data.columns[[0, 1]], axis = 1)
print(data.head())
###Output
time status age sex ph.ecog ph.karno pat.karno meal.cal wt.loss
0 306 2 74 1 1.0 90.0 100.0 1175.0 NaN
1 455 2 68 1 0.0 90.0 90.0 1225.0 15.0
2 1010 1 56 1 0.0 90.0 90.0 NaN 15.0
3 210 2 57 1 1.0 90.0 60.0 1150.0 11.0
4 883 2 60 1 0.0 100.0 90.0 NaN 0.0
###Markdown
Then we remove the rows containing any missing data. After that, we have a total of 168 observations.
###Code
data = data.dropna()
print(data.shape)
###Output
(168, 9)
###Markdown
Then we convert the factor `ph.ecog` into dummy variables:
###Code
data['ph.ecog'] = data['ph.ecog'].astype("category")
data = pd.get_dummies(data)
data = data.drop('ph.ecog_0.0', axis = 1)
print(data.head())
###Output
time status age sex ph.karno pat.karno meal.cal wt.loss \
1 455 2 68 1 90.0 90.0 1225.0 15.0
3 210 2 57 1 90.0 60.0 1150.0 11.0
5 1022 1 74 1 50.0 80.0 513.0 0.0
6 310 2 68 2 70.0 60.0 384.0 10.0
7 361 2 71 2 60.0 80.0 538.0 1.0
ph.ecog_1.0 ph.ecog_2.0 ph.ecog_3.0
1 0 0 0
3 1 0 0
5 1 0 0
6 0 1 0
7 0 1 0
###Markdown
We split the dataset into a training set and a test set. The model is going to be built on the training set and later we will test the model performance on the test set.
###Code
import numpy as np
np.random.seed(0)
ind = np.linspace(1, 168, 168) <= round(168*2/3)
train = np.array(data[ind])
test = np.array(data[~ind])
print('train size: ', train.shape[0])
print('test size:', test.shape[0])
###Output
train size: 112
test size: 56
###Markdown
Model Fitting The `abessCox()` function in the `abess` package allows you to perform best subset selection in a highly efficient way. By default, the function implements the abess algorithm with the support size (sparsity level) changing from 0 to $\min\{p, n/\log(n)p\}$, and the best support size is determined by EBIC. You can change the tuning criterion by specifying the argument `ic_type` and the support size by `support_size`. The available tuning criteria are gic, aic, bic and ebic. Here we give an example.
###Code
from abess import abessCox
model = abessCox(ic_type = 'gic')
model.fit(train[:, 2:], train[:, :2])
###Output
_____no_output_____
###Markdown
After fitting, the coefficients are stored in `model.coef_`, and the non-zero values indicate the variables used in our model.
###Code
print(model.coef_)
###Output
[ 0. -0.379564 0.02248522 0. 0. 0.
0.43729712 1.42127851 2.42095755]
###Markdown
This result shows that 5 variables (the 2nd, 3rd, 7th, 8th and 9th) are chosen for the Cox model. A further analysis can then be based on them. More on the results Hold on, we aren't finished yet. After getting the estimator, we can do some further exploration. For example, you can use some generic steps to quickly draw some information from those estimators. Simply fixing the `support_size` at different levels, you can plot a path of coefficients like:
###Code
import matplotlib.pyplot as plt
pt = np.zeros((10, 9))
ic = np.zeros(10)
for sz in range(10):
model = abessCox(support_size = [sz], ic_type = 'gic')
model.fit(train[:, 2:], train[:, :2])
pt[sz, :] = model.coef_
ic[sz] = model.ic_
for i in range(9):
plt.plot(pt[:, i], label = i)
plt.xlabel('support_size')
plt.ylabel('coefficients')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Or a view of the decreasing information criterion:
###Code
plt.plot(ic, 'o-')
plt.xlabel('support_size')
plt.ylabel('GIC')
plt.show()
###Output
_____no_output_____
###Markdown
Prediction is available for all the estimated models. Just call the `predict()` function on the model you are interested in. The values it returns are $\exp(\eta)=\exp(x\beta)$, which is part of the Cox PH hazard function. Here we give the prediction on the `test` data.
###Code
pred = model.predict(test[:, 2:])
print(pred)
###Output
[11.0015887 11.97954111 8.11705612 3.32130081 2.9957487 3.23167938
5.88030263 8.83474265 6.94981468 2.79778448 4.80124013 8.32868839
6.18472356 7.36597245 2.79540785 7.07729092 3.57284073 6.95551265
3.59051464 8.73668805 3.51029827 4.28617052 5.21830511 5.11465146
2.92670651 2.31996184 7.04845409 4.30246362 7.14805341 3.83570919
6.27832924 6.54442227 8.39353611 5.41713824 4.17823079 4.01469621
8.99693705 3.98562593 3.9922459 2.79743549 3.47347931 4.40471703
6.77413094 4.33542254 6.62834299 9.99006885 8.1177072 20.28383502
14.67346807 2.27915833 5.78151822 4.31221688 3.25950636 6.99318596
7.4368521 3.86339324]
###Markdown
With these predictions, we can compute the hazard ratio between every two observations (by dividing their values). Or, we can also compute the C-Index for our model, i.e., the probability that, for a pair of randomly chosen comparable samples, the sample with the higher risk prediction will experience an event before the other sample or belong to a higher binary class.
###Code
from sksurv.metrics import concordance_index_censored
cindex = concordance_index_censored(test[:, 1] == 2, test[:, 0], pred)
print(cindex[0])
###Output
0.6839080459770115
|
notes/graph_partition.ipynb | ###Markdown
Necessary conditions for Graph partition Install pyQUBO from Recruit Communications Co. Ltd. pip install pyqubo Install openJij from Jij Inc. (startup from Tohoku University) pip install openjij Add networkx for dealing with graph theory pip install networkx Solve Graph Partition import pyQUBO, openJij and numpy
###Code
from pyqubo import Array,Constraint, Placeholder
import openjij as jij
import numpy as np
###Output
_____no_output_____
###Markdown
Array, Constraint and Placeholder are convenient classes from pyQUBO import networkx
###Code
import networkx as nx
###Output
_____no_output_____
###Markdown
Prepare some graph
###Code
nodes = [0, 1, 2, 3, 4, 5]
edges = [
(0, 1), (1, 2), (2, 0),
(1, 5), (0, 3),
(3, 4), (4, 5), (5, 1)
]
###Output
_____no_output_____
###Markdown
Set nodes and edges on Graph G
###Code
G = nx.Graph()
G.add_nodes_from(nodes)
G.add_edges_from(edges)
import matplotlib.pyplot as plt
nx.draw_networkx(G)
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
Prepare spin variables
###Code
N = 6
vartype = "SPIN"
x = Array.create("x",shape=N,vartype=vartype)
###Output
_____no_output_____
###Markdown
"x" is name of variables shape specifies the shape of variables as vector, matrix, or... vartype selects -1 or 1 by "SPIN" and 0 or 1by "BINARY"
###Code
print(x)
###Output
Array([Spin(x[0]), Spin(x[1]), Spin(x[2]), Spin(x[3]), Spin(x[4]), Spin(x[5])])
###Markdown
Define cost function
###Code
E1 = Constraint((np.sum(x))**2,"equal")
E2 = 0
for e in edges:
E2 += 0.5*(1-x[e[0]]*x[e[1]])
Lam = Placeholder('Lam')
E_cost = Lam*E1+E2
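# E1 = (sum of spins)^2 is zero only when the two groups have equal size, so it
# penalizes unbalanced partitions; each term of E2 equals 1 when an edge connects
# spins of opposite sign, so E2 counts the cut edges. Lam is a Placeholder so the
# penalty strength can be chosen later via feed_dict when converting to Ising form.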
###Output
_____no_output_____
###Markdown
Compile the cost function
###Code
model = E_cost.compile()
###Output
_____no_output_____
###Markdown
Get the Ising coefficients (h, J) and offset
###Code
feed_dict = {'Lam': 5.0}
h,J, offset = model.to_ising(feed_dict=feed_dict)
###Output
_____no_output_____
###Markdown
Prepare simulation of quantum annealing
###Code
#simulated quantum annealing
sampler = jij.SQASampler(beta=10.0, gamma=1.0, trotter=4, num_sweeps=100)
#simulated annealing
#sampler = jij.SASampler(num_sweeps=1000)
###Output
_____no_output_____
###Markdown
This is done by quantum Monte Carlo simulation: gamma = strength of the quantum fluctuation, trotter = Trotter number, num_sweeps = length of the MCS. Let's simulate quantum annealing
###Code
response = sampler.sample_ising(h,J)
###Output
_____no_output_____
###Markdown
Show results
###Code
print(response)
response.record["sample"]
###Output
_____no_output_____
###Markdown
show resulting graph
###Code
spin = response.record["sample"][0]
node_colors = [spin[node]>0 for node in G.nodes()]
nx.draw_networkx(G,node_color=node_colors)
plt.axis("off")
plt.show()
###Output
_____no_output_____ |
DeepLearningFromScratch-Chapter3.ipynb | ###Markdown
Chapter 3: Neural Networks 3.2 Activation Functions 3.2.2 Implementing the Step Function
###Code
def step_function(x):
if x > 0:
return 1
else:
return 0
def step_function(x):
y = x > 0
return y.astype(np.int)
import numpy as np
x = np.array([-1.0, 1.0, 2.0])
x
y = x > 0
y
y = y.astype(np.int)
y
###Output
_____no_output_____
###Markdown
3.2.3 Graph of the Step Function
###Code
import numpy as np
import matplotlib.pyplot as plt
def step_function(x):
return np.array(x > 0, dtype=np.int)
x = np.arange(-5.0, 5.0, 0.1)
y = step_function(x)
plt.plot(x, y)
plt.ylim(-0.1, 1.1)
plt.show()
###Output
_____no_output_____
###Markdown
3.2.4 Implementing the Sigmoid Function
###Code
def sigmoid(x):
return 1 / ( 1 + np.exp( -x ))
x = np.array([-1.0, 1.0, 2.0])
sigmoid(x)
t = np.array([1.0, 2.0, 3.0])
1.0 + t
1.0 / t
x = np.arange(-5.0, 5.0, 0.1)
y = sigmoid(x)
plt.plot(x, y)
plt.ylim(-0.1, 1.1)
plt.show()
###Output
_____no_output_____
###Markdown
3.2.7 The ReLU Function
###Code
def relu(x):
return np.maximum(0, x)
x = np.arange(-5.0, 5.0, 0.1)
y = relu(x)
plt.plot(x, y)
plt.ylim(-0.1, 5.1)
plt.show()
###Output
_____no_output_____
###Markdown
3.3 Computing with Multidimensional Arrays 3.3.1 Multidimensional Arrays
###Code
import numpy as np
A = np.array([1, 2, 3, 4])
print(A)
np.ndim(A)
A.shape
A.shape[0]
B = np.array([[1, 2], [3, 4], [5, 6]])
print(B)
np.ndim(B)
B.shape
###Output
_____no_output_____
###Markdown
3.3.2 Matrix Multiplication
###Code
A = np.array([[1, 2], [3, 4]])
A.shape
B = np.array([[5, 6], [7, 8]])
B.shape
np.dot(A, B)
A = np.array([[1, 2, 3], [4, 5, 6]])
A.shape
B = np.array([[1, 2], [3, 4], [5, 6]])
B.shape
np.dot(A, B)
C = np.array([[1, 2], [3, 4]])
C.shape
A.shape
np.dot(A, C)
A = np.array([[1, 2], [3, 4], [5, 6]])
A.shape
B = np.array([7, 8])
B.shape
np.dot(A, B)
###Output
_____no_output_____
###Markdown
3.3.3 Matrix Multiplication in a Neural Network
###Code
X = np.array([1, 2])
X.shape
W = np.array([[1, 3, 5], [2, 4, 6]])
print(W)
W.shape
Y = np.dot(X, W)
print(Y)
###Output
[ 5 11 17]
###Markdown
3.4 Implementing a 3-Layer Neural Network 3.4.2 Implementing Signal Transmission in Each Layer
###Code
X = np.array([1.0, 0.5])
W1 = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
B1 = np.array([0.1, 0.2, 0.3])
print(W1.shape)
print(X.shape)
print(B1.shape)
A1 = np.dot(X, W1) + B1
Z1 = sigmoid(A1)
print(A1)
print(Z1)
W2 = np.array([[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]])
B2 = np.array([0.1, 0.2])
print(Z1.shape)
print(W2.shape)
print(B2.shape)
A2 = np.dot(Z1, W2) + B2
Z2 = sigmoid(A2)
W3 = np.array([[0.1, 0.3], [0.2, 0.4]])
B3 = np.array([0.1, 0.2])
A3 = np.dot(Z2, W3) + B3
#Y = identity_function(A3)
###Output
_____no_output_____
###Markdown
3.4.3 Implementation Summary
###Code
def init_network():
network = {}
network['W1'] = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
network['b1'] = np.array([0.1, 0.2, 0.3])
network['W2'] = np.array([[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]])
network['b2'] = np.array([0.1, 0.2])
network['W3'] = np.array([[0.1, 0.3], [0.2, 0.4]])
network['b3'] = np.array([0.1, 0.2])
return network
def forward(network, x):
W1, W2, W3 = network['W1'], network['W2'], network['W3']
b1, b2, b3 = network['b1'], network['b2'], network['b3']
a1 = np.dot(x, W1) + b1
z1 = sigmoid(a1)
a2 = np.dot(z1, W2) + b2
z2 = sigmoid(a2)
a3 = np.dot(z2, W3) + b3
# y = identify_function(a3)
y = a3
return y
network = init_network()
x = np.array([1.0, 0.5])
y = forward(network, x)
print(y)
###Output
[0.31682708 0.69627909]
###Markdown
3.5 Designing the Output Layer 3.5.1 Identity Function and Softmax Function
###Code
a = np.array([0.3, 2.9, 4.0])
exp_a = np.exp(a)
print(exp_a)
sum_exp_a = np.sum(exp_a)
print(sum_exp_a)
y = exp_a / sum_exp_a
print(y)
def softmax(a):
exp_a = np.exp(a)
sum_exp_a = np.sum(exp_a)
y = exp_a / sum_exp_a
return y
###Output
_____no_output_____
###Markdown
3.5.2 Notes on Implementing the Softmax Function
###Code
a = np.array([1010, 1000, 990])
np.exp(a) / np.sum(np.exp(a))
c = np.max(a)
a - c
np.exp(a - c) / np.sum(np.exp(a - c))
def softmax(a):
c = np.max(a)
exp_a = np.exp(a - c)
sum_exp_a = np.sum(exp_a)
y = exp_a / sum_exp_a
return y
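# Subtracting the maximum c does not change the result, because
# exp(a - c) / sum(exp(a - c)) = exp(a) / sum(exp(a)); it only prevents exp() from overflowing.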
###Output
_____no_output_____
###Markdown
3.5.3 Characteristics of the Softmax Function
###Code
a = np.array([0.3, 2.9, 4.0])
y = softmax(a)
print(y)
np.sum(y)
###Output
_____no_output_____
###Markdown
3.6 Handwritten Digit Recognition 3.6.1 The MNIST Dataset
###Code
import sys, os
sys.path.append('/content/drive/My Drive/Colab Notebooks/DeepLearningFromScratch/official')
from dataset.mnist import load_mnist
(x_train, t_train), (x_test, t_test) = \
load_mnist(flatten=True, normalize=False)
print(x_train.shape)
print(t_train.shape)
print(x_test.shape)
print(t_test.shape)
import sys, os
sys.path.append('/content/drive/My Drive/Colab Notebooks/DeepLearningFromScratch/official')
import numpy as np
from dataset.mnist import load_mnist
from PIL import Image
from matplotlib.pyplot import imshow
def img_show(img):
pil_img = Image.fromarray(np.uint8(img))
imshow(pil_img)
(x_train, t_train), (x_test, t_test) = load_mnist(flatten=True, normalize=False)
img = x_train[0]
label = t_train[0]
print(label)
print(img.shape)
img = img.reshape(28, 28)
print(img.shape)
img_show(img)
###Output
(784,)
(28, 28)
###Markdown
3.6.2 Neural Network Inference. Some of the functions written so far have been exported to functions.py.
###Code
import sys
sys.path.append('/content/drive/My Drive/Colab Notebooks/DeepLearningFromScratch')
import numpy as np
import pickle
from official.dataset.mnist import load_mnist
from functions import sigmoid, softmax
def get_data():
(x_train, t_train), (x_test, t_test) = load_mnist(normalize=True, flatten=True, one_hot_label=False)
return x_test, t_test
def init_network():
with open('/content/drive/My Drive/Colab Notebooks/DeepLearningFromScratch/official/ch03/sample_weight.pkl', 'rb') as f:
network = pickle.load(f)
return network
def predict(network, x):
W1, W2, W3 = network['W1'], network['W2'], network['W3']
b1, b2, b3 = network['b1'], network['b2'], network['b3']
a1 = np.dot(x, W1) + b1
z1 = sigmoid(a1)
a2 = np.dot(z1, W2) + b2
z2 = sigmoid(a2)
a3 = np.dot(z2, W3) + b3
y = softmax(a3)
return y
x, t = get_data()
network = init_network()
accuracy_cnt = 0
for i in range(len(x)):
y = predict(network, x[i])
p = np.argmax(y)
if p == t[i]:
accuracy_cnt += 1
print("Accuracy:" + str(float(accuracy_cnt) / len(x)))
###Output
Accuracy:0.9352
###Markdown
3.6.3 Batch Processing
###Code
x, _ = get_data()
network = init_network()
W1, W2, W3 = network['W1'], network['W2'], network['W3']
x.shape
x[0].shape
W1.shape
W2.shape
W3.shape
x, t = get_data()
network = init_network()
batch_size = 100
accuracy_cnt = 0
for i in range(0, len(x), batch_size):
x_batch = x[i : i+batch_size]
y_batch = predict(network, x_batch)
p = np.argmax(y_batch, axis = 1)
accuracy_cnt += np.sum(p == t[i : i+batch_size])
list( range(0, 10))
list( range(0, 10, 3))
x = np.array([[0.1, 0.8, 0.1], [0.3, 0.1, 0.6], [0.2, 0.5, 0.3], [0.8, 0.1, 0.1]])
y = np.argmax(x, axis=1)
print(y)
y = np.array([1, 2, 1, 0])
t = np.array([1, 2, 0, 0])
print(y==t)
np.sum(y==t)
###Output
_____no_output_____
###Markdown
End of Chapter 3
###Code
###Output
_____no_output_____ |
notebooks/Step_by_Step_dl0_to_dl1.ipynb | ###Markdown
Notebook to go step by step through the selection/reduction/calibration of DL0 data to DL1**Content:**- Data loading- Calibration: - Pedestal subtraction - Peak integration - Conversion of digital counts to photoelectrons. - High gain/low gain combination- Cleaning- Hillas parameters- Disp reconstruction (from Hillas pars)- TEST: High gain/Low gain - Use of pyhessio to access more MC information: - Simulated phe, number of simulated events, simulated energy range, etc. - Calculation of the spectral weight for one event.- TEST: Comparison of Hillas intensity with the simulated number of phe.- Spectral weighting for a set of events. Some imports...
###Code
from ctapipe.utils import get_dataset_path
from ctapipe.io import event_source
from ctapipe.io.eventseeker import EventSeeker
import astropy.units as u
from copy import deepcopy
from lstchain.calib import lst_calibration
from ctapipe.image import hillas_parameters
import pyhessio
import lstchain.reco.utils as utils
from lstchain.reco import dl0_to_dl1
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Data loading Get the original file with DL0 data, which is a simtelarray file
###Code
#input_filename=get_dataset_path('gamma_test_large.simtel.gz')
input_filename="/home/queenmab/DATA/LST1/Gamma/gamma_20deg_0deg_run8___cta-prod3-lapalma-2147m-LaPalma-FlashCam.simtel.gz"
###Output
_____no_output_____
###Markdown
Get the data events into a ctapipe event container. We are only interested in LST1 events
###Code
pyhessio.close_file()
tel_id = 1
allowed_tels = {tel_id}
source = event_source(input_filename)
source.allowed_tels = allowed_tels
## Load the first event
#event = next(iter(source))
## OR select an event manually
seeker = EventSeeker(source)
event = seeker[4]
# OR Find an event that saturates the high gain waveform
'''
counter = 0
howmany = 4
for event in source:
if np.any(event.r0.tel[1].waveform > 4094):
bright_event = deepcopy(event)
tel_id = tid
counter = counter + 1
if counter > howmany:
break
event = bright_event
'''
## OR find a bright LST event:
# intensity = 0
# for event in source:
# for tid in event.r0.tels_with_data:
# if event.r0.tel[tid].image.sum() > intensity and tid in np.arange(8):
# intensity = event.r0.tel[tid].image.sum()
# bright_event = deepcopy(event)
# tel_id = tid
# event = bright_event
###Output
WARNING:ctapipe.io.eventseeker.EventSeeker:Seeking to event by looping through events... (potentially long process)
###Markdown
Take a look at the event container. Select any event using the event seeker
###Code
event.r0.tel[1]
EvID = event.r0.event_id
print(EvID)
###Output
26107
###Markdown
Get the waveform data
###Code
data = event.r0.tel[tel_id].waveform
data.shape
###Output
_____no_output_____
###Markdown
The waveform is a matrix: it has 30 samples in each of the 1855 pixels, for 2 gains. We can plot the waveforms to get an idea of their shapes. Lame loop to find a pixel with signal:
###Code
maxvalue=0
for pixel in enumerate(data[0]):
maxsample = max(pixel[1])
if maxsample > maxvalue:
maxvalue = maxsample
pixelwithsignal = pixel[0]
plt.rcParams['figure.figsize'] = (8,5)
plt.rcParams['font.size'] = 14
nsamples = data.shape[2]
sample = np.linspace(0,30,nsamples)
plt.plot(sample,data[0][pixelwithsignal],label="Pixel with signal",color = "blue")
plt.plot(sample,data[0][0],label="Pixel without signal", color = "orange")
plt.legend()
###Output
_____no_output_____
###Markdown
Calibration **Get the pedestal, which is the average (for pedestal events) of the *sum* of all samples, from sim_telarray**
###Code
ped = event.mc.tel[tel_id].pedestal
ped.shape
###Output
_____no_output_____
###Markdown
Each pixel has its pedestal for the two gains. **Correct the pedestal (np.atleast_3d function converts 2D to 3D matrix)**
###Code
pedcorrectedsamples = data - np.atleast_3d(ped) / nsamples
pedcorrectedsamples.shape
###Output
_____no_output_____
###Markdown
**We can now compare the corrected waveforms with the previous ones**
###Code
plt.plot(sample,data[0][pixelwithsignal],label="Pixel with signal",color="blue")
plt.plot(sample,data[0][0],label="Pixel without signal",color="orange")
plt.plot(sample,pedcorrectedsamples[0][pixelwithsignal],label="Pixel with signal corrected",color="blue",linestyle="--")
plt.plot(sample,pedcorrectedsamples[0][0],label="Pixel without signal corrected",color="orange",linestyle="--")
plt.legend()
###Output
_____no_output_____
###Markdown
Integration**We must now find the peak in the waveform and do the integration to extract the charge in the pixel**
###Code
from ctapipe.image.extractor import LocalPeakWindowSum
integrator = LocalPeakWindowSum()
integration, peakpos = integrator(pedcorrectedsamples)
# LocalPeakWindowSum does not return the integration window, so rebuild it here
# from the peak sample and the extractor's (assumed) window_shift/window_width traits
start = pedcorrectedsamples.argmax(axis=2) - integrator.window_shift
window = (np.arange(nsamples) >= start[..., None]) & (np.arange(nsamples) < (start + integrator.window_width)[..., None])
integration.shape, peakpos.shape, window.shape
###Output
_____no_output_____
###Markdown
Integration gives the value of the charge
###Code
integration[0][0],integration[0][pixelwithsignal]
###Output
_____no_output_____
###Markdown
Peakpos gives the position of the peak (in which sample it falls)
###Code
peakpos[0][0],peakpos[0][pixelwithsignal]
###Output
_____no_output_____
###Markdown
window indicates which samples were used for the integration (a boolean mask per gain and pixel)
###Code
window[0][0],window[0][pixelwithsignal]
sample[window[0][0]]
###Output
_____no_output_____
###Markdown
**We can plot these positions on top of the waveform and decide if the integration and peak identification has been correct**
###Code
import matplotlib.patches as patches
plt.plot(sample,pedcorrectedsamples[0][pixelwithsignal],label="Pixel with signal, corrected",color="blue")
plt.plot(sample,pedcorrectedsamples[0][0],label="Pixel without signal, corrected",color="orange")
plt.plot(sample[window[0][0]],pedcorrectedsamples[0][0][window[0][0]],
color="red",label="windows",linewidth=3,linestyle="--")
plt.plot(sample[window[0][pixelwithsignal]],pedcorrectedsamples[0][pixelwithsignal][window[0][pixelwithsignal]],
color="red",linewidth=3,linestyle="--")
plt.axvline(peakpos[0][0],linestyle="--",color="orange")
plt.axvline(peakpos[0][pixelwithsignal],linestyle="--",color="blue")
plt.legend()
###Output
_____no_output_____
###Markdown
**Finally we must convert the charge from digital counts to photoelectrons by multiplying by the conversion factor (dc_to_pe)**
###Code
signals = integration.astype(float)
dc2pe = event.mc.tel[tel_id].dc_to_pe # numgains * numpixels
signals *= dc2pe
###Output
_____no_output_____
###Markdown
**Choose the correct calibration factor for each pixel depending on its intensity. Very bright pixels saturate, and the local peak integrator underestimates the intensity of the pixel.**
###Code
data[0]
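# Combine the two gain channels: start from the high-gain charges and, for pixels
# whose high-gain waveform saturates (any raw sample above 4094 ADC counts),
# fall back to the low-gain charge instead.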
combined = signals[0].copy() # On a basis we will use the high gain
for pixel in range(0,combined.size):
if np.any(data[0][pixel] > 4094):
print(signals[1][pixel],signals[0][pixel])
combined[pixel] = signals[1][pixel]
###Output
154.2189825574569 106.71891315004723
108.62617564522589 97.2336895814351
141.91132172074958 102.77316036554839
91.97968342746208 89.68415524906595
100.32326871071928 98.0280144216008
###Markdown
**And fill the DL1 containers**
###Code
event.dl1.tel[tel_id].image = combined
event.dl1.tel[tel_id].peakpos = peakpos
event.dl1.tel[tel_id]
###Output
_____no_output_____
###Markdown
**Say hello to our shower!**
###Code
from ctapipe.visualization import CameraDisplay
camera = event.inst.subarray.tel[tel_id].camera
plt.rcParams['figure.figsize'] = (20, 6)
plt.rcParams['font.size'] = 14
plt.subplot(1,3,1)
disp = CameraDisplay(camera,title="Low gain")
disp.add_colorbar()
disp.image = signals[1]
plt.subplot(1,3,2)
disp = CameraDisplay(camera,title = "High gain")
disp.add_colorbar()
disp.image = signals[0]
plt.subplot(1,3,3)
disp = CameraDisplay(camera,title = "Combined")
disp.add_colorbar()
disp.image = combined
###Output
_____no_output_____
###Markdown
Image cleaning
###Code
from ctapipe.image import hillas_parameters, tailcuts_clean
cleaning_method = tailcuts_clean
cleaning_parameters = {'boundary_thresh': 3,
'picture_thresh': 6,
'keep_isolated_pixels': False,
'min_number_picture_neighbors': 1
}
signal = combined
signal_pixels = cleaning_method(camera,signal,**cleaning_parameters)
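# tailcuts_clean keeps pixels above picture_thresh (6 phe) together with their
# neighbours above boundary_thresh (3 phe), requiring at least one picture neighbour;
# signal_pixels is the resulting boolean mask over the camera pixels.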
###Output
_____no_output_____
###Markdown
We use the combined image.
###Code
image = signal
image[~signal_pixels] = 0
###Output
_____no_output_____
###Markdown
**Let's take a look at the clean and shiny image**
###Code
plt.rcParams['figure.figsize'] = (6, 6)
plt.rcParams['font.size'] = 14
disp = CameraDisplay(camera,title = "Clean image, high gain")
disp.image = image
disp.add_colorbar()
###Output
_____no_output_____
###Markdown
Hillas parameters First compute them:
###Code
hillas = hillas_parameters(camera, image)
hillas.intensity
###Output
_____no_output_____
###Markdown
**And plot them over the image**
###Code
disp = CameraDisplay(camera,title = "Clean image")
disp.add_colorbar()
disp.image = image
disp.overlay_moments(hillas, color='cyan', linewidth=3)
###Output
_____no_output_____
###Markdown
**Also we can calculate the timing parameters**
###Code
from ctapipe.image import timing_parameters as time
timepars = time.timing_parameters(camera, image, peakpos[0], hillas)
timepars
timepars.slope,timepars.intercept
###Output
_____no_output_____
###Markdown
Reconstruction of disp
###Code
from lstchain.reco.utils import get_event_pos_in_camera, disp, disp_to_pos
tel = event.inst.subarray.tel[tel_id]
src_pos = get_event_pos_in_camera(event, tel)
d = disp(src_pos, hillas)
s = np.sign(src_pos[0] - hillas.x)
dx = src_pos[0] - hillas.x
dy = src_pos[1] - hillas.y
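# src_pos is the true source position projected onto the camera plane; d is the
# distance from the image centroid (hillas.x, hillas.y) to that position, s is the
# sign of the displacement along x, and (dx, dy) is the actual disp vector.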
plt.figure(figsize=(12,12))
display = CameraDisplay(camera,title = "Disp reconstruction")
display.add_colorbar()
display.image = image
display.overlay_moments(hillas, color='cyan', linewidth=3, alpha=0.4)
plt.scatter(src_pos[0], src_pos[1], color='red', label='actual source position')
uu = s * d.value * np.cos(hillas.psi)
vv = s * d.value * np.sin(hillas.psi)
plt.quiver(hillas.x, hillas.y, uu, vv, units='xy', scale=1,
label= "reconstructed disp",
)
plt.quiver(hillas.x, hillas.y, dx.value, dy.value,
units='xy', scale=1,
color='red',
alpha=0.5,
label= "actual disp",
)
plt.legend();
###Output
_____no_output_____
###Markdown
**In a real use case, the _disp_ value (length of the vector) is reconstructed by training a random forest; a sketch of that step is commented at the top of the next cell. The _reconstructed disp_ above assumes a perfect length reconstruction. The direction of the `disp` vector is given by the ellipse direction (`hillas.psi`).** Let's compare the difference between the high and low gain images for all events in the simtelarray file:
###Code
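# Hedged sketch of the random-forest disp regression mentioned above (kept commented:
# the DL1 parameter table `dl1_df` and the feature list are illustrative names only).
# from sklearn.ensemble import RandomForestRegressor
# features = ['intensity', 'width', 'length', 'psi', 'time_gradient']
# reg = RandomForestRegressor(n_estimators=100)
# reg.fit(dl1_df[features], dl1_df['disp'])
# disp_reco = reg.predict(dl1_df[features])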
pyhessio.close_file()
intensity_high = np.array([])
intensity_low = np.array([])
nevents = 0
for event in source:
if nevents%100==0:
print(nevents)
if nevents >= 500:
break
#if np.any(event.r0.tel[1].waveform > 4094):
# continue
geom = event.inst.subarray.tel[tel_id].camera
lst_calibration(event,tel_id)
for Nphe_high, Nphe_low in zip(event.dl1.tel[tel_id].image[0],event.dl1.tel[tel_id].image[1]):
if Nphe_high > 0 and Nphe_low > 0:
intensity_high = np.append(Nphe_high,intensity_high)
intensity_low = np.append(Nphe_low,intensity_low)
nevents=nevents+1
from scipy.stats import norm
plt.figure(figsize=(15,15))
#diff = (np.log10(intensity_low)-np.log10(intensity_high))*np.log(10)
pixels_df = pd.DataFrame(data ={'high_gain':intensity_high,
'low_gain':intensity_low,
'diff':np.log(intensity_low/intensity_high)})
pixels_df['Bin1'] = (pixels_df['low_gain'] >= 10) & (pixels_df['low_gain'] < 30)
pixels_df['Bin2'] = (pixels_df['low_gain'] >= 30) & (pixels_df['low_gain'] < 70)
pixels_df['Bin3'] = (pixels_df['low_gain'] >= 70) & (pixels_df['low_gain'] < 150)
pixels_df['Bin4'] = (pixels_df['low_gain'] >= 150)
plt.subplot(421)
h = plt.hist(pixels_df[pixels_df['Bin1']]['diff'],bins=50,label='10 to 30 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(422)
h2 = plt.hist(pixels_df[pixels_df['Bin1']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin1']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin1']]['diff'])
print(mu,sigma)
plt.subplot(423)
h = plt.hist(pixels_df[pixels_df['Bin2']]['diff'],bins=50,label='30 to 70 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(424)
h2 = plt.hist(pixels_df[pixels_df['Bin2']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin2']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin2']]['diff'])
print(mu,sigma)
plt.subplot(425)
h = plt.hist(pixels_df[pixels_df['Bin3']]['diff'],bins=50,label='70 to 150 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(426)
h2 = plt.hist(pixels_df[pixels_df['Bin3']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin3']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin3']]['diff'])
print(mu,sigma)
plt.subplot(427)
h = plt.hist(pixels_df[pixels_df['Bin4']]['diff'],bins=50,label='> 150 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(428)
h2 = plt.hist(pixels_df[pixels_df['Bin4']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin4']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin4']]['diff'])
print(mu,sigma)
###Output
0.003335214106012082 0.061168912254382875
-0.00015653264325069546 0.02898070091121532
0.05603075027676546 0.09083168135316513
1.1599672689070848 0.7336135113157438
###Markdown
Use pyhessio to access extra MC data
###Code
pyhessio.close_file()
with pyhessio.open_hessio(input_filename) as ev:
for event_id in ev.move_to_next_event():
tels_with_data = ev.get_telescope_with_data_list()
if event_id==EvID:
print('run id {}:, event number: {}'.format(ev.get_run_number() , event_id))
print(' Triggered telescopes for this event: {}'.format(tels_with_data))
nphe = np.sum(ev.get_mc_number_photon_electron(1))
emin = ev.get_mc_E_range_Min()
emax = ev.get_mc_E_range_Max()
index = ev.get_spectral_index()
cone = ev.get_mc_viewcone_Max()
core_max = ev.get_mc_core_range_Y()
break
print('Number of Phe: ',nphe)
print('Hillas intensity',hillas.intensity)
###Output
Number of Phe: 2511
Hillas intensity 1948.4154619338804
###Markdown
Get the number of simulated events in the file (very slow)
###Code
#numevents = pyhessio.count_mc_generated_events(input_filename)
numevents = 1000000
print(numevents)
###Output
1000000
###Markdown
Calculate the spectral weighting for the event
###Code
emin,emax,index,cone,core_max
particle = utils.guess_type(input_filename)
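# K normalizes the simulated power law (numevents events with spectral index `index`
# between emin and emax); A is the scatter area and Omega the simulated solid angle.
# K_w, index_w and E0 define the target spectrum (a Crab-like gamma spectrum or a
# cosmic-ray proton flux), and w converts event counts from the simulated index to it.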
K = numevents*(1+index)/(emax**(1+index)-emin**(1+index))
A = np.pi*core_max**2
Omega = 2*np.pi*(1-np.cos(cone))
if cone==0:
Omega=1
MeVtoGeV = 1e-3
if particle=="gamma":
K_w = 5.7e-16*MeVtoGeV
index_w = -2.48
E0 = 0.3e6*MeVtoGeV
if particle=="proton":
K_w = 9.6e-2
index_w = -2.7
E0 = 1
Simu_E0 = K*E0**index
N_ = Simu_E0*(emax**(index_w+1)-emin**(index_w+1))/(E0**index_w)/(index_w+1)
R = K_w*A*Omega*(emax**(index_w+1)-emin**(index_w+1))/(E0**index_w)/(index_w+1)
energy = event.mc.energy.value
w = ((energy)**(index_w-index))*R/N_
print('Spectral weight: ',w)
###Output
Spectral weight: 8.548736870275003e-09
###Markdown
We can compare the Hillas intensity with the MC photoelectron size of the events to check the effects of cleaning. **Set the number of events that we want to analyze and the name of the output h5 file** (None for using all events in the file)
###Code
dl0_to_dl1.max_events = None
output_filename = 'dl1_' + os.path.basename(input_filename).split('.')[0] + '.h5'
###Output
_____no_output_____
###Markdown
**Run lstchain to get dl1 events**
###Code
dl0_to_dl1.r0_to_dl1(input_filename,output_filename)
###Output
WARNING:ctapipe.io.hessioeventsource.HESSIOEventSource:Only one pyhessio event_source allowed at a time. Previous hessio file will be closed.
###Markdown
**Use Pyhessio to obtain more MC info, like the number of MC photoelectrons in the camera**
###Code
mc_phe = np.array([])
id = np.array([])
counter=0
#Get MC info with pyhessio
with pyhessio.open_hessio(input_filename) as ev:
for event_id in ev.move_to_next_event():
tels_with_data = ev.get_telescope_with_data_list()
if 1 in tels_with_data:
counter=counter+1
if counter==dl0_to_dl1.max_events:
break
nphe = np.sum(ev.get_mc_number_photon_electron(1))
emin = ev.get_mc_E_range_Min()
emax = ev.get_mc_E_range_Max()
index = ev.get_spectral_index()
cone = ev.get_mc_viewcone_Max()
core_max = ev.get_mc_core_range_Y()
mc_phe = np.append(mc_phe,nphe)
id = np.append(id,event_id)
###Output
_____no_output_____
###Markdown
**Use pandas to assign the info obtained with pyhessio to the corresponding dl1 events obtained previously**
###Code
mc_df = pd.DataFrame()
mc_df['mc_phe'] = mc_phe
mc_df['event_id'] = id.astype(int)
df_dl1 = pd.read_hdf(output_filename)
df_dl1 = df_dl1.set_index('event_id')
mc_df = mc_df.set_index('event_id').reindex(df_dl1.index)
df_dl1['mc_phe'] = np.log10(mc_df['mc_phe'])
###Output
_____no_output_____
###Markdown
**Plot the hillas intensity vs mc photoelectron size**
###Code
plt.figure(figsize=(15,5))
plt.subplot(121)
h = plt.hist2d(df_dl1[df_dl1['mc_phe']>0]['intensity'],df_dl1[df_dl1['mc_phe']>0]['mc_phe'],bins=100)
plt.xlabel('$log_{10}$ Hillas intensity')
plt.ylabel('$log_{10}$ mc_phe')
plt.colorbar(h[3])
plt.subplot(122)
h = plt.hist2d(df_dl1[df_dl1['mc_phe']>0]['mc_energy'],df_dl1[df_dl1['mc_phe']>0]['mc_phe'],bins=100)
plt.xlabel('$log_{10}$ MC Energy')
plt.ylabel('$log_{10}$ mc_phe')
plt.colorbar(h[3])
###Output
_____no_output_____
###Markdown
Apply the spectral weighting for this set of events
###Code
df_dl1['w'] = ((10**df_dl1['mc_energy'])**(index_w-index))*R/N_
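# mc_energy is stored as log10(E) in the DL1 table (see the histograms below), hence
# the 10** above; R and N_ come from the single-event spectral weighting cell earlier.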
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.hist(df_dl1['mc_energy'],histtype=u'step',bins=100,weights = df_dl1['w'],density=1,label="-2.48 index")
plt.hist(df_dl1['mc_energy'],histtype=u'step',bins=100,density=1,label="-2 index")
plt.yscale('log')
plt.xlabel("$log_{10}E (GeV)$")
plt.legend()
plt.subplot(122)
plt.hist(df_dl1['mc_energy'],histtype=u'step',bins=100,weights = df_dl1['w'],label="weighted to Crab")
plt.legend()
plt.yscale('log')
plt.xlabel("$log_{10}E (GeV)$")
#plt.xscale('log')
###Output
_____no_output_____
###Markdown
Notebook to go step by step through the selection/reduction/calibration of DL0 data to DL1**Content:**- Data loading- Calibration: - Pedestal subtraction - Peak integration - Conversion of digital counts to photoelectrons. - High gain/low gain combination- Cleaning- Hillas parameters- Disp reconstruction (from Hillas pars)- TEST: High gain/Low gain - Use of pyhessio to access more MC information: - Simulated phe, number of simulated events, simulated energy range, etc. - Calculation of the spectral weight for one event.- TEST: Comparison of Hillas intensity with the simulated number of phe.- Spectral weighting for a set of events. Some imports...
###Code
from ctapipe.utils import get_dataset_path
from ctapipe.io import event_source
from ctapipe.io.eventseeker import EventSeeker
import astropy.units as u
from copy import deepcopy
from lstchain.calib import lst_calibration
from ctapipe.image import hillas_parameters
import pyhessio
import lstchain.reco.utils as utils
from lstchain.reco import dl0_to_dl1
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Data loading Get the original file with DL0 data, which is a simtelarray file
###Code
#input_filename=get_dataset_path('gamma_test_large.simtel.gz')
input_filename="/home/queenmab/DATA/LST1/Gamma/gamma_20deg_0deg_run8___cta-prod3-lapalma-2147m-LaPalma-FlashCam.simtel.gz"
###Output
_____no_output_____
###Markdown
Get the data events into a ctapipe event container. We are only interested in LST1 events
###Code
pyhessio.close_file()
tel_id = 1
allowed_tels = {tel_id}
source = event_source(input_filename)
source.allowed_tels = allowed_tels
## Load the first event
#event = next(iter(source))
## OR select an event manually
seeker = EventSeeker(source)
event = seeker[4]
# OR Find an event that saturates the high gain waveform
'''
counter = 0
howmany = 4
for event in source:
if np.any(event.r0.tel[1].waveform > 4094):
bright_event = deepcopy(event)
tel_id = tid
counter = counter + 1
if counter > howmany:
break
event = bright_event
'''
## OR find a bright LST event:
# intensity = 0
# for event in source:
# for tid in event.r0.tels_with_data:
# if event.r0.tel[tid].image.sum() > intensity and tid in np.arange(8):
# intensity = event.r0.tel[tid].image.sum()
# bright_event = deepcopy(event)
# tel_id = tid
# event = bright_event
###Output
WARNING:ctapipe.io.eventseeker.EventSeeker:Seeking to event by looping through events... (potentially long process)
###Markdown
Take a look at the event container. Select any event using the event seeker
###Code
event.r0.tel[1]
EvID = event.r0.event_id
print(EvID)
###Output
26107
###Markdown
Get the waveform data
###Code
data = event.r0.tel[tel_id].waveform
data.shape
###Output
_____no_output_____
###Markdown
The waveform is a matrix with 30 samples for each of the 1855 pixels, for the 2 gains. We can plot the waveforms to get an idea of their shapes. Lame loop to find a pixel with signal:
###Code
maxvalue=0
for pixel in enumerate(data[0]):
maxsample = max(pixel[1])
if maxsample > maxvalue:
maxvalue = maxsample
pixelwithsignal = pixel[0]
plt.rcParams['figure.figsize'] = (8,5)
plt.rcParams['font.size'] = 14
nsamples = data.shape[2]
sample = np.linspace(0,30,nsamples)
plt.plot(sample,data[0][pixelwithsignal],label="Pixel with signal",color = "blue")
plt.plot(sample,data[0][0],label="Pixel without signal", color = "orange")
plt.legend()
###Output
_____no_output_____
###Markdown
Calibration **Get the pedestal, which is the average (for pedestal events) of the *sum* of all samples, from sim_telarray**
###Code
ped = event.mc.tel[tel_id].pedestal
ped.shape
###Output
_____no_output_____
###Markdown
Each pixel has its pedestal for the two gains. **Correct the pedestal (the np.atleast_3d function converts the 2D pedestal matrix to 3D so it can be broadcast against the waveform). Since the stored pedestal is the *sum* over all samples, it is divided by nsamples to get a per-sample baseline.**
###Code
pedcorrectedsamples = data - np.atleast_3d(ped) / nsamples
pedcorrectedsamples.shape
###Output
_____no_output_____
###Markdown
**We can now compare the corrected waveforms with the previous ones**
###Code
plt.plot(sample,data[0][pixelwithsignal],label="Pixel with signal",color="blue")
plt.plot(sample,data[0][0],label="Pixel without signal",color="orange")
plt.plot(sample,pedcorrectedsamples[0][pixelwithsignal],label="Pixel with signal corrected",color="blue",linestyle="--")
plt.plot(sample,pedcorrectedsamples[0][0],label="Pixel without signal corrected",color="orange",linestyle="--")
plt.legend()
###Output
_____no_output_____
###Markdown
Integration**We must now find the peak in the waveform and do the integration to extract the charge in the pixel**
###Code
from ctapipe.image.charge_extractors import LocalPeakIntegrator
integrator = LocalPeakIntegrator(None, None)
integration, peakpos, window = integrator.extract_charge(pedcorrectedsamples)
integration.shape, peakpos.shape, window.shape
###Output
_____no_output_____
###Markdown
Integration gives the value of the charge
###Code
integration[0][0],integration[0][pixelwithsignal]
###Output
_____no_output_____
###Markdown
Peakpos gives the position of the peak (in which sample it falls)
###Code
peakpos[0][0],peakpos[0][pixelwithsignal]
###Output
_____no_output_____
###Markdown
window gives the number of samples used for the integration
###Code
window[0][0],window[0][pixelwithsignal]
sample[window[0][0]]
###Output
_____no_output_____
###Markdown
**We can plot these positions on top of the waveform and decide if the integration and peak identification has been correct**
###Code
import matplotlib.patches as patches
plt.plot(sample,pedcorrectedsamples[0][pixelwithsignal],label="Pixel with signal, corrected",color="blue")
plt.plot(sample,pedcorrectedsamples[0][0],label="Pixel without signal, corrected",color="orange")
plt.plot(sample[window[0][0]],pedcorrectedsamples[0][0][window[0][0]],
color="red",label="windows",linewidth=3,linestyle="--")
plt.plot(sample[window[0][pixelwithsignal]],pedcorrectedsamples[0][pixelwithsignal][window[0][pixelwithsignal]],
color="red",linewidth=3,linestyle="--")
plt.axvline(peakpos[0][0],linestyle="--",color="orange")
plt.axvline(peakpos[0][pixelwithsignal],linestyle="--",color="blue")
plt.legend()
###Output
_____no_output_____
###Markdown
**Finally we must convert the charge from digital counts to photoelectrons by multiplying by the conversion factor (dc_to_pe)**
###Code
signals = integration.astype(float)
dc2pe = event.mc.tel[tel_id].dc_to_pe # numgains * numpixels
signals *= dc2pe
###Output
_____no_output_____
###Markdown
**Choose the correct calibration factor for each pixel depending on its intensity. Very bright pixels saturate, and the local peak integrator underestimates their intensity.**
###Code
data[0]
combined = signals[0].copy() # On a basis we will use the high gain
for pixel in range(0,combined.size):
if np.any(data[0][pixel] > 4094):
print(signals[1][pixel],signals[0][pixel])
combined[pixel] = signals[1][pixel]
###Output
154.2189825574569 106.71891315004723
108.62617564522589 97.2336895814351
141.91132172074958 102.77316036554839
91.97968342746208 89.68415524906595
100.32326871071928 98.0280144216008
###Markdown
**And fill the DL1 containers**
###Code
event.dl1.tel[tel_id].image = combined
event.dl1.tel[tel_id].peakpos = peakpos
event.dl1.tel[tel_id]
###Output
_____no_output_____
###Markdown
**Say hello to our shower!**
###Code
from ctapipe.visualization import CameraDisplay
camera = event.inst.subarray.tel[tel_id].camera
plt.rcParams['figure.figsize'] = (20, 6)
plt.rcParams['font.size'] = 14
plt.subplot(1,3,1)
disp = CameraDisplay(camera,title="Low gain")
disp.add_colorbar()
disp.image = signals[1]
plt.subplot(1,3,2)
disp = CameraDisplay(camera,title = "High gain")
disp.add_colorbar()
disp.image = signals[0]
plt.subplot(1,3,3)
disp = CameraDisplay(camera,title = "Combined")
disp.add_colorbar()
disp.image = combined
###Output
_____no_output_____
###Markdown
Image cleaning
###Code
from ctapipe.image import hillas_parameters, tailcuts_clean
cleaning_method = tailcuts_clean
cleaning_parameters = {'boundary_thresh': 3,
'picture_thresh': 6,
'keep_isolated_pixels': False,
'min_number_picture_neighbors': 1
}
signal = combined
signal_pixels = cleaning_method(camera,signal,**cleaning_parameters)
###Output
_____no_output_____
###Markdown
We use the combined image.
###Code
image = signal
image[~signal_pixels] = 0
###Output
_____no_output_____
###Markdown
**Let's take a look at the clean and shiny image**
###Code
plt.rcParams['figure.figsize'] = (6, 6)
plt.rcParams['font.size'] = 14
disp = CameraDisplay(camera,title = "Clean image, high gain")
disp.image = image
disp.add_colorbar()
###Output
_____no_output_____
###Markdown
Hillas parameters. First compute them:
###Code
hillas = hillas_parameters(camera, image)
hillas.intensity
###Output
_____no_output_____
###Markdown
**And plot them over the image**
###Code
disp = CameraDisplay(camera,title = "Clean image")
disp.add_colorbar()
disp.image = image
disp.overlay_moments(hillas, color='cyan', linewidth=3)
###Output
_____no_output_____
###Markdown
**Also we can calculate the timing parameters**
###Code
from ctapipe.image import timing_parameters as time
timepars = time.timing_parameters(camera, image, peakpos[0], hillas)
timepars
timepars.slope,timepars.intercept
###Output
_____no_output_____
###Markdown
Reconstruction of disp
###Code
from lstchain.reco.utils import get_event_pos_in_camera, disp, disp_to_pos
tel = event.inst.subarray.tel[tel_id]
src_pos = get_event_pos_in_camera(event, tel)
d = disp(src_pos, hillas)
s = np.sign(src_pos[0] - hillas.x)
dx = src_pos[0] - hillas.x
dy = src_pos[1] - hillas.y
plt.figure(figsize=(12,12))
display = CameraDisplay(camera,title = "Disp reconstruction")
display.add_colorbar()
display.image = image
display.overlay_moments(hillas, color='cyan', linewidth=3, alpha=0.4)
plt.scatter(src_pos[0], src_pos[1], color='red', label='actual source position')
uu = s * d.value * np.cos(hillas.psi)
vv = s * d.value * np.sin(hillas.psi)
plt.quiver(hillas.x, hillas.y, uu, vv, units='xy', scale=1,
label= "reconstructed disp",
)
plt.quiver(hillas.x, hillas.y, dx.value, dy.value,
units='xy', scale=1,
color='red',
alpha=0.5,
label= "actual disp",
)
plt.legend();
###Output
_____no_output_____
###Markdown
**In a real use case, the _disp_ value (length of the vector) is reconstructed by training a random forest. The _reconstructed disp_ above assumes a perfect length reconstruction. The direction of the `disp` vector is given by the ellipse direction (`hillas.psi`)** Let's compare the difference between high and low gain images for all events in the simtelarray file:
###Code
pyhessio.close_file()
intensity_high = np.array([])
intensity_low = np.array([])
nevents = 0
for event in source:
if nevents%100==0:
print(nevents)
if nevents >= 500:
break
#if np.any(event.r0.tel[1].waveform > 4094):
# continue
geom = event.inst.subarray.tel[tel_id].camera
lst_calibration(event,tel_id)
for Nphe_high, Nphe_low in zip(event.dl1.tel[tel_id].image[0],event.dl1.tel[tel_id].image[1]):
if Nphe_high > 0 and Nphe_low > 0:
intensity_high = np.append(Nphe_high,intensity_high)
intensity_low = np.append(Nphe_low,intensity_low)
nevents=nevents+1
from scipy.stats import norm
plt.figure(figsize=(15,15))
#diff = (np.log10(intensity_low)-np.log10(intensity_high))*np.log(10)
pixels_df = pd.DataFrame(data ={'high_gain':intensity_high,
'low_gain':intensity_low,
'diff':np.log(intensity_low/intensity_high)})
pixels_df['Bin1'] = (pixels_df['low_gain'] >= 10) & (pixels_df['low_gain'] < 30)
pixels_df['Bin2'] = (pixels_df['low_gain'] >= 30) & (pixels_df['low_gain'] < 70)
pixels_df['Bin3'] = (pixels_df['low_gain'] >= 70) & (pixels_df['low_gain'] < 150)
pixels_df['Bin4'] = (pixels_df['low_gain'] >= 150)
plt.subplot(421)
h = plt.hist(pixels_df[pixels_df['Bin1']]['diff'],bins=50,label='10 to 30 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(422)
h2 = plt.hist(pixels_df[pixels_df['Bin1']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin1']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin1']]['diff'])
print(mu,sigma)
plt.subplot(423)
h = plt.hist(pixels_df[pixels_df['Bin2']]['diff'],bins=50,label='30 to 70 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(424)
h2 = plt.hist(pixels_df[pixels_df['Bin2']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin2']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin2']]['diff'])
print(mu,sigma)
plt.subplot(425)
h = plt.hist(pixels_df[pixels_df['Bin3']]['diff'],bins=50,label='70 to 150 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(426)
h2 = plt.hist(pixels_df[pixels_df['Bin3']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin3']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin3']]['diff'])
print(mu,sigma)
plt.subplot(427)
h = plt.hist(pixels_df[pixels_df['Bin4']]['diff'],bins=50,label='> 150 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(428)
h2 = plt.hist(pixels_df[pixels_df['Bin4']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin4']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin4']]['diff'])
print(mu,sigma)
###Output
0.003335214106012082 0.061168912254382875
-0.00015653264325069546 0.02898070091121532
0.05603075027676546 0.09083168135316513
1.1599672689070848 0.7336135113157438
###Markdown
Use Pyhessio to access extra MC data
###Code
pyhessio.close_file()
with pyhessio.open_hessio(input_filename) as ev:
for event_id in ev.move_to_next_event():
tels_with_data = ev.get_telescope_with_data_list()
if event_id==EvID:
print('run id {}:, event number: {}'.format(ev.get_run_number() , event_id))
print(' Triggered telescopes for this event: {}'.format(tels_with_data))
nphe = np.sum(ev.get_mc_number_photon_electron(1))
emin = ev.get_mc_E_range_Min()
emax = ev.get_mc_E_range_Max()
index = ev.get_spectral_index()
cone = ev.get_mc_viewcone_Max()
core_max = ev.get_mc_core_range_Y()
break
print('Number of Phe: ',nphe)
print('Hillas intensity',hillas.intensity)
###Output
Number of Phe: 2511
Hillas intensity 1948.4154619338804
###Markdown
Get the number of simulated events in the file (very slow)
###Code
#numevents = pyhessio.count_mc_generated_events(input_filename)
numevents = 1000000
print(numevents)
###Output
1000000
###Markdown
Calculate the spectral weighting for the event
###Code
emin,emax,index,cone,core_max
particle = utils.guess_type(input_filename)
K = numevents*(1+index)/(emax**(1+index)-emin**(1+index))
A = np.pi*core_max**2
Omega = 2*np.pi*(1-np.cos(cone))
if cone==0:
Omega=1
MeVtoGeV = 1e-3
if particle=="gamma":
K_w = 5.7e-16*MeVtoGeV
index_w = -2.48
E0 = 0.3e6*MeVtoGeV
if particle=="proton":
K_w = 9.6e-2
index_w = -2.7
E0 = 1
Simu_E0 = K*E0**index
N_ = Simu_E0*(emax**(index_w+1)-emin**(index_w+1))/(E0**index_w)/(index_w+1)
R = K_w*A*Omega*(emax**(index_w+1)-emin**(index_w+1))/(E0**index_w)/(index_w+1)
energy = event.mc.energy.value
w = ((energy/E0)**(index_w-index))*R/N_
print('Spectral weight: ',w)
###Output
Spectral weight: 8.548736870275003e-09
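###Markdown
A short reading of the weighting cell above (not part of the original text): the factor $(E/E_0)^{\gamma_w-\gamma}$ tilts the simulated power law of index $\gamma$ (`index`) into the target index $\gamma_w$ (`index_w`), while the ratio $R/N$ rescales the simulated event count to the physical rate expected from the target flux over the simulated energy range, thrown area $A=\pi\,r_{core}^2$ and solid angle $\Omega$.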
###Markdown
We can compare the Hillas intensity with the MC photoelectron size of the events to check the effects of cleaning. **Set the number of events that we want to analyze and the name of the output h5 file (None for using all events in the file)**
###Code
dl0_to_dl1.max_events = None
output_filename = 'dl1_' + os.path.basename(input_filename).split('.')[0] + '.h5'
###Output
_____no_output_____
###Markdown
**Run lstchain to get dl1 events**
###Code
dl0_to_dl1.r0_to_dl1(input_filename,output_filename)
###Output
WARNING:ctapipe.io.hessioeventsource.HESSIOEventSource:Only one pyhessio event_source allowed at a time. Previous hessio file will be closed.
###Markdown
**Use Pyhessio to obtain more MC info, like the number of MC photoelectrons in the camera**
###Code
mc_phe = np.array([])
id = np.array([])
counter=0
#Get MC info with pyhessio
with pyhessio.open_hessio(input_filename) as ev:
for event_id in ev.move_to_next_event():
tels_with_data = ev.get_telescope_with_data_list()
if 1 in tels_with_data:
counter=counter+1
if counter==dl0_to_dl1.max_events:
break
nphe = np.sum(ev.get_mc_number_photon_electron(1))
emin = ev.get_mc_E_range_Min()
emax = ev.get_mc_E_range_Max()
index = ev.get_spectral_index()
cone = ev.get_mc_viewcone_Max()
core_max = ev.get_mc_core_range_Y()
mc_phe = np.append(mc_phe,nphe)
id = np.append(id,event_id)
###Output
_____no_output_____
###Markdown
**Use pandas to assign the info obtained with pyhessio to the corresponding previously processed dl1 events**
###Code
mc_df = pd.DataFrame()
mc_df['mc_phe'] = mc_phe
mc_df['event_id'] = id.astype(int)
df_dl1 = pd.read_hdf(output_filename)
df_dl1 = df_dl1.set_index('event_id')
mc_df = mc_df.set_index('event_id').reindex(df_dl1.index)
df_dl1['mc_phe'] = np.log10(mc_df['mc_phe'])
###Output
_____no_output_____
###Markdown
**Plot the hillas intensity vs mc photoelectron size**
###Code
plt.figure(figsize=(15,5))
plt.subplot(121)
h = plt.hist2d(df_dl1[df_dl1['mc_phe']>0]['intensity'],df_dl1[df_dl1['mc_phe']>0]['mc_phe'],bins=100)
plt.xlabel('$log_{10}$ Hillas intensity')
plt.ylabel('$log_{10}$ mc_phe')
plt.colorbar(h[3])
plt.subplot(122)
h = plt.hist2d(df_dl1[df_dl1['mc_phe']>0]['mc_energy'],df_dl1[df_dl1['mc_phe']>0]['mc_phe'],bins=100)
plt.xlabel('$log_{10}$ MC Energy')
plt.ylabel('$log_{10}$ mc_phe')
plt.colorbar(h[3])
###Output
_____no_output_____
###Markdown
Apply the spectral weighting for this set of events
###Code
df_dl1['w'] = ((10**df_dl1['mc_energy']/E0)**(index_w-index))*R/N_
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.hist(df_dl1['mc_energy'],histtype=u'step',bins=100,weights = df_dl1['w'],density=1,label="-2.48 index")
plt.hist(df_dl1['mc_energy'],histtype=u'step',bins=100,density=1,label="-2 index")
plt.yscale('log')
plt.xlabel("$log_{10}E (GeV)$")
plt.legend()
plt.subplot(122)
plt.hist(df_dl1['mc_energy'],histtype=u'step',bins=100,weights = df_dl1['w'],label="weighted to Crab")
plt.legend()
plt.yscale('log')
plt.xlabel("$log_{10}E (GeV)$")
#plt.xscale('log')
###Output
_____no_output_____
###Markdown
Notebook to go step by step in the selection/reduction/calibration of DL0 data to DL1**Content:**- Data loading- Calibration: - Pedestal subtraction - Peak integration - Conversion of digital counts to photoelectrons. - High gain/low gain combination- Cleaning- Hillas parameters- Disp reconstruction (from Hillas pars)- TEST: High gain/Low gain - Use of Pyhessio to access more MC information: - Simulated phe, number of simulated events, simulated energy range, etc. - Calculation of the spectral weight for one event.- TEST: Comparison of Hillas intensity with simulated number of phe.- Spectral weighting for a set of events. Some imports...
###Code
from ctapipe.utils import get_dataset_path
from ctapipe.io import event_source
from ctapipe.io.eventseeker import EventSeeker
import astropy.units as u
from copy import deepcopy
from lstchain.calib import lst_calibration
from ctapipe.image import hillas_parameters
import pyhessio
import lstchain.reco.utils as utils
from lstchain.reco import dl0_to_dl1
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Data loading. Get the original file with DL0 data, which is a simtelarray file
###Code
#input_filename=get_dataset_path('gamma_test_large.simtel.gz')
input_filename="/home/queenmab/DATA/LST1/Gamma/gamma_20deg_0deg_run8___cta-prod3-lapalma-2147m-LaPalma-FlashCam.simtel.gz"
###Output
_____no_output_____
###Markdown
Get the data events into a ctapipe event container. We are only interested in LST1 events
###Code
pyhessio.close_file()
tel_id = 1
allowed_tels = {tel_id}
source = event_source(input_filename)
source.allowed_tels = allowed_tels
## Load the first event
#event = next(iter(source))
## OR select an event manually
seeker = EventSeeker(source)
event = seeker[4]
# OR Find an event that saturates the high gain waveform
'''
counter = 0
howmany = 4
for event in source:
if np.any(event.r0.tel[1].waveform > 4094):
bright_event = deepcopy(event)
tel_id = tid
counter = counter + 1
if counter > howmany:
break
event = bright_event
'''
## OR find a bright LST event:
# intensity = 0
# for event in source:
# for tid in event.r0.tels_with_data:
# if event.r0.tel[tid].image.sum() > intensity and tid in np.arange(8):
# intensity = event.r0.tel[tid].image.sum()
# bright_event = deepcopy(event)
# tel_id = tid
# event = bright_event
###Output
WARNING:ctapipe.io.eventseeker.EventSeeker:Seeking to event by looping through events... (potentially long process)
###Markdown
Take a look at the event container. Select any event using the event seeker
###Code
event.r0.tel[1]
EvID = event.r0.event_id
print(EvID)
###Output
26107
###Markdown
Get the waveform data
###Code
data = event.r0.tel[tel_id].waveform
data.shape
###Output
_____no_output_____
###Markdown
The waveform is a matrix with 30 samples for each of the 1855 pixels, for the 2 gains. We can plot the waveforms to get an idea of their shapes. Lame loop to find a pixel with signal:
###Code
maxvalue=0
for pixel in enumerate(data[0]):
maxsample = max(pixel[1])
if maxsample > maxvalue:
maxvalue = maxsample
pixelwithsignal = pixel[0]
plt.rcParams['figure.figsize'] = (8,5)
plt.rcParams['font.size'] = 14
nsamples = data.shape[2]
sample = np.linspace(0,30,nsamples)
plt.plot(sample,data[0][pixelwithsignal],label="Pixel with signal",color = "blue")
plt.plot(sample,data[0][0],label="Pixel without signal", color = "orange")
plt.legend()
###Output
_____no_output_____
###Markdown
Calibration **Get the pedestal, which is the average (for pedestal events) of the *sum* of all samples, from sim_telarray**
###Code
ped = event.mc.tel[tel_id].pedestal
ped.shape
###Output
_____no_output_____
###Markdown
Each pixel has its pedestal for the two gains. **Correct the pedestal (np.atleast_3d function converts 2D to 3D matrix)**
###Code
pedcorrectedsamples = data - np.atleast_3d(ped) / nsamples
pedcorrectedsamples.shape
###Output
_____no_output_____
###Markdown
**We can now compare the corrected waveforms with the previous ones**
###Code
plt.plot(sample,data[0][pixelwithsignal],label="Pixel with signal",color="blue")
plt.plot(sample,data[0][0],label="Pixel without signal",color="orange")
plt.plot(sample,pedcorrectedsamples[0][pixelwithsignal],label="Pixel with signal corrected",color="blue",linestyle="--")
plt.plot(sample,pedcorrectedsamples[0][0],label="Pixel without signal corrected",color="orange",linestyle="--")
plt.legend()
###Output
_____no_output_____
###Markdown
Integration**We must now find the peak in the waveform and do the integration to extract the charge in the pixel**
###Code
from ctapipe.image.charge_extractors import LocalPeakIntegrator
integrator = LocalPeakIntegrator(None, None)
integration, peakpos, window = integrator.extract_charge(pedcorrectedsamples)
integration.shape, peakpos.shape, window.shape
###Output
_____no_output_____
###Markdown
Integration gives the value of the charge
###Code
integration[0][0],integration[0][pixelwithsignal]
###Output
_____no_output_____
###Markdown
Peakpos gives the position of the peak (in which sample it falls)
###Code
peakpos[0][0],peakpos[0][pixelwithsignal]
###Output
_____no_output_____
###Markdown
window gives the number of samples used for the integration
###Code
window[0][0],window[0][pixelwithsignal]
sample[window[0][0]]
###Output
_____no_output_____
###Markdown
**We can plot these positions on top of the waveform and decide if the integration and peak identification has been correct**
###Code
import matplotlib.patches as patches
plt.plot(sample,pedcorrectedsamples[0][pixelwithsignal],label="Pixel with signal, corrected",color="blue")
plt.plot(sample,pedcorrectedsamples[0][0],label="Pixel without signal, corrected",color="orange")
plt.plot(sample[window[0][0]],pedcorrectedsamples[0][0][window[0][0]],
color="red",label="windows",linewidth=3,linestyle="--")
plt.plot(sample[window[0][pixelwithsignal]],pedcorrectedsamples[0][pixelwithsignal][window[0][pixelwithsignal]],
color="red",linewidth=3,linestyle="--")
plt.axvline(peakpos[0][0],linestyle="--",color="orange")
plt.axvline(peakpos[0][pixelwithsignal],linestyle="--",color="blue")
plt.legend()
###Output
_____no_output_____
###Markdown
**Finally we must convert the charge from digital counts to photoelectrons by multiplying by the conversion factor (dc_to_pe)**
###Code
signals = integration.astype(float)
dc2pe = event.mc.tel[tel_id].dc_to_pe # numgains * numpixels
signals *= dc2pe
###Output
_____no_output_____
###Markdown
**Choose the correct calibration factor for each pixel depending on its intensity. Very bright pixels saturate, and the local peak integrator underestimates their intensity.**
###Code
data[0]
combined = signals[0].copy() # On a basis we will use the high gain
for pixel in range(0,combined.size):
if np.any(data[0][pixel] > 4094):
print(signals[1][pixel],signals[0][pixel])
combined[pixel] = signals[1][pixel]
###Output
154.2189825574569 106.71891315004723
108.62617564522589 97.2336895814351
141.91132172074958 102.77316036554839
91.97968342746208 89.68415524906595
100.32326871071928 98.0280144216008
###Markdown
**And fill the DL1 containers**
###Code
event.dl1.tel[tel_id].image = combined
event.dl1.tel[tel_id].peakpos = peakpos
event.dl1.tel[tel_id]
###Output
_____no_output_____
###Markdown
**Say hello to our shower!**
###Code
from ctapipe.visualization import CameraDisplay
camera = event.inst.subarray.tel[tel_id].camera
plt.rcParams['figure.figsize'] = (20, 6)
plt.rcParams['font.size'] = 14
plt.subplot(1,3,1)
disp = CameraDisplay(camera,title="Low gain")
disp.add_colorbar()
disp.image = signals[1]
plt.subplot(1,3,2)
disp = CameraDisplay(camera,title = "High gain")
disp.add_colorbar()
disp.image = signals[0]
plt.subplot(1,3,3)
disp = CameraDisplay(camera,title = "Combined")
disp.add_colorbar()
disp.image = combined
###Output
_____no_output_____
###Markdown
Image cleaning
###Code
from ctapipe.image import hillas_parameters, tailcuts_clean
cleaning_method = tailcuts_clean
cleaning_parameters = {'boundary_thresh': 3,
'picture_thresh': 6,
'keep_isolated_pixels': False,
'min_number_picture_neighbors': 1
}
signal = combined
signal_pixels = cleaning_method(camera,signal,**cleaning_parameters)
###Output
_____no_output_____
###Markdown
We use the combined image.
###Code
image = signal
image[~signal_pixels] = 0
###Output
_____no_output_____
###Markdown
**Let's take a look at the clean and shiny image**
###Code
plt.rcParams['figure.figsize'] = (6, 6)
plt.rcParams['font.size'] = 14
disp = CameraDisplay(camera,title = "Clean image, high gain")
disp.image = image
disp.add_colorbar()
###Output
_____no_output_____
###Markdown
Hillas parameters. First compute them:
###Code
hillas = hillas_parameters(camera, image)
hillas.intensity
###Output
_____no_output_____
###Markdown
**And plot them over the image**
###Code
disp = CameraDisplay(camera,title = "Clean image")
disp.add_colorbar()
disp.image = image
disp.overlay_moments(hillas, color='cyan', linewidth=3)
###Output
_____no_output_____
###Markdown
**Also we can calculate the timing parameters**
###Code
from ctapipe.image import timing_parameters as time
timepars = time.timing_parameters(camera, image, peakpos[0], hillas)
timepars
timepars.slope,timepars.intercept
###Output
_____no_output_____
###Markdown
Reconstruction of disp
###Code
from lstchain.reco.utils import get_event_pos_in_camera, disp, disp_to_pos
tel = event.inst.subarray.tel[tel_id]
src_pos = get_event_pos_in_camera(event, tel)
d = disp(src_pos, hillas)
s = np.sign(src_pos[0] - hillas.x)
dx = src_pos[0] - hillas.x
dy = src_pos[1] - hillas.y
plt.figure(figsize=(12,12))
display = CameraDisplay(camera,title = "Disp reconstruction")
display.add_colorbar()
display.image = image
display.overlay_moments(hillas, color='cyan', linewidth=3, alpha=0.4)
plt.scatter(src_pos[0], src_pos[1], color='red', label='actual source position')
uu = s * d.value * np.cos(hillas.psi)
vv = s * d.value * np.sin(hillas.psi)
plt.quiver(hillas.x, hillas.y, uu, vv, units='xy', scale=1,
label= "reconstructed disp",
)
plt.quiver(hillas.x, hillas.y, dx.value, dy.value,
units='xy', scale=1,
color='red',
alpha=0.5,
label= "actual disp",
)
plt.legend();
###Output
_____no_output_____
###Markdown
**In a real use case, the _disp_ value (length of the vector) is reconstructed by training a random forest. The _reconstructed disp_ above assumes a perfect length reconstruction. The direction of the `disp` vector is given by the ellipse direction (`hillas.psi`)** Let's compare the difference between high and low gain images for all events in the simtelarray file:
###Code
pyhessio.close_file()
intensity_high = np.array([])
intensity_low = np.array([])
nevents = 0
for event in source:
if nevents%100==0:
print(nevents)
if nevents >= 500:
break
#if np.any(event.r0.tel[1].waveform > 4094):
# continue
geom = event.inst.subarray.tel[tel_id].camera
lst_calibration(event,tel_id)
for Nphe_high, Nphe_low in zip(event.dl1.tel[tel_id].image[0],event.dl1.tel[tel_id].image[1]):
if Nphe_high > 0 and Nphe_low > 0:
intensity_high = np.append(Nphe_high,intensity_high)
intensity_low = np.append(Nphe_low,intensity_low)
nevents=nevents+1
from scipy.stats import norm
plt.figure(figsize=(15,15))
#diff = (np.log10(intensity_low)-np.log10(intensity_high))*np.log(10)
pixels_df = pd.DataFrame(data ={'high_gain':intensity_high,
'low_gain':intensity_low,
'diff':np.log(intensity_low/intensity_high)})
pixels_df['Bin1'] = (pixels_df['low_gain'] >= 10) & (pixels_df['low_gain'] < 30)
pixels_df['Bin2'] = (pixels_df['low_gain'] >= 30) & (pixels_df['low_gain'] < 70)
pixels_df['Bin3'] = (pixels_df['low_gain'] >= 70) & (pixels_df['low_gain'] < 150)
pixels_df['Bin4'] = (pixels_df['low_gain'] >= 150)
plt.subplot(421)
h = plt.hist(pixels_df[pixels_df['Bin1']]['diff'],bins=50,label='10 to 30 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(422)
h2 = plt.hist(pixels_df[pixels_df['Bin1']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin1']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin1']]['diff'])
print(mu,sigma)
plt.subplot(423)
h = plt.hist(pixels_df[pixels_df['Bin2']]['diff'],bins=50,label='30 to 70 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(424)
h2 = plt.hist(pixels_df[pixels_df['Bin2']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin2']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin2']]['diff'])
print(mu,sigma)
plt.subplot(425)
h = plt.hist(pixels_df[pixels_df['Bin3']]['diff'],bins=50,label='70 to 150 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(426)
h2 = plt.hist(pixels_df[pixels_df['Bin3']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin3']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin3']]['diff'])
print(mu,sigma)
plt.subplot(427)
h = plt.hist(pixels_df[pixels_df['Bin4']]['diff'],bins=50,label='> 150 phe')
plt.xlabel(r'$\frac{\Delta Nphe}{Nphe_{high}}$')
plt.legend()
plt.subplot(428)
h2 = plt.hist(pixels_df[pixels_df['Bin4']]['high_gain'],histtype=u'step',label = "High gain",bins=25)
h3 = plt.hist(pixels_df[pixels_df['Bin4']]['low_gain'],histtype=u'step',label = "Low gain",bins=25)
plt.xlabel('Nphe')
plt.legend()
mu,sigma = norm.fit(pixels_df[pixels_df['Bin4']]['diff'])
print(mu,sigma)
###Output
0.003335214106012082 0.061168912254382875
-0.00015653264325069546 0.02898070091121532
0.05603075027676546 0.09083168135316513
1.1599672689070848 0.7336135113157438
###Markdown
Use Pyhessio to access extra MC data
###Code
pyhessio.close_file()
with pyhessio.open_hessio(input_filename) as ev:
for event_id in ev.move_to_next_event():
tels_with_data = ev.get_telescope_with_data_list()
if event_id==EvID:
print('run id {}:, event number: {}'.format(ev.get_run_number() , event_id))
print(' Triggered telescopes for this event: {}'.format(tels_with_data))
nphe = np.sum(ev.get_mc_number_photon_electron(1))
emin = ev.get_mc_E_range_Min()
emax = ev.get_mc_E_range_Max()
index = ev.get_spectral_index()
cone = ev.get_mc_viewcone_Max()
core_max = ev.get_mc_core_range_Y()
break
print('Number of Phe: ',nphe)
print('Hillas intensity',hillas.intensity)
###Output
Number of Phe: 2511
Hillas intensity 1948.4154619338804
###Markdown
Get the number of simulated events in the file (very slow)
###Code
#numevents = pyhessio.count_mc_generated_events(input_filename)
numevents = 1000000
print(numevents)
###Output
1000000
###Markdown
Calculate the spectral weighting for the event
###Code
emin,emax,index,cone,core_max
particle = utils.guess_type(input_filename)
K = numevents*(1+index)/(emax**(1+index)-emin**(1+index))
A = np.pi*core_max**2
Omega = 2*np.pi*(1-np.cos(cone))
if cone==0:
Omega=1
MeVtoGeV = 1e-3
if particle=="gamma":
K_w = 5.7e-16*MeVtoGeV
index_w = -2.48
E0 = 0.3e6*MeVtoGeV
if particle=="proton":
K_w = 9.6e-2
index_w = -2.7
E0 = 1
Simu_E0 = K*E0**index
N_ = Simu_E0*(emax**(index_w+1)-emin**(index_w+1))/(E0**index_w)/(index_w+1)
R = K_w*A*Omega*(emax**(index_w+1)-emin**(index_w+1))/(E0**index_w)/(index_w+1)
energy = event.mc.energy.value
w = ((energy)**(index_w-index))*R/N_
print('Spectral weight: ',w)
###Output
Spectral weight: 8.548736870275003e-09
###Markdown
We can compare the Hillas intensity with the MC photoelectron size of the events to check the effects of cleaning. **Set the number of events that we want to analyze and the name of the output h5 file (None for using all events in the file)**
###Code
dl0_to_dl1.max_events = None
output_filename = 'dl1_' + os.path.basename(input_filename).split('.')[0] + '.h5'
###Output
_____no_output_____
###Markdown
**Run lstchain to get dl1 events**
###Code
dl0_to_dl1.r0_to_dl1(input_filename,output_filename)
###Output
WARNING:ctapipe.io.hessioeventsource.HESSIOEventSource:Only one pyhessio event_source allowed at a time. Previous hessio file will be closed.
###Markdown
**Use Pyhessio to obtain more MC info, like the number of MC photoelectrons in the camera**
###Code
mc_phe = np.array([])
id = np.array([])
counter=0
#Get MC info with pyhessio
with pyhessio.open_hessio(input_filename) as ev:
for event_id in ev.move_to_next_event():
tels_with_data = ev.get_telescope_with_data_list()
if 1 in tels_with_data:
counter=counter+1
if counter==dl0_to_dl1.max_events:
break
nphe = np.sum(ev.get_mc_number_photon_electron(1))
emin = ev.get_mc_E_range_Min()
emax = ev.get_mc_E_range_Max()
index = ev.get_spectral_index()
cone = ev.get_mc_viewcone_Max()
core_max = ev.get_mc_core_range_Y()
mc_phe = np.append(mc_phe,nphe)
id = np.append(id,event_id)
###Output
_____no_output_____
###Markdown
**Use pandas to assign the info obtained with pyhessio to the corresponding previously processed dl1 events**
###Code
mc_df = pd.DataFrame()
mc_df['mc_phe'] = mc_phe
mc_df['event_id'] = id.astype(int)
df_dl1 = pd.read_hdf(output_filename)
df_dl1 = df_dl1.set_index('event_id')
mc_df = mc_df.set_index('event_id').reindex(df_dl1.index)
df_dl1['mc_phe'] = np.log10(mc_df['mc_phe'])
###Output
_____no_output_____
###Markdown
**Plot the hillas intensity vs mc photoelectron size**
###Code
plt.figure(figsize=(15,5))
plt.subplot(121)
h = plt.hist2d(df_dl1[df_dl1['mc_phe']>0]['intensity'],df_dl1[df_dl1['mc_phe']>0]['mc_phe'],bins=100)
plt.xlabel('$log_{10}$ Hillas intensity')
plt.ylabel('$log_{10}$ mc_phe')
plt.colorbar(h[3])
plt.subplot(122)
h = plt.hist2d(df_dl1[df_dl1['mc_phe']>0]['mc_energy'],df_dl1[df_dl1['mc_phe']>0]['mc_phe'],bins=100)
plt.xlabel('$log_{10}$ MC Energy')
plt.ylabel('$log_{10}$ mc_phe')
plt.colorbar(h[3])
###Output
_____no_output_____
###Markdown
Apply the spectral weighting for this set of events
###Code
df_dl1['w'] = ((10**df_dl1['mc_energy'])**(index_w-index))*R/N_
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.hist(df_dl1['mc_energy'],histtype=u'step',bins=100,weights = df_dl1['w'],density=1,label="-2.48 index")
plt.hist(df_dl1['mc_energy'],histtype=u'step',bins=100,density=1,label="-2 index")
plt.yscale('log')
plt.xlabel("$log_{10}E (GeV)$")
plt.legend()
plt.subplot(122)
plt.hist(df_dl1['mc_energy'],histtype=u'step',bins=100,weights = df_dl1['w'],label="weighted to Crab")
plt.legend()
plt.yscale('log')
plt.xlabel("$log_{10}E (GeV)$")
#plt.xscale('log')
###Output
_____no_output_____ |
mhcoin.ipynb | ###Markdown
###Code
!curl -o mhcoin.py https://skyportal.xyz/CACc-C35EkQPeV-05knIyZp8ufi-VXQiKaoF7Zdl5LWY0w
!pip3 install colorama
!python3 mhcoin.py r1ace1 6
###Output
Miner for user r1ace1 started with 6 thread(s).
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 0.0 kH/s 0 0 [0m
[36m#2 0.0 kH/s 0 0 [0m
[36m#3 0.0 kH/s 0 0 [0m
[36m#4 0.0 kH/s 0 0 [0m
[36m#5 0.0 kH/s 0 0 [0m
[36m#6 0.0 kH/s 0 0 [0m
[41mTOTAL 0.0 kH/s 0 0 [0m
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 140.08 kH/s 2 0 [0m
[36m#2 117.11 kH/s 2 0 [0m
[36m#3 113.37 kH/s 2 0 [0m
[36m#4 110.96 kH/s 3 0 [0m
[36m#5 118.2 kH/s 1 0 [0m
[36m#6 106.49 kH/s 1 0 [0m
[41mTOTAL 706.21 kH/s 11 0 [0m
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 133.46 kH/s 3 0 [0m
[36m#2 119.16 kH/s 3 0 [0m
[36m#3 117.7 kH/s 4 0 [0m
[36m#4 116.38 kH/s 5 0 [0m
[36m#5 117.3 kH/s 2 0 [0m
[36m#6 118.18 kH/s 3 0 [0m
[41mTOTAL 722.18 kH/s 20 0 [0m
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 127.77 kH/s 5 0 [0m
[36m#2 119.83 kH/s 4 0 [0m
[36m#3 118.54 kH/s 7 0 [0m
[36m#4 119.02 kH/s 7 0 [0m
[36m#5 119.41 kH/s 4 0 [0m
[36m#6 120.13 kH/s 5 0 [0m
[41mTOTAL 724.7 kH/s 32 0 [0m
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 121.74 kH/s 6 0 [0m
[36m#2 118.81 kH/s 7 0 [0m
[36m#3 117.27 kH/s 10 0 [0m
[36m#4 118.51 kH/s 9 0 [0m
[36m#5 118.95 kH/s 5 0 [0m
[36m#6 125.26 kH/s 7 0 [0m
[41mTOTAL 720.54 kH/s 44 0 [0m
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 121.33 kH/s 8 0 [0m
[36m#2 118.29 kH/s 9 0 [0m
[36m#3 119.43 kH/s 12 0 [0m
[36m#4 118.21 kH/s 11 0 [0m
[36m#5 118.55 kH/s 7 0 [0m
[36m#6 124.5 kH/s 9 0 [0m
[41mTOTAL 720.31 kH/s 56 0 [0m
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 121.69 kH/s 10 0 [0m
[36m#2 118.26 kH/s 10 0 [0m
[36m#3 117.21 kH/s 15 0 [0m
[36m#4 118.14 kH/s 13 0 [0m
[36m#5 118.61 kH/s 8 0 [0m
[36m#6 122.76 kH/s 11 0 [0m
[41mTOTAL 716.67 kH/s 67 0 [0m
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 121.71 kH/s 12 0 [0m
[36m#2 120.02 kH/s 12 0 [0m
[36m#3 117.68 kH/s 16 0 [0m
[36m#4 117.98 kH/s 16 0 [0m
[36m#5 118.32 kH/s 10 0 [0m
[36m#6 122.64 kH/s 13 0 [0m
[41mTOTAL 718.35 kH/s 79 0 [0m
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 121.39 kH/s 15 0 [0m
[36m#2 119.66 kH/s 14 0 [0m
[36m#3 118.08 kH/s 18 0 [0m
[36m#4 118.16 kH/s 20 0 [0m
[36m#5 119.63 kH/s 12 0 [0m
[36m#6 120.6 kH/s 16 0 [0m
[41mTOTAL 717.52 kH/s 95 0 [0m
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 121.88 kH/s 17 0 [0m
[36m#2 119.91 kH/s 15 0 [0m
[36m#3 117.48 kH/s 20 0 [0m
[36m#4 118.55 kH/s 21 0 [0m
[36m#5 119.66 kH/s 14 0 [0m
[36m#6 120.41 kH/s 18 0 [0m
[41mTOTAL 717.89 kH/s 105 0 [0m
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 123.21 kH/s 18 0 [0m
[36m#2 120.51 kH/s 17 0 [0m
[36m#3 117.95 kH/s 21 0 [0m
[36m#4 118.64 kH/s 25 0 [0m
[36m#5 118.34 kH/s 15 0 [0m
[36m#6 119.51 kH/s 19 0 [0m
[41mTOTAL 718.16 kH/s 115 0 [0m
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 122.52 kH/s 20 0 [0m
[36m#2 120.11 kH/s 19 0 [0m
[36m#3 118.2 kH/s 22 0 [0m
[36m#4 118.72 kH/s 26 0 [0m
[36m#5 118.67 kH/s 19 0 [0m
[36m#6 119.13 kH/s 20 0 [0m
[41mTOTAL 717.35 kH/s 126 0 [0m
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 121.5 kH/s 25 0 [0m
[36m#2 120.41 kH/s 20 0 [0m
[36m#3 118.56 kH/s 25 0 [0m
[36m#4 119.2 kH/s 29 0 [0m
[36m#5 119.1 kH/s 20 0 [0m
[36m#6 118.81 kH/s 23 0 [0m
[41mTOTAL 717.58 kH/s 142 0 [0m
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 123.56 kH/s 26 0 [0m
[36m#2 120.45 kH/s 21 0 [0m
[36m#3 118.92 kH/s 26 0 [0m
[36m#4 120.43 kH/s 30 0 [0m
[36m#5 120.99 kH/s 21 0 [0m
[36m#6 118.76 kH/s 24 0 [0m
[41mTOTAL 723.11 kH/s 148 0 [0m
[H[2J[93m[46mThread Hashrate Accepted Rejected [0m
[36m#1 123.56 kH/s 26 0 [0m
[36m#2 120.45 kH/s 21 0 [0m
[36m#3 118.92 kH/s 26 0 [0m
[36m#4 120.43 kH/s 30 0 [0m
[36m#5 120.99 kH/s 21 0 [0m
[36m#6 118.76 kH/s 24 0 [0m
[41mTOTAL 723.11 kH/s 148 0 [0m
|
Strings Data Structure.ipynb | ###Markdown
String is a collection of characters which are enclosed in single, double, triple-single or triple-double quotes. Examples: s='Hello', s="Hello", s='''Hello''', s="""Hello""". A string like s="tanuja@123" is a collection of letters, special symbols and digits.
###Code
s="tanuja@123"
print(s)
# Index diagram for the string "Hello": positive indices run 0..4, negative indices -5..-1
#   0   1   2   3   4
#   H   e   l   l   o
#  -5  -4  -3  -2  -1
s='Hello Tanuja, How are you?'
for i in s: #accessing of characters from string.it can be done directly by printing s or through indexing
print(i)
s='Hello Tanuja, How are you?'
for i in s:          # here i is a character, not an integer index...
    print(s[i])      # ...so this raises TypeError: string indices must be integers
s='Hello Tanuja, How are you?'
for i in range(0,len(s)):
print(s[i])
s='Hello'
print(s[H])    # NameError: H is not defined -- indices must be integers, e.g. s[0]
s='Hello'
print(s['H'])  # TypeError: string indices must be integers, not str
a=10
for i in a: #integer is not iterable
print(a)
a=[10,20,30]
for i in a:
print(i)
###Output
10
20
30
###Markdown
Slicing of strings
###Code
# Syntax:
# 1. s[start index : end index]
# 2. s[start index : end index : step value]
s="Hello gud evening"
print(s[7:10])
s="Hello gud"
print(s[-1:-3])
s="Hello gud"
print(s[-2:])
s="Hello tanuja"
print(s[0:12:2])
s="Hello tanuja"
print(s[::-1]) #to print the string in reverse order
s="Hello"
print(s[:])
s="Hello Tanuja"
print(s[2:])
s="Hello Tanuja"
print(s[:5])
# Program to print a string in reverse order without using slicing
s="Hello Tanuja"
rs=""
for ch in s:
    rs=ch+rs      # prepend each character to build the reversed string
print(rs)
###Output
_____no_output_____
###Markdown
Removing spaces from a string. If unwanted spaces have crept in by mistake, we can remove them using the strip() function. To remove only right-side spaces use rstrip(), and only left-side spaces use lstrip(). These functions cannot remove spaces in the middle of a string or username. strip() --> removes spaces from both sides of the string. rstrip() --> removes spaces from the right side of the string. lstrip() --> removes spaces from the left side of the string.
###Code
s=input("Enter string:")
print(s)
s1=s.strip()
print(s1)
s=input("Enter string:")
print(s)
if s=="Tanujachava":
print("Spaces removed")
else:
print("Please remove the spaces")
s=input("Enter string:")
print(s)
if s=="Tanujachava":
print("Spaces removed")
else:
print("Please remove the spaces")
s1=s.strip()
if s1=="Tanujachava":
print("Spaces removed by strip fntn")
s=input("Enter string:")
print(s)
if s=="Tanujachava":
print("Spaces removed")
else:
print("Please remove the spaces")
s1=s.rstrip()
if s1=="Tanujachava":
print("Spaces removed by strip fntn")
s=input("Enter string:")
print(s)
if s=="Tanujachava":
print("Spaces removed")
else:
print("Please remove the spaces")
s1=s.lstrip()
if s1=="Tanujachava":
print("Spaces removed by strip fntn")
###Output
Enter string:Tanujachava
Tanujachava
Please remove the spaces
###Markdown
Finding a substring. Forward direction: 1. find() 2. index(). Backward direction: 1. rfind() 2. rindex()
###Code
# Syntax:
# s.find(substring)
###Output
_____no_output_____
###Markdown
Returns the index of the first occurrence of the given substring in the main string. If not found, it returns -1.
###Code
s="Hello Good Morning, Hello Good Evening"
print(s.find("Hello"))
s="Hello Good Morning, hello Good Evening"
print(s.find("hello"))
s="Hello hello Good Morning, hello Good Evening"
print(s.find("hello"))
s="Hello hello Good Morning, hello Good Evening"
print(s.find("hello"))
print(s.find("Hello"))
print(s.find("Good"))
print(s.find('e'))
print(s.find("tanuja"))
###Output
6
0
12
1
-1
###Markdown
We can also set boundaries so that the substring is searched for only within a given range of the string: s.find(substring, startindex, endindex)
###Code
s="Hello Hai How are you"
print(s.find('H',2,5))
s="Hello Hai How are you"
print(s.find('Hello',2,15))
s="Hello Hai How are you"
print(s.find('Ha',2,15))
###Output
6
###Markdown
2. index(): This is the same as the find() method, except that if the substring is not found it raises a ValueError.
###Code
s="Hello Tanuja Chava"
print(s.index("Tanuja"))
s="Hello Tanuja Chava"
print(s.index("Venky"))
###Output
_____no_output_____
###Markdown
We can handle the value error using exception handling.
###Code
s="Hello Tanuja Chava"
try:
print(s.index("Venky"))
except ValueError:
print("Substring is not found")
s="Hello hello Good Morning, hello Good Evening"
print(s.rfind("hello"))
print(s.rfind("Hello"))
print(s.rfind("Good"))
print(s.rfind('e'))
print(s.rfind("tanuja"))
###Output
26
0
32
39
-1
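###Markdown
rindex() is listed above but not demonstrated. A short sketch of its behaviour: it works like index(), searching from the right, and unlike rfind() it raises a ValueError when the substring is missing.
###Code
s="Hello hello Good Morning, hello Good Evening"
print(s.rindex("hello"))  # index of the last occurrence, same value as rfind()
try:
    print(s.rindex("tanuja"))
except ValueError:
    print("Substring is not found")  # rfind() would return -1 instead
###Output
_____no_output_____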
###Markdown
Counting substring in main string
###Code
s="Hello good morning, Hello, Hello How r you,you"
print(s.count("Hello"))
s="Hello good morning, Hello, Hello How r you,you"
print(s.count('o'))
s="Hello good morning, Hello, Hello How r you,you"
print(s.count("hello"))
s="Hello good morning, Hello, Hello How r you,you"
print(s.count("Hello",10,40))
###Output
2
###Markdown
Replacing a string with another string
###Code
s="Hello Good Morning , hello , Hello, How are you"
s1=s.replace("Hello","Tanuja")
print(s)
print(s1)
print(id(s))
print(id(s1))
s="Hello Good Morning , @$%%^&%^&^ , Hello, How are you"
s1=s.replace("@$%%^&%^&^","Tanuja")
print(s)
print(s1)
print(id(s))
print(id(s1))
###Output
Hello Good Morning , @$%%^&%^&^ , Hello, How are you
Hello Good Morning , Tanuja , Hello, How are you
1914639959520
1914639959632
###Markdown
String is an immutable object: once we create a string object we cannot modify its content. Even when we "modify" it, instead of changing the content of the original object, Python creates a new object. Splitting of strings:
###Code
s="Hello Tanuja Chava"
l=s.split()
print(l)
date="27-04-2021"
l1=date.split("-")
print(l1)
date="27/04/2021"
l1=date.split("/")
print(l1)
date="27-04-2021"
l1=date.split("/")
print(l1)
date="27-04-2021"
l1=date.split('2')
print(l1)
###Output
['', '7-04-', '0', '1']
###Markdown
Joining of strings:
###Code
# Syntax:
# s = separator.join(group of strings)
l=["Tanuja","Chava"]
s='_'.join(l)
print(s)
l=["Tanuja","Chava"]
s=':'.join(l)
print(s)
l=["Tanuja","Chava"]
s=''.join(l)
print(s)
l=["Tanuja","Chava"]
s=' '.join(l)
print(s)
l=["Tanuja","Chava"]
s='@'.join(l)
print(s)
s='_'.join("Hello","Tanuja")
print(s)
###Output
_____no_output_____ |
courses/machine_learning/deepdive2/structured/labs/4b_keras_dnn_babyweight.ipynb | ###Markdown
LAB 4b: Create Keras DNN model.**Learning Objectives**1. Set CSV Columns, label column, and column defaults1. Make dataset of features and label from CSV files1. Create input layers for raw features1. Create feature columns for inputs1. Create DNN dense hidden layers and output layer1. Create custom evaluation metric1. Build DNN model tying all of the pieces together1. Train and evaluate Introduction In this notebook, we'll be using Keras to create a DNN model to predict the weight of a baby before it is born. We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create input layers for the raw features. Next, we'll set up feature columns for the model inputs and build a deep neural network in Keras. We'll create a custom evaluation metric and build our DNN model. Finally, we'll train and evaluate our model. Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/4b_keras_dnn_babyweight.ipynb). Load necessary libraries
###Code
import datetime
import os
import shutil
import matplotlib.pyplot as plt
import tensorflow as tf
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Verify CSV files exist. In the seventh lab of this series, [4a_sample_babyweight](../solutions/4a_sample_babyweight.ipynb), we sampled our train, eval, and test CSV files from BigQuery. Verify that they exist; otherwise go back to that lab and create them.
###Code
%%bash
ls *.csv
%%bash
head -5 *.csv
###Output
_____no_output_____
###Markdown
Create Keras model Lab Task 1: Set CSV Columns, label column, and column defaults.Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.* `CSV_COLUMNS` are going to be our header names of our columns. Make sure that they are in the same order as in the CSV files* `LABEL_COLUMN` is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.* `DEFAULTS` is a list with the same length as `CSV_COLUMNS`, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column.
###Code
# Determine CSV, label, and key columns
# TODO: Create list of string column headers, make sure order matches.
CSV_COLUMNS = [""]
# TODO: Add string name for label column
LABEL_COLUMN = ""
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = []
###Output
_____no_output_____
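###Markdown
One possible completion of the cell above (a sketch, not the official solution). The column order below is an assumption about how the CSVs were written in the earlier sampling lab, with the label first and is_male/plurality stored as strings.
###Code
# Sketch: assumed column order of the babyweight CSVs from the earlier lab.
CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks"]
LABEL_COLUMN = "weight_pounds"
# One default per column; floats for numerics, strings for is_male/plurality.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
###Output
_____no_output_____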
###Markdown
Lab Task 2: Make dataset of features and label from CSV files. Next, we will write an input_fn to read the data. Since we are reading from CSV files, we can save ourselves from reinventing the wheel and use `tf.data.experimental.make_csv_dataset`. This will create a CSV dataset object. However, we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping the label column off of our dictionary of feature tensors.
###Code
def features_and_labels(row_data):
"""Splits features and labels from feature dictionary.
Args:
row_data: Dictionary of CSV column names and tensor values.
Returns:
Dictionary of feature tensors and label tensor.
"""
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
"""Loads dataset using the tf.data API from CSV files.
Args:
pattern: str, file pattern to glob into list of files.
batch_size: int, the number of examples per batch.
mode: tf.estimator.ModeKeys to determine if training or evaluating.
Returns:
`Dataset` object.
"""
# TODO: Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset()
# TODO: Map dataset to features and label
dataset = dataset.map() # features, label
# Shuffle and repeat for training
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
# Take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(buffer_size=1)
return dataset
###Output
_____no_output_____
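###Markdown
A sketch of how the two TODOs in load_dataset could be filled in, assuming the CSV settings sketched above:
###Code
def load_dataset_sketch(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
    """Sketch of one possible load_dataset body (not the official solution)."""
    # Build a batched CSV dataset using the assumed column names and defaults.
    dataset = tf.data.experimental.make_csv_dataset(
        file_pattern=pattern,
        batch_size=batch_size,
        column_names=CSV_COLUMNS,
        column_defaults=DEFAULTS)
    # Split each row dictionary into (features, label).
    dataset = dataset.map(map_func=features_and_labels)
    if mode == tf.estimator.ModeKeys.TRAIN:
        dataset = dataset.shuffle(buffer_size=1000).repeat()
    return dataset.prefetch(buffer_size=1)
###Output
_____no_output_____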
###Markdown
Lab Task 3: Create input layers for raw features.We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers [(tf.Keras.layers.Input)](https://www.tensorflow.org/api_docs/python/tf/keras/Input) by defining:* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.* dtype: The data type expected by the input, as a string (float32, float64, int32...)
###Code
def create_input_layers():
"""Creates dictionary of input layers for each feature.
Returns:
Dictionary of `tf.Keras.layers.Input` layers for each feature.
"""
# TODO: Create dictionary of tf.keras.layers.Input for each raw feature
inputs = {}
return inputs
###Output
_____no_output_____
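###Markdown
A sketch of create_input_layers, assuming the feature names used above: scalar float inputs for the numeric features and string inputs for the categorical ones.
###Code
def create_input_layers_sketch():
    """Sketch: one Keras Input per raw feature."""
    inputs = {
        colname: tf.keras.layers.Input(name=colname, shape=(), dtype="float32")
        for colname in ["mother_age", "gestation_weeks"]}
    inputs.update({
        colname: tf.keras.layers.Input(name=colname, shape=(), dtype="string")
        for colname in ["is_male", "plurality"]})
    return inputs
###Output
_____no_output_____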
###Markdown
Lab Task 4: Create feature columns for inputs.Next, define the feature columns. `mother_age` and `gestation_weeks` should be numeric. The others, `is_male` and `plurality`, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
###Code
def create_feature_columns():
"""Creates dictionary of feature columns from inputs.
Returns:
Dictionary of feature columns.
"""
# TODO: Create feature columns for numeric features
feature_columns = {}
# TODO: Add feature columns for categorical features
return feature_columns
###Output
_____no_output_____
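###Markdown
A sketch of create_feature_columns. The categorical vocabularies below are assumptions about how is_male and plurality were encoded in the earlier data-preparation labs.
###Code
def create_feature_columns_sketch():
    """Sketch: numeric columns plus one-hot (indicator) categorical columns."""
    feature_columns = {
        colname: tf.feature_column.numeric_column(key=colname)
        for colname in ["mother_age", "gestation_weeks"]}
    feature_columns["is_male"] = tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(
            key="is_male", vocabulary_list=["True", "False", "Unknown"]))
    feature_columns["plurality"] = tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(
            key="plurality",
            vocabulary_list=["Single(1)", "Twins(2)", "Triplets(3)",
                             "Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"]))
    return feature_columns
###Output
_____no_output_____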
###Markdown
Lab Task 5: Create DNN dense hidden layers and output layer.So we've figured out how to get our inputs ready for machine learning but now we need to connect them to our desired output. Our model architecture is what links the two together. Let's create some hidden dense layers beginning with our inputs and end with a dense output layer. This is regression so make sure the output layer activation is correct and that the shape is right.
###Code
def get_model_outputs(inputs):
"""Creates model architecture and returns outputs.
Args:
inputs: Dense tensor used as inputs to model.
Returns:
Dense tensor output from the model.
"""
# TODO: Create two hidden layers of [64, 32] just in like the BQML DNN
# TODO: Create final output layer
return output
###Output
_____no_output_____
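###Markdown
A sketch of get_model_outputs: two ReLU hidden layers of 64 and 32 units, and a single linear unit for the regression output.
###Code
def get_model_outputs_sketch(inputs):
    """Sketch: [64, 32] hidden layers and a 1-unit linear output."""
    h1 = tf.keras.layers.Dense(units=64, activation="relu", name="h1")(inputs)
    h2 = tf.keras.layers.Dense(units=32, activation="relu", name="h2")(h1)
    output = tf.keras.layers.Dense(units=1, activation="linear", name="weight")(h2)
    return output
###Output
_____no_output_____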
###Markdown
Lab Task 6: Create custom evaluation metric.We want to make sure that we have some useful way to measure model performance for us. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset, however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels.
###Code
def rmse(y_true, y_pred):
"""Calculates RMSE evaluation metric.
Args:
y_true: tensor, true labels.
y_pred: tensor, predicted labels.
Returns:
Tensor with value of RMSE between true and predicted labels.
"""
# TODO: Calculate RMSE from true and predicted labels
pass
###Output
_____no_output_____
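###Markdown
A sketch of the custom metric: RMSE is just the square root of the mean squared difference between predictions and labels. It can then be passed to the compile call in the next task.
###Code
def rmse_sketch(y_true, y_pred):
    """Sketch: root mean squared error between true and predicted labels."""
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
###Output
_____no_output_____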
###Markdown
Lab Task 7: Build DNN model tying all of the pieces together.Excellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is a simple feedforward model with no branching, side inputs, etc. so we could have used Keras' Sequential Model API but just for fun we're going to use Keras' Functional Model API. Here we will build the model using [tf.keras.models.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics.
###Code
# Build a simple Keras DNN using its Functional API
def build_dnn_model():
"""Builds simple DNN using Keras Functional API.
Returns:
`tf.keras.models.Model` object.
"""
# Create input layer
inputs = create_input_layers()
# Create feature columns
feature_columns = create_feature_columns()
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(
feature_columns=feature_columns.values())(inputs)
# Get output of model given inputs
output = get_model_outputs(dnn_inputs)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
# TODO: Add custom eval metrics to list
model.compile(optimizer="adam", loss="mse", metrics=["mse"])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
###Output
_____no_output_____
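###Markdown
The only blank in the compile call above is the metrics list; one possible choice is to report the custom RMSE alongside MSE (this assumes the rmse function has been completed, e.g. as sketched earlier):
###Code
# Sketch: recompile the built model with the custom metric included.
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
###Output
_____no_output_____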
###Markdown
We can visualize the DNN using the Keras plot_model utility.
###Code
tf.keras.utils.plot_model(
model=model, to_file="dnn_model.png", show_shapes=False, rankdir="LR")
###Output
_____no_output_____
###Markdown
Run and evaluate model Lab Task 8: Train and evaluate.We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the callback to TensorBoard.
###Code
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around
NUM_EVALS = 5 # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 10000
# TODO: Load training dataset
trainds = load_dataset()
# TODO: Load evaluation dataset
evalds = load_dataset().take(count=NUM_EVAL_EXAMPLES // 1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
logdir = os.path.join(
"logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=logdir, histogram_freq=1)
# TODO: Fit model on training dataset and evaluate every so often
history = model.fit()
###Output
_____no_output_____
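###Markdown
One possible way to fill in the training cell above, assuming load_dataset has been completed (e.g. as sketched earlier) and that the local file patterns from the verification step are used:
###Code
# Sketch: load train/eval datasets and launch training with the TensorBoard callback.
trainds = load_dataset(
    pattern="train*.csv",
    batch_size=TRAIN_BATCH_SIZE,
    mode=tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset(
    pattern="eval*.csv",
    batch_size=1000,
    mode=tf.estimator.ModeKeys.EVAL).take(count=NUM_EVAL_EXAMPLES // 1000)
history = model.fit(
    trainds,
    validation_data=evalds,
    epochs=NUM_EVALS,
    steps_per_epoch=steps_per_epoch,
    callbacks=[tensorboard_callback])
###Output
_____no_output_____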
###Markdown
Visualize loss curve
###Code
# Plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(["loss", "rmse"]):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history["val_{}".format(key)])
plt.title("model {}".format(key))
plt.ylabel(key)
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left");
###Output
_____no_output_____
###Markdown
Save the model
###Code
OUTPUT_DIR = "babyweight_trained"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(
OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
###Output
_____no_output_____
###Markdown
LAB 4b: Create Keras DNN model.**Learning Objectives**1. Set CSV Columns, label column, and column defaults1. Make dataset of features and label from CSV files1. Create input layers for raw features1. Create feature columns for inputs1. Create DNN dense hidden layers and output layer1. Create custom evaluation metric1. Build DNN model tying all of the pieces together1. Train and evaluate Introduction In this notebook, we'll be using Keras to create a DNN model to predict the weight of a baby before it is born. We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create input layers for the raw features. Next, we'll set up feature columns for the model inputs and build a deep neural network in Keras. We'll create a custom evaluation metric and build our DNN model. Finally, we'll train and evaluate our model. Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/4b_keras_dnn_babyweight.ipynb). Load necessary libraries
###Code
import datetime
import os
import shutil
import matplotlib.pyplot as plt
import tensorflow as tf
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Set your bucket:
###Code
BUCKET = ""  # TODO: Replace with your bucket name
os.environ['BUCKET'] = BUCKET
###Output
_____no_output_____
###Markdown
Verify CSV files exist

In the seventh lab of this series, [1b_prepare_data_babyweight](../solutions/1b_prepare_data_babyweight.ipynb), we sampled our train, eval, and test CSV files from BigQuery. Verify that they exist; otherwise go back to that lab and create them.
###Code
TRAIN_DATA_PATH = "gs://{bucket}/babyweight/data/train*.csv".format(bucket=BUCKET)
EVAL_DATA_PATH = "gs://{bucket}/babyweight/data/eval*.csv".format(bucket=BUCKET)
!gsutil ls $TRAIN_DATA_PATH
!gsutil ls $EVAL_DATA_PATH
###Output
_____no_output_____
###Markdown
Create Keras model

Lab Task 1: Set CSV Columns, label column, and column defaults.

Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.
* `CSV_COLUMNS` is the list of our column header names. Make sure they are in the same order as in the CSV files.
* `LABEL_COLUMN` is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.
* `DEFAULTS` is a list with the same length as `CSV_COLUMNS`, i.e. there is a default for each column in our CSVs. Each element is itself a list with the default value for that CSV column.
###Code
# Determine CSV, label, and key columns
# TODO: Create list of string column headers, make sure order matches.
CSV_COLUMNS = [""]
# TODO: Add string name for label column
LABEL_COLUMN = ""
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = []
###Output
_____no_output_____
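###Markdown
One possible way to fill in the cell above (a sketch; the column order and defaults below mirror the solved runs that appear later in this document and must match your CSV files):
###Code
# Sketch of Lab Task 1: column names in CSV order, label name, and defaults.
CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age",
               "plurality", "gestation_weeks"]
LABEL_COLUMN = "weight_pounds"
# One default per column; is_male and plurality are treated as strings.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
###Output
_____no_output_____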
###Markdown
Lab Task 2: Make dataset of features and label from CSV files.

Next, we will write an input_fn to read the data. Since we are reading from CSV files, we can save ourselves from reinventing the wheel and use `tf.data.experimental.make_csv_dataset`. This will create a CSV dataset object. However, we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.
###Code
def features_and_labels(row_data):
"""Splits features and labels from feature dictionary.
Args:
row_data: Dictionary of CSV column names and tensor values.
Returns:
Dictionary of feature tensors and label tensor.
"""
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
"""Loads dataset using the tf.data API from CSV files.
Args:
pattern: str, file pattern to glob into list of files.
batch_size: int, the number of examples per batch.
mode: tf.estimator.ModeKeys to determine if training or evaluating.
Returns:
`Dataset` object.
"""
# TODO: Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset()
# TODO: Map dataset to features and label
dataset = dataset.map() # features, label
# Shuffle and repeat for training
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
# Take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(buffer_size=1)
return dataset
###Output
_____no_output_____
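###Markdown
A sketch of how the two TODOs above could be completed (it assumes the `CSV_COLUMNS` and `DEFAULTS` from Lab Task 1 and mirrors the solved runs later in this document):
###Code
# Sketch of Lab Task 2: build the CSV dataset, then split features and label.
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
    dataset = tf.data.experimental.make_csv_dataset(
        file_pattern=pattern,
        batch_size=batch_size,
        column_names=CSV_COLUMNS,
        column_defaults=DEFAULTS)
    dataset = dataset.map(map_func=features_and_labels)
    if mode == tf.estimator.ModeKeys.TRAIN:
        dataset = dataset.shuffle(buffer_size=1000).repeat()
    return dataset.prefetch(buffer_size=1)
###Output
_____no_output_____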
###Markdown
Lab Task 3: Create input layers for raw features.

We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers [(tf.keras.layers.Input)](https://www.tensorflow.org/api_docs/python/tf/keras/Input) by defining:
* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
* dtype: The data type expected by the input, as a string (float32, float64, int32...)
###Code
def create_input_layers():
"""Creates dictionary of input layers for each feature.
Returns:
Dictionary of `tf.Keras.layers.Input` layers for each feature.
"""
# TODO: Create dictionary of tf.keras.layers.Input for each raw feature
inputs = {}
return inputs
###Output
_____no_output_____
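###Markdown
One possible completion of the dictionary above (a sketch matching the solved run at the end of this document): float32 inputs for the numeric features, string inputs for the categorical ones.
###Code
# Sketch of Lab Task 3: one scalar Input per raw feature.
def create_input_layers():
    inputs = {
        colname: tf.keras.layers.Input(name=colname, shape=(), dtype="float32")
        for colname in ["mother_age", "gestation_weeks"]}
    inputs.update({
        colname: tf.keras.layers.Input(name=colname, shape=(), dtype="string")
        for colname in ["is_male", "plurality"]})
    return inputs
###Output
_____no_output_____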
###Markdown
Lab Task 4: Create feature columns for inputs.

Next, define the feature columns. `mother_age` and `gestation_weeks` should be numeric. The others, `is_male` and `plurality`, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
###Code
def create_feature_columns():
"""Creates dictionary of feature columns from inputs.
Returns:
Dictionary of feature columns.
"""
# TODO: Create feature columns for numeric features
feature_columns = {}
# TODO: Add feature columns for categorical features
return feature_columns
###Output
_____no_output_____
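###Markdown
A sketch of the feature columns (numeric columns for the two numeric features, indicator-wrapped vocabulary columns for the categorical ones, as in the solved run at the end of this document):
###Code
# Sketch of Lab Task 4: numeric plus one-hot (indicator) categorical columns.
def categorical_fc(name, values):
    cat_column = tf.feature_column.categorical_column_with_vocabulary_list(
        key=name, vocabulary_list=values)
    return tf.feature_column.indicator_column(categorical_column=cat_column)

def create_feature_columns():
    feature_columns = {
        colname: tf.feature_column.numeric_column(key=colname)
        for colname in ["mother_age", "gestation_weeks"]}
    feature_columns["is_male"] = categorical_fc(
        "is_male", ["True", "False", "Unknown"])
    feature_columns["plurality"] = categorical_fc(
        "plurality", ["Single(1)", "Twins(2)", "Triplets(3)",
                      "Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"])
    return feature_columns
###Output
_____no_output_____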
###Markdown
Lab Task 5: Create DNN dense hidden layers and output layer.

So we've figured out how to get our inputs ready for machine learning, but now we need to connect them to our desired output. Our model architecture is what links the two together. Let's create some hidden dense layers beginning with our inputs and end with a dense output layer. This is regression, so make sure the output layer activation is correct and that the shape is right.
###Code
def get_model_outputs(inputs):
"""Creates model architecture and returns outputs.
Args:
inputs: Dense tensor used as inputs to model.
Returns:
Dense tensor output from the model.
"""
# TODO: Create two hidden layers of [64, 32] just like the BQML DNN
# TODO: Create final output layer
return output
###Output
_____no_output_____
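###Markdown
A sketch of the architecture described above (two ReLU hidden layers of 64 and 32 units and a single linear output unit, as in the solved runs later in this document):
###Code
# Sketch of Lab Task 5: hidden layers [64, 32] and a linear regression head.
def get_model_outputs(inputs):
    h1 = tf.keras.layers.Dense(64, activation="relu", name="h1")(inputs)
    h2 = tf.keras.layers.Dense(32, activation="relu", name="h2")(h1)
    output = tf.keras.layers.Dense(1, activation="linear", name="babyweight")(h2)
    return output
###Output
_____no_output_____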
###Markdown
Lab Task 6: Create custom evaluation metric.

We want to make sure that we have some useful way to measure model performance. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset; however, this does not exist as a standard evaluation metric, so we'll have to create our own using the true and predicted labels.
###Code
def rmse(y_true, y_pred):
"""Calculates RMSE evaluation metric.
Args:
y_true: tensor, true labels.
y_pred: tensor, predicted labels.
Returns:
Tensor with value of RMSE between true and predicted labels.
"""
# TODO: Calculate RMSE from true and predicted labels
pass
###Output
_____no_output_____
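###Markdown
A sketch of the custom metric (the square root of the mean squared difference between predictions and labels, as in the solved runs later in this document):
###Code
# Sketch of Lab Task 6: root-mean-squared-error metric.
def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
###Output
_____no_output_____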
###Markdown
Lab Task 7: Build DNN model tying all of the pieces together.

Excellent! We've assembled all of the pieces; now we just need to tie them all together into a Keras Model. This is a simple feedforward model with no branching, side inputs, etc., so we could have used Keras' Sequential Model API, but just for fun we're going to use Keras' Functional Model API. Here we will build the model using [tf.keras.models.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model), giving our inputs and outputs, and then compile our model with an optimizer, a loss function, and evaluation metrics.
###Code
# Build a simple Keras DNN using its Functional API
def build_dnn_model():
"""Builds simple DNN using Keras Functional API.
Returns:
`tf.keras.models.Model` object.
"""
# Create input layer
inputs = create_input_layers()
# Create feature columns
feature_columns = create_feature_columns()
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(
feature_columns=feature_columns.values())(inputs)
# Get output of model given inputs
output = get_model_outputs(dnn_inputs)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
# TODO: Add custom eval metrics to list
model.compile(optimizer="adam", loss="mse", metrics=["mse"])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
###Output
_____no_output_____
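###Markdown
For the compile-step TODO above, the custom metric only needs to be added to the metrics list (a sketch, assuming the `rmse` function from Lab Task 6 has been implemented; the first solved run in this document compiles exactly this way):
###Code
# Sketch of the Lab Task 7 TODO: include the custom rmse metric when compiling.
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
###Output
_____no_output_____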
###Markdown
We can visualize the DNN using the Keras plot_model utility.
###Code
tf.keras.utils.plot_model(
model=model, to_file="dnn_model.png", show_shapes=False, rankdir="LR")
###Output
_____no_output_____
###Markdown
Run and evaluate model

Lab Task 8: Train and evaluate.

We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training its parameters and periodically running an evaluation to track how well we are doing on held-out data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the TensorBoard callback.
###Code
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around
NUM_EVALS = 5 # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 10000
# TODO: Load training dataset
trainds = load_dataset()
# TODO: Load evaluation dataset
evalds = load_dataset().take(count=NUM_EVAL_EXAMPLES // 1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
logdir = os.path.join(
"logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=logdir, histogram_freq=1)
# TODO: Fit model on training dataset and evaluate every so often
history = model.fit()
###Output
_____no_output_____
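###Markdown
A sketch of the three TODOs above, mirroring the solved runs later in this document: load the training split in TRAIN mode, load a capped evaluation split in EVAL mode, then fit with the TensorBoard callback.
###Code
# Sketch of Lab Task 8 (uses the constants defined in the cell above).
trainds = load_dataset(
    pattern=TRAIN_DATA_PATH,
    batch_size=TRAIN_BATCH_SIZE,
    mode=tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset(
    pattern=EVAL_DATA_PATH,
    batch_size=1000,
    mode=tf.estimator.ModeKeys.EVAL).take(count=NUM_EVAL_EXAMPLES // 1000)
history = model.fit(
    trainds,
    validation_data=evalds,
    epochs=NUM_EVALS,
    steps_per_epoch=steps_per_epoch,
    callbacks=[tensorboard_callback])
###Output
_____no_output_____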
###Markdown
Visualize loss curve
###Code
# Plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(["loss", "rmse"]):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history["val_{}".format(key)])
plt.title("model {}".format(key))
plt.ylabel(key)
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left");
###Output
_____no_output_____
###Markdown
Save the model
###Code
OUTPUT_DIR = "babyweight_trained"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(
OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
###Output
_____no_output_____
###Markdown
LAB 4b: Create Keras DNN model.

**Learning Objectives**
1. Set CSV Columns, label column, and column defaults
1. Make dataset of features and label from CSV files
1. Create input layers for raw features
1. Create feature columns for inputs
1. Create DNN dense hidden layers and output layer
1. Create custom evaluation metric
1. Build DNN model tying all of the pieces together
1. Train and evaluate

Introduction

In this notebook, we'll be using Keras to create a DNN model to predict the weight of a baby before it is born. We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create input layers for the raw features. Next, we'll set up feature columns for the model inputs and build a deep neural network in Keras. We'll create a custom evaluation metric and build our DNN model. Finally, we'll train and evaluate our model.

Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/4b_keras_dnn_babyweight.ipynb).

Load necessary libraries
###Code
import datetime
import os
import shutil
import matplotlib.pyplot as plt
import tensorflow as tf
print(tf.__version__)
###Output
2.3.0
###Markdown
Set your bucket:
###Code
BUCKET = "qwiklabs-gcp-04-568443837277"
os.environ['BUCKET'] = BUCKET
###Output
_____no_output_____
###Markdown
Verify CSV files exist

In the seventh lab of this series, [1b_prepare_data_babyweight](../solutions/1b_prepare_data_babyweight.ipynb), we sampled our train, eval, and test CSV files from BigQuery. Verify that they exist; otherwise go back to that lab and create them.
###Code
TRAIN_DATA_PATH = "gs://{bucket}/babyweight/data/train*.csv".format(bucket=BUCKET)
EVAL_DATA_PATH = "gs://{bucket}/babyweight/data/eval*.csv".format(bucket=BUCKET)
!gsutil ls $TRAIN_DATA_PATH
!gsutil ls $EVAL_DATA_PATH
###Output
gs://qwiklabs-gcp-04-568443837277/babyweight/data/eval000000000000.csv
gs://qwiklabs-gcp-04-568443837277/babyweight/data/eval000000000001.csv
###Markdown
Create Keras model

Lab Task 1: Set CSV Columns, label column, and column defaults.

Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.
* `CSV_COLUMNS` is the list of our column header names. Make sure they are in the same order as in the CSV files.
* `LABEL_COLUMN` is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.
* `DEFAULTS` is a list with the same length as `CSV_COLUMNS`, i.e. there is a default for each column in our CSVs. Each element is itself a list with the default value for that CSV column.
###Code
# Determine CSV, label, and key columns
# TODO: Create list of string column headers, make sure order matches.
CSV_COLUMNS = ["weight_pounds","is_male","mother_age","plurality","gestation_weeks"]
# TODO: Add string name for label column
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0]]
###Output
_____no_output_____
###Markdown
Lab Task 2: Make dataset of features and label from CSV files.

Next, we will write an input_fn to read the data. Since we are reading from CSV files, we can save ourselves from reinventing the wheel and use `tf.data.experimental.make_csv_dataset`. This will create a CSV dataset object. However, we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.
###Code
def features_and_labels(row_data):
"""Splits features and labels from feature dictionary.
Args:
row_data: Dictionary of CSV column names and tensor values.
Returns:
Dictionary of feature tensors and label tensor.
"""
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
"""Loads dataset using the tf.data API from CSV files.
Args:
pattern: str, file pattern to glob into list of files.
batch_size: int, the number of examples per batch.
mode: tf.estimator.ModeKeys to determine if training or evaluating.
Returns:
`Dataset` object.
"""
# TODO: Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
# TODO: Map dataset to features and label
dataset = dataset.map(features_and_labels) # features, label
# Shuffle and repeat for training
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
# Take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(buffer_size=1)
return dataset
###Output
_____no_output_____
###Markdown
Lab Task 3: Create input layers for raw features.

We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers [(tf.keras.layers.Input)](https://www.tensorflow.org/api_docs/python/tf/keras/Input) by defining:
* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
* dtype: The data type expected by the input, as a string (float32, float64, int32...)
###Code
def create_input_layers():
"""Creates dictionary of input layers for each feature.
Returns:
Dictionary of `tf.Keras.layers.Input` layers for each feature.
"""
# TODO: Create dictionary of tf.keras.layers.Input for each raw feature
inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in ['mother_age', 'gestation_weeks']
}
inputs.update({
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in ['is_male', 'plurality']
})
return inputs
###Output
_____no_output_____
###Markdown
Lab Task 4: Create feature columns for inputs.

Next, define the feature columns. `mother_age` and `gestation_weeks` should be numeric. The others, `is_male` and `plurality`, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
###Code
def create_feature_columns():
"""Creates dictionary of feature columns from inputs.
Returns:
Dictionary of feature columns.
"""
# TODO: Create feature columns for numeric features
feature_columns = {
colname : tf.feature_column.numeric_column(colname)
for colname in ['mother_age', 'gestation_weeks']
}
if False:
# Disabled (this `if False:` branch never runs): the categorical columns are
# left out so the exported model stays servable until TF Serving supports TF 2.0.
# Note that the categorical_fc helper it references is not defined in this run.
feature_columns['is_male'] = categorical_fc('is_male', ['True', 'False', 'Unknown'])
feature_columns['plurality'] = categorical_fc('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)'])
# TODO: Add feature columns for categorical features
return feature_columns
###Output
_____no_output_____
###Markdown
Lab Task 5: Create DNN dense hidden layers and output layer.

So we've figured out how to get our inputs ready for machine learning, but now we need to connect them to our desired output. Our model architecture is what links the two together. Let's create some hidden dense layers beginning with our inputs and end with a dense output layer. This is regression, so make sure the output layer activation is correct and that the shape is right.
###Code
def get_model_outputs(inputs):
"""Creates model architecture and returns outputs.
Args:
inputs: Dense tensor used as inputs to model.
Returns:
Dense tensor output from the model.
"""
# TODO: Create two hidden layers of [64, 32] just like the BQML DNN
h1 = tf.keras.layers.Dense(64, activation='relu', name='h1')(inputs)
h2 = tf.keras.layers.Dense(32, activation='relu', name='h2')(h1)
# TODO: Create final output layer
output = tf.keras.layers.Dense(1, activation='linear', name='babyweight')(h2)
return output
###Output
_____no_output_____
###Markdown
Lab Task 6: Create custom evaluation metric.

We want to make sure that we have some useful way to measure model performance. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset; however, this does not exist as a standard evaluation metric, so we'll have to create our own using the true and predicted labels.
###Code
def rmse(y_true, y_pred):
"""Calculates RMSE evaluation metric.
Args:
y_true: tensor, true labels.
y_pred: tensor, predicted labels.
Returns:
Tensor with value of RMSE between true and predicted labels.
"""
# TODO: Calculate RMSE from true and predicted labels
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
###Output
_____no_output_____
###Markdown
Lab Task 7: Build DNN model tying all of the pieces together.

Excellent! We've assembled all of the pieces; now we just need to tie them all together into a Keras Model. This is a simple feedforward model with no branching, side inputs, etc., so we could have used Keras' Sequential Model API, but just for fun we're going to use Keras' Functional Model API. Here we will build the model using [tf.keras.models.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model), giving our inputs and outputs, and then compile our model with an optimizer, a loss function, and evaluation metrics.
###Code
# Build a simple Keras DNN using its Functional API
def build_dnn_model():
"""Builds simple DNN using Keras Functional API.
Returns:
`tf.keras.models.Model` object.
"""
# Create input layer
inputs = create_input_layers()
# Create feature columns
feature_columns = create_feature_columns()
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(
feature_columns=feature_columns.values())(inputs)
# Get output of model given inputs
output = get_model_outputs(dnn_inputs)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
# TODO: Add custom eval metrics to list
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
###Output
Here is our DNN architecture so far:
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
gestation_weeks (InputLayer) [(None,)] 0
__________________________________________________________________________________________________
is_male (InputLayer) [(None,)] 0
__________________________________________________________________________________________________
mother_age (InputLayer) [(None,)] 0
__________________________________________________________________________________________________
plurality (InputLayer) [(None,)] 0
__________________________________________________________________________________________________
dense_features (DenseFeatures) (None, 2) 0 gestation_weeks[0][0]
is_male[0][0]
mother_age[0][0]
plurality[0][0]
__________________________________________________________________________________________________
h1 (Dense) (None, 64) 192 dense_features[0][0]
__________________________________________________________________________________________________
h2 (Dense) (None, 32) 2080 h1[0][0]
__________________________________________________________________________________________________
babyweight (Dense) (None, 1) 33 h2[0][0]
==================================================================================================
Total params: 2,305
Trainable params: 2,305
Non-trainable params: 0
__________________________________________________________________________________________________
None
###Markdown
We can visualize the DNN using the Keras plot_model utility.
###Code
tf.keras.utils.plot_model(
model=model, to_file="dnn_model.png", show_shapes=False, rankdir="LR")
###Output
_____no_output_____
###Markdown
Run and evaluate model

Lab Task 8: Train and evaluate.

We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training its parameters and periodically running an evaluation to track how well we are doing on held-out data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the TensorBoard callback.
###Code
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around
NUM_EVALS = 10 # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 100000
# TODO: Load training dataset
trainds = load_dataset(TRAIN_DATA_PATH, TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
# TODO: Load evaluation dataset
evalds = load_dataset(EVAL_DATA_PATH, 1000, tf.estimator.ModeKeys.EVAL).take(count=NUM_EVAL_EXAMPLES // 1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
logdir = os.path.join(
"logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=logdir, histogram_freq=1)
# TODO: Fit model on training dataset and evaluate every so often
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
###Output
Epoch 1/10
156/156 [==============================] - 2s 12ms/step - loss: 8.0140 - rmse: 2.2285 - mse: 8.0140 - val_loss: 2.8313 - val_rmse: 1.6806 - val_mse: 2.8313
Epoch 2/10
156/156 [==============================] - 2s 10ms/step - loss: 2.6899 - rmse: 1.6271 - mse: 2.6899 - val_loss: 2.7039 - val_rmse: 1.6432 - val_mse: 2.7039
Epoch 3/10
156/156 [==============================] - 2s 10ms/step - loss: 2.6491 - rmse: 1.6119 - mse: 2.6491 - val_loss: 2.6656 - val_rmse: 1.6318 - val_mse: 2.6656
Epoch 4/10
156/156 [==============================] - 1s 10ms/step - loss: 2.7085 - rmse: 1.6329 - mse: 2.7085 - val_loss: 2.6660 - val_rmse: 1.6313 - val_mse: 2.6660
Epoch 5/10
156/156 [==============================] - 2s 11ms/step - loss: 2.6280 - rmse: 1.6084 - mse: 2.6280 - val_loss: 2.6775 - val_rmse: 1.6348 - val_mse: 2.6775
Epoch 6/10
156/156 [==============================] - 2s 11ms/step - loss: 2.4879 - rmse: 1.5596 - mse: 2.4879 - val_loss: 2.6689 - val_rmse: 1.6319 - val_mse: 2.6689
Epoch 7/10
156/156 [==============================] - 2s 10ms/step - loss: 2.5774 - rmse: 1.5906 - mse: 2.5774 - val_loss: 2.5550 - val_rmse: 1.5970 - val_mse: 2.5550
Epoch 8/10
156/156 [==============================] - 2s 10ms/step - loss: 2.5524 - rmse: 1.5801 - mse: 2.5524 - val_loss: 2.6817 - val_rmse: 1.6350 - val_mse: 2.6817
Epoch 9/10
156/156 [==============================] - 2s 12ms/step - loss: 2.5145 - rmse: 1.5705 - mse: 2.5145 - val_loss: 2.5403 - val_rmse: 1.5918 - val_mse: 2.5403
Epoch 10/10
156/156 [==============================] - 2s 10ms/step - loss: 2.5509 - rmse: 1.5834 - mse: 2.5509 - val_loss: 2.8771 - val_rmse: 1.6924 - val_mse: 2.8771
###Markdown
Visualize loss curve
###Code
# Plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(["loss", "rmse"]):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history["val_{}".format(key)])
plt.title("model {}".format(key))
plt.ylabel(key)
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left");
###Output
_____no_output_____
###Markdown
Save the model
###Code
OUTPUT_DIR = "babyweight_trained"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(
OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
###Output
assets saved_model.pb variables
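###Markdown
As a quick sanity check (not part of the original lab), the exported SavedModel can be loaded back and its serving signatures inspected:
###Code
# Sketch: reload the exported SavedModel and list its serving signatures.
loaded = tf.saved_model.load(EXPORT_PATH)
print(list(loaded.signatures.keys()))  # typically ['serving_default']
###Output
_____no_output_____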
###Markdown
LAB 4b: Create Keras DNN model.

**Learning Objectives**
1. Set CSV Columns, label column, and column defaults
1. Make dataset of features and label from CSV files
1. Create input layers for raw features
1. Create feature columns for inputs
1. Create DNN dense hidden layers and output layer
1. Create custom evaluation metric
1. Build DNN model tying all of the pieces together
1. Train and evaluate

Introduction

In this notebook, we'll be using Keras to create a DNN model to predict the weight of a baby before it is born. We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create input layers for the raw features. Next, we'll set up feature columns for the model inputs and build a deep neural network in Keras. We'll create a custom evaluation metric and build our DNN model. Finally, we'll train and evaluate our model.

Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/4b_keras_dnn_babyweight.ipynb).

Load necessary libraries
###Code
import datetime
import os
import shutil
import matplotlib.pyplot as plt
import tensorflow as tf
print(tf.__version__)
###Output
2.3.0
###Markdown
Set your bucket:
###Code
BUCKET = 'qwiklabs-gcp-02-15ad15b6da61' # REPLACE BY YOUR BUCKET
os.environ['BUCKET'] = BUCKET
###Output
_____no_output_____
###Markdown
Verify CSV files exist

In the seventh lab of this series, [1b_prepare_data_babyweight](../solutions/1b_prepare_data_babyweight.ipynb), we sampled our train, eval, and test CSV files from BigQuery. Verify that they exist; otherwise go back to that lab and create them.
###Code
TRAIN_DATA_PATH = "gs://{bucket}/babyweight/data/train*.csv".format(bucket=BUCKET)
EVAL_DATA_PATH = "gs://{bucket}/babyweight/data/eval*.csv".format(bucket=BUCKET)
!gsutil ls $TRAIN_DATA_PATH
!gsutil ls $EVAL_DATA_PATH
###Output
gs://qwiklabs-gcp-02-15ad15b6da61/babyweight/data/eval000000000000.csv
gs://qwiklabs-gcp-02-15ad15b6da61/babyweight/data/eval000000000001.csv
###Markdown
Create Keras model

Lab Task 1: Set CSV Columns, label column, and column defaults.

Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.
* `CSV_COLUMNS` is the list of our column header names. Make sure they are in the same order as in the CSV files.
* `LABEL_COLUMN` is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.
* `DEFAULTS` is a list with the same length as `CSV_COLUMNS`, i.e. there is a default for each column in our CSVs. Each element is itself a list with the default value for that CSV column.
###Code
# Determine CSV, label, and key columns
# TODO: Create list of string column headers, make sure order matches.
CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks"]
# TODO: Add string name for label column
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = [[1.0], ["null"], [15.0], ["null"], [1.0]]
###Output
_____no_output_____
###Markdown
Lab Task 2: Make dataset of features and label from CSV files.

Next, we will write an input_fn to read the data. Since we are reading from CSV files, we can save ourselves from reinventing the wheel and use `tf.data.experimental.make_csv_dataset`. This will create a CSV dataset object. However, we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.
###Code
def features_and_labels(row_data):
"""Splits features and labels from feature dictionary.
Args:
row_data: Dictionary of CSV column names and tensor values.
Returns:
Dictionary of feature tensors and label tensor.
"""
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
"""Loads dataset using the tf.data API from CSV files.
Args:
pattern: str, file pattern to glob into list of files.
batch_size: int, the number of examples per batch.
mode: tf.estimator.ModeKeys to determine if training or evaluating.
Returns:
`Dataset` object.
"""
# TODO: Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset(file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS)
# TODO: Map dataset to features and label
dataset = dataset.map(map_func=features_and_labels) # features, label
# Shuffle and repeat for training
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
# Take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(buffer_size=1)
return dataset
###Output
_____no_output_____
###Markdown
Lab Task 3: Create input layers for raw features.

We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers [(tf.keras.layers.Input)](https://www.tensorflow.org/api_docs/python/tf/keras/Input) by defining:
* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
* dtype: The data type expected by the input, as a string (float32, float64, int32...)
###Code
def create_input_layers():
"""Creates dictionary of input layers for each feature.
Returns:
Dictionary of `tf.Keras.layers.Input` layers for each feature.
"""
# TODO: Create dictionary of tf.keras.layers.Input for each raw feature
inputs = {colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]}
inputs.update({
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="string")
for colname in ["is_male", "plurality"]})
return inputs
###Output
_____no_output_____
###Markdown
Lab Task 4: Create feature columns for inputs.

Next, define the feature columns. `mother_age` and `gestation_weeks` should be numeric. The others, `is_male` and `plurality`, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
###Code
def categorical_fc(name, values):
"""Helper function to wrap categorical feature by indicator column.
Args:
name: str, name of feature.
values: list, list of strings of categorical values.
Returns:
Indicator column of categorical feature.
"""
cat_column = tf.feature_column.categorical_column_with_vocabulary_list(
key=name, vocabulary_list=values)
return tf.feature_column.indicator_column(categorical_column=cat_column)
def create_feature_columns():
"""Creates dictionary of feature columns from inputs.
Returns:
Dictionary of feature columns.
"""
# TODO: Create feature columns for numeric features
feature_columns = {colname : tf.feature_column.numeric_column(key=colname)
for colname in ["mother_age", "gestation_weeks"]}
feature_columns["is_male"] = categorical_fc("is_male", ["True", "False", "Unknown"])
feature_columns["plurality"] = categorical_fc("plurality", ["Single(1)", "Twins(2)", "Triplets(3)",
"Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"])# TODO: Add feature columns for categorical features
return feature_columns
###Output
_____no_output_____
###Markdown
Lab Task 5: Create DNN dense hidden layers and output layer.

So we've figured out how to get our inputs ready for machine learning, but now we need to connect them to our desired output. Our model architecture is what links the two together. Let's create some hidden dense layers beginning with our inputs and end with a dense output layer. This is regression, so make sure the output layer activation is correct and that the shape is right.
###Code
def get_model_outputs(inputs):
"""Creates model architecture and returns outputs.
Args:
inputs: Dense tensor used as inputs to model.
Returns:
Dense tensor output from the model.
"""
# TODO: Create two hidden layers of [64, 32] just like the BQML DNN
h1 = tf.keras.layers.Dense(64, activation="relu", name="h1")(inputs)
h2 = tf.keras.layers.Dense(32, activation="relu", name="h2")(h1)
# TODO: Create final output layer
output = tf.keras.layers.Dense(units=1, activation="linear", name="weight")(h2)
return output
###Output
_____no_output_____
###Markdown
Lab Task 6: Create custom evaluation metric.

We want to make sure that we have some useful way to measure model performance. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset; however, this does not exist as a standard evaluation metric, so we'll have to create our own using the true and predicted labels.
###Code
def rmse(y_true, y_pred):
"""Calculates RMSE evaluation metric.
Args:
y_true: tensor, true labels.
y_pred: tensor, predicted labels.
Returns:
Tensor with value of RMSE between true and predicted labels.
"""
# TODO: Calculate RMSE from true and predicted labels
return tf.sqrt(tf.reduce_mean((y_pred - y_true) ** 2))
###Output
_____no_output_____
###Markdown
Lab Task 7: Build DNN model tying all of the pieces together.

Excellent! We've assembled all of the pieces; now we just need to tie them all together into a Keras Model. This is a simple feedforward model with no branching, side inputs, etc., so we could have used Keras' Sequential Model API, but just for fun we're going to use Keras' Functional Model API. Here we will build the model using [tf.keras.models.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model), giving our inputs and outputs, and then compile our model with an optimizer, a loss function, and evaluation metrics.
###Code
# Build a simple Keras DNN using its Functional API
def build_dnn_model():
"""Builds simple DNN using Keras Functional API.
Returns:
`tf.keras.models.Model` object.
"""
# Create input layer
inputs = create_input_layers()
# Create feature columns
feature_columns = create_feature_columns()
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(
feature_columns=feature_columns.values())(inputs)
# Get output of model given inputs
output = get_model_outputs(dnn_inputs)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
# TODO: Add custom eval metrics to list
model.compile(optimizer="adam", loss="mse", metrics=["mse"])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
###Output
Here is our DNN architecture so far:
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
gestation_weeks (InputLayer) [(None,)] 0
__________________________________________________________________________________________________
is_male (InputLayer) [(None,)] 0
__________________________________________________________________________________________________
mother_age (InputLayer) [(None,)] 0
__________________________________________________________________________________________________
plurality (InputLayer) [(None,)] 0
__________________________________________________________________________________________________
dense_features (DenseFeatures) (None, 11) 0 gestation_weeks[0][0]
is_male[0][0]
mother_age[0][0]
plurality[0][0]
__________________________________________________________________________________________________
h1 (Dense) (None, 64) 768 dense_features[0][0]
__________________________________________________________________________________________________
h2 (Dense) (None, 32) 2080 h1[0][0]
__________________________________________________________________________________________________
weight (Dense) (None, 1) 33 h2[0][0]
==================================================================================================
Total params: 2,881
Trainable params: 2,881
Non-trainable params: 0
__________________________________________________________________________________________________
None
###Markdown
We can visualize the DNN using the Keras plot_model utility.
###Code
tf.keras.utils.plot_model(
model=model, to_file="dnn_model.png", show_shapes=False, rankdir="LR")
###Output
_____no_output_____
###Markdown
Run and evaluate model

Lab Task 8: Train and evaluate.

We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training its parameters and periodically running an evaluation to track how well we are doing on held-out data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the TensorBoard callback.
###Code
TRAIN_BATCH_SIZE = 100
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around
NUM_EVALS = 50 # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 10000
# TODO: Load training dataset
trainds = load_dataset(pattern=TRAIN_DATA_PATH, batch_size=TRAIN_BATCH_SIZE, mode='train')
# TODO: Load evaluation dataset
evalds = load_dataset(pattern=EVAL_DATA_PATH, batch_size=1000, mode='eval').take(count=NUM_EVAL_EXAMPLES // 1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
logdir = os.path.join(
"logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=logdir, histogram_freq=1)
# TODO: Fit model on training dataset and evaluate every so often
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch,
callbacks=[tensorboard_callback])
###Output
Epoch 1/50
1/10 [==>...........................] - ETA: 0s - loss: 2.8931 - mse: 2.8931WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0026s vs `on_train_batch_end` time: 0.0321s). Check your callbacks.
10/10 [==============================] - 1s 76ms/step - loss: 2.6729 - mse: 2.6729 - val_loss: 2.8123 - val_mse: 2.8123
Epoch 2/50
10/10 [==============================] - 1s 65ms/step - loss: 2.2540 - mse: 2.2540 - val_loss: 2.5656 - val_mse: 2.5656
Epoch 3/50
10/10 [==============================] - 1s 64ms/step - loss: 2.3580 - mse: 2.3580 - val_loss: 2.5900 - val_mse: 2.5900
Epoch 4/50
10/10 [==============================] - 1s 60ms/step - loss: 2.2590 - mse: 2.2590 - val_loss: 2.6642 - val_mse: 2.6642
Epoch 5/50
10/10 [==============================] - 1s 86ms/step - loss: 2.4886 - mse: 2.4886 - val_loss: 2.7052 - val_mse: 2.7052
Epoch 6/50
10/10 [==============================] - 1s 66ms/step - loss: 2.2252 - mse: 2.2252 - val_loss: 2.6348 - val_mse: 2.6348
Epoch 7/50
10/10 [==============================] - 1s 65ms/step - loss: 2.4938 - mse: 2.4938 - val_loss: 2.6127 - val_mse: 2.6127
Epoch 8/50
10/10 [==============================] - 1s 71ms/step - loss: 2.3195 - mse: 2.3195 - val_loss: 2.6976 - val_mse: 2.6976
Epoch 9/50
10/10 [==============================] - 1s 62ms/step - loss: 2.2959 - mse: 2.2959 - val_loss: 2.5963 - val_mse: 2.5963
Epoch 10/50
10/10 [==============================] - 1s 67ms/step - loss: 2.6611 - mse: 2.6611 - val_loss: 2.5941 - val_mse: 2.5941
Epoch 11/50
10/10 [==============================] - 1s 70ms/step - loss: 2.5323 - mse: 2.5323 - val_loss: 2.5975 - val_mse: 2.5975
Epoch 12/50
10/10 [==============================] - 1s 77ms/step - loss: 2.3709 - mse: 2.3709 - val_loss: 2.5742 - val_mse: 2.5742
Epoch 13/50
10/10 [==============================] - 1s 68ms/step - loss: 2.3579 - mse: 2.3579 - val_loss: 2.5570 - val_mse: 2.5570
Epoch 14/50
10/10 [==============================] - 1s 65ms/step - loss: 2.2145 - mse: 2.2145 - val_loss: 2.5298 - val_mse: 2.5298
Epoch 15/50
10/10 [==============================] - 1s 66ms/step - loss: 2.3664 - mse: 2.3664 - val_loss: 2.6541 - val_mse: 2.6541
Epoch 16/50
10/10 [==============================] - 1s 67ms/step - loss: 2.2418 - mse: 2.2418 - val_loss: 2.5640 - val_mse: 2.5640
Epoch 17/50
10/10 [==============================] - 1s 63ms/step - loss: 2.4729 - mse: 2.4729 - val_loss: 2.6310 - val_mse: 2.6310
Epoch 18/50
10/10 [==============================] - 1s 67ms/step - loss: 2.4403 - mse: 2.4403 - val_loss: 2.6494 - val_mse: 2.6494
Epoch 19/50
10/10 [==============================] - 1s 62ms/step - loss: 2.1997 - mse: 2.1997 - val_loss: 2.6637 - val_mse: 2.6637
Epoch 20/50
10/10 [==============================] - 1s 64ms/step - loss: 2.4546 - mse: 2.4546 - val_loss: 2.6630 - val_mse: 2.6630
Epoch 21/50
10/10 [==============================] - 1s 64ms/step - loss: 2.5279 - mse: 2.5279 - val_loss: 2.5292 - val_mse: 2.5292
Epoch 22/50
10/10 [==============================] - 1s 63ms/step - loss: 2.5247 - mse: 2.5247 - val_loss: 2.5806 - val_mse: 2.5806
Epoch 23/50
10/10 [==============================] - 1s 66ms/step - loss: 2.3235 - mse: 2.3235 - val_loss: 2.6582 - val_mse: 2.6582
Epoch 24/50
10/10 [==============================] - 1s 66ms/step - loss: 2.3556 - mse: 2.3556 - val_loss: 2.5207 - val_mse: 2.5207
Epoch 25/50
10/10 [==============================] - 1s 70ms/step - loss: 2.4645 - mse: 2.4645 - val_loss: 2.6709 - val_mse: 2.6709
Epoch 26/50
10/10 [==============================] - 1s 64ms/step - loss: 2.5551 - mse: 2.5551 - val_loss: 2.7190 - val_mse: 2.7190
Epoch 27/50
10/10 [==============================] - 1s 70ms/step - loss: 2.3547 - mse: 2.3547 - val_loss: 2.5226 - val_mse: 2.5226
Epoch 28/50
10/10 [==============================] - 1s 59ms/step - loss: 2.0341 - mse: 2.0341 - val_loss: 2.5942 - val_mse: 2.5942
Epoch 29/50
10/10 [==============================] - 1s 64ms/step - loss: 2.3808 - mse: 2.3808 - val_loss: 2.5556 - val_mse: 2.5556
Epoch 30/50
10/10 [==============================] - 1s 62ms/step - loss: 2.3096 - mse: 2.3096 - val_loss: 2.5726 - val_mse: 2.5726
Epoch 31/50
10/10 [==============================] - 1s 75ms/step - loss: 2.2567 - mse: 2.2567 - val_loss: 2.6299 - val_mse: 2.6299
Epoch 32/50
10/10 [==============================] - 1s 70ms/step - loss: 2.3173 - mse: 2.3173 - val_loss: 2.5771 - val_mse: 2.5771
Epoch 33/50
10/10 [==============================] - 1s 64ms/step - loss: 2.1736 - mse: 2.1736 - val_loss: 2.6019 - val_mse: 2.6019
Epoch 34/50
10/10 [==============================] - 1s 77ms/step - loss: 2.2208 - mse: 2.2208 - val_loss: 2.6209 - val_mse: 2.6209
Epoch 35/50
10/10 [==============================] - 1s 69ms/step - loss: 2.2863 - mse: 2.2863 - val_loss: 2.5386 - val_mse: 2.5386
Epoch 36/50
10/10 [==============================] - 1s 66ms/step - loss: 2.2754 - mse: 2.2754 - val_loss: 2.6817 - val_mse: 2.6817
Epoch 37/50
10/10 [==============================] - 1s 63ms/step - loss: 2.2653 - mse: 2.2653 - val_loss: 2.5871 - val_mse: 2.5871
Epoch 38/50
10/10 [==============================] - 1s 65ms/step - loss: 2.1258 - mse: 2.1258 - val_loss: 2.6002 - val_mse: 2.6002
Epoch 39/50
10/10 [==============================] - 1s 62ms/step - loss: 2.3747 - mse: 2.3747 - val_loss: 2.7334 - val_mse: 2.7334
Epoch 40/50
10/10 [==============================] - 1s 63ms/step - loss: 2.2271 - mse: 2.2271 - val_loss: 2.6102 - val_mse: 2.6102
Epoch 41/50
10/10 [==============================] - 1s 67ms/step - loss: 2.2500 - mse: 2.2500 - val_loss: 2.5072 - val_mse: 2.5072
Epoch 42/50
10/10 [==============================] - 1s 76ms/step - loss: 2.3377 - mse: 2.3377 - val_loss: 2.5707 - val_mse: 2.5707
Epoch 43/50
10/10 [==============================] - 1s 64ms/step - loss: 1.9811 - mse: 1.9811 - val_loss: 2.5337 - val_mse: 2.5337
Epoch 44/50
10/10 [==============================] - 1s 58ms/step - loss: 2.3438 - mse: 2.3438 - val_loss: 2.5429 - val_mse: 2.5429
Epoch 45/50
10/10 [==============================] - 1s 73ms/step - loss: 2.0553 - mse: 2.0553 - val_loss: 2.6429 - val_mse: 2.6429
Epoch 46/50
10/10 [==============================] - 1s 66ms/step - loss: 2.3018 - mse: 2.3018 - val_loss: 2.5436 - val_mse: 2.5436
Epoch 47/50
10/10 [==============================] - 1s 69ms/step - loss: 2.3280 - mse: 2.3280 - val_loss: 2.5139 - val_mse: 2.5139
Epoch 48/50
10/10 [==============================] - 1s 61ms/step - loss: 2.3754 - mse: 2.3754 - val_loss: 2.4794 - val_mse: 2.4794
Epoch 49/50
10/10 [==============================] - 1s 57ms/step - loss: 2.2917 - mse: 2.2917 - val_loss: 2.4919 - val_mse: 2.4919
Epoch 50/50
10/10 [==============================] - 1s 61ms/step - loss: 2.2008 - mse: 2.2008 - val_loss: 2.5578 - val_mse: 2.5578
###Markdown
Visualize loss curve
###Code
# Plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(["loss", "mse"]):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history["val_{}".format(key)])
plt.title("model {}".format(key))
plt.ylabel(key)
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left");
###Output
_____no_output_____
###Markdown
Save the model
###Code
OUTPUT_DIR = "babyweight_trained"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(
OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
###Output
assets saved_model.pb variables
###Markdown
LAB 4b: Create Keras DNN model.**Learning Objectives**1. Set CSV Columns, label column, and column defaults1. Make dataset of features and label from CSV files1. Create input layers for raw features1. Create feature columns for inputs1. Create DNN dense hidden layers and output layer1. Create custom evaluation metric1. Build DNN model tying all of the pieces together1. Train and evaluate Introduction In this notebook, we'll be using Keras to create a DNN model to predict the weight of a baby before it is born.We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create inputs layers for the raw features. Next, we'll set up feature columns for the model inputs and build a deep neural network in Keras. We'll create a custom evaluation metric and build our DNN model. Finally, we'll train and evaluate our model.Each learning objective will correspond to a __TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/4b_keras_dnn_babyweight.ipynb). Load necessary libraries
###Code
import datetime
import os
import shutil
import matplotlib.pyplot as plt
import tensorflow as tf
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Verify CSV files existIn the seventh lab of this series [4a_sample_babyweight](../solutions/4a_sample_babyweight.ipynb), we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist, otherwise go back to that lab and create them.
###Code
%%bash
ls *.csv
%%bash
head -5 *.csv
###Output
_____no_output_____
###Markdown
Create Keras model Lab Task 1: Set CSV Columns, label column, and column defaults.Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.* `CSV_COLUMNS` are going to be our header names of our columns. Make sure that they are in the same order as in the CSV files* `LABEL_COLUMN` is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.* `DEFAULTS` is a list with the same length as `CSV_COLUMNS`, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column.
###Code
# Determine CSV, label, and key columns
# TODO: Create list of string column headers, make sure order matches.
CSV_COLUMNS = [""]
# TODO: Add string name for label column
LABEL_COLUMN = ""
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = []
###Output
_____no_output_____
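###Markdown
 One possible completion of the cell above is sketched below; it is not the official solution, and the column names, their order, and the defaults are assumptions that must match the header order of your sampled CSV files (the label name `weight_pounds` in particular is assumed).
###Code
# Sketch of a possible completion (assumed schema; verify against your CSVs)
CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age",
               "plurality", "gestation_weeks"]
LABEL_COLUMN = "weight_pounds"
# One default per column; is_male and plurality are treated as strings
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
###Output
_____no_output_____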
###Markdown
Lab Task 2: Make dataset of features and label from CSV files.Next, we will write an input_fn to read the data. Since we are reading from CSV files we can save ourself from trying to recreate the wheel and can use `tf.data.experimental.make_csv_dataset`. This will create a CSV dataset object. However we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.
###Code
def features_and_labels(row_data):
"""Splits features and labels from feature dictionary.
Args:
row_data: Dictionary of CSV column names and tensor values.
Returns:
Dictionary of feature tensors and label tensor.
"""
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
"""Loads dataset using the tf.data API from CSV files.
Args:
pattern: str, file pattern to glob into list of files.
batch_size: int, the number of examples per batch.
mode: tf.estimator.ModeKeys to determine if training or evaluating.
Returns:
`Dataset` object.
"""
# TODO: Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset()
# TODO: Map dataset to features and label
dataset = dataset.map() # features, label
# Shuffle and repeat for training
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
# Take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(buffer_size=1)
return dataset
###Output
_____no_output_____
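###Markdown
 A minimal sketch of how the two TODOs above could be filled in, assuming the `CSV_COLUMNS` and `DEFAULTS` lists have been populated:
###Code
# Sketch: build the CSV dataset and split off the label
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
    dataset = tf.data.experimental.make_csv_dataset(
        file_pattern=pattern,
        batch_size=batch_size,
        column_names=CSV_COLUMNS,
        column_defaults=DEFAULTS)
    # Map each batch dictionary to (features, label)
    dataset = dataset.map(map_func=features_and_labels)
    # Shuffle and repeat for training
    if mode == tf.estimator.ModeKeys.TRAIN:
        dataset = dataset.shuffle(buffer_size=1000).repeat()
    dataset = dataset.prefetch(buffer_size=1)
    return dataset
###Output
_____no_output_____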
###Markdown
Lab Task 3: Create input layers for raw features.We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers [(tf.Keras.layers.Input)](https://www.tensorflow.org/api_docs/python/tf/keras/Input) by defining:* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.* dtype: The data type expected by the input, as a string (float32, float64, int32...)
###Code
def create_input_layers():
"""Creates dictionary of input layers for each feature.
Returns:
Dictionary of `tf.Keras.layers.Input` layers for each feature.
"""
# TODO: Create dictionary of tf.keras.layers.Input for each raw feature
inputs = {}
return inputs
###Output
_____no_output_____
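###Markdown
 A hedged sketch of the input-layer dictionary, assuming the four raw features named in the next task (`mother_age` and `gestation_weeks` numeric, `is_male` and `plurality` strings):
###Code
# Sketch: one scalar Input per raw feature
def create_input_layers():
    inputs = {
        colname: tf.keras.layers.Input(name=colname, shape=(), dtype="float32")
        for colname in ["mother_age", "gestation_weeks"]}
    inputs.update({
        colname: tf.keras.layers.Input(name=colname, shape=(), dtype="string")
        for colname in ["is_male", "plurality"]})
    return inputs
###Output
_____no_output_____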
###Markdown
Lab Task 4: Create feature columns for inputs.Next, define the feature columns. `mother_age` and `gestation_weeks` should be numeric. The others, `is_male` and `plurality`, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
###Code
def create_feature_columns():
"""Creates dictionary of feature columns from inputs.
Returns:
Dictionary of feature columns.
"""
# TODO: Create feature columns for numeric features
feature_columns = {}
# TODO: Add feature columns for categorical features
return feature_columns
###Output
_____no_output_____
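###Markdown
 A possible sketch of the feature columns; the vocabulary lists for the categorical features are assumptions and should match the actual values in your CSVs:
###Code
# Sketch: numeric columns plus one-hot (indicator) columns for categoricals
def create_feature_columns():
    feature_columns = {
        colname: tf.feature_column.numeric_column(key=colname)
        for colname in ["mother_age", "gestation_weeks"]}
    feature_columns["is_male"] = tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(
            key="is_male", vocabulary_list=["True", "False", "Unknown"]))
    feature_columns["plurality"] = tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(
            key="plurality",
            vocabulary_list=["Single(1)", "Twins(2)", "Triplets(3)",
                             "Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"]))
    return feature_columns
###Output
_____no_output_____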
###Markdown
Lab Task 5: Create DNN dense hidden layers and output layer.So we've figured out how to get our inputs ready for machine learning but now we need to connect them to our desired output. Our model architecture is what links the two together. Let's create some hidden dense layers beginning with our inputs and end with a dense output layer. This is regression so make sure the output layer activation is correct and that the shape is right.
###Code
def get_model_outputs(inputs):
"""Creates model architecture and returns outputs.
Args:
inputs: Dense tensor used as inputs to model.
Returns:
Dense tensor output from the model.
"""
    # TODO: Create two hidden layers of [64, 32] just like the BQML DNN
# TODO: Create final output layer
return output
###Output
_____no_output_____
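###Markdown
 A minimal sketch of the architecture described above: two ReLU hidden layers of 64 and 32 units and a single linear output unit (linear because this is regression):
###Code
# Sketch: [64, 32] hidden layers and a 1-unit linear output
def get_model_outputs(inputs):
    h1 = tf.keras.layers.Dense(units=64, activation="relu", name="h1")(inputs)
    h2 = tf.keras.layers.Dense(units=32, activation="relu", name="h2")(h1)
    output = tf.keras.layers.Dense(
        units=1, activation="linear", name="weight")(h2)
    return output
###Output
_____no_output_____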
###Markdown
Lab Task 6: Create custom evaluation metric.We want to make sure that we have some useful way to measure model performance for us. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset, however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels.
###Code
def rmse(y_true, y_pred):
"""Calculates RMSE evaluation metric.
Args:
y_true: tensor, true labels.
y_pred: tensor, predicted labels.
Returns:
Tensor with value of RMSE between true and predicted labels.
"""
# TODO: Calculate RMSE from true and predicted labels
pass
###Output
_____no_output_____
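###Markdown
 A sketch of the custom metric: RMSE is the square root of the mean squared difference between predictions and labels.
###Code
# Sketch: root mean squared error between labels and predictions
def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
###Output
_____no_output_____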
###Markdown
Lab Task 7: Build DNN model tying all of the pieces together.Excellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is a simple feedforward model with no branching, side inputs, etc. so we could have used Keras' Sequential Model API but just for fun we're going to use Keras' Functional Model API. Here we will build the model using [tf.keras.models.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics.
###Code
# Build a simple Keras DNN using its Functional API
def build_dnn_model():
"""Builds simple DNN using Keras Functional API.
Returns:
`tf.keras.models.Model` object.
"""
# Create input layer
inputs = create_input_layers()
# Create feature columns
feature_columns = create_feature_columns()
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(
feature_columns=feature_columns.values())(inputs)
# Get output of model given inputs
output = get_model_outputs(dnn_inputs)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
# TODO: Add custom eval metrics to list
model.compile(optimizer="adam", loss="mse", metrics=["mse"])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
###Output
_____no_output_____
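###Markdown
 The remaining TODO inside `build_dnn_model` is the metrics list; a minimal sketch (assuming the custom `rmse` function above) replaces the compile call with:
###Code
# Sketch: add the custom rmse metric alongside "mse"
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
###Output
_____no_output_____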
###Markdown
We can visualize the DNN using the Keras plot_model utility.
###Code
tf.keras.utils.plot_model(
model=model, to_file="dnn_model.png", show_shapes=False, rankdir="LR")
###Output
_____no_output_____
###Markdown
Run and evaluate model Lab Task 8: Train and evaluate.We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the callback to TensorBoard.
###Code
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around
NUM_EVALS = 5 # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 10000
# TODO: Load training dataset
trainds = load_dataset()
# TODO: Load evaluation dataset
evalds = load_dataset().take(count=NUM_EVAL_EXAMPLES // 1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
logdir = os.path.join(
"logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=logdir, histogram_freq=1)
# TODO: Fit model on training dataset and evaluate every so often
history = model.fit()
###Output
_____no_output_____
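###Markdown
 A hedged sketch of the training setup; the file patterns `train*.csv` and `eval*.csv` are assumptions based on the sampling lab, and the eval batch size of 1000 matches the `// 1000` in the `take()` call above:
###Code
# Sketch: load datasets and fit with periodic evaluation + TensorBoard callback
trainds = load_dataset(
    pattern="train*.csv",
    batch_size=TRAIN_BATCH_SIZE,
    mode=tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset(
    pattern="eval*.csv",
    batch_size=1000,
    mode=tf.estimator.ModeKeys.EVAL).take(count=NUM_EVAL_EXAMPLES // 1000)
history = model.fit(
    trainds,
    validation_data=evalds,
    epochs=NUM_EVALS,
    steps_per_epoch=steps_per_epoch,
    callbacks=[tensorboard_callback])
###Output
_____no_output_____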
###Markdown
Visualize loss curve
###Code
# Plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(["loss", "rmse"]):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history["val_{}".format(key)])
plt.title("model {}".format(key))
plt.ylabel(key)
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left");
###Output
_____no_output_____
###Markdown
Save the model
###Code
OUTPUT_DIR = "babyweight_trained"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(
OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
###Output
_____no_output_____ |
.ipynb_checkpoints/Rossman Store-checkpoint.ipynb | ###Markdown
Imports
###Code
import pandas as pd
import inflection
import math
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import Image
import datetime
###Output
_____no_output_____
###Markdown
Helper Functions Loading Data
###Code
df_sales_raw = pd.read_csv('data/train.csv', low_memory=False)
df_stores_raw = pd.read_csv('data/store.csv', low_memory=False)
df_raw = pd.merge(df_sales_raw, df_stores_raw, how = 'left', on='Store')
###Output
_____no_output_____
###Markdown
 Data Description Rename Columns: In this step we convert the column names from CamelCase to snake_case.
###Code
df1 = df_raw.copy()
past_columns = [ 'Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo',
'StateHoliday', 'SchoolHoliday', 'StoreType', 'Assortment',
'CompetitionDistance', 'CompetitionOpenSinceMonth',
'CompetitionOpenSinceYear', 'Promo2', 'Promo2SinceWeek',
'Promo2SinceYear', 'PromoInterval']
sneakcase = lambda x: inflection.underscore(x)
list_columns = list(map(sneakcase, past_columns))
df1.columns = list_columns
df1
###Output
_____no_output_____
###Markdown
 Data Dimensions: In this section we check the number of rows and columns of our dataframe to see how much data we have.
###Code
print('Number of rows: {} '.format(df1.shape[0]))
print('Number of columns: {} '.format(df1.shape[1]))
###Output
Number of rows: 1017209
Number of columns: 18
###Markdown
 Data Types: In this section we inspect the existing data types.
###Code
df1.date = pd.to_datetime(df1.date)
df1.dtypes
###Output
_____no_output_____
###Markdown
 Checking NA Values: There are three common ways to handle missing values:- dropping the NaN rows- using machine learning algorithms to impute the NaN values- understanding why they are missing in the data and then replacing them accordingly (the approach used below; a short sketch of the first two alternatives follows the next code cell).
###Code
# competition_distance
# Replace NA values with the maximum observed distance (assign the result back)
df1['competition_distance'] = df1['competition_distance'].apply(lambda x: df1['competition_distance'].max() if math.isnan(x) else x)
# competition_open_since_month
# Replace NA values with the month of the sale date
df1['competition_open_since_month'] = df1.apply(lambda x: x['date'].month if math.isnan(x['competition_open_since_month']) else x['competition_open_since_month'], axis = 1)
# competition_open_since_year
# Replace NA values with the year of the sale date
df1['competition_open_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['competition_open_since_year']) else x['competition_open_since_year'], axis = 1)
# promo2_since_week
df1['promo2_since_week'] = df1.apply(lambda x: x['date'].week if math.isnan(x['promo2_since_week']) else x['promo2_since_week'], axis = 1)
# promo2_since_year
df1['promo2_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['promo2_since_year']) else x['promo2_since_year'], axis = 1)
# promo_interval
month_map = {1: 'Jan', 2:'Feb', 3:'Mar', 4:'Apr', 5:'May', 6:'Jun',
7:'Jul', 8:'Aug', 9:'Sep', 10:'Oct', 11:'Nov', 12:'Dec'}
# fill the promo_interval NaN with 0
df1['promo_interval'].fillna(0, inplace = True)
df1['month_promo'] = df1['date'].dt.month.map(month_map)
df1['is_promo'] = df1[['promo_interval', 'month_promo']].apply(lambda x: 0 if x['promo_interval'] == 0 else 1 if x['month_promo'] in x['promo_interval'].split(',') else 0, axis = 1)
###Output
_____no_output_____
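###Markdown
 As a quick, hedged illustration of the first two strategies listed above (dropping rows versus simple imputation), applied to throwaway copies so the main pipeline is unaffected; `df_dropped` and `df_imputed` are illustrative names only:
###Code
# Sketch (illustration only): alternatives to the per-column replacement above
# 1) drop every row that contains at least one NaN
df_dropped = df1.dropna()
# 2) simple imputation, e.g. fill a numeric column with its median
df_imputed = df1.copy()
df_imputed['competition_distance'] = df_imputed['competition_distance'].fillna(
    df_imputed['competition_distance'].median())
###Output
_____no_output_____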
###Markdown
 Descriptive Statistics: In this section we compute some descriptive statistics to better understand our data. For now we start with a few simple metrics, which will be implemented more robustly in the next cycle.
###Code
# Select dtypes
df_num = df1.select_dtypes(include = ['int64', 'float64'])
df_cat = df1.select_dtypes(exclude = ['int64', 'float64', 'datetime64[ns]'])
# min, max, range, mean, median, std, skew
df_min = pd.DataFrame(df_num.min())
df_max = pd.DataFrame(df_num.max())
df_range = pd.DataFrame(df_num.max() - df_num.min())
df_mean = pd.DataFrame(df_num.mean())
df_median = pd.DataFrame(df_num.median())
df_std = pd.DataFrame(df_num.std())
df_skew = pd.DataFrame(df_num.skew())
df_kurtosis = pd.DataFrame(df_num.kurtosis())
df_metrics = pd.concat([df_min, df_max, df_range, df_mean, df_median, df_std, df_skew, df_kurtosis], axis = 1)
df_metrics.columns = ['min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
df_metrics
df_aux = df1[(df1['sales'] > 0) & (df1['state_holiday'] != 0)]
fig, axis = plt.subplots(1, 3,figsize = (16,7));
sns.boxplot(data = df_aux, x = 'state_holiday', y = 'sales', ax = axis[0]);
sns.boxplot(data = df_aux, x = 'store_type', y = 'sales', ax = axis[1]);
sns.boxplot(data = df_aux, x = 'assortment', y = 'sales', ax = axis[2]);
###Output
_____no_output_____
###Markdown
Feature Engeneering Mapa MentalIremos realizar um mapa mental para apresentar todas as variáveis contidas em nosso problema, dando este suporte para a realização de hipóteses. No mundo corporativo, esse mapa mental é produzido a partir da reunião de "insights" com outras equipes da empresa.
###Code
Image('images/mindmap.png')
###Output
_____no_output_____
###Markdown
 Hypotheses: From the mind map, we develop hypotheses about the variables we identified. Remember that in a company's day-to-day work, both the hypotheses and the mind map are built from meetings with other departments, which provide the insights for their construction.
Store hypotheses
- The larger the stock, the higher the store's sales?
- The larger the number of employees, the higher the store's revenue?
- Do stores located downtown sell more than stores located elsewhere?
- Do stores with a larger assortment (more product types) sell more?
- Do stores with nearby competitors tend to sell less?
- Do stores with longer-standing customers sell more?
- Do stores with more customers sell more?
- Do stores with promotions sell more?
- Do stores with more consecutive promotions sell more?
Product hypotheses
- Do products with more exposure time in the store sell more?
- Do higher-quality products sell more?
- Do products stocked in larger quantities sell more?
- Do cheaper products sell more?
- Do products with more promotions sell more?
- Do products with more marketing investment sell more?
Time hypotheses
- Should stores sell more during the week than on weekends?
- Do stores sell more on holidays?
- Do stores sell more as the years go by?
- Do stores sell more at the end of the year?
- Do stores with more promotions sell more?
Selected hypotheses: With the hypotheses raised, we now select which of them can be validated at this moment given the data we have, resulting in the following hypotheses:
Store hypotheses
1. Do stores with nearby competitors tend to sell less?
2. Do stores with more customers sell more?
3. Do stores with longer-standing customers sell more?
4. Do stores with more customers sell more?
5. Do stores with promotions sell more?
6. Do stores with more consecutive promotions sell more?
Time hypotheses
7. Should stores sell more during the week than on weekends?
8. Do stores sell more on holidays?
9. Do stores sell more as the years go by?
10. Do stores sell more at the end of the year?
11. Do stores with more promotions sell more?
Feature Engineering: In this section we create some variables that will help us analyze the data.
###Code
df2 = df1.copy()
df2.dtypes
# Create the year variable
df2['year'] = df2['date'].dt.year
# Create the month variable
df2['month'] = df2['date'].dt.month
# Create the day variable
df2['day'] = df2['date'].dt.day
# Create the week-of-year variable
df2['week_of_year'] = df2['date'].dt.week
# Create the year-week variable
df2['year_week'] = df2['date'].dt.strftime('%Y-%W')
# Create the competition_since variable
df2['competition_open_since_year'] = df2['competition_open_since_year'].astype('int64')
df2['competition_open_since_month'] = df2['competition_open_since_month'].astype('int64')
df2['competition_since'] = df2.apply(lambda x: datetime.datetime(year = x['competition_open_since_year'], month = x['competition_open_since_month'], day = 1), axis = 1)
# Create the competition time in months variable
df2['competition_time_month'] = ((df2['date'] - df2['competition_since'])/30).apply(lambda x: x.days).astype(int)
# Create the promo_since variable
df2['promo2_since_year'] = df2['promo2_since_year'].astype('int64')
df2['promo2_since_week'] = df2['promo2_since_week'].astype('int64')
df2['promo_since'] = df2['promo2_since_year'].astype(str) + '-' + df2['promo2_since_week'].astype(str)
df2['promo_since'] = df2['promo_since'].apply(lambda x: datetime.datetime.strptime(x + '-1', '%Y-%W-%w') - datetime.timedelta(days = 7))
# Create the promo time in weeks variable
df2['promo_time_week'] = ( ( df2['date'] - df2['promo_since'] ) / 7).apply(lambda x: x.days).astype(int)
# Recode the assortment variable
df2['assortment'] = df2['assortment'].apply(lambda x: 'basic' if x == 'a' else 'extra' if x == 'b' else 'extended')
# Recode the state_holiday variable (keep regular days distinct from Christmas)
df2['state_holiday'] = df2['state_holiday'].apply(lambda x: 'public_holiday' if x == 'a' else 'easter' if x == 'b' else 'christmas' if x == 'c' else 'regular_day')
df2
df2['promo2_since_year']
###Output
_____no_output_____ |
Ex4_FlowMeterDiagnostic-Solution.ipynb | ###Markdown
 ML Application Exercise - Solution
**Classification: Fault diagnosis of liquid ultrasonic flowmeters**
The task of this exercise is to implement a complete data-driven pipeline (load, data analysis, visualisation, model selection and optimization, prediction) on a specific dataset. In this exercise the challenge is to perform a classification with different models and find the most accurate prediction. The data of meter C will be used.
**Dataset**
The notebook will download a publicly available dataset: https://archive.ics.uci.edu/ml/datasets/Ultrasonic+flowmeter+diagnostics
Source: the dataset was created by Kojo Sarfo Gyamfi at Coventry University, UK ([email protected]) and Craig Marshall, National Engineering Laboratory, TUV-NEL, UK ([email protected]).
**Data Set Information**
- Meter A: 87 instances of diagnostic parameters for an 8-path liquid ultrasonic flow meter (USM), 37 attributes, 2 classes or health states (Healthy, Installation effects)
- Meter B: 92 instances for a 4-path liquid USM, 52 attributes, 3 classes (Healthy, Gas injection, Waxing)
- Meter C: 181 instances for a 4-path liquid USM, 44 attributes, 4 classes (Healthy, Gas injection, Installation effects, Waxing)
- Meter D: 180 instances for a 4-path liquid USM, 44 attributes, 4 classes (Healthy, Gas injection, Installation effects, Waxing)
**Attribute Information** (all attributes are continuous, with the exception of the class attribute)
Meter A:
- (1) Flatness ratio, (2) Symmetry, (3) Crossflow
- (4)-(11) Flow velocity in each of the eight paths
- (12)-(19) Speed of sound in each of the eight paths
- (20) Average speed of sound in all eight paths
- (21)-(36) Gain at both ends of each of the eight paths
- (37) Class attribute or health state of the meter: 1, 2
Meter B:
- (1) Profile factor, (2) Symmetry, (3) Crossflow, (4) Swirl angle
- (5)-(8) Flow velocity in each of the four paths
- (9) Average flow velocity in all four paths
- (10)-(13) Speed of sound in each of the four paths
- (14) Average speed of sound in all four paths
- (15)-(22) Signal strength at both ends of each of the four paths
- (23)-(26) Turbulence in each of the four paths
- (27) Meter performance
- (28)-(35) Signal quality at both ends of each of the four paths
- (36)-(43) Gain at both ends of each of the four paths
- (44)-(51) Transit time at both ends of each of the four paths
- (52) Class attribute or health state of the meter: 1, 2, 3
Meters C and D:
- (1) Profile factor, (2) Symmetry, (3) Crossflow
- (4)-(7) Flow velocity in each of the four paths
- (8)-(11) Speed of sound in each of the four paths
- (12)-(19) Signal strength at both ends of each of the four paths
- (20)-(27) Signal quality at both ends of each of the four paths
- (28)-(35) Gain at both ends of each of the four paths
- (36)-(43) Transit time at both ends of each of the four paths
- (44) Class attribute or health state of the meter: 1, 2, 3, 4
###Code
# algebra
import numpy as np
# data structure
import pandas as pd
# data visualization
import matplotlib.pylab as plt
import seaborn as sns
#file handling
from pathlib import Path
###Output
_____no_output_____
###Markdown
Data loadThe process consist in downloading the data if needed, loading the data as a Pandas dataframe
###Code
filename = "Flowmeters.zip"
#if the dataset is not already in the working dir, it will download
my_file = Path(filename)
if not my_file.is_file():
print("Downloading dataset")
!wget https://archive.ics.uci.edu/ml/machine-learning-databases/00433/Flowmeters.zip
!unzip Flowmeters.zip
#function to simplify loading the dataset, in case it is a csv, tsv or excel file
#output is a pandas dataframe
def load_csv(filename,separator,columns):
try:
csv_table = pd.read_csv(filename,sep=separator,names=columns,dtype='float64')
except:
csv_table = pd.read_excel(filename,names=columns)
print("n. samples: {}".format(csv_table.shape[0]))
print("n. columns: {}".format(csv_table.shape[1]))
return csv_table #.dropna()
#data = load_csv(filename,separator,columns)
data = pd.read_csv('Flowmeters/Meter C',sep='\t',header=None)
#Remove any rows with anomalous values (e.g. NaN)
data = data.dropna()
###Output
_____no_output_____
###Markdown
Data Analysis and VisualizationIn this section confidence with the data is gained, data are plotted and cleaned
###Code
#What does the dataset look like?
print(data.head())
#The health/fault classes are the following; they are stored in column 43:
Faults = ['Healthy','Gas injection','Installation effects','Waxing']
data[43].unique()
###Output
0 1 2 3 4 5 6 \
0 1.102690 1.004425 1.006741 15.228611 16.676389 16.713056 15.051389
1 1.101432 1.003722 1.008256 14.106667 15.407500 15.473889 13.930833
2 1.098568 1.002528 1.009103 14.136667 15.388056 15.484444 13.965833
3 1.099516 1.007024 1.009363 14.146389 15.405000 15.439167 13.906111
4 1.100336 1.000661 1.006709 14.056944 15.363611 15.452222 13.948889
7 8 9 ... 34 35 36 \
0 1485.447222 1485.416667 1485.491667 ... 17.7 86.585833 85.576667
1 1485.222222 1485.211111 1485.288889 ... 17.7 86.560000 85.628056
2 1485.061111 1485.047222 1485.133333 ... 17.7 86.572222 85.635278
3 1485.144444 1485.113889 1485.216667 ... 17.7 86.566111 85.630833
4 1485.202778 1485.180556 1485.272222 ... 17.7 86.561111 85.630833
37 38 39 40 41 42 43
0 106.985000 105.530833 106.714444 105.255833 86.461111 85.460833 1
1 106.942500 105.603611 106.676111 105.326667 86.433889 85.510556 1
2 106.954722 105.614722 106.686389 105.336389 86.444722 85.519167 1
3 106.952500 105.609444 106.681389 105.331667 86.439722 85.515833 1
4 106.946667 105.603889 106.676111 105.328889 86.436944 85.512222 1
[5 rows x 44 columns]
###Markdown
 Task: Is the dataset balanced? Plot a bar chart of the health-class occurrences.
###Code
plt.bar(data[43].unique(),[ len(data[data[43] == k]) for k in data[43].unique()],tick_label=Faults)
plt.xticks(rotation=30)
plt.grid()
###Output
_____no_output_____
###Markdown
 Machine Learning: Here the relevant input features and the output to predict are selected, the data are appropriately preprocessed (i.e. normalized), the dataset is split into two separate train and test subsets, and each model is trained on the training data and evaluated against the test set. The list of available evaluation metrics can be found in the scikit-learn documentation.
###Code
#the module needed for the modeling and data mining are imported
#Cross-Validation
from sklearn.model_selection import train_test_split
#Data normalization
from sklearn.preprocessing import StandardScaler
#metrics to evaluate the model
from sklearn.metrics import f1_score
from sklearn.metrics import plot_confusion_matrix
#Selection of features and output variable; definition of the size (fraction of the total) of the randomly selected test set
measurements = list(range(0,43))
target = 43
input_features = measurements
output = target
#not preprocessed data
unnormalized_X,y = data[input_features],data[output]
# normalisation
#Having features on a similar scale can help the model converge more quickly towards the minimum
scaler_X = StandardScaler().fit(unnormalized_X)
X = scaler_X.transform(unnormalized_X)
#check if nan are present on the data after normalization to avoid trouble later
sum(np.isnan(X))
###Output
_____no_output_____
###Markdown
 Task: Split the dataset X and y into train and test sets with test_size = 0.33 and random_state = 0
###Code
# basic train-test dataset random split
test_size = 0.33
random_state=0
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=test_size,
random_state=random_state)
#dictionary to help the display of the results
Score_Dict = {}
#helper function to simplify the comparison and testing of the various models
#returns the trained model and the score for the selected metric
def fit_predict_plot(model,X_train,y_train,X_test,y_test,class_names):
model.fit(X_train,y_train)
pred_y_test = model.predict(X_test)
score = f1_score(y_test,pred_y_test,average='micro')
model_name = type(model).__name__
if(model_name=='GridSearchCV'):
model_name ='CV_'+type(model.estimator).__name__
#Alternative metrics are listed here:https://scikit-learn.org/stable/modules/model_evaluation.html
Score_Dict[model_name]=score
fig,ax = plt.subplots(1,1,figsize=[10,10])
np.set_printoptions(precision=2)
plot_confusion_matrix(model,X_test,y_test,display_labels=class_names,
cmap =plt.cm.Blues,
normalize='true',
xticks_rotation=45,ax=ax)
plt.axis('tight')
return model,score
###Output
_____no_output_____
###Markdown
 Models: The models used in this example are: Ridge, Logistic Regression, kNN, Support Vector Classification, and Random Forest. Ridge Classifier
###Code
#initialization, fit and evaluation of the model
from sklearn.linear_model import RidgeClassifier
from sklearn.model_selection import GridSearchCV
estimator = RidgeClassifier()
parameters = { 'alpha':np.logspace(-2,2,5)}
model = GridSearchCV(estimator, parameters,cv=5)
model, score = fit_predict_plot(model,X_train,y_train.values.flatten(),X_test,y_test.values.flatten(),Faults)
print(model.best_params_)
print("f1 score: %.2f"%score)
###Output
{'alpha': 0.1}
f1 score: 0.77
###Markdown
Logistic Regression
###Code
#initialization, fit and evaluation of the model
from sklearn import linear_model
estimator = linear_model.LogisticRegression(max_iter=1000)
parameters = { 'C':np.logspace(-2,3,5)}
model = GridSearchCV(estimator, parameters,cv=5)
model, score = fit_predict_plot(model,X_train,y_train.values.flatten(),X_test,y_test.values.flatten(),Faults)
print(model.best_params_)
print("f1 score: %.2f"%score)
###Output
{'C': 56.23413251903491}
f1 score: 0.90
###Markdown
kNN
###Code
#initialization, fit and evaluation of the model
from sklearn.neighbors import KNeighborsClassifier
estimator = KNeighborsClassifier()
parameters = { 'n_neighbors':[3,5,7]}
model = GridSearchCV(estimator, parameters,cv=5)
model, score = fit_predict_plot(model,X_train,y_train.values.flatten(),X_test,y_test.values.flatten(),Faults)
print(model.best_params_)
print("f1 score: %.2f"%score)
###Output
{'n_neighbors': 5}
f1 score: 0.78
###Markdown
SVC
###Code
from sklearn.svm import SVC
estimator = SVC(gamma='auto')
parameters = { 'C':[0.1,1,10,100]}
model = GridSearchCV(estimator, parameters,cv=5)
model, score = fit_predict_plot(model,X_train,y_train.values.flatten(),X_test,y_test.values.flatten(),Faults)
print(model.best_params_)
print("f1 score: %.2f"%score)
###Output
{'C': 100}
f1 score: 0.98
###Markdown
Random Forest
###Code
#initialization, fit and evaluation of the model
from sklearn.ensemble import RandomForestClassifier
estimator = RandomForestClassifier()
parameters = { 'min_samples_leaf':[1,3,5],
'class_weight':['balanced_subsample'],
'n_estimators':[10,100,200]}
model = GridSearchCV(estimator, parameters,cv=5)
model, score = fit_predict_plot(model,X_train,y_train.values.flatten(),X_test,y_test.values.flatten(),Faults)
print(model.best_params_)
print("f1 score: %.2f"%score)
#print out the results in a table
from IPython.display import Markdown as md
from IPython.display import display
table = '<table><tr><th> Model</th><th> Accuracy Metric </th></tr>'
for key, value in Score_Dict.items():
table +='<tr> <td>'+key+'</td><td>' +'%.2f'%(value)+'</td></tr>'
table+='</table>'
display(md(table))
names = list(Score_Dict.keys())
values = list(Score_Dict.values())
plt.figure(figsize=(15, 3))
plt.bar(names, values)
plt.ylabel('Accuracy Metric')
plt.xticks(rotation=30)
plt.grid()
###Output
_____no_output_____ |
Function Approximation by Neural Network/Polynomial regression - linear and neural network.ipynb | ###Markdown
 Polynomial regression with linear models and neural network* Are linear models sufficient for handling processes with transcendental functions?* Do neural networks perform better in those cases? Import libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Global variables for the program
###Code
N_points = 500 # Number of points for constructing function
x_min = 1 # Min of the range of x (feature)
x_max = 10 # Max of the range of x (feature)
noise_mean = 0 # Mean of the Gaussian noise adder
noise_sd = 2 # Std.Dev of the Gaussian noise adder
ridge_alpha = tuple([10**(x) for x in range(-3,0,1) ]) # Alpha (regularization strength) of ridge regression
lasso_eps = 0.001
lasso_nalpha=20
lasso_iter=1000
degree_min = 2
degree_max = 8
###Output
_____no_output_____
###Markdown
 Generate feature and output vector following a non-linear function$$ The\ ground\ truth\ or\ originating\ function\ is\ as\ follows: $$$$ y=f(x)= (20x+3x^2+0.1x^3)\cdot \sin(x)\cdot e^{-0.1x}+\psi(x) $$$$ \text{where the noise term is drawn from } \psi(x) \sim {\displaystyle f(x\;|\;\mu ,\sigma ^{2})={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}\;e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}} $$
###Code
x_smooth = np.array(np.linspace(x_min,x_max,501))
# Linearly spaced sample points
X=np.array(np.linspace(x_min,x_max,N_points))
# Samples drawn from uniform random distribution
X_sample = x_min+np.random.rand(N_points)*(x_max-x_min)
def func(x):
result = (20*x+3*x**2+0.1*x**3)*np.sin(x)*np.exp(-(1/x_max)*x)
return (result)
noise_x = np.random.normal(loc=noise_mean,scale=noise_sd,size=N_points)
y = func(X)+noise_x
y_sampled = func(X_sample)+noise_x
df = pd.DataFrame(data=X,columns=['X'])
df['Ideal y']=df['X'].apply(func)
df['y']=y
df['X_sampled']=X_sample
df['y_sampled']=y_sampled
df.head()
###Output
_____no_output_____
###Markdown
Plot the function(s), both the ideal characteristic and the observed output (with process and observation noise)
###Code
df.plot.scatter('X','Ideal y',title='Ideal y',grid=True,edgecolors=(0,0,0),c='blue',s=40,figsize=(10,5))
plt.plot(x_smooth,func(x_smooth),'k')
df.plot.scatter('X_sampled',y='y_sampled',title='Randomly sampled y',
grid=True,edgecolors=(0,0,0),c='orange',s=40,figsize=(10,5))
plt.plot(x_smooth,func(x_smooth),'k')
###Output
_____no_output_____
###Markdown
 Import scikit-learn libraries and prepare train/test splits
###Code
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import LassoCV
from sklearn.linear_model import RidgeCV
from sklearn.ensemble import AdaBoostRegressor
from sklearn.preprocessing import PolynomialFeatures
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
X_train, X_test, y_train, y_test = train_test_split(df['X'], df['y'], test_size=0.33)
X_train=X_train.values.reshape(-1,1)
X_test=X_test.values.reshape(-1,1)
n_train=X_train.shape[0]
###Output
_____no_output_____
###Markdown
 Polynomial model with regularized regression (pipelined) on linearly spaced samples** Regularization is a machine learning technique that prevents over-fitting by penalizing high-valued coefficients, i.e. keeping them bounded. The pipeline below uses cross-validated LASSO (LassoCV); the Ridge variant is left commented out. **
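For reference, with regularization strength $\alpha$ the two cross-validated estimators referenced in the next cell minimize$$ \text{Ridge:}\ \min_{w}\ \lVert y-Xw\rVert_2^2+\alpha\lVert w\rVert_2^2 \qquad\qquad \text{LASSO:}\ \min_{w}\ \tfrac{1}{2n}\lVert y-Xw\rVert_2^2+\alpha\lVert w\rVert_1 $$so large coefficients are traded off against the data-fit term.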
###Code
linear_sample_score = []
poly_degree = []
for degree in range(degree_min,degree_max+1):
#model = make_pipeline(PolynomialFeatures(degree), RidgeCV(alphas=ridge_alpha,normalize=True,cv=5))
model = make_pipeline(PolynomialFeatures(degree), LassoCV(eps=lasso_eps,n_alphas=lasso_nalpha,
max_iter=lasso_iter,normalize=True,cv=5))
#model = make_pipeline(PolynomialFeatures(degree), LinearRegression(normalize=True))
model.fit(X_train, y_train)
y_pred = np.array(model.predict(X_train))
test_pred = np.array(model.predict(X_test))
RMSE=np.sqrt(np.sum(np.square(y_pred-y_train)))
test_score = model.score(X_test,y_test)
linear_sample_score.append(test_score)
poly_degree.append(degree)
print("Test score of model with degree {}: {}\n".format(degree,test_score))
#plt.figure()
#plt.title("RMSE: {}".format(RMSE),fontsize=10)
#plt.suptitle("Polynomial of degree {}".format(degree),fontsize=15)
#plt.xlabel("X training values")
#plt.ylabel("Fitted and training values")
#plt.scatter(X_train,y_pred)
#plt.scatter(X_train,y_train)
plt.figure()
plt.title("Predicted vs. actual for polynomial of degree {}".format(degree),fontsize=15)
plt.xlabel("Actual values")
plt.ylabel("Predicted values")
plt.scatter(y_test,test_pred)
plt.plot(y_test,y_test,'r',lw=2)
linear_sample_score
###Output
_____no_output_____
###Markdown
Modeling with randomly sampled data set
###Code
X_train, X_test, y_train, y_test = train_test_split(df['X_sampled'], df['y_sampled'], test_size=0.33)
X_train=X_train.values.reshape(-1,1)
X_test=X_test.values.reshape(-1,1)
random_sample_score = []
poly_degree = []
for degree in range(degree_min,degree_max+1):
#model = make_pipeline(PolynomialFeatures(degree), RidgeCV(alphas=ridge_alpha,normalize=True,cv=5))
model = make_pipeline(PolynomialFeatures(degree), LassoCV(eps=lasso_eps,n_alphas=lasso_nalpha,
max_iter=lasso_iter,normalize=True,cv=5))
#model = make_pipeline(PolynomialFeatures(degree), LinearRegression(normalize=True))
model.fit(X_train, y_train)
y_pred = np.array(model.predict(X_train))
test_pred = np.array(model.predict(X_test))
RMSE=np.sqrt(np.sum(np.square(y_pred-y_train)))
test_score = model.score(X_test,y_test)
random_sample_score.append(test_score)
poly_degree.append(degree)
print("Test score of model with degree {}: {}\n".format(degree,test_score))
#plt.figure()
#plt.title("RMSE: {}".format(RMSE),fontsize=10)
#plt.suptitle("Polynomial of degree {}".format(degree),fontsize=15)
#plt.xlabel("X training values")
#plt.ylabel("Fitted and training values")
#plt.scatter(X_train,y_pred)
#plt.scatter(X_train,y_train)
plt.figure()
plt.title("Predicted vs. actual for polynomial of degree {}".format(degree),fontsize=15)
plt.xlabel("Actual values")
plt.ylabel("Predicted values")
plt.scatter(y_test,test_pred)
plt.plot(y_test,y_test,'r',lw=2)
random_sample_score
df_score = pd.DataFrame(data={'degree':[d for d in range(degree_min,degree_max+1)],
'Linear sample score':linear_sample_score,
'Random sample score':random_sample_score})
df_score
plt.figure(figsize=(8,5))
plt.grid(True)
plt.plot(df_score['degree'],df_score['Linear sample score'],lw=2)
plt.plot(df_score['degree'],df_score['Random sample score'],lw=2)
plt.xlabel ("Model Complexity: Degree of polynomial",fontsize=20)
plt.ylabel ("Model Score: R^2 score on test set",fontsize=15)
plt.legend(fontsize=15)
###Output
_____no_output_____
###Markdown
 Checking the regularization strength from the cross-validated model pipeline
###Code
m=model.steps[1][1]
m.alpha_
###Output
_____no_output_____
###Markdown
Neural network for regression Import and declaration of variables
###Code
import tensorflow as tf
learning_rate = 0.000001
training_epochs = 20000
n_input = 1 # Number of features
n_output = 1 # Regression output is a number only
n_hidden_layer = 35 # number of neurons in the hidden layer
X_train, X_test, y_train, y_test = train_test_split(df['X'], df['y'], test_size=0.33)
X_train=X_train.reshape(X_train.size,1)
y_train=y_train.reshape(y_train.size,1)
X_test=X_test.reshape(X_test.size,1)
y_test=y_test.reshape(y_test.size,1)
from sklearn import preprocessing
X_scaled = preprocessing.scale(X_train)
y_scaled = preprocessing.scale(y_train)
###Output
C:\Users\Tirtha\Python\Anaconda3\lib\site-packages\ipykernel_launcher.py:3: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape(...) instead
This is separate from the ipykernel package so we can avoid doing imports until
C:\Users\Tirtha\Python\Anaconda3\lib\site-packages\ipykernel_launcher.py:4: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape(...) instead
after removing the cwd from sys.path.
C:\Users\Tirtha\Python\Anaconda3\lib\site-packages\ipykernel_launcher.py:5: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape(...) instead
"""
C:\Users\Tirtha\Python\Anaconda3\lib\site-packages\ipykernel_launcher.py:6: FutureWarning: reshape is deprecated and will raise in a subsequent release. Please use .values.reshape(...) instead
###Markdown
Weights and bias variable
###Code
# Store layers weight & bias as Variables classes in dictionaries
weights = {
'hidden_layer': tf.Variable(tf.random_normal([n_input, n_hidden_layer])),
'out': tf.Variable(tf.random_normal([n_hidden_layer, n_output]))
}
biases = {
'hidden_layer': tf.Variable(tf.random_normal([n_hidden_layer])),
'out': tf.Variable(tf.random_normal([n_output]))
}
print("Shape of the weights tensor of hidden layer:",weights['hidden_layer'].shape)
print("Shape of the weights tensor of output layer:",weights['out'].shape)
print("--------------------------------------------------------")
print("Shape of the bias tensor of hidden layer:",biases['hidden_layer'].shape)
print("Shape of the bias tensor of output layer:",biases['out'].shape)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
w=sess.run(weights['hidden_layer'])
b=sess.run(biases['hidden_layer'])
print("Weight tensor initialized randomly\n---------------------------------------\n",w)
print("Bias tensor initialized randomly\n---------------------------------------\n",b)
sess.close()
###Output
Weight tensor initialized randomly
---------------------------------------
[[ 1.04348898 -0.62562287 0.0830955 -0.2694059 -1.59905183 1.82611179
-0.21245536 -1.21637654 0.97147286 -0.08349181 -1.6938988 0.7615844
1.4193033 1.52271056 -0.26382461 -0.66391391 0.62335193 -0.64882958
0.34043887 0.51017839 -1.31694865 -0.38064736 1.18706989 0.3256394
-1.07438827 0.99597555 -0.84235168 -0.14966556 -0.07332329 0.45747992
-0.90638632 0.38841721 -1.22614443 -1.21204579 -2.03451443]]
Bias tensor initialized randomly
---------------------------------------
[ 0.42340374 0.19241172 -0.32600278 0.70526534 0.61445254 0.15266864
0.51332366 1.05123603 0.49825382 0.58842802 1.42681241 0.90139294
0.25430983 0.70529252 -0.16479528 1.69503176 0.94038701 0.32357663
0.61296964 -0.77653986 0.07061771 1.3192941 0.12997486 0.4277775
0.37885833 1.02218032 0.81157911 0.29033285 0.521981 0.20968065
-0.46419618 0.01151479 -0.11108538 -0.60381615 0.17639446]
###Markdown
Input data as placeholder
###Code
# tf Graph input
x = tf.placeholder("float32", [None,n_input])
y = tf.placeholder("float32", [None,n_output])
###Output
_____no_output_____
###Markdown
Hidden and output layers definition (using TensorFlow mathematical functions)
###Code
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, weights['hidden_layer']),biases['hidden_layer'])
layer_1 = tf.nn.relu(layer_1)
# Output layer with linear activation
ops = tf.add(tf.matmul(layer_1, weights['out']), biases['out'])
###Output
_____no_output_____
###Markdown
Gradient descent optimizer for training (backpropagation):For the training of the neural network we need to perform __backpropagation__ i.e. propagate the errors, calculated by this cost function, backwards through the layers all the way up to the input weights and bias in order to adjust them accordingly (minimize the error). This involves taking first-order derivatives of the activation functions and applying chain-rule to ___'multiply'___ the effect of various layers as the error propagates back.You can read more on this here: [Backpropagation in Neural Network](https://en.wikipedia.org/wiki/Backpropagation)Fortunately, TensorFlow already implicitly implements this step i.e. takes care of all the chained differentiations for us. All we need to do is to specify an Optimizer object and pass on the cost function. Here, we are using a Gradient Descent Optimizer.Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or of the approximate gradient) of the function at the current point.You can read more on this: [Gradient Descent](https://en.wikipedia.org/wiki/Gradient_descent)
###Code
# Define loss and optimizer
cost = tf.reduce_sum(tf.squared_difference(ops,y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
###Output
_____no_output_____
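###Markdown
To make the update rule concrete, here is a minimal hand-rolled gradient descent on a one-dimensional quadratic loss (a sketch that is independent of the TensorFlow graph above; the loss function, starting point and learning rate are arbitrary choices for illustration).
###Code
# Sketch: plain gradient descent on L(w) = (w - 3)^2, whose minimum is at w = 3
w = 0.0                   # arbitrary starting point
lr = 0.1                  # arbitrary learning rate
for step in range(50):
    grad = 2 * (w - 3)    # first-order derivative dL/dw
    w = w - lr * grad     # step against the gradient
print(round(w, 4))        # converges towards 3
###Output
_____no_output_____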
###Markdown
TensorFlow Session for training and loss estimation
###Code
from tqdm import tqdm
# Initializing the variables
init = tf.global_variables_initializer()
# Empty lists for book-keeping purpose
epoch=0
log_epoch = []
epoch_count=[]
acc=[]
loss_epoch=[]
# Launch the graph
with tf.Session() as sess:
sess.run(init)
# Loop over epochs
for epoch in tqdm(range(training_epochs)):
# Run optimization process (backprop) and cost function (to get loss value)
_,l=sess.run([optimizer,cost], feed_dict={x: X_scaled, y: y_scaled})
loss_epoch.append(l) # Save the loss for every epoch
epoch_count.append(epoch+1) #Save the epoch count
# print("Epoch {}/{} finished. Loss: {}, Accuracy: {}".format(epoch+1,training_epochs,round(l,4),round(accu,4)))
#print("Epoch {}/{} finished. Loss: {}".format(epoch+1,training_epochs,round(l,4)))
w=sess.run(weights)
b = sess.run(biases)
#layer_1 = tf.add(tf.matmul(X_test, w['hidden_layer']),b['hidden_layer'])
#layer_1 = tf.nn.relu(layer_1)
# Output layer with no activation
#ops = tf.add(tf.matmul(layer_1, w['out']), b['out'])
layer1=np.matmul(X_test,w['hidden_layer'])+b['hidden_layer']
layer1_out = np.maximum(layer1,0)
yhat = np.matmul(layer1_out,w['out'])+b['out']
yhat-y_test
plt.plot(epoch_count,loss_epoch)
###Output
_____no_output_____
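###Markdown
To put the network on the same footing as the polynomial models, its predictions can also be scored with R^2 (a sketch; note that `yhat` above was produced from the unscaled `X_test` using weights trained on scaled data, so the number mainly illustrates the mechanics of scoring).
###Code
# Sketch: R^2 of the manual forward-pass predictions against the held-out targets
from sklearn.metrics import r2_score
print(r2_score(y_test, yhat))
###Output
_____no_output_____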
###Markdown
Keras
###Code
X_scaled.shape
# Imports
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.utils import np_utils
# Building the model
model = Sequential()
model.add(Dense(25, activation='linear', input_dim=1))
#model.add(Dropout(.2))
model.add(Dense(25, activation='linear'))
#model.add(Dropout(.1))
model.add(Dense(25, activation='linear'))
model.add(Dense(25, activation='linear'))
model.add(Dense(1, activation='linear'))
# Compiling the model
sgd = SGD(lr=0.001, decay=0, momentum=0.9, nesterov=True)
model.compile(loss = 'mean_squared_error', optimizer=sgd)  # use the SGD instance configured above (the string 'sgd' would ignore it)
model.summary()
model.fit(X_scaled, y_scaled, epochs=2000, verbose=0)
# note: the model was fit on scaled data (X_scaled, y_scaled); X_test and y_test would need the same scaling for a directly comparable score
score = model.evaluate(X_test, y_test)
score
yhat=model.predict(X_test)
yhat
plt.scatter(yhat,y_test)
###Output
_____no_output_____ |
notebooks/Clustering RFM.ipynb | ###Markdown
Clustering RFM: This notebook performs clustering on the Instacart Dataset to segment users based on the Recency, Frequency and Monetary values. Clustering the data for customer segmentation
###Code
#loading the necessary packages
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
import pandas as pd
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE
from sklearn.metrics import silhouette_score
from yellowbrick.cluster import KElbowVisualizer
%matplotlib inline
from matplotlib import rc
# Define a function to test KMeans at various k
# This approach uses silhouette score to evaluate KMeans
def optimal_kmeans(dataset, start=2, end=11):
'''
Calculate the optimal number of kmeans
INPUT:
dataset : dataframe. Dataset for k-means to fit
start : int. Starting range of kmeans to test
end : int. Ending range of kmeans to test
OUTPUT:
Values and line plot of Silhouette Score.
'''
# Create empty lists to store values for plotting graphs
n_clu = []
km_ss = []
# Create a for loop to find optimal n_clusters
for n_clusters in range(start, end):
# Create cluster labels
kmeans = KMeans(n_clusters=n_clusters)
labels = kmeans.fit_predict(dataset)
# Calculate model performance
silhouette_avg = round(silhouette_score(dataset, labels,
random_state=1), 3)
# Append score to lists
km_ss.append(silhouette_avg)
n_clu.append(n_clusters)
print("No. Clusters: {}, Silhouette Score: {}, Change from Previous Cluster: {}".format(
n_clusters,
silhouette_avg,
(km_ss[n_clusters - start] - km_ss[n_clusters - start - 1]).round(3)))
# Plot graph at the end of loop
if n_clusters == end - 1:
plt.figure(figsize=(5.6,3.5))
#plt.title('Silhouette Score Elbow for KMeans Clustering')
plt.xlabel('k')
plt.ylabel('silhouette score')
sns.pointplot(x=n_clu, y=km_ss)
plt.savefig('silhouette_score.pdf', format='pdf',
pad_inches=2.0)
plt.tight_layout()
plt.show()
def kmeans(df, clusters_number):
'''
Implement k-means clustering on dataset
INPUT:
dataset : dataframe. Dataset for k-means to fit.
clusters_number : int. Number of clusters to form.
end : int. Ending range of kmeans to test.
OUTPUT:
Cluster results and t-SNE visualisation of clusters.
'''
x = 25000
kmeans = KMeans(n_clusters = clusters_number, random_state = 1)
kmeans.fit(df[:x])
labels = kmeans.predict(df[x:])
# Extract cluster labels
cluster_labels = kmeans.labels_
# Create a cluster label column in original dataset
df_new = df[:x].assign(Cluster = cluster_labels)
# # Initialise TSNE
# model = TSNE(random_state=1)
# transformed = model.fit_transform(df)
# # Plot t-SNE
# plt.title('Flattened Graph of {} Clusters'.format(clusters_number))
# sns.scatterplot(x=transformed[:,0], y=transformed[:,1], hue=cluster_labels, style=cluster_labels, palette="Set1")
# plt.savefig('cluster_brain_plot_6_clusters_first_2500.png')
return df_new, cluster_labels, labels
import matplotlib
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
plt.rc('axes', labelsize=11)
plt.rc('axes', titlesize=11)
plt.rc('xtick', labelsize=9)
plt.rc('ytick', labelsize=9)
###Output
_____no_output_____
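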
###Markdown
Import and merge the data
###Code
products = pd.read_csv("../instacart/products.csv")
orders = pd.read_csv("../instacart/orders.csv")
order_products_train = pd.read_csv("../instacart/order_products__train.csv")
order_products_prior = pd.read_csv("../instacart/order_products__prior.csv")
departments = pd.read_csv("../instacart/departments.csv")
aisles = pd.read_csv("../instacart/aisles.csv")
merge_data = products.merge(order_products_prior, on = 'product_id', how = 'inner')
merge_data = departments.merge(merge_data, on = 'department_id', how = 'inner')
merge_data = orders.merge(merge_data, on = 'order_id', how = 'inner')
#remove some useless info
# merge_data = merge_data.drop(['department','product_name'], axis = 1)
print( "Number of departments:", departments['department_id'].nunique())
print( "Number of aisles:", aisles['aisle_id'].nunique())
print( "Number of products:", products['product_id'].nunique())
print( "Number of unique users:", merge_data['user_id'].nunique())
print( "Number of unique orders", merge_data['order_id'].nunique())
print("Departments columns:", departments.columns)
print("Aisles columns:", aisles.columns)
print("Product columns:", products.columns)
print("Order_products:" , order_products_prior.columns)
print("Order:" , orders.columns)
###Output
_____no_output_____
###Markdown
Define functions to calculate the values
###Code
# returns data of User A
def user_specific_data(user_number):
user_data = merge_data_train[merge_data_train['user_id'] == user_number]
return user_data
# returns data of User A and Item B
def user_product_data(user_number,product_number):
user_data = merge_data[merge_data['user_id'] == user_number]
user_product_data = user_data[user_data['product_id'] == product_number]
return user_product_data
#creating crosstabs that indicate the items purchased during each transaction, also giving the days since the prior order.
#Visually easy to see which items were purchased in a transaction.
def crosstab_user(user_number):
user_data = user_specific_data(user_number)
seq = user_data.order_id.unique()
crosst_user = pd.crosstab(user_data.product_name,user_data.order_id).reindex(seq, axis = 'columns')
sns.heatmap(crosst_user,cmap="YlGnBu",annot=True, cbar=False)
return crosst_user
def crosstab_user_order_id(user_number):
user_data = user_specific_data(user_number)
user_data = user_data.fillna(value = 0, axis = 1)
seq = user_data.order_id.unique()
dspo_data = user_data.groupby('order_id', as_index=False)['days_since_prior_order'].mean()
#dspo_data = dspo_data.T
#user_data = pd.concat([dspo_data,user_data])
crosst_user = pd.crosstab(user_data.product_name,user_data.order_id).reindex(seq, axis = 'columns')
#sns.heatmap(crosst_user,cmap="YlGnBu",annot=True, cbar=False)
crosst_user = pd.merge((crosst_user.T), dspo_data, on = 'order_id')
crosst_user = crosst_user.set_index('order_id')
crosst_user = crosst_user.T
#sns.heatmap(crosst_user,cmap="YlGnBu",annot=True, cbar=False)
return crosst_user
# Frequency being the number of orders placed by a user
# Total number of orders placed by a specific user
order_id_grouped = merge_data.drop(['days_since_prior_order','product_id','product_name','add_to_cart_order','reordered'],axis = 1)
number_of_orders_per_user = order_id_grouped.groupby('user_id').agg(num_orders = pd.NamedAgg(column = 'order_id', aggfunc = 'nunique' ))
number_of_orders_per_user
# plotting the number of products in each order
#creating a graph displaying the time of the day vs the departments
dep_prod = products.merge(departments, on = 'department_id', how = 'inner')
order_order_prod = orders.merge(order_products_prior, on = 'order_id', how = 'inner')
order_dep_prod = dep_prod.merge(order_order_prod,on = 'product_id', how = 'inner')
order_dep_prod_cleaned = order_dep_prod.drop(['days_since_prior_order','add_to_cart_order','reordered','aisle_id','product_id','product_name','order_id','user_id','eval_set'],axis = 1)
num_prods = order_dep_prod.groupby("order_id")["add_to_cart_order"].aggregate("max").reset_index()
cnt_srs = num_prods.add_to_cart_order.value_counts()
cnt_srs
# creating a dataframe that specifies the number of products in each order for each user
num_prods_user = orders.merge(num_prods, on = 'order_id', how = 'inner')
num_prods_user.drop(['eval_set','order_dow','order_hour_of_day','days_since_prior_order','order_number'],axis = 1)
# We want the average products per order per user for the monetary entry of RFM
average_num_prods_user =num_prods_user.groupby("user_id")["add_to_cart_order"].aggregate("mean").reset_index()
#creating a dataframe that contains the Frequency and the Monetary values
F_M = number_of_orders_per_user.merge(average_num_prods_user, on = 'user_id', how = 'inner')
F_M = F_M.rename(columns={"num_orders": "Frequency", "add_to_cart_order": "Monetary"})
#creating the Recency feature
# getting the last days_since_prior_order in the train set....
# using the 2nd last days_since_prior_order as recency
last_days_since_prior_order_user =orders.groupby("user_id")["days_since_prior_order"].nth(-2).reset_index()
# using the average days_since_prior_order as the recency feature
mean_days_since_prior_order_user =orders.groupby("user_id")["days_since_prior_order"].mean().reset_index()
R_F_M = F_M.merge(mean_days_since_prior_order_user, on = 'user_id', how = 'inner')
RFM = R_F_M.rename(columns={"days_since_prior_order": "Recency"})
RFM.set_index('user_id', inplace = True)
#reordering the columns so that the column order is RFM
cols = ['Recency', 'Frequency', 'Monetary']
RFM = RFM[cols]
RFM
###Output
_____no_output_____
###Markdown
Checking if the data created is skewed...
###Code
RFM = pd.read_pickle("RFM.pkl")
#plotting the data to see if the features that we created is skewed
plt.figure(figsize=[5.6,5.6])
RFM.hist(figsize=[5.6,4])
plt.tight_layout()
plt.savefig("RFM.pdf")
###Output
_____no_output_____
###Markdown
From the histograms we see that the Monetary feature we created is positively skewed. This means that we will have to log-transform it. The others are roughly normal, so we will use them as is...
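A quick numerical check backs this up (a sketch; pandas' `.skew()` is one simple way to quantify skewness, with values well above 0 indicating a long right tail).
###Code
# Sketch: skewness of each RFM feature before any transformation
print(RFM.skew())
###Output
_____no_output_____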
###Code
#From the figures we see that Frequency (total number of orders per customer) is positively skewed
#thus we need to log transform the data so that we can use K-Means clustering
RFM['Frequency'] = np.log(RFM['Frequency'])
#RFM['Recency'] = np.log(RFM['Recency'])
#RFM['Monerary'] = np.log(RFM['Monerary'])
# RFM.hist(figsize=(10,6))
# plt.tight_layout()
RFM['Monetary'] = np.log(RFM['Monetary'])
df = RFM.drop(['Recency'],axis = 1)
df.hist(figsize=[5.6,2])
plt.tight_layout()
plt.savefig("rfm_scaled.pdf")
###Output
_____no_output_____
###Markdown
Now the data looks more normal so we will use it as created... The data should also be scaled...
###Code
#So now that the data is roughly normal we need to scale the features: K-Means is distance-based,
#so each feature should be on a comparable scale (roughly mean 0 and std 1)
#Scaling the RFM features that we created
#This is part of the pre-processing process...
scaling_fact = StandardScaler()
RFM_scaled = scaling_fact.fit_transform(RFM)
RFM_scaled = pd.DataFrame(RFM_scaled)
RFM_scaled.hist(figsize=(10,6))
plt.tight_layout()
data_described = RFM_scaled.describe()
data_described = data_described.round(decimals=2)
data_described
###Output
_____no_output_____
###Markdown
Market Segmentation Using K-Means to cluster into segments after engineering RFM features. Looking into how many clusters are a good number for this dataset. K-Means performs best when not skewed and when normalised around a mean of 0 and a standard deviation of 1 -- we just did these so we are good to go!
###Code
# Visualize performance of KMeans at various values k
# This approach uses distortion score to evaluate KMeans
model = KMeans()
plt.figure(figsize=[5.6, 3])
visualizer = KElbowVisualizer(model, k=(2, 15))
visualizer.fit(RFM_scaled)
visualizer.finalize()               # draw the elbow plot without closing the figure
plt.gca().set_xlabel("k")           # override the default axis labels
plt.gca().set_ylabel("distortion score")
plt.savefig('elbow.pdf')
plt.show()
###Output
_____no_output_____
###Markdown
With the elbow method it is clear that the number of clusters should be 6
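As a cross-check on the elbow result, silhouette scores for a few candidate values of k can be computed directly (a sketch; the 10,000-row subsample and the candidate range are arbitrary choices to keep the computation quick).
###Code
# Sketch: silhouette score for several k on a subsample of the scaled RFM data
sub = RFM_scaled.sample(10000, random_state=1)
for k in [4, 5, 6, 7]:
    labels_k = KMeans(n_clusters=k, random_state=1).fit_predict(sub)
    print(k, round(silhouette_score(sub, labels_k), 3))
###Output
_____no_output_____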
###Code
# Plot clusters for k=6
cluster_less_6, cluster_labels, labels = kmeans(RFM_scaled, 6)
print(labels.shape)
print(cluster_labels.shape)
# clusters_3 = kmeans(RFM_scaled, 3)
# Convert clusters to DataFrame with appropriate index and column names
cluster_df = pd.DataFrame(cluster_less_6)
cluster_df.index = RFM[:25000].index
# label the columns in the same order as the RFM table (Recency, Frequency, Monetary) used to build RFM_scaled
cluster_df.columns = ['Recency', 'Frequency',
'Monetary', 'Cluster']
cluster_df
cluster_df.index.names = ['user_id']
cluster_df.head()
# Reshape data for snake plot
cluster_melt = pd.melt(cluster_df.reset_index(),
id_vars=['user_id', 'Cluster'],
value_vars=['Recency',
'Frequency',
'Monetary'],
var_name='Metric',
value_name='Value')
cluster_melt['Cluster'] += 1
# Create snake plot
# palette = ['powderblue', 'green','orange','purple','steelblue','grey']
palette = 'colorblind'
plt.figure(figsize=[5.6,3])
sns.pointplot(x='Metric', y='Value', data=cluster_melt, hue='Cluster',
palette=palette)
plt.xlabel('')
plt.ylabel('Value')
plt.yticks([])
#plt.title('Six Customer Segments')
sns.despine()
plt.tight_layout()
# plt.savefig('snake_plot_5_clusters_less_data_head_25000_Av_R2.png', dpi=300, pad_inches=2.0)
plt.savefig('snake_plot_5_clusters_less_data_head_25000_Av_R2.pdf')
plt.show()
###Output
_____no_output_____
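###Markdown
A quick way to profile the six segments numerically (a sketch; `cluster_df` holds the scaled R, F, M values, so these are averages in standardised units rather than raw days, order counts or basket sizes).
###Code
# Sketch: mean (scaled) Recency, Frequency and Monetary value per cluster, plus cluster sizes
print(cluster_df.groupby('Cluster').mean())
print(cluster_df['Cluster'].value_counts())
###Output
_____no_output_____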
###Markdown
Visualise the clusters
###Code
from mpl_toolkits.mplot3d import Axes3D
threeD = plt.figure(figsize=[5.6,3]).gca(projection= '3d')
scat = threeD.scatter(cluster_df['Recency'],cluster_df['Frequency'],
 cluster_df['Monetary'],
 c = cluster_df['Cluster'],cmap='icefire_r')
threeD.set_xlabel('Recency')
threeD.set_ylabel('Frequency')
threeD.set_zlabel('Monetary')
# build legend entries from the scatter's colour mapping (a bare plt.legend() has no labelled handles)
threeD.legend(*scat.legend_elements(), title='Cluster')
plt.tight_layout()
plt.savefig('threeD_cluster.png', dpi = 500)
###Output
_____no_output_____
###Markdown
Format the cluster data into a table
###Code
df_clusters_all_data = pd.DataFrame(cluster_labels)
df_clusters_add = pd.DataFrame(labels)
df_clusters_all_data = df_clusters_all_data.append(df_clusters_add).reset_index()
df_clusters_all = df_clusters_all_data.drop(['index'],axis = 1)
df_clusters_all = df_clusters_all.rename(columns={0:'Cluster'})
df_clusters_all.to_csv('clustered_data.csv')
#Clustering the customers based on their RFM values
from sklearn.cluster import KMeans
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import MinMaxScaler
import seaborn as sns
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.preprocessing import scale
import sklearn.metrics as ss
from sklearn.metrics import confusion_matrix, classification_report
X = scale(RFM)
clustering = KMeans(n_clusters = 5, random_state = 5)
clustering.fit(X)
###Output
_____no_output_____
###Markdown
Plotting the model output
###Code
%matplotlib inline
color_theme = np.array(['darkgray','lightsalmon','powderblue','green','yellow'])
plt.scatter(x = RFM.Frequency, y = RFM.Recency, c = color_theme[clustering.labels_])
plt.title('K-Means classification')
plt.scatter(x = RFM.Monetary, y = RFM.Recency, c = color_theme[clustering.labels_])
plt.title('K-Means classification')
plt.scatter(x = RFM.Monetary, y = RFM.Frequency, c = clustering.labels_)
plt.title('K-Means classification')
def bench_k_means(estimator, name, data):
# note: this helper needs ground-truth labels `y` (not available for this unsupervised RFM data)
# and the sklearn metrics module imported below; it is defined for reference and never called.
from sklearn import metrics
estimator.fit(data)
print('%-9s\t%i\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f'
% (name, estimator.inertia_,
metrics.homogeneity_score(y, estimator.labels_),
metrics.completeness_score(y, estimator.labels_),
metrics.v_measure_score(y, estimator.labels_),
metrics.adjusted_rand_score(y, estimator.labels_),
metrics.adjusted_mutual_info_score(y, estimator.labels_),
metrics.silhouette_score(data, estimator.labels_,
metric='euclidean')))
###Output
_____no_output_____ |
notebooks/__debugging/TEST_water_intake_profile_calculator.ipynb | ###Markdown
[Tutorial](https://neutronimaging.pages.ornl.gov/en/tutorial/notebooks/water_intake_profile_calculator/) Select your IPTS
###Code
from __code.ui_builder import UiBuilder
o_builder = UiBuilder(ui_name = 'ui_water_intake_profile.ui')
from __code.roi_selection_ui import Interface
from __code import system
from __code.water_intake_profile_calculator import WaterIntakeProfileCalculator, WaterIntakeProfileSelector
system.System.select_working_dir()
from __code.__all import custom_style
custom_style.style()
###Output
_____no_output_____
###Markdown
Python Import
###Code
%gui qt
###Output
_____no_output_____
###Markdown
Select Images to Process
###Code
o_water = WaterIntakeProfileCalculator(working_dir=system.System.get_working_dir())
o_water.select_data()
###Output
_____no_output_____
###Markdown
Select Profile Region
###Code
o_gui = WaterIntakeProfileSelector(dict_data=o_water.dict_files)
o_gui.show()
###Output
_____no_output_____
###Markdown
%DEBUGGING
###Code
from __code import system
from __code.water_intake_profile_calculator import WaterIntakeProfileCalculator, WaterIntakeProfileSelector
%gui qt
list_files = ['/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-Das-Saikat/only_data_of_interest/image_00544.tif',
'/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-Das-Saikat/only_data_of_interest/image_00545.tif',
'/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-Das-Saikat/only_data_of_interest/image_00546.tif',
'/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-Das-Saikat/only_data_of_interest/image_00547.tif',
'/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-Das-Saikat/only_data_of_interest/image_00548.tif',
]
list_files = ['/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-15177/Sample5_uptake_no bad images/Sample5_1min_r_0.tif',
'/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-15177/Sample5_uptake_no bad images/Sample5_1min_r_1.tif',
'/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-15177/Sample5_uptake_no bad images/Sample5_1min_r_2.tif',
'/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-15177/Sample5_uptake_no bad images/Sample5_1min_r_3.tif',
'/Volumes/my_book_thunderbolt_duo/IPTS/IPTS-15177/Sample5_uptake_no bad images/Sample5_1min_r_4.tif',
]
list_files = ["/Users/j35/IPTS/charles/im0000.tif",
"/Users/j35/IPTS/charles/im0320.tif",
"/Users/j35/IPTS/charles/im0321.tif",
"/Users/j35/IPTS/charles/im0322.tif",
"/Users/j35/IPTS/charles/im0323.tif",
"/Users/j35/IPTS/charles/im0324.tif",
"/Users/j35/IPTS/charles/im0325.tif",
"/Users/j35/IPTS/charles/im0326.tif",
]
o_water = WaterIntakeProfileCalculator()
o_water.load_and_plot(list_files)
o_gui = WaterIntakeProfileSelector(dict_data = o_water.dict_files)
o_gui.show()
171-66+131
236*0.05
###Output
_____no_output_____ |
signals/signals-lab-1/signals-1-4-more-interpreting-the-dft.ipynb | ###Markdown
_Speech Processing Labs 2021: SIGNALS 1: More on Interpreting the DFT (Extension)_
###Code
## Run this first!
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import cmath
from math import floor
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
plt.style.use('ggplot')
from dspMisc import *
###Output
_____no_output_____
###Markdown
More on Interpreting the DFT This notebook is extension material: This notebook goes through DFT output frequencies and leakage in more detail than is strictly necessary for this course. It's perfectly fine to skip it for now. Learning Outcomes* Understand how sampling rate effects the DFT output* Understand what the DFT leakage is. Need to know* Topic Videos: Fourier Analysis, Frequency Domain* [Digital Signals: Sampling sinusoids](./signals-1-2-sampling-sinusoids.ipynb)* [The Discrete Fourier Transform](./signals-1-3-discrete-fourier-transform-in-detail.ipynb) Equation alert: If you're viewing this on github, please note that the equation rendering is not always perfect. You should view the notebooks through a jupyter notebook server for an accurate view. A Very Quick Recap of the DFTThe [previous notebook](./signals-1-3-discrete-fourier-transform-in-detail.ipynb) went through the mechanics of the Discrete Fourier Transform (DFT). To summarize, the DFT input and output are broadly: * **Input:** $N$ amplitude samples over time * $x[n]$, for $n=0..N-1$ (i.e. a time series of $N$ samples) * **Output:** the dot product (i.e., the similiarity) between the input and $N$ sinusoids with different frequencies * DFT[k] $= Me^{-j\phi}$, i.e. a complex number (in polar form) with **magnitude** $M$ and **phase** angle $\phi$ * The $N$ DFT outputs represent $N$ equally space frequencies between 0 and the sampling rate. The outputs are calculated using the following formula for $k=0,...N-1$. $$ \begin{align}DFT[k] &= \sum_{n=0}^{N-1} x[n] e^{-j \frac{2\pi nk}{N}} \\&= \sum_{n=0}^{N-1} x[n]\big[\cos\big(\frac{2\pi nk}{N} \big) - j \sin\big(\frac{2\pi nk}{N} \big) \big]\end{align}$$You can think DFT[k] as a **phasor**, which looks like an analogue clockhand (i.e. a vector) ticking (i.e., rotating) around a clockface (i.e. a circle), where the length of the hand is the **peak amplitude** of that wave, and how fast it goes around the clock is it's frequency. Each of these DFT[k] 'clocks' corresponds to a sinusoid of a specific frequency.Each DFT[k] output essentially tells us whether the input signal has a sinusoidal component that matches the $k$th DFT phasor frequency. So, we talk about the DFT outputs as providing the **frequency response** of the input signal. Since the the DFT outputs are complex numbers, we can talk about them in terms of magnitude and phase angle: The **magnitude** of DFT[k] tells us how much we'd weight the $k$-th phasor if we were to try to reconstruct the original input by adding all the DFT phasors together. The **phase angle** of DFT[k] tells use whether we need to shift that wave along the time axis. The DFT Frequency Response: Which Frequencies?In [the first notebook on interpreting the DFT](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb) we saw that for input of length $N$, the DFT **output analysis frequencies** are $N$ evenly space points between 0 Hz and the sampling rate. So, how can we see this from the DFT equation? We can first note that DFT[0] (corresponding to a 0 Hz, i.e., a phasor stuck at one point) is the average of the input sequence. This tells us the amplitude of the waveform (i.e. whether it's centered above or below 0 in amplitude). This is often referred to as the DC component ('Direct Current') in electrical engineering texts. Now, we can work out all the other output frequencies by noticing that DFT[1] represents a phasor that takes N equal steps to make one complete one full circle (clockwise starting from (1,0)). 
So, $e^{-j 2\pi n/N}$ in the equation represents the $n$th step around the circle. Let's call the **sampling rate** $f_s$ (samples/second). We can then figure out the frequency of represented by DFT[1] by figuring out the time it takes to make one cycle (i.e., the period), which is the time it takes to make $N$ steps. * The time between each sample (i.e., the **sampling time**) is $t_s = 1/f_s$ (seconds)* So, $N$ samples takes $t_s \times N$ (seconds x samples = seconds) * And it will take the $k=1$ phasor $T = t_s \times N$ (seconds) to make 1 complete cycle * This is the **period** or **wavelength** of the phasor * Thus, the **frequency** of the $k=1$ phasor is $f_{min} = 1/T = 1/(t_s N) $ (cycles/second) * i.e., $f_{min} = f_s/N$So, the minimum frequency that we can analyse in an application of the DFT $f_{min}$ depends on the input size $N$ and the sampling rate $f_s$. From there we can see that DFT[k] represents a phasor that completes the circle $k$ times faster than the one corresponding to DFT[1]. That is, The frequency associated with DFT[k] is: $kf_{min}$ (cycles/second) = $kf_s/N$Since $k$ = 0,...,$N-1$, this is the same as saying we take taking N evenly space points between 0 Hz and the sampling rate, $f_s$, which is the shortcut we took in [the first notebook on the DFT](./signals-1-1-interpreting-the-discrete-fourier-transform.ipynb). Thinking about this in terms of sampling rates and aliasing explains why you get the mirroring effect in the DFT outputs: Once you get to half the sampling rate, your samples are too far apart (in time) to capture the actual frequency of the sinusoid, as we can't capture 2 points per cycle. Sinusoids of those higher frequencies become indistinguishable from their lower (mirror) counterpart. So in analyzing what frequency components are in an input signal we only consider the first $N/2$ DFT outputs (corresponding to 0 to $f_s/2$ Hz, i.e. the Nyquist Frequency) So, the important thing to remember is that the DFT outputs depend on: * The **number of samples** in the input sequence, $N$* The **sampling rate**, $f_s$ samples/second ExerciseAssume we have a sampling rate of $f_s = 32$ samples/second, and an input length of $N=16$. * What's the frequency associated with DFT[1]? * What's the frequency associated with DFT[5]? Notes Leakage One of the main things to remember about the DFT is that you're calculating the correlation between the input and phasors with specific frequencies. If your input exactly matches one of those phasor frequencies the magnitude response will show a positive magnitude for that phasor and zero for everything else. However, if the input frequency falls between output frequencies, then you'll see **spectral leakage**. The DFT outputs close to the input frequency will also get positive magnitudes, with the DFT output closest to the input frequency getting the highest magnitude. The following code gives an example
###Code
## input size
N=64
## sampling rate
f_s = 64
freq1 = 4.5 ## In between DFT output frequencies
freq2 = 20 ## One of the DFT outputs
#freq2 = 6
amplitude1 = 1
amplitude2 = 0.5
x1, time_steps = gen_sinusoid(frequency=freq1, phase=0, amplitude=amplitude1, sample_rate=f_s, seq_length=N, gen_function=np.cos)
x2, time_steps = gen_sinusoid(frequency=freq2, phase=np.pi/2, amplitude=amplitude2, sample_rate=f_s, seq_length=N, gen_function=np.cos)
x_compound = x1 + x2
## Plot the input
fig, timedom = plt.subplots(figsize=(16, 4))
timedom.scatter(time_steps, x_compound, color='magenta')
timedom.plot(time_steps, x_compound, color='magenta')
timedom.set_xlabel("time (s)")
timedom.set_ylabel("Amplitude")
timedom.set_title("Leakage example (time domain)")
## Do the DFT on the compound waveform as above:
mags, phases = get_dft_mag_phase(x_compound, seq_len=N)
dft_freqs = get_dft_freqs_all(sample_rate=f_s, seq_len=N)
## Plot the magnitudes
fig, fdom = plt.subplots(figsize=(16, 4))
## Just plot the first N/2 frequencies since we know that they are the mirrored for k>N/2
fdom.scatter(dft_freqs[:round(N/2)], mags[:round(N/2)])
fdom.set_xlabel("Frequency (Hz)")
fdom.set_ylabel("Magnitude")
fdom.set_title("Leakage example: Magnitude response")
#print(mags)
###Output
_____no_output_____
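###Markdown
To connect this back to the analysis frequencies: with $f_s=64$ and $N=64$ the DFT bins are spaced $f_s/N = 1$ Hz apart, so the 4.5 Hz component falls exactly between the $k=4$ and $k=5$ bins and its energy is smeared over neighbouring outputs, while the 20 Hz component lands cleanly on a single bin. A quick check (a sketch reusing the `dft_freqs` array computed above):
###Code
## Sketch: the bins nearest the 4.5 Hz component
print(dft_freqs[3:7])   ## expect [3. 4. 5. 6.]; 4.5 Hz sits between two of them
###Output
_____no_output_____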
###Markdown
Leakage as the normalized sinc function Leakage makes the DFT harder to interpret. However, we can derive the shape that leakage will have from the DFT equation and some algebra about rectangular functions. It turns out that leakage for a particular frequency follows the normalized **sinc** function: $$\begin{align}X(m) &= \Big|\frac{AN}{2} \cdot \mathop{sinc}(c-m)\Big|\\&= \Big|\frac{AN}{2} \cdot \frac{\sin(\pi(c-m))}{\pi(c-m)}\Big|\\\end{align}$$Where $A$ is the peak amplitude of the input, $N$ is the input sequence length, $c$ is the number of cycles completed in the input sequence time. If $c$ is a whole number we just get the usual DFT response (i.e. a single spike at the corresponding frequency), but if $c$ is not a whole number, we get a spread across output frequency bins. The sinc function is a bit hard to think about from just the equation, but it's easy to recognize when plotted (as below). Let's check whether the sinc function matches what we get in the DFT. First we write a function to evaluate the leakage function in between our DFT outputs.
###Code
## Calculate the approximated leakage as the sinc function
def calc_leakage(freq, sample_rate, seqlen, amplitude=1):
sequence_time = (1/sample_rate)*seqlen
## number of cycles in input for component
c = freq * sequence_time
print("c=", c)
## Interpolate between DFT ouput indices
ms = np.arange(0, seqlen, 0.1)
## Approximated response - we could actually just return a function here, but
## let's just keep things concrete for now.
leakage = np.abs(amplitude * seqlen * 0.5 * np.sinc((c-ms)))
return leakage, ms * (sample_rate/seqlen)
###Output
_____no_output_____
###Markdown
Now let's plot the leakage predicted for our two input components separately (top) and added together (bottom)
###Code
## Calculate the leakage function for our know input wave components
leakage1, ms = calc_leakage(freq1, f_s, N, amplitude=amplitude1)
leakage2, ms = calc_leakage(freq2, f_s, N, amplitude=amplitude2)
## Plot the magnitude response and the leakage function for each of our input components
fig, fdom = plt.subplots(figsize=(16, 4))
fdom.set(xlim=(-1, N/2), ylim=(-1, N))
fdom.scatter(dft_freqs, mags)
fdom.plot(ms, leakage1)
fdom.plot(ms, leakage2)
fdom.set_xlabel("Frequency (Hz)")
fdom.set_ylabel("Magnitude")
fdom.set_title("Leakage function for each input component frequency")
## Plot the magnitude response and the sum of the leakage functions
fig, fdom = plt.subplots(figsize=(16, 4))
fdom.set(xlim=(-1, N/2), ylim=(-1, N))
fdom.scatter(dft_freqs, mags)
fdom.plot(ms, leakage1 + leakage2, color='C5')
fdom.set_xlabel("Frequency (Hz)")
fdom.set_ylabel("Magnitude")
fdom.set_title("Sum of leakage functions for input components")
## It fits, though not perfectly!
###Output
_____no_output_____
###Markdown
In the top figure, you should see that the peaks (**main lobes**) of each leakage function are aligned with our input component frequencies. The peaks are at the same points as the DFT outputs when the sinusoidal component has a frequency matching a DFT output frequency (i.e. the 20 Hz component). Otherwise we see the spread of leakage around the input component frequency (i.e. around 4.5 Hz). You'll also notice that our DFT magnitude points don't always line up perfectly with our sinc curves. Part of this is because the leakage function is an _approximation_. Nevertheless, it's a very good approximation! Exercise* What happens if the frequencies of the two components making the compound waveform are very close together? * e.g. make `freq2=6`* What if one of the components has a relatively small amplitude? * e.g. change `amplitude` of the second input to 0.5 Notes
###Code
N=1028
f_s = 1028
f_in1 = 8 ## Also one of the DFT outputs (bins are 1 Hz apart here since f_s/N = 1)
f_in2 = 20 ## One of the DFT outputs
f_in3 = 36 ## One of the DFT outputs
x1, time_steps = gen_sinusoid(frequency=f_in1, phase=0, amplitude=0.5, sample_rate=f_s, seq_length=N, gen_function=np.cos)
x2, time_steps = gen_sinusoid(frequency=f_in2, phase=np.pi/2, amplitude=1, sample_rate=f_s, seq_length=N, gen_function=np.cos)
x3, time_steps = gen_sinusoid(frequency=f_in3, phase=0, amplitude=0.3, sample_rate=f_s, seq_length=N, gen_function=np.cos)
x_compound = x1 + x2 + x3
## Plot the input
fig, timedom = plt.subplots(figsize=(16, 4))
#timedom.scatter(time_steps, x_compound, color='magenta')
#timedom.set(xlim=(0, 0.2))
timedom.plot(time_steps, x_compound, color='magenta')
## Plot the input
fig = plt.figure(figsize=(15,15))
gs = fig.add_gridspec(3,2)
ymax=2
timedom = fig.add_subplot(gs[0, 0])
timedom.set(xlim=(-0.1, 1), ylim=(-ymax,ymax))
timedom.plot(time_steps, x_compound, color='magenta')
timedom.set_title("waveform made from adding 3 sine waves")
s1 = fig.add_subplot(gs[0, 1])
s1.set(xlim=(-0.1, 1), ylim=(-ymax,ymax))
s1.plot(time_steps, x1)
s1.set_title("component 1: 8 Hz")
s2 = fig.add_subplot(gs[1, 1])
s2.set(xlim=(-0.1, 1), ylim=(-ymax,ymax))
s2.plot(time_steps, x2)
s2.set_title("component 2: 20 Hz")
s3 = fig.add_subplot(gs[2, 1])
s3.set(xlim=(-0.1, 1), ylim=(-ymax,ymax))
s3.plot(time_steps, x3)
s3.set_title("component 3: 36 Hz")
#fig.savefig("../fig/compound_waveform.png")
## Do the DFT on the compound waveform as above:
mags, phases = get_dft_mag_phase(x_compound, seq_len=N)
dft_freqs = get_dft_freqs_all(sample_rate=f_s, seq_len=N)
## Plot the magnitudes
fig,fdom = plt.subplots(figsize=(16, 4))
fdom.set(xlim=(-1, N/2))
## Just plot the first N/2 frequencies since we know that they are the mirrored for k>N/2
fdom.scatter(dft_freqs[:round(N/2)], mags[:round(N/2)])
fdom.set_xlabel("Frequency (Hz)")
fdom.set_ylabel("Magnitude")
fdom.set_title("Magnitude Response")
#print(mags)
###Output
_____no_output_____ |
exercise-handling-missing-values.ipynb | ###Markdown
**This notebook is an exercise in the [Data Cleaning](https://www.kaggle.com/learn/data-cleaning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/handling-missing-values).**--- In this exercise, you'll apply what you learned in the **Handling missing values** tutorial. Setup: The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
###Code
from learntools.core import binder
binder.bind(globals())
from learntools.data_cleaning.ex1 import *
print("Setup Complete")
###Output
/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3357: DtypeWarning: Columns (22,32) have mixed types.Specify dtype option on import or set low_memory=False.
if (await self.run_code(code, result, async_=asy)):
###Markdown
1) Take a first look at the data. Run the next code cell to load in the libraries and dataset you'll use to complete the exercise.
###Code
# modules we'll use
import pandas as pd
import numpy as np
# read in all our data
sf_permits = pd.read_csv("../input/building-permit-applications-data/Building_Permits.csv")
# set seed for reproducibility
np.random.seed(0)
###Output
/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3166: DtypeWarning: Columns (22,32) have mixed types.Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Use the code cell below to print the first five rows of the `sf_permits` DataFrame.
###Code
# TODO: Your code here!
sf_permits.head()
###Output
_____no_output_____
###Markdown
Does the dataset have any missing values? Once you have an answer, run the code cell below to get credit for your work.
###Code
sf_missing=sf_permits.isnull().sum()
print(sf_missing)
###Output
Permit Number 0
Permit Type 0
Permit Type Definition 0
Permit Creation Date 0
Block 0
Lot 0
Street Number 0
Street Number Suffix 196684
Street Name 0
Street Suffix 2768
Unit 169421
Unit Suffix 196939
Description 290
Current Status 0
Current Status Date 0
Filed Date 0
Issued Date 14940
Completed Date 101709
First Construction Document Date 14946
Structural Notification 191978
Number of Existing Stories 42784
Number of Proposed Stories 42868
Voluntary Soft-Story Retrofit 198865
Fire Only Permit 180073
Permit Expiration Date 51880
Estimated Cost 38066
Revised Cost 6066
Existing Use 41114
Existing Units 51538
Proposed Use 42439
Proposed Units 50911
Plansets 37309
TIDF Compliance 198898
Existing Construction Type 43366
Existing Construction Type Description 43366
Proposed Construction Type 43162
Proposed Construction Type Description 43162
Site Permit 193541
Supervisor District 1717
Neighborhoods - Analysis Boundaries 1725
Zipcode 1716
Location 1700
Record ID 0
dtype: int64
###Markdown
We can see many columns have missing values
###Code
# Check your answer (Run this code cell to receive credit!)
q1.check()
# Line below will give you a hint
#q1.hint()
###Output
_____no_output_____
###Markdown
2) How many missing data points do we have? What percentage of the values in the dataset are missing? Your answer should be a number between 0 and 100. (If 1/4 of the values in the dataset are missing, the answer is 25.)
###Code
# TODO: Your code here!
totalcells=np.product(sf_permits.shape)
missings=sf_missing.sum()
percent_missing = (missings/totalcells)*100
print(percent_missing)
# Check your answer
q2.check()
# Lines below will give you a hint or solution code
#q2.hint()
#q2.solution()
###Output
_____no_output_____
###Markdown
3) Figure out why the data is missing. Look at the columns **"Street Number Suffix"** and **"Zipcode"** from the [San Francisco Building Permits dataset](https://www.kaggle.com/aparnashastry/building-permit-applications-data). Both of these contain missing values. - Which, if either, are missing because they don't exist? - Which, if either, are missing because they weren't recorded? Once you have an answer, run the code cell below. Street Number Suffix is missing because it doesn't exist for most addresses, while Zipcode is missing because it wasn't recorded.
###Code
# Check your answer (Run this code cell to receive credit!)
q3.check()
# Line below will give you a hint
#q3.hint()
###Output
_____no_output_____
###Markdown
4) Drop missing values: rowsIf you removed all of the rows of `sf_permits` with missing values, how many rows are left?**Note**: Do not change the value of `sf_permits` when checking this.
###Code
# TODO: Your code here!
sf_permit=sf_permits
sf_permit.dropna()
###Output
_____no_output_____
###Markdown
Once you have an answer, run the code cell below. Every row contains at least one NA value, hence all the rows are dropped.
###Code
# Check your answer (Run this code cell to receive credit!)
q4.check()
# Line below will give you a hint
#q4.hint()
###Output
_____no_output_____
###Markdown
5) Drop missing values: columnsNow try removing all the columns with empty values. - Create a new DataFrame called `sf_permits_with_na_dropped` that has all of the columns with empty values removed. - How many columns were removed from the original `sf_permits` DataFrame? Use this number to set the value of the `dropped_columns` variable below.
###Code
# TODO: Your code here
sf_permits_with_na_dropped =sf_permit.dropna(axis=1)
print(sf_permits_with_na_dropped.shape[1])
print(sf_permits.shape[1])
dropped_columns =(sf_permits.shape[1]-sf_permits_with_na_dropped.shape[1])
print(dropped_columns)
# Check your answer
q5.check()
# Lines below will give you a hint or solution code
#q5.hint()
#q5.solution()
###Output
_____no_output_____
###Markdown
6) Fill in missing values automaticallyTry replacing all the NaN's in the `sf_permits` data with the one that comes directly after it and then replacing any remaining NaN's with 0. Set the result to a new DataFrame `sf_permits_with_na_imputed`.
###Code
# TODO: Your code here
sf_permits_with_na_imputed = sf_permits.fillna(method='bfill',axis=0).fillna(0)
# Check your answer
q6.check()
# Lines below will give you a hint or solution code
#q6.hint()
#q6.solution()
###Output
_____no_output_____
###Markdown
**This notebook is an exercise in the [Data Cleaning](https://www.kaggle.com/learn/data-cleaning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/handling-missing-values).**--- In this exercise, you'll apply what you learned in the **Handling missing values** tutorial. SetupThe questions below will give you feedback on your work. Run the following cell to set up the feedback system.
###Code
from learntools.core import binder
binder.bind(globals())
from learntools.data_cleaning.ex1 import *
print("Setup Complete")
###Output
_____no_output_____
###Markdown
1) Take a first look at the dataRun the next code cell to load in the libraries and dataset you'll use to complete the exercise.
###Code
# modules we'll use
import pandas as pd
import numpy as np
# read in all our data
sf_permits = pd.read_csv("../input/building-permit-applications-data/Building_Permits.csv")
# set seed for reproducibility
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Use the code cell below to print the first five rows of the `sf_permits` DataFrame.
###Code
# TODO: Your code here!
sf_permits.head()
###Output
_____no_output_____
###Markdown
Does the dataset have any missing values? Once you have an answer, run the code cell below to get credit for your work.
###Code
# Check your answer (Run this code cell to receive credit!)
q1.check()
# Line below will give you a hint
#q1.hint()
###Output
_____no_output_____
###Markdown
2) How many missing data points do we have?What percentage of the values in the dataset are missing? Your answer should be a number between 0 and 100. (If 1/4 of the values in the dataset are missing, the answer is 25.)
###Code
# TODO: Your code here!
percent_missing = ((sf_permits.isnull().sum().sum())/np.product(sf_permits.shape))*100
# Check your answer
q2.check()
# Lines below will give you a hint or solution code
#q2.hint()
#q2.solution()
###Output
_____no_output_____
###Markdown
3) Figure out why the data is missingLook at the columns **"Street Number Suffix"** and **"Zipcode"** from the [San Francisco Building Permits dataset](https://www.kaggle.com/aparnashastry/building-permit-applications-data). Both of these contain missing values. - Which, if either, are missing because they don't exist? - Which, if either, are missing because they weren't recorded? Once you have an answer, run the code cell below.
###Code
# Check your answer (Run this code cell to receive credit!)
q3.check()
# Line below will give you a hint
#q3.hint()
###Output
_____no_output_____
###Markdown
4) Drop missing values: rowsIf you removed all of the rows of `sf_permits` with missing values, how many rows are left?**Note**: Do not change the value of `sf_permits` when checking this.
###Code
# TODO: Your code here!
newdataset = sf_permits
newdataset.dropna()
###Output
_____no_output_____
###Markdown
Once you have an answer, run the code cell below.
###Code
# Check your answer (Run this code cell to receive credit!)
q4.check()
# Line below will give you a hint
#q4.hint()
###Output
_____no_output_____
###Markdown
5) Drop missing values: columnsNow try removing all the columns with empty values. - Create a new DataFrame called `sf_permits_with_na_dropped` that has all of the columns with empty values removed. - How many columns were removed from the original `sf_permits` DataFrame? Use this number to set the value of the `dropped_columns` variable below.
###Code
# TODO: Your code here
sf_permits_with_na_dropped = sf_permits.dropna(axis=1)
dropped_columns = sf_permits.shape[1]-sf_permits_with_na_dropped.shape[1]
# Check your answer
q5.check()
# Lines below will give you a hint or solution code
#q5.hint()
#q5.solution()
###Output
_____no_output_____
###Markdown
6) Fill in missing values automaticallyTry replacing all the NaN's in the `sf_permits` data with the one that comes directly after it and then replacing any remaining NaN's with 0. Set the result to a new DataFrame `sf_permits_with_na_imputed`.
###Code
# TODO: Your code here
sf_permits_with_na_imputed = sf_permits.fillna(method='bfill',axis=0).fillna(0)
# Check your answer
q6.check()
# Lines below will give you a hint or solution code
#q6.hint()
#q6.solution()
###Output
_____no_output_____
###Markdown
**This notebook is an exercise in the [Data Cleaning](https://www.kaggle.com/learn/data-cleaning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/handling-missing-values).**--- In this exercise, you'll apply what you learned in the **Handling missing values** tutorial. SetupThe questions below will give you feedback on your work. Run the following cell to set up the feedback system.
###Code
from learntools.core import binder
binder.bind(globals())
from learntools.data_cleaning.ex1 import *
print("Setup Complete")
###Output
_____no_output_____
###Markdown
1) Take a first look at the dataRun the next code cell to load in the libraries and dataset you'll use to complete the exercise.
###Code
# modules we'll use
import pandas as pd
import numpy as np
# read in all our data
sf_permits = pd.read_csv("../input/building-permit-applications-data/Building_Permits.csv")
# set seed for reproducibility
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Use the code cell below to print the first five rows of the `sf_permits` DataFrame.
###Code
sf_permits.head()
###Output
_____no_output_____
###Markdown
Does the dataset have any missing values? Once you have an answer, run the code cell below to get credit for your work.
###Code
# Check your answer (Run this code cell to receive credit!)
q1.check()
# Line below will give you a hint
q1.hint()
###Output
_____no_output_____
###Markdown
2) How many missing data points do we have?What percentage of the values in the dataset are missing? Your answer should be a number between 0 and 100. (If 1/4 of the values in the dataset are missing, the answer is 25.)
###Code
missing = sf_permits.isnull().sum().sum()
total = np.product(sf_permits.shape)
percent_missing = (missing / total)*100
# Check your answer
q2.check()
# Lines below will give you a hint or solution code
q2.hint()
q2.solution()
###Output
_____no_output_____
###Markdown
3) Figure out why the data is missingLook at the columns **"Street Number Suffix"** and **"Zipcode"** from the [San Francisco Building Permits dataset](https://www.kaggle.com/aparnashastry/building-permit-applications-data). Both of these contain missing values. - Which, if either, are missing because they don't exist? - Which, if either, are missing because they weren't recorded? Once you have an answer, run the code cell below.
###Code
# Street Number Suffix doesn't always exist, but Zipcode always should, so if it is missing it is due to a recording error
q3.check()
# Line below will give you a hint
q3.hint()
###Output
_____no_output_____
###Markdown
4) Drop missing values: rowsIf you removed all of the rows of `sf_permits` with missing values, how many rows are left?**Note**: Do not change the value of `sf_permits` when checking this.
###Code
sf_permits.dropna()
###Output
_____no_output_____
###Markdown
Once you have an answer, run the code cell below.
###Code
# Check your answer (Run this code cell to receive credit!)
q4.check()
# Line below will give you a hint
q4.hint()
###Output
_____no_output_____
###Markdown
5) Drop missing values: columnsNow try removing all the columns with empty values. - Create a new DataFrame called `sf_permits_with_na_dropped` that has all of the columns with empty values removed. - How many columns were removed from the original `sf_permits` DataFrame? Use this number to set the value of the `dropped_columns` variable below.
###Code
# remove all columns with at least one missing value
sf_permits_with_na_dropped = sf_permits.dropna(axis=1)
# calculate number of dropped columns
columns_in_original_dataset = sf_permits.shape[1]
columns_in_na_dropped = sf_permits_with_na_dropped.shape[1]
dropped_columns = columns_in_original_dataset - columns_in_na_dropped
# Check your answer
q5.check()
# Lines below will give you a hint or solution code
#q5.hint()
#q5.solution()
###Output
_____no_output_____
###Markdown
6) Fill in missing values automaticallyTry replacing all the NaN's in the `sf_permits` data with the one that comes directly after it and then replacing any remaining NaN's with 0. Set the result to a new DataFrame `sf_permits_with_na_imputed`.
###Code
# TODO: Your code here
sf_permits_with_na_imputed = sf_permits.fillna(method='bfill', axis=0).fillna(0)
# Check your answer
q6.check()
# Lines below will give you a hint or solution code
#q6.hint()
#q6.solution()
###Output
_____no_output_____
###Markdown
**This notebook is an exercise in the [Data Cleaning](https://www.kaggle.com/learn/data-cleaning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/handling-missing-values).**--- In this exercise, you'll apply what you learned in the **Handling missing values** tutorial. SetupThe questions below will give you feedback on your work. Run the following cell to set up the feedback system.
###Code
from learntools.core import binder
binder.bind(globals())
from learntools.data_cleaning.ex1 import *
print("Setup Complete")
###Output
/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3254: DtypeWarning: Columns (22,32) have mixed types.Specify dtype option on import or set low_memory=False.
if (await self.run_code(code, result, async_=asy)):
###Markdown
1) Take a first look at the dataRun the next code cell to load in the libraries and dataset you'll use to complete the exercise.
###Code
# modules we'll use
import pandas as pd
import numpy as np
# read in all our data
sf_permits = pd.read_csv("../input/building-permit-applications-data/Building_Permits.csv")
# set seed for reproducibility
np.random.seed(0)
###Output
/opt/conda/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3063: DtypeWarning: Columns (22,32) have mixed types.Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Use the code cell below to print the first five rows of the `sf_permits` DataFrame.
###Code
# TODO: Your code here!
###Output
_____no_output_____
###Markdown
Does the dataset have any missing values? Once you have an answer, run the code cell below to get credit for your work.
###Code
# Check your answer (Run this code cell to receive credit!)
q1.check()
# Line below will give you a hint
#q1.hint()
###Output
_____no_output_____
###Markdown
2) How many missing data points do we have?What percentage of the values in the dataset are missing? Your answer should be a number between 0 and 100. (If 1/4 of the values in the dataset are missing, the answer is 25.)
###Code
# TODO: Your code here!
percent_missing = ____
# Check your answer
q2.check()
# Lines below will give you a hint or solution code
#q2.hint()
#q2.solution()
###Output
_____no_output_____
###Markdown
3) Figure out why the data is missingLook at the columns **"Street Number Suffix"** and **"Zipcode"** from the [San Francisco Building Permits dataset](https://www.kaggle.com/aparnashastry/building-permit-applications-data). Both of these contain missing values. - Which, if either, are missing because they don't exist? - Which, if either, are missing because they weren't recorded? Once you have an answer, run the code cell below.
###Code
# Check your answer (Run this code cell to receive credit!)
q3.check()
# Line below will give you a hint
#q3.hint()
###Output
_____no_output_____
###Markdown
4) Drop missing values: rowsIf you removed all of the rows of `sf_permits` with missing values, how many rows are left?**Note**: Do not change the value of `sf_permits` when checking this.
###Code
# TODO: Your code here!
###Output
_____no_output_____
###Markdown
Once you have an answer, run the code cell below.
###Code
# Check your answer (Run this code cell to receive credit!)
q4.check()
# Line below will give you a hint
#q4.hint()
###Output
_____no_output_____
###Markdown
5) Drop missing values: columnsNow try removing all the columns with empty values. - Create a new DataFrame called `sf_permits_with_na_dropped` that has all of the columns with empty values removed. - How many columns were removed from the original `sf_permits` DataFrame? Use this number to set the value of the `dropped_columns` variable below.
###Code
# TODO: Your code here
sf_permits_with_na_dropped = ____
dropped_columns = ____
# Check your answer
q5.check()
# Lines below will give you a hint or solution code
#q5.hint()
#q5.solution()
###Output
_____no_output_____
###Markdown
6) Fill in missing values automaticallyTry replacing all the NaN's in the `sf_permits` data with the one that comes directly after it and then replacing any remaining NaN's with 0. Set the result to a new DataFrame `sf_permits_with_na_imputed`.
###Code
# TODO: Your code here
sf_permits_with_na_imputed = ____
# Check your answer
q6.check()
# Lines below will give you a hint or solution code
#q6.hint()
#q6.solution()
###Output
_____no_output_____ |
sentinel1-classification/pre-process.ipynb | ###Markdown
Generating Training Images and Labels for Forest Classification This code reads two geotiff files for Sentinel-1 and Labels from Global Forest Watch (GFW) on a local disk, and generates 256 x 256 image chips and labels to be used in training. Sentinel-1 data is already reprojected to the GFW grid using the code in `re-project.ipynb`. Training images and labels are exported/saved as numpy arrays on the disk for quick reading during training later on. This code is written as a test; ideally there shouldn't be a need to write these data to disk and read them back again. Being able to read the source Sentinel-1 imagery (from its native projection), quickly reproject it to the labels' grid, and then generate image chips on the fly is a base requirement for scaling this training to regional and continental level data.
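As a rough illustration of the "on the fly" reading mentioned above (a sketch only, not part of this notebook's pipeline; the window offsets are arbitrary illustration values), GDAL can read a single 256 x 256 window without loading the whole raster:

```python
# Sketch: read one chip-sized window straight from the GeoTIFF
from osgeo import gdal

ds = gdal.Open("/home/ec2-user/data/S1_Aug17_GFW_grid.tif")
chip = ds.ReadAsArray(xoff=0, yoff=0, xsize=256, ysize=256)  # shape: (bands, 256, 256)
print(chip.shape)
```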
###Code
%matplotlib inline
from osgeo import gdal
import matplotlib.pyplot as plt
import numpy as np
import os
import glob
import sys
# this allows GDAL to throw Python Exceptions
gdal.UseExceptions()
pathData = "/home/ec2-user/data/"
###Output
_____no_output_____
###Markdown
Read Image Data (Sentinel-1)
###Code
s1_filename = pathData + "S1_Aug17_GFW_grid.tif"
try:
s1_datafile = gdal.Open(s1_filename)
except RuntimeError:
print('Unable to open {}'.format(s1_filename))
sys.exit(1)
s1_nx = s1_datafile.RasterXSize
s1_ny = s1_datafile.RasterYSize
s1_gt = s1_datafile.GetGeoTransform()
s1_proj = s1_datafile.GetProjection()
s1_xres = s1_gt[1]
s1_yres = s1_gt[5]
s1_data = s1_datafile.ReadAsArray()
s1_data = np.swapaxes(s1_data, 0, 1)
s1_data = np.swapaxes(s1_data, 1, 2)
dataVV = s1_data[:, :, 0::2]
dataVH = s1_data[:, :, 1::2]
dataVV[dataVH<-30] = np.nan # Remove pixels less than NESZ
dataVH[dataVH<-30] = np.nan # Remove pixels less than NESZ
VV_A = np.nanmean(dataVV[:, :, 0::2], 2) # Using only one mode of observations (ascending vs descending)
VH_A = np.nanmean(dataVH[:, :, 0::2], 2) # Using only one mode of observations (ascending vs descending)
###Output
_____no_output_____
###Markdown
Read Labels (Global Forest Watch)
###Code
labels_filename = pathData + "GFWLabels2017_noNaN.tiff"
try:
datafile = gdal.Open(labels_filename)
except RuntimeError:
    print('Unable to open {}'.format(labels_filename))
sys.exit(1)
l_nx = datafile.RasterXSize
l_ny = datafile.RasterYSize
l_gt = datafile.GetGeoTransform()
l_proj = datafile.GetProjection()
l_xres = l_gt[1]
l_yres = l_gt[5]
labels = datafile.ReadAsArray()
# Clean existing data
files = glob.glob('data/train/image/*.npy')
for f in files:
os.remove(f)
files = glob.glob('data/test/image/*.npy')
for f in files:
os.remove(f)
files = glob.glob('data/train/label/*.npy')
for f in files:
os.remove(f)
files = glob.glob('data/test/label/*.npy')
for f in files:
os.remove(f)
###Output
_____no_output_____
###Markdown
Generate Image Chips
###Code
# Generating 256 x 256 images
VV_A = VV_A[10:-10, 10:-10]
VH_A = VH_A[10:-10, 10:-10]
test_samples = np.random.choice(120, 19, replace=False)
n_train = -1
n_test = -1
n_image = -1
for i_row in range(0, int(np.floor(VV_A.shape[0]/256))):
for i_col in range(0, int(np.floor(VV_A.shape[1]/256))):
n_image = n_image + 1
if n_image in test_samples:
n_test = n_test + 1
image_VV = 10 ** (VV_A[i_row * 256 : (i_row + 1) * 256, i_col * 256 : (i_col + 1) * 256] / 10)
image_VH = 10 ** (VH_A[i_row * 256 : (i_row + 1) * 256, i_col * 256 : (i_col + 1) * 256] / 10)
image = np.dstack((image_VV, image_VH))
label = labels[i_row * 256 : (i_row + 1) * 256, i_col * 256 : (i_col + 1) * 256]
np.save('data/test/image/' + str(n_test), image)
np.save('data/test/label/' + str(n_test), label)
else:
n_train = n_train + 1
image_VV = VV_A[i_row * 256 : (i_row + 1) * 256, i_col * 256 : (i_col + 1) * 256] / -30
image_VH = VH_A[i_row * 256 : (i_row + 1) * 256, i_col * 256 : (i_col + 1) * 256] / -30
image = np.dstack((image_VV, image_VH))
label = labels[i_row * 256 : (i_row + 1) * 256, i_col * 256 : (i_col + 1) * 256]
np.save('data/train/image/' + str(n_train), image)
np.save('data/train/label/' + str(n_train), label)
###Output
_____no_output_____ |
docs/notebooks/depth/LyzengaDepth.ipynb | ###Markdown
Lyzenga MethodI want to apply the Lyzenga 2006 method for comparison.
###Code
%pylab inline
import geopandas as gpd
import pandas as pd
from OpticalRS import *
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.cross_validation import train_test_split
import itertools
import statsmodels.formula.api as smf
from collections import OrderedDict
style.use('ggplot')
cd ../data
###Output
/home/jkibele/Copy/JobStuff/PhD/iPythonNotebooks/DepthPaper/data
###Markdown
PreprocessingThat happened [here](../ClassificationDev/Lyzenga/Lyzenga2006/Lyzenga2006.ipynbPreprocessing-My-Imagery).
###Code
imrds = RasterDS('glint_corrected.tif')
imarr = imrds.band_array
deprds = RasterDS('Leigh_Depth_atAcq_Resampled.tif')
darr = -1 * deprds.band_array.squeeze()
###Output
_____no_output_____
###Markdown
Depth LimitLyzenga et al methods for determining shallow water don't work for me based on the high reflectance of the water column and extremely low reflectance of Ecklonia for the blue bands. So I'm just going to limit the depths under consideration using the multibeam data.
###Code
darr = np.ma.masked_greater( darr, 20.0 )
###Output
_____no_output_____
###Markdown
Equalize MasksI need to make sure I'm dealing with the same pixels in depth and image data.
###Code
imarr = ArrayUtils.mask3D_with_2D( imarr, darr.mask )
darr = np.ma.masked_where( imarr[...,0].mask, darr )
###Output
_____no_output_____
###Markdown
Dark Pixel SubtractionI need to calculate a modified version of $X_i = ln(L_i - L_{si})$. In order to do that I'll first load the deep water means and standard deviations I calculated [here](ImageryPreprocessing.ipynb).
###Code
dwmeans = np.load('darkmeans.pkl')
dwstds = np.load('darkstds.pkl')
###Output
_____no_output_____
###Markdown
I applied the same modification as Armstrong (1993), 2 standard deviations from $L_{si}$, to avoid getting too many negative values because those can't be log transformed.
###Code
dpsub = ArrayUtils.equalize_band_masks( \
np.ma.masked_less( imarr - (dwmeans - 2 * dwstds), 0.0 ) )
print "After that I still retain %.1f%% of my pixels." % ( 100 * dpsub.count() / float( imarr.count() ) )
X = np.log( dpsub )
# imrds.new_image_from_array(X.astype('float32'),'LyzengaX.tif')
###Output
_____no_output_____
###Markdown
I'll need to equalize the masks again. I'll call the depths h in reference to Lyzenga et al. 2006 (e.g. equation 14).
###Code
h = np.ma.masked_where( X[...,0].mask, darr )
imshow( X[...,1] )
###Output
_____no_output_____
###Markdown
DataframePut my $X_i$ and my $h$ values into a dataframe so I can regress them easily.
###Code
df = ArrayUtils.band_df( X )
df['depth'] = h.compressed()
###Output
_____no_output_____
###Markdown
Data SplitI need to split my data into training and test sets.
###Code
x_train, x_test, y_train, y_test = train_test_split( \
df[imrds.band_names],df.depth,train_size=300000,random_state=5)
traindf = ArrayUtils.band_df( x_train )
traindf['depth'] = y_train.ravel()
testdf = ArrayUtils.band_df( x_test )
testdf['depth'] = y_test.ravel()
###Output
_____no_output_____
###Markdown
Find the Best Band ComboThat's the one that returns the largest $R^2$ value.
###Code
def get_fit( ind, x_train, y_train ):
skols = LinearRegression()
skolsfit = skols.fit(x_train[...,ind],y_train)
return skolsfit
def get_selfscore( ind, x_train, y_train ):
fit = get_fit( ind, x_train, y_train )
return fit.score( x_train[...,ind], y_train )
od = OrderedDict()
for comb in itertools.combinations( range(8), 2 ):
od[ get_selfscore(comb,x_train,y_train) ] = [ c+1 for c in comb ]
od_sort = sorted( od.items(), key=lambda t: t[0], reverse=True )
od_sort
best_ind = np.array( od_sort[0][1] ) - 1
best_ind
###Output
_____no_output_____
###Markdown
Build the model
###Code
skols = LinearRegression()
skolsfit = skols.fit(x_train[...,best_ind],y_train)
print "h0 = %.2f, h2 = %.2f, h3 = %.2f" % \
(skolsfit.intercept_,skolsfit.coef_[0],skolsfit.coef_[1])
###Output
h0 = 17.08, h2 = 16.06, h3 = -16.16
###Markdown
Check the Results
###Code
print "R^2 = %.6f" % skolsfit.score(x_test[...,best_ind],y_test)
pred = skolsfit.predict(x_test[...,best_ind])
fig,ax = plt.subplots(1,1,figsize=(8,6))
mapa = ax.hexbin(pred,y_test,mincnt=1,bins='log',gridsize=500,cmap=plt.cm.hot)
# ax.scatter(pred,y_test,alpha=0.008,edgecolor='none')
ax.set_ylabel('MB Depth')
ax.set_xlabel('Predicted Depth')
rmse = np.sqrt( mean_squared_error( y_test, pred ) )
n = x_train.shape[0]
tit = "RMSE: %.4f, n=%i" % (rmse,n)
ax.set_title(tit)
ax.set_aspect('equal')
ax.axis([-5,25,-5,25])
ax.plot([-5,25],[-5,25],c='white')
cb = plt.colorbar(mapa)
cb.set_label("Log10(N)")
LyzPredVsMB = pd.DataFrame({'prediction':pred,'mb_depth':y_test})
LyzPredVsMB.to_pickle('LyzPredVsMB.pkl')
###Output
_____no_output_____
###Markdown
Effect of Depth Limit on Model AccuracyGiven a fixed number of training points (n=1500), what is the effect of limiting the depth of the model.
###Code
fullim = imrds.band_array
fulldep = -1 * deprds.band_array.squeeze()
fullim = ArrayUtils.mask3D_with_2D( fullim, fulldep.mask )
fulldep = np.ma.masked_where( fullim[...,0].mask, fulldep )
dlims = arange(5,31,2.5)
drmses,meanerrs,stderrs = [],[],[]
for dl in dlims:
dlarr = np.ma.masked_greater( fulldep, dl )
iml = ArrayUtils.mask3D_with_2D( fullim, dlarr.mask )
imldsub = ArrayUtils.equalize_band_masks( \
np.ma.masked_less( iml - (dwmeans - 2 * dwstds), 0.0 ) )
imlX = np.log( imldsub )
dlarr = np.ma.masked_where( imlX[...,0].mask, dlarr )
xl_train, xl_test, yl_train, yl_test = train_test_split( \
imlX.compressed().reshape(-1,8),dlarr.compressed(),train_size=1500,random_state=5)
linr = LinearRegression()
predl = linr.fit(xl_train[...,best_ind],yl_train).predict( xl_test[...,best_ind] )
drmses.append( sqrt( mean_squared_error(yl_test,predl) ) )
meanerrs.append( (yl_test - predl).mean() )
stderrs.append( (yl_test - predl).std() )
fig,(ax1,ax2) = subplots(1,2,figsize=(12,6))
ax1.plot(dlims,np.array(drmses),marker='o',c='b')
ax1.set_xlabel("Data Depth Limit (m)")
ax1.set_ylabel("Model RMSE (m)")
em,es = np.array(meanerrs), np.array(stderrs)
ax2.plot(dlims,em,marker='o',c='b')
ax2.plot(dlims,em+es,linestyle='--',c='k')
ax2.plot(dlims,em-es,linestyle='--',c='k')
ax2.set_xlabel("Data Depth Limit (m)")
ax2.set_ylabel("Model Mean Error (m)")
deplimdf = pd.DataFrame({'depth_lim':dlims,'rmse':drmses,\
'mean_error':meanerrs,'standard_error':stderrs})
deplimdf.to_pickle('LyzengaDepthLimitDF.pkl')
###Output
_____no_output_____
###Markdown
Limited Training DataI want to see how the accuracy of this method is affected by the reduction of training data.
###Code
# ns = np.logspace(log10(0.00003*df.depth.count()),log10(0.80*df.depth.count()),15)
ns = np.logspace(1,log10(0.80*df.depth.count()),15)
int(ns.min()),int(ns.max())
ltdf = pd.DataFrame({'train_size':ns})
for rs in range(10):
nrmses = []
for n in ns:
xn_train,xn_test,yn_train,yn_test = train_test_split( \
df[imrds.band_names],df.depth,train_size=int(n),random_state=rs+100)
thisols = LinearRegression()
npred = thisols.fit(xn_train[...,best_ind],yn_train).predict(xn_test[...,best_ind])
nrmses.append( sqrt( mean_squared_error(yn_test,npred ) ) )
dflabel = 'rand_state_%i' % rs
ltdf[dflabel] = nrmses
print "min points: %i, max points: %i" % (int(ns.min()),int(ns.max()))
fig,ax = subplots(1,1,figsize=(10,6))
for rs in range(10):
dflabel = 'rand_state_%i' % rs
ax.plot(ltdf['train_size'],ltdf[dflabel])
ax.set_xlabel("Number of Training Points")
ax.set_ylabel("Model RMSE (m)")
# ax.set_xlim(0,5000)
ax.set_xscale('log')
ax.set_title("Rapidly Increasing Accuracy With More Training Data")
ltdf.to_pickle('LyzengaAccuracyDF.pkl')
###Output
_____no_output_____
###Markdown
Full PredictionPerform a prediction on all the data and find the errors. Save the outputs for comparison with KNN.
###Code
full_pred = skolsfit.predict(X[...,best_ind])
full_pred = np.ma.masked_where( h.mask, full_pred )
full_errs = full_pred - h
blah = hist( full_errs.compressed(), 100 )
figure(figsize=(12,11))
vmin,vmax = np.percentile(full_errs.compressed(),0.1),np.percentile(full_errs.compressed(),99.9)
imshow( full_errs, vmin=vmin, vmax=vmax )
ax = gca()
ax.set_axis_off()
ax.set_title("Depth Errors (m)")
colorbar()
full_pred.dump('LyzDepthPred.pkl')
full_errs.dump('LyzDepthPredErrs.pkl')
###Output
_____no_output_____ |
Crack_Segmentation.ipynb | ###Markdown
Let's check the masking with the first image
###Code
def draw_rect_box(x_pos,y_pos,width,height):
    ### A function that draws a rectangular box on the current plt figure.
    ###
x_ = [x_pos, x_pos+width, x_pos+width, x_pos, x_pos]
y_ = [y_pos, y_pos, y_pos + height, y_pos + height, y_pos]
plt.plot(x_,y_,'red')
_,x_pos,y_pos,width,height = json_data['00001.jpg5217']['regions'][0]['shape_attributes'].values()
# print(type(json_data['00001.jpg5217']['regions'][0]))
print("x_pos: {}, y_pos: {}, width: {}, height: {}".format(x_pos,y_pos,width,height))
if colab:
img_path = "/content/Surface_Crack_Segmentation/Positive_jw/"
else:
img_path = "D:/_김정원/ss_class(AI)/Surface_Crack_Segmentation/Positive_jw/"
img_name = '00001.jpg'
img_full_path = img_path + img_name
img_arr = np.array(Image.open(img_full_path))
print("image shape is :",img_arr.shape)
WIDTH,HEIGHT,CHANNEL = img_arr.shape # store the image width, height, and channels
plt.imshow(img_arr)
draw_rect_box(x_pos,y_pos,width,height)
# In the dataset the labels are binary images, so convert to the same form.
def make_to_label_img(x_pos,y_pos,width,height,WIDTH=227,HEIGHT=227):
label_sample = np.zeros((WIDTH,HEIGHT))
# print(label_sample.shape)
for i in range(WIDTH):
for j in range(HEIGHT):
if i >= x_pos and i<x_pos+width:
if j>=y_pos and j < y_pos+height:
label_sample[j][i] = 1
return label_sample
# plt.imshow(label_sample)
label_sample = make_to_label_img(x_pos,y_pos,width,height)
plt.imshow(label_sample)
###Output
_____no_output_____
###Markdown
Creating the dataset: load the data containing cracks (positive samples) first
###Code
_,x_pos,y_pos,width,height = json_data['00001.jpg5217']['regions'][0]['shape_attributes'].values()
y_train_positive = []
for i in range(len(data_names)):
_,x_pos,y_pos,width,height = json_data[data_names[i]]['regions'][0]['shape_attributes'].values()
label_tmp = make_to_label_img(x_pos,y_pos,width,height)
# print(i)
y_train_positive.append(label_tmp)
y_train_positive = np.array(y_train_positive)
# y_train_positive = y_train_positive.reshape(y_train_positive.shape[0],y_train_positive.shape[1],y_train_positive.shape[2],1)
# print(y_train_positive.shape)
# (227,227) images are not well suited to the Conv layers, so resize to (128,128)
y_train_positive_resized=[]
for i in y_train_positive:
Im = Image.fromarray(i)
Im = Im.resize((128,128),Image.BOX)
y_train_positive_resized.append(np.ceil(np.array(Im)))
y_train_positive = np.array(y_train_positive_resized)
y_train_positive = y_train_positive.reshape(y_train_positive.shape[0],y_train_positive.shape[1],y_train_positive.shape[2],1)
print(np.max(y_train_positive))
print("y_train_positive shape :",y_train_positive.shape)
if colab:
positive_path = '/content/Surface_Crack_Segmentation/Positive_jw/*.jpg'
negative_path = '/content/Surface_Crack_Segmentation/Negative_jw/*.jpg'
else:
positive_path = 'D:/_김정원/ss_class(AI)/Surface_Crack_Segmentation/Positive_jw/*.jpg'
negative_path = 'D:/_김정원/ss_class(AI)/Surface_Crack_Segmentation/Negative_jw/*.jpg'
positive_imgs = glob.glob(positive_path)
negative_imgs = glob.glob(negative_path)
x_train_positive = []
for i in range(len(positive_imgs)):
Im = Image.open(positive_imgs[i])
Im = Im.resize((128,128))
x_train_positive.append(np.array(Im))
x_train_positive = np.array(x_train_positive)
x_train_negative = []
for i in range(len(negative_imgs)):
Im = Image.open(negative_imgs[i])
Im = Im.resize((128,128))
x_train_negative.append(np.array(Im))
x_train_negative = np.array(x_train_negative)
print("x_train_positive shape :",x_train_positive.shape)
print("x_train_negative shape :",x_train_negative.shape)
# Concatenate the crack and no-crack data to build x_train.
x_train = np.concatenate((x_train_negative,x_train_positive))
x_train = x_train/255.0
print("x_train shape :",x_train.shape)
# The labels of the negative data will all be 0 <-- because there are no cracks
y_train_negative = np.zeros((x_train_negative.shape[0], 128, 128, 1))
# print(y_train_negative.shape)
# Concatenate the labels (y_train) as well
y_train = np.concatenate((y_train_negative,y_train_positive))
print(y_train.shape)
###Output
(200, 128, 128, 1)
###Markdown
Creating the dataset (200 samples in total: training 170, validation 30)
###Code
# Create the dataset with from_tensor_slices
BATCH_SIZE=10
dataset = tf.data.Dataset.from_tensor_slices((x_train,y_train)).shuffle(10000).batch(BATCH_SIZE)
# the training dataset is 0.85 * 200 / 10 --> 17 batches of 10, i.e. 170 samples
# the test dataset is the remaining 30 samples
validation_split = 0.85
train_dataset_size = int(y_train.shape[0] * validation_split / BATCH_SIZE)
train_dataset = dataset.take(train_dataset_size)
test_dataset = dataset.skip(train_dataset_size)
###Output
_____no_output_____
###Markdown
Defining the model (Simple U-Net)
###Code
OUTPUT_CHANNELS = 3
# Use MobileNetV2 as the base model to build a somewhat lighter network.
base_model = tf.keras.applications.MobileNetV2(input_shape=[128, 128, CHANNEL], include_top=False)
base_model.summary()
# These are the layers to be used as feature-extraction layers in the U-Net.
# Let's use the activations of these layers
layer_names = [
'block_1_expand_relu', # 64x64
'block_3_expand_relu', # 32x32
'block_6_expand_relu', # 16x16
'block_13_expand_relu', # 8x8
'block_16_project', # 4x4
]
layers = [base_model.get_layer(name).output for name in layer_names]
###Output
_____no_output_____
###Markdown
U-Net has the following structure, in which part of the network performs precise localization.
###Code
# U-Net is basically a structure where deep feature-extraction layers are merged with skip connections. Let's build the feature-extraction model.
# This is called the 'down_stack'.
down_stack = tf.keras.Model(inputs=base_model.input, outputs=layers)
# Feature extraction was already performed in MobileNet, so trainable = False
down_stack.trainable = False
# Define an upsample function that creates a single layer performing one up_stack step.
def upsample(filters, size, apply_dropout=False):
initializer = tf.random_normal_initializer(0., 0.02)
result = tf.keras.Sequential()
result.add(
tf.keras.layers.Conv2DTranspose(filters, size, strides=2,padding='same',kernel_initializer=initializer,use_bias=False))
result.add(tf.keras.layers.BatchNormalization())
if apply_dropout:
result.add(tf.keras.layers.Dropout(0.5))
result.add(tf.keras.layers.ReLU())
return result
up_stack = [
upsample(512, 3), # 4x4 -> 8x8
upsample(256, 3), # 8x8 -> 16x16
upsample(128, 3), # 16x16 -> 32x32
upsample(64, 3), # 32x32 -> 64x64
]
def build_model(num_output_channels):
input_layer = tf.keras.layers.Input(shape=[128,128,3])
x = input_layer
    # Down-stack the model
skips = down_stack(x)
x = skips[-1]
skips = reversed(skips[:-1])
    # Upsample and merge with the skip connections
for up, skip in zip(up_stack,skips):
x = up(x)
        # Concatenate the incoming skip connection with the upsampled output coming up from the down_stack.
concat = tf.keras.layers.Concatenate()
x = concat([x,skip])
    # At this point the feature map has spatial size 64 x 64
    # Applying Conv2DTranspose as the last layer gives an output of shape (None, 128, 128, num_output_channels)
last_layer = tf.keras.layers.Conv2DTranspose(num_output_channels, 3, strides=2, padding='same') # 64x64 -> 128,128
x = last_layer(x)
return tf.keras.Model(inputs=input_layer, outputs=x)
OUTPUT_CHANNELS = 3
model = build_model(OUTPUT_CHANNELS)
model.compile(optimizer='adam',loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Print the model graph
###Code
if colab:
from tensorflow.keras.utils import plot_model
plot_model(model, show_shapes=True)
    # You can confirm that the model.png file has been saved.
plt.figure(figsize=(12,25))
plt.imshow(np.array(Image.open('model.png')))
else:
model.summary()
###Output
_____no_output_____
###Markdown
Print the initial predictions
###Code
# image, mask = next(iter(dataset))
# predicted_mask = model.predict(image)
# # Find the largest value across the 3 output channels and reduce to 1 channel
# predicted_mask = tf.argmax(predicted_mask, axis=-1)
# predicted_mask = np.array(predicted_mask).reshape((10,128,128,1))
# # You can see that meaningless masks are produced.
# plt.figure(figsize=(15,20))
# for i in range(10):
# plt.subplot(1,10,i+1)
# plt.imshow(predicted_mask[i].reshape(128,128,1))
sample_image, sample_mask = next(iter(dataset))
def show_predictions(dataset=None, num=1,epoch=None):
if dataset:
for image, mask in dataset.take(num):
predicted_mask = model.predict(image)
            # Find the largest value across the 3 output channels and reduce to 1 channel
predicted_mask = tf.argmax(predicted_mask, axis=-1)
predicted_mask = np.array(predicted_mask).reshape((10,128,128,1))
# display([image[0], mask[0], predicted_mask])
plt.figure(figsize=(15,5))
for i in range(BATCH_SIZE):
plt.subplot(3,BATCH_SIZE,i+1)
plt.imshow(image[i])
plt.subplot(3,BATCH_SIZE,i+BATCH_SIZE+1)
plt.imshow(np.array(mask[i]).reshape(128,128))
plt.subplot(3,BATCH_SIZE,i+2 * BATCH_SIZE+1)
plt.imshow(predicted_mask[i].reshape(128,128))
else:
predicted_mask = model.predict(sample_image)
predicted_mask = tf.argmax(predicted_mask, axis=-1)
predicted_mask = np.array(predicted_mask).reshape((10,128,128,1))
plt.figure(figsize=(15,5))
if epoch:
plt.title("Current epoch :{}".format(epoch))
for i in range(BATCH_SIZE):
plt.subplot(3,BATCH_SIZE,i+1)
plt.imshow(sample_image[i])
plt.subplot(3,BATCH_SIZE,i+BATCH_SIZE+1)
plt.imshow(np.array(sample_mask[i]).reshape(128,128))
plt.subplot(3,BATCH_SIZE,i+2 * BATCH_SIZE+1)
plt.imshow(predicted_mask[i].reshape(128,128))
# plt.show()
if epoch:
if colab:
save_path = "/content/Surface_Crack_Segmentation/fig_saves/"
else:
            save_path = "D:/_김정원/ss_class(AI)/Surface_Crack_Segmentation/fig_saves/" # please change this image save path
file_name = "{}.png".format(epoch)
plt.savefig(save_path+file_name)
plt.show()
# Plot the predictions of the untrained initial model
# You can see that rather weak masks are produced.
show_predictions(dataset,1)
###Output
_____no_output_____
###Markdown
Training
###Code
# At the end of each training epoch, visualize the learning progress using the training sample_image and sample_mask.
from IPython.display import clear_output
class DisplayCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
clear_output(wait=True)
show_predictions(epoch = epoch)
EPOCHS = 100
# STEPS_PER_EPOCH = x_train.shape[0]/BATCH_SIZE # use this when not splitting into training/validation sets
STEPS_PER_EPOCH = train_dataset_size
# model_history = model.fit(dataset, epochs=EPOCHS,
# steps_per_epoch=STEPS_PER_EPOCH,
#                           callbacks=[DisplayCallback()]) # use this when not splitting into training/validation sets.
model_history = model.fit(train_dataset, validation_data=test_dataset,
epochs=EPOCHS, steps_per_epoch=STEPS_PER_EPOCH,
callbacks=[DisplayCallback()])
if colab:
model.save("/content/Surface_Crack_Segmentation/MY_MODEL")
else:
model.save("D:/_김정원/ss_class(AI)/Surface_Crack_Segmentation/MY_MODEL")
plt.figure(figsize=(12,10))
plt.subplot(2,2,1)
plt.plot(model_history.history['accuracy'])
plt.title('accuracy')
plt.subplot(2,2,2)
plt.plot(model_history.history['loss'])
plt.title('loss')
plt.subplot(2,2,3)
plt.plot(model_history.history['val_accuracy'])
plt.title('val_accuracy')
plt.subplot(2,2,4)
plt.plot(model_history.history['val_loss'])
plt.title('val_loss')
###Output
_____no_output_____ |
Loan_Defaulters_Classification-Optimization_and_Parameters_Tuning-Inferencing.ipynb | ###Markdown
About: Loan Defaulters Classification. Dataset 1: Dataset link - https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients. Experimental Aim: To compare the performance of loan defaulter classification using ML classifiers and a neural network. Results: Performance accuracy. Processes: - exploratory data analysis - multi-model learning and performance comparison
###Code
# set up
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from IPython.display import clear_output
import seaborn as sns
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, confusion_matrix, precision_recall_curve
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn import svm
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
from sklearn import tree
from sklearn import datasets, linear_model
#import xgboost as xgb
from sklearn.ensemble import RandomForestClassifier
from scipy import stats
from sklearn.ensemble import ExtraTreesClassifier
#set fiqure size to control all plt plottings
plt.rcParams['figure.figsize']=(10,5)
from time import sleep
from joblib import Parallel, delayed
###Output
_____no_output_____
###Markdown
data upload and pre-processing- upload using pandas- preprocessing and engineering using one-hot encoding
###Code
# upload data dataframe
df = pd.read_csv('default_of_credit_card_clients.csv', skiprows=1)
# removing rows with null values if any
df.head(2) # print sample frame by rows
#df.columns
# perform one-hot encoding for category variables
# generate numeric variables from category variables
df1 = pd.concat([df.drop('SEX', axis=1), pd.get_dummies(df['SEX'],prefix='sex')], axis=1)
df2 = pd.concat([df1.drop('EDUCATION', axis=1), pd.get_dummies(df1['EDUCATION'],prefix='edu')], axis=1)
df3 = pd.concat([df2.drop('MARRIAGE', axis=1), pd.get_dummies(df2['MARRIAGE'],prefix='married')], axis=1)
data1 = df3.rename(columns= {'default payment next month': 'default_payment'})
data1.columns
# data cleaning based on the key requirements
remov= ['ID', 'edu_0', 'edu_5','edu_6', 'married_0']
credit_data = data1.drop(remov, axis = 1)
credit_data.head(2)
###Output
_____no_output_____
###Markdown
exploratory data analysis1. outlier removal- univariate approach- multivariate approach
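The outline above mentions univariate and multivariate outlier checks, which the cells below do not implement explicitly. A minimal sketch of what they could look like (assuming the UCI column `LIMIT_BAL` is present in `credit_data`; the 1.5*IQR and |z| > 3 cut-offs are common conventions, not values fixed by this notebook):

```python
# Univariate check: IQR rule on one numeric column (LIMIT_BAL assumed present)
q1, q3 = credit_data['LIMIT_BAL'].quantile([0.25, 0.75])
iqr = q3 - q1
mask = (credit_data['LIMIT_BAL'] < q1 - 1.5 * iqr) | (credit_data['LIMIT_BAL'] > q3 + 1.5 * iqr)
print('Univariate outliers in LIMIT_BAL:', mask.sum())

# A simple multi-column variant: flag rows whose per-column z-score exceeds 3 anywhere
numeric = credit_data.select_dtypes(include=[np.number])
z = np.abs(stats.zscore(numeric))
print('Rows with |z| > 3 in at least one column:', (z > 3).any(axis=1).sum())
```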
###Code
# percentage class distribution among variables
import seaborn as sns
dataplt =sns.displot(credit_data['default_payment'])
print(dataplt)
# heatmap showing correlations between variables
sns.heatmap(credit_data.corr())
# there are positive and negative correlations between variables
# this will require correlation-aware feature selection
# input and target data separation
targetname = 'default_payment'
X = credit_data.drop(targetname, axis =1)
y = credit_data[targetname]
# variables fitting with random state declared to ensure reproducibility of result
feat_model = ExtraTreesClassifier(n_estimators=100, random_state=0, criterion = 'entropy')
feat_model1 = feat_model.fit(X,y)
# performance score evaluation
#print('Feature Selection Score is: {0:0.2f}'.format(perf_selection))
#visualisation of features importance, nlargest number can be changed to the desire number of features needed
feat_importances = pd.Series(feat_model.feature_importances_,index=X.columns) # track all columns by score ranks
feat_importances.nlargest(18).plot(kind='barh') # filtered only best selected columns by score values
plt.show()
# select the best features with the largest possible feature-importance scores
n = 14 # number of best features of interest
# this can be used as a parameter to monitor classification performance
X_dt =credit_data[feat_importances.nlargest(n).index] # derived features from decision tree (X_dt)
print(X_dt.columns) # to see the best selected features
# scaling and tranformation of input features(X)
#StandardScaler = StandardScaler()
MinMax_Scaler = MinMaxScaler()
X11 = MinMax_Scaler.fit_transform(X_dt)
X1 = stats.zscore(X11) # normalises input data using mean and std derived from the data
y1 = y # target variable (Sale)
# Data splits
# perform train (70%) and validation test(30%) data split
X_train, X_testn, y_train, y_testn = train_test_split(X1, y1, test_size=0.3, random_state=42)
print(len(X_train)) # output from row 2 train dataset
# additional test dataset for statistical check
X_test1, X_test2n, y_test1, y_test2n = train_test_split(X_testn, y_testn, test_size=0.3, random_state=42)
print(len(X_test1))
X_test2, X_test3, y_test2, y_test3 = train_test_split(X_test2n, y_test2n, test_size=0.3, random_state=42)
print(len(X_test2))
print(len(X_test3))
y_test3.index[26]
# Construct learning pipelines for classification model
#support vector machine
pipe_svm = Pipeline([('p1', MinMaxScaler()),
('svm', svm.SVC(random_state = 5))])
# logistic regression
pipe_lr = Pipeline([('p2', MinMaxScaler()),
('lr', LogisticRegression(random_state=20))])
# adaboost
pipe_ada = Pipeline([('p3', MinMaxScaler()),
('ada', AdaBoostClassifier(n_estimators = 100, random_state = 20))])
# KNN
pipe_knn = Pipeline([('p4', MinMaxScaler()),
('knn', KNeighborsClassifier(n_neighbors=6, metric='euclidean'))])
# Random Forest (rf) network
num_trees =100
max_features = 14
pipe_rf = Pipeline([('p5', MinMaxScaler()),
('rf', RandomForestClassifier(n_estimators=num_trees,
max_features=max_features))])
# create a list of pipeline and fit training data on it
classifier_pipe = [pipe_svm, pipe_lr, pipe_ada, pipe_knn, pipe_rf]
# fit the training data on the classifier pipe
for pipe in classifier_pipe:
pipe.fit(X_train, y_train)
# Performance on train and test data sets
#create dictionary of pipeline classifiers
pipe_dic = {0: 'svm', 1: 'lr',
2:'adaboost', 3: 'knn', 4: 'rf'}
# test the performance on train data samples
perf_train = []
for indx, val in enumerate(classifier_pipe):
perf_trg = pipe_dic[indx], val.score(X_train,y_train)
perf_train.append(perf_trg)
# performance on test1 data samples
perf_test1 = []
for indx, val in enumerate(classifier_pipe):
perf_tst1 = pipe_dic[indx], val.score(X_test1,y_test1)
perf_test1.append(perf_tst1)
# performance on test2 data samples
perf_test2 = []
for indx, val in enumerate(classifier_pipe):
perf_tst2 = pipe_dic[indx], val.score(X_test2,y_test2)
perf_test2.append(perf_tst2)
# tabulated performance between train data samples and test data samples
pd_ptrain = pd.DataFrame(perf_train)
pd_ptest1 = pd.DataFrame(perf_test1)
pd_ptest2 = pd.DataFrame(perf_test2)
# concate dataframes
perf_log = pd.concat([pd_ptrain.rename(columns={ 0: 'Classifiers', 1: 'train_performance'}),
pd_ptest1.rename(columns={ 0: 'Classifiers1', 1: 'val1_performance'}),
pd_ptest2.rename(columns={ 0: 'Classifiers2', 1: 'val2_performance'})], axis = 1)
perf_log = perf_log.drop(['Classifiers1','Classifiers2'], axis =1)
perf_log
# plot
#ax = sns.barplot(x="Classifiers", y="train_performance", data=perf_log)
# model prediction accuracy measurement
# accuracy_score is used because of multilabel classification based on jaccard similarity coefficient score
y_predicted = pipe_rf.predict(X_test3)
y_score =accuracy_score(y_test3,y_predicted)
print('Multi Classification score: {0:0.2f}'.format(
y_score))
# classification output for test
# using tn, fp, fn, tp, classification precision and recall can be computed
tn, fp, fn, tp = confusion_matrix(y_test3, y_predicted).ravel()
print(tn, fp, fn, tp) # output
# output explanation for the 810 test samples (default payment = 1, non-default payment = 0);
# confusion_matrix().ravel() returns counts in the order tn, fp, fn, tp
# tn: 596 samples correctly classified as non-defaulters
# fp: 30 non-defaulters wrongly classified as defaulters
# fn: 117 defaulters wrongly classified as non-defaulters
# tp: 67 defaulters correctly classified as defaulters
# crosstab visualisations
# comparing ground truth values with the y_predicted labels
pd.crosstab(y_test3 ,y_predicted)
###Output
_____no_output_____
###Markdown
Neural Network method
###Code
# model a dense neural network classifier
from tensorflow import keras
model = keras.Sequential(
[
keras.layers.Dense(
1024, activation="relu", input_shape=(X_train.shape[-1],)
),
keras.layers.Dense(1024, activation="relu"),
keras.layers.Dropout(0.4),
keras.layers.Dense(1024, activation="relu"),
keras.layers.Dropout(0.4),
keras.layers.Dense(1, activation="sigmoid"),
]
)
model.summary()
# compile
model.compile(
optimizer=keras.optimizers.Adam(1e-2), loss="binary_crossentropy", metrics= ['accuracy']
)
# train the model (note: no class weights are applied here)
model.fit(
X_train,
y_train,
batch_size=1024,
epochs=10,
verbose=2,
#callbacks=callbacks,
validation_data=(X_test3, y_test3),
)
###Output
Train on 21000 samples, validate on 810 samples
Epoch 1/10
21000/21000 - 2s - loss: 0.4328 - accuracy: 0.8203 - val_loss: 0.4415 - val_accuracy: 0.8160
Epoch 2/10
21000/21000 - 2s - loss: 0.4330 - accuracy: 0.8216 - val_loss: 0.4408 - val_accuracy: 0.8160
Epoch 3/10
21000/21000 - 2s - loss: 0.4302 - accuracy: 0.8203 - val_loss: 0.4440 - val_accuracy: 0.8111
Epoch 4/10
21000/21000 - 2s - loss: 0.4350 - accuracy: 0.8214 - val_loss: 0.4415 - val_accuracy: 0.8222
Epoch 5/10
21000/21000 - 2s - loss: 0.4301 - accuracy: 0.8221 - val_loss: 0.4409 - val_accuracy: 0.8160
Epoch 6/10
21000/21000 - 2s - loss: 0.4316 - accuracy: 0.8199 - val_loss: 0.4324 - val_accuracy: 0.8247
Epoch 7/10
21000/21000 - 2s - loss: 0.4340 - accuracy: 0.8190 - val_loss: 0.4433 - val_accuracy: 0.8148
Epoch 8/10
21000/21000 - 2s - loss: 0.4334 - accuracy: 0.8203 - val_loss: 0.4429 - val_accuracy: 0.8123
Epoch 9/10
21000/21000 - 2s - loss: 0.4368 - accuracy: 0.8199 - val_loss: 0.4433 - val_accuracy: 0.8160
Epoch 10/10
21000/21000 - 2s - loss: 0.4330 - accuracy: 0.8205 - val_loss: 0.4370 - val_accuracy: 0.8173
###Markdown
Optimizations
###Code
# finding the best model parameters
model_params = {
'pipe_svm': {
'model': svm.SVC(gamma ='auto'),
'params': {
'C': [5, 8],
'kernel': ['rbf','linear']
}
}}
# optimization method 1 - GridSearchCV parameter selection technique; n_jobs=-1 uses all processors, any positive int runs that many jobs in parallel
scores = []
for model_name,xp in model_params.items():
clf = GridSearchCV(xp['model'], xp['params'], cv=None, n_jobs=-1, return_train_score=False)
clf.fit(X_test3, y_test3)
scores.append({
'model': model_name,
'best_score': clf.best_score_,
'best_params': clf.best_params_
})
scores
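# (Sketch, not part of the original notebook.) RandomizedSearchCV is imported
# above but unused; a second optimization method could sample the same SVM
# grid randomly instead of exhaustively, mirroring the GridSearchCV call above.
rnd_clf = RandomizedSearchCV(svm.SVC(gamma='auto'),
                             {'C': [5, 8], 'kernel': ['rbf', 'linear']},
                             n_iter=3, cv=None, n_jobs=-1, random_state=42)
rnd_clf.fit(X_test3, y_test3)
print('Randomized search best score: {0:0.2f}'.format(rnd_clf.best_score_))
print(rnd_clf.best_params_)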
###Output
_____no_output_____ |
2016/tutorial_final/82/Apriori_Algorithm.ipynb | ###Markdown
Introduction: This tutorial will introduce you to the method of Association Rules Mining and a seminal algorithm known as the Apriori Algorithm, for mining association rules. Association rules mining builds upon the broader concept of mining frequent patterns. Frequent patterns are patterns that appear in datasets recurrently and frequently. The motivation for frequent pattern mining comes from Rakesh Agrawal's concept of strong rules for discovering associations between products in transaction records at point-of-sale systems of supermarkets. This example of mining for frequent itemsets is widely known as market-basket analysis. Market Basket Analysis: Market basket analysis is the process of analyzing customer buying habits by discovering associations between different products or items that customers place in their shopping basket. The associations, when discovered, help retailers to manage their shelf space, develop marketing strategies, engage in selective marketing, and bundle products together. For example, if a customer buys a toothbrush, what is the likelihood of the customer buying a mouthwash like Listerine? Association rules mining also finds applications in recommendation systems on e-commerce and video streaming websites, borrower default prediction in capital lending firms, web-page usage mining, intrusion detection and so on. Although there are many association rules mining algorithms, we will be exploring the Apriori algorithm. In doing so, we will define the constituents of association rules, viz. itemsets, frequent itemsets, etc. Tutorial Content: In our build-up to implementing the Apriori algorithm, we will learn what itemsets are, how they are represented, and the measures that quantify the interestingness of association rules. Theoretically, we will use the basic concepts of probability to define the measures that quantify interestingness in association rules. In Python, we will be using the following libraries to implement the algorithm: numpy, pandas, itertools.chain, itertools.combinations
###Code
import numpy
import pandas
import collections as cp
from itertools import chain
from itertools import combinations
###Output
_____no_output_____
###Markdown
Measures of Rule Interestingness in a dataset: There are two measures of rule interestingness that lay the foundation for mining frequent patterns. They are known as rule support and confidence. toothbrush => mouthwash [support = 5%, confidence = 80%] A support of 5% for an association rule is equal to saying that 5% of all the transactions being considered for analysis have toothbrush and mouthwash purchased together. A confidence of 80% for an association rule is equivalent to saying that 80% of the customers who bought a toothbrush also bought mouthwash. Association rules are considered to be interesting if they satisfy a minimum support threshold and a minimum confidence threshold. These thresholds can be set by the users of the system, decision managers of the organization, or domain experts. Itemsets and Association Rules Let $$I = \{I_1, I_2,..., I_m\}$$ be a set of items and D be the dataset under consideration. Each transaction T is a set of items such that T ⊆ I and has an identifier, TID. Let A be a set of items. A transaction T is said to contain A if and only if A ⊆ T. An association rule is an implication of the form A ⇒ B, where A ⊂ I, B ⊂ I, and A ∩ B = φ. The association rule A ⇒ B holds in the transaction set D with support *s* (the percentage of transactions in D that contain A ∪ B): support( A ⇒ B ) = P(A ∪ B) and with confidence *c*, where c is the percentage of transactions in D containing A that also contain B, which is equal to the conditional probability P(B|A): confidence( A ⇒ B ) = P(B|A) Rules that satisfy both a minimum support threshold and a minimum confidence threshold are considered strong rules. At this point we can introduce association rule mining as, generally, a two-step process: 1. Find all the itemsets that are frequent by selecting the itemsets that occur at least as frequently as a predetermined minimum support count, min_sup. 2. Generate strong association rules from the frequent itemsets obtained from step 1. In addition to the min_sup requirement, these rules must satisfy a minimum confidence threshold, min_conf.
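To make these two measures concrete before touching the real dataset, here is a small sketch on a made-up toy transaction list (the items and numbers are illustrative only, not drawn from the data used below):

```python
toy_transactions = [
    {'toothbrush', 'mouthwash', 'floss'},
    {'toothbrush', 'mouthwash'},
    {'toothbrush', 'soap'},
    {'mouthwash'},
]

def support(itemset, transactions):
    # fraction of transactions that contain every item of `itemset`
    itemset = set(itemset)
    return sum(1 for t in transactions if itemset <= t) / float(len(transactions))

def confidence(A, B, transactions):
    # P(B|A) = support(A u B) / support(A)
    return support(set(A) | set(B), transactions) / support(A, transactions)

print(support({'toothbrush', 'mouthwash'}, toy_transactions))      # 0.5
print(confidence({'toothbrush'}, {'mouthwash'}, toy_transactions))  # 0.666...
```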
###Code
def fileExtract(filename):
with open(filename, 'rU') as file_iter:
for line in file_iter:
            line = line.strip().rstrip(',') # Remove the comma at the end of the line
record = frozenset(line.split(','))
yield record
# The data of each set is stored in frozenset object which is immutable
filename = """C:\Users\Ketan\Documents\CMU\Fall Semester\Practical Data Science\Tutorial\INTEGRATED-DATASET.csv"""
loadedData = fileExtract(filename)
###Output
_____no_output_____
###Markdown
```python>>>print list(loadedData)[frozenset(['Brooklyn', 'LBE', '11204']), frozenset(['Cambria Heights', 'MBE', 'WBE', 'BLACK', '11411']), frozenset(['MBE', '10598', 'BLACK', 'Yorktown Heights']), frozenset(['11561', 'MBE', 'BLACK', 'Long Beach']), frozenset(['MBE', 'Brooklyn', 'ASIAN', '11235']), frozenset(['MBE', '10010', 'WBE', 'ASIAN', 'New York']), frozenset(['10026', 'MBE', 'New York', 'ASIAN']), frozenset(['10026', 'MBE', 'New York', 'BLACK']) .... .... .... frozenset(['NON-MINORITY', 'WBE', '10025', 'New York']), frozenset(['MBE', '11554', 'WBE', 'ASIAN', 'East Meadow']), frozenset(['MBE', 'Brooklyn', 'WBE', 'BLACK', '11208']), frozenset(['NON-MINORITY', 'WBE', '7717', 'Avon by the Sea']), frozenset(['MBE', '11417', 'LBE', 'ASIAN', 'Ozone Park']), frozenset(['NON-MINORITY', '10010', 'WBE', 'New York']), frozenset(['NON-MINORITY', 'Teaneck', 'WBE', '7666']), frozenset(['Bronx', 'MBE', 'WBE', 'BLACK', '10456']), frozenset(['MBE', '7514', 'BLACK', 'Paterson']), frozenset(['NON-MINORITY', 'WBE', '10023', 'New York']), frozenset(['MBE', 'Valley Stream', 'ASIAN', '11580']), frozenset(['MBE', 'Brooklyn', 'BLACK', '11214']), frozenset(['New York', 'LBE', '10016']), frozenset(['MBE', 'New York', 'ASIAN', '10002'])]```
###Code
def getItemsetsTransactionsList(loadedData):
transactionList = list() #Create list of transactions
itemSet = set()
for record in loadedData:
transaction = frozenset(record)
transactionList.append(transaction)
for item in transaction:
itemSet.add(frozenset([item])) # Generating 1-itemSets
return itemSet, transactionList
itemSet, transactionList = getItemsetsTransactionsList(loadedData)
###Output
_____no_output_____
###Markdown
```python>>> print itemSetfrozenset(['Brooklyn', 'LBE', '11204'])frozenset(['Cambria Heights', 'MBE', 'WBE', 'BLACK', '11411'])frozenset(['MBE', '10598', 'BLACK', 'Yorktown Heights'])frozenset(['11561', 'MBE', 'BLACK', 'Long Beach'])frozenset(['MBE', 'Brooklyn', 'ASIAN', '11235'])frozenset(['MBE', '10010', 'WBE', 'ASIAN', 'New York'])frozenset(['10026', 'MBE', 'New York', 'ASIAN'])frozenset(['10026', 'MBE', 'New York', 'BLACK'])..........frozenset(['NON-MINORITY', 'WBE', 'Mineola', '11501'])frozenset(['MBE', 'ASIAN', '10550', 'Mount Vernon'])frozenset(['MBE', 'Port Chester', '10573', 'HISPANIC'])frozenset(['NON-MINORITY', 'Merrick', 'WBE', '11566'])``` Once we have generated unique itemsets and the transaction list of all transactions, the next step is to process them by applying the Apriori ALgorithm.Apriori Algorithm uses the prior knowledge of the frequently occurring itemsets. It employs an iterative approach also known as level-wise search, where k-itemsets are used to explore (k+1) itemsets. Apriori Algorithm can be divided into two steps: 1. Join Step 2. Prune Step 1. Join Step: In this step, a set of candidate k-itemsets is generating by joining $L_{k-1}$ with itself to find $L_k$ 2. Prune Step: A superset of $L_k$ called $C_k$ is maintained, which has members that may or may not be frequent. To determine the items that become part of $L_k$, a scan of a transaction list is made to check of counts of the items greater than the minimum support count. All the items that have count greater than minimum support count become part of $L_k$. However, $C_k$ can be very huge and result in too many scans to the transactionList (which would be itself huge) and a lot of computation. To avoid this, the algorithm makes use of what is called the *Apriori Property*, which is described below. Any (k − 1)-itemset that is not frequent cannot be a subset of a frequent k-itemset. Hence, if any (k − 1)-subset of a candidate k-itemset is not in $L_{k−1}$, then the candidate cannot be frequent either and so can be removed from $C_k$. This subset testing can be done quickly by maintaining a hash tree of all frequent itemsets. Illustration by example :[](https://s18.postimg.org/eujogosbt/Transactions.png) [Sourced from Data Mining: Concepts and Techniques]Suppose we have a database of transactions as shown above. It has 9 transactions. Each transaction has one or many items that were purchased together. We will apply Apriori Algorithm the transaction dataset to find the frequent itemsets.**Step : 1.** In the first iteration of the algorithm, each item that appears in the transaction set, is one of the members of the candidate 1-itemsets, $C_1$. As such, we scan the dataset to get counts of occurences of all the items.**Step : 2.** We will assume that the minimum support count is of 2 counts. $therefore$ Relative support would be $2/9 = 22%$. Now we can identify the set of frequent itemsets, $L_1$. It would be all the candidate 1-itemsets in $C_1$ that satisfy the minimum support condition. In our case, all candidates satisfy this condition.[](https://s14.postimg.org/qv5g284w1/image.png)**Step : 3.** Now comes the **join step**. We would now join $L_1$ with itself to generate candidate set of 2-itemsets, $C_2$. It is to be noted that each subset of the candidates in $C_2$ is also frequent, hence, the **prune step** would not remove any candidates.**Step : 4.** Again, the transaction dataset is scanned to get the support counts of all the candidates in $C_2$. 
The candidates that have support count greater than *min_sup* make up the frequent 2-itemsets, $L_2$[](https://s12.postimg.org/cw3wvqxsd/image.png)**Step : 5.** Now, for generation of candidate 3-itemsets, $C_3$, we join $L_2 \times L_2$, from which we obtain : {{$I_1$, $I_2$, $I_3$}, {$I_1$, $I_2$, $I_5$}, {$I_1$, $I_3$, $I_5$}, {$I_2$, $I_3$, $I_4$}, {$I_2$, $I_3$, $I_5$}, {$I_2$, $I_4$, $I_5$}}. We can apply the **prune step** here. We know that the Apriori Property says that for an itemset to be frequent, all of its subsets must also be frequent. If we take the $4^{th}$ itemset, {$I_2$, $I_3$, $I_4$}, the subset {$I_3$, $I_4$} is not a frequent 2-itemset (please refer to the picture for $L_2$). And hence, {$I_2$, $I_3$, $I_4$} is not a frequent 3-itemset. The same can be deduced about the other three candidate 3-itemsets, and hence they would be pruned. This saves the effort of retrieving the counts of these itemsets during the subsequent scan of the transaction dataset. **Step : 6** The transactions in the dataset are scanned to determine the counts of the remaining candidates, and those that have counts greater than min_sup are selected as the frequent 3-itemsets, $L_3$.[](https://s13.postimg.org/d1w0nw05j/image.png)**Step : 7** Further, the algorithm performs $L_3 \times L_3$ to get the candidate set of 4-itemsets, $C_4$. The join results in {{$I_1$,$I_2$,$I_3$,$I_5$}}, which, however, is pruned because its subset {$I_2$,$I_3$,$I_5$} is not frequent. Hence we reach a point where $C_4 = \phi$ and the algorithm terminates, having found all the frequent itemsets. Generating Association Rules from Frequent Itemsets:Now that we have all the possible frequent itemsets, we proceed to find the association rules (which is the ultimate goal of the activity). The strong association rules satisfy both the minimum support threshold and the minimum confidence threshold. We can find the confidence using the following equation for two items A and B : confidence (A => B) = P(B|A) = support_count(A U B)/support_count(A) Based on this equation, the association rules can be formed as follows :- For each frequent itemset $l$, generate all nonempty subsets of $l$. - For every nonempty subset $s$ of $l$, output the rule “$s ⇒ (l − s)$” if $\frac{(support count(l))}{(support count(s))}$ ≥ $min-conf$ where min_conf is the minimum confidence threshold. From our example, one of the frequent 3-itemsets was $l$ = {$I_1$,$I_2$,$I_5$}. The non-empty subsets that can be generated from this itemset are {$I_1$}, {$I_2$}, {$I_5$}, {$I_1$,$I_2$}, {$I_2$,$I_5$}, {$I_1$,$I_5$}. The resulting association rules, obtained by applying the formula above, are :$I_1$ ^ $I_2$ $=>$ $I_5$ $:$ $confidence = 2/4 = 50\% $$I_1$ ^ $I_5$ $=>$ $I_2$ $:$ $confidence = 2/2 = 100\% $$I_5$ ^ $I_2$ $=>$ $I_1$ $:$ $confidence = 2/2 = 100\% $$I_1$ $=>$ $I_2$ ^ $I_5$ $:$ $confidence = 2/6 = 33\% $$I_2$ $=>$ $I_1$ ^ $I_5$ $:$ $confidence = 2/7 = 29\% $$I_5$ $=>$ $I_2$ ^ $I_1$ $:$ $confidence = 2/2 = 100\% $By fixing the minimum confidence threshold, we can select or reject the rules that satisfy or don't satisfy the condition.
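As a quick sanity check of the hand-computed confidences, here is a minimal sketch. The transaction contents are an assumption (taken from the standard 9-transaction textbook example referenced above, since the figure itself is not reproduced here):

```python
# Hypothetical encoding of the 9 example transactions (assumed from the textbook figure)
transactions = [
    {'I1', 'I2', 'I5'}, {'I2', 'I4'}, {'I2', 'I3'},
    {'I1', 'I2', 'I4'}, {'I1', 'I3'}, {'I2', 'I3'},
    {'I1', 'I3'}, {'I1', 'I2', 'I3', 'I5'}, {'I1', 'I2', 'I3'},
]

def support_count(itemset):
    # number of transactions that contain every item of `itemset`
    return sum(1 for t in transactions if itemset <= t)

def confidence(antecedent, consequent):
    # confidence(A => B) = support_count(A U B) / support_count(A)
    return float(support_count(antecedent | consequent)) / support_count(antecedent)

print(confidence({'I1', 'I2'}, {'I5'}))  # 2/4 = 0.50
print(confidence({'I1'}, {'I2', 'I5'}))  # 2/6 ~ 0.33
print(confidence({'I5'}, {'I1', 'I2'}))  # 2/2 = 1.00
```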
###Code
frequencySet = cp.defaultdict(int)
largeSet = dict()
assocRules = dict()
# Sanity check that the two dictionaries are distinct objects; not relevant for the calculation
if assocRules is largeSet:
    print "Should not happen"
else:
    print "OK"
minSupport = 0.17
minConfidence = 0.5
def getMinimumSupportItems(itemSet, transactionList, minSupport, freqSet):
"""Function to calculate the support of items of itemset in the transaction. The support is checked against minimum support.
Returns the itemset with those items that satisfy the minimum threshold requirement"""
newItemSet = set()
localSet = cp.defaultdict(int) #local dictionary to count the items in the itemset that are part of the transaction
for item in itemSet:
for transaction in transactionList:
if item.issubset(transaction):
frequencySet[item] += 1
localSet[item] += 1
print itemSet
for item, count in localSet.items():
support = float(count)/len(transactionList)
if support >= minSupport:
newItemSet.add(item)
return newItemSet
# Printing and confirming the contents of the qualified newItemSet
supportOnlySet = getMinimumSupportItems(itemSet, transactionList, minSupport, frequencySet)
print supportOnlySet
###Output
_____no_output_____
###Markdown
These are all the frequent 1-itemsets ```python>>> print supportOnlySetset([frozenset(['BLACK']), frozenset(['ASIAN']), frozenset(['New York']), frozenset(['MBE']), frozenset(['NON-MINORITY']), frozenset(['WBE'])])```
###Code
def joinSet(itemSet, length):
"""Function to perform the join step of the Apriori Algorithm"""
return set([i.union(j) for i in itemSet for j in itemSet if len(i.union(j)) == length])
def subsets(arr):
""" Returns non empty subsets of arr"""
return chain(*[combinations(arr, i + 1) for i, a in enumerate(arr)])
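# For example (hypothetical values): joinSet({frozenset(['MBE']), frozenset(['WBE'])}, 2)
# yields {frozenset(['MBE', 'WBE'])}, and subsets(('MBE', 'WBE')) yields the non-empty
# combinations ('MBE',), ('WBE',) and ('MBE', 'WBE').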
# We calculate the k-itemsets by iterating level-wise until there
# are no more frequent itemsets, as illustrated in the example above
toBeProcessedSet = supportOnlySet
k = 2
while(toBeProcessedSet != set([])):
largeSet[k-1] = toBeProcessedSet
toBeProcessedSet = joinSet(toBeProcessedSet, k)
toBeProcessedSet_c = getMinimumSupportItems(toBeProcessedSet,transactionList,minSupport,frequencySet)
toBeProcessedSet = toBeProcessedSet_c
k = k + 1
def getSupport(item):
"Local function to get the support of k-itemsets"
return float(frequencySet[item])/len(transactionList)
finalItems = []
for key, value in largeSet.items():
finalItems.extend([(tuple(item), getSupport(item)) for item in value])
print finalItems
finalRules = []
for key, value in largeSet.items()[1:]:
for item in value:
_subsets = map(frozenset, [x for x in subsets(item)])
for element in _subsets:
remain = item.difference(element)
if len(remain) > 0:
confidence = getSupport(item)/getSupport(element)
if confidence >= minConfidence:
finalRules.append(((tuple(element), tuple(remain)), confidence))
print finalRules
def printResults(items, rules):
"""prints the generated itemsets sorted by support and the confidence rules sorted by confidence"""
for item, support in sorted(items, key=lambda (item, support): support):
print "item: %s , %.3f" % (str(item), support)
print "\n------------------------ RULES:"
for rule, confidence in sorted(rules, key=lambda (rule, confidence): confidence):
pre, post = rule
print "Rule: %s ==> %s , %.3f" % (str(pre), str(post), confidence)
printResults(finalItems, finalRules)
###Output
_____no_output_____
###Markdown
```python>>> printResults(finalItems, finalRules)item: ('MBE', 'New York') , 0.170item: ('New York', 'WBE') , 0.175item: ('MBE', 'ASIAN') , 0.200item: ('ASIAN',) , 0.202item: ('New York',) , 0.295item: ('NON-MINORITY',) , 0.300item: ('NON-MINORITY', 'WBE') , 0.300item: ('BLACK',) , 0.301item: ('MBE', 'BLACK') , 0.301item: ('WBE',) , 0.477item: ('MBE',) , 0.671------------------------ RULES:Rule: ('New York',) ==> ('MBE',) , 0.578Rule: ('New York',) ==> ('WBE',) , 0.594Rule: ('WBE',) ==> ('NON-MINORITY',) , 0.628Rule: ('ASIAN',) ==> ('MBE',) , 0.990Rule: ('BLACK',) ==> ('MBE',) , 1.000Rule: ('NON-MINORITY',) ==> ('WBE',) , 1.000```
###Code
###### If the code is to be converted into a Python file, please copy this piece of code at the top of the blocks.
#You will be able to pass arguments of min_sup and min_conf from command line
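# Example invocation (hypothetical file name and values, shown only for illustration):
#   python apriori.py -f certified_businesses.csv -s 0.17 -c 0.5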
if __name__ == "__main__":
optparser = OptionParser()
optparser.add_option('-f', '--inputFile',
dest='input',
help='filename containing csv',
default=None)
optparser.add_option('-s', '--minSupport',
dest='minS',
help='minimum support value',
default=0.15,
type='float')
optparser.add_option('-c', '--minConfidence',
dest='minC',
help='minimum confidence value',
default=0.6,
type='float')
(options, args) = optparser.parse_args()
inFile = None
if options.input is None:
inFile = sys.stdin
elif options.input is not None:
inFile = dataFromFile(options.input)
else:
            print 'No dataset filename specified, system will exit\n'
sys.exit('System will exit')
minSupport = options.minS
minConfidence = options.minC
items, rules = runApriori(inFile, minSupport, minConfidence)
printResults(items, rules)
###Output
_____no_output_____ |
fetch-app-insights-data.ipynb | ###Markdown
Fetch query data from App InsightsSample query to fetch successful syncs per minute in past 15 minutes -```customEvents| where timestamp > ago(15m) and name == "Pandium sync success"| project timestamp| summarize syncCount=count() by format_datetime(timestamp, "dd/MM/yy hh:mm")| order by timestamp asc, syncCount desc```This query can be converted to a cURL request using [Microsoft's API Explorer](https://dev.applicationinsights.io/apiexplorer/query).**Prerequisites** -* App Id from azure portal.* Api key from azure portal.*More information can be found at [AppInsights API Quickstart](https://dev.applicationinsights.io/quickstart).*
###Code
apikey = '<API Key from Azure Portal>'
appid = '<App Id of resource>' # Ex. AppInsightsProd
import requests
import json
# generic url format -
# GET /v1/apps/{app-id}/query?query=requests | where timestamp >= ago(24h) | count
url = f'https://api.applicationinsights.io/v1/apps/{appid}/query?query=customEvents%7C%20where%20timestamp%20%3E%20ago(15m)%20and%20name%20%3D%3D%20%22Pandium%20sync%20success%22%7C%20project%20timestamp%7C%20summarize%20syncCount%3Dcount()%20by%20format_datetime(timestamp%2C%20%22dd%2FMM%2Fyy%20hh%3Amm%22)%7C%20order%20by%20timestamp%20asc%2C%20syncCount%20desc'
headers = {'Content-Type': 'application/json', 'x-api-key': apikey}
response = requests.get(url, headers=headers)
json_result = json.loads(response.content.decode("utf-8"))
formatted_json_result = json.dumps(json_result, indent=4)
print(formatted_json_result)
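# Optional sketch (not part of the original request): flatten the query result into
# {column: value} records, assuming the usual App Insights response shape of
# {"tables": [{"columns": [...], "rows": [...]}]}.
table = json_result['tables'][0]
column_names = [col['name'] for col in table['columns']]
rows = [dict(zip(column_names, row)) for row in table['rows']]
print(rows[:5])  # first few records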
###Output
_____no_output_____ |
example_plotting.ipynb | ###Markdown
Demonstrate plotting library Load data Load a processed TTU dataset for demonstration purposes. The dataset can be obtained by running the notebook "process_TTU_tower.ipynb" which can be found in the [a2e-mmc/assessment repository](https://github.com/a2e-mmc/assessment) (currently only in the dev branch)
###Code
datadir = '/Users/equon/a2e-mmc/assessment/datasets/SWiFT/data'
TTUdata = 'TTU_tilt_corrected_20131108-09.csv'
df = pd.read_csv(os.path.join(datadir,TTUdata),parse_dates=True,index_col=['datetime','height'])
df.head()
###Output
_____no_output_____
###Markdown
Do some additional data processing
###Code
# Calculate wind speed and direction
df['wspd'], df['wdir'] = calc_wind(df)
df['theta'] = theta(df['T'],df['p'])
# Calculate 10min averages and recompute wind speed and wind direction
df10 = df.unstack().resample('10min').mean().stack()
df10['wspd'], df10['wdir'] = calc_wind(df10)
###Output
_____no_output_____
###Markdown
Default plotting tools
###Code
fig,ax = plot_timehistory_at_height(df10,
fields = ['wspd','wdir'],
heights = [40,80,120]
)
fig,ax = plot_profile(df10,
fields = ['wspd','wdir'],
times = ['2013-11-08 18:00:00','2013-11-08 22:00:00','2013-11-09 6:00:00'],
)
fig,ax,cbar = plot_timeheight(df10,fields = ['wspd','wdir'])
# Calculate spectra at a height of 74.7 m
df_spectra = power_spectral_density(df.xs(74.7,level='height'),
tstart=pd.to_datetime('2013-11-08 12:00:00'),
interval='1h')
fig,ax = plot_spectrum(df_spectra,fields='u')
###Output
_____no_output_____
###Markdown
Advanced plotting examples Plot timehistory at all TTU heights using a custom colormap
###Code
fig,ax,ax2 = plot_timehistory_at_height(df10, fields = ['wspd','wdir'], heights = 'all',
# Specify field limits
fieldlimits={'wspd':(0,20),'wdir':(180,240)},
# Specify time limits
timelimits=('2013-11-08 12:00:00','2013-11-09 12:00:00'),
# Specify colormap
cmap='copper',
# Plot local time axis
plot_local_time=True, local_time_offset=-6,
# Additional keyword arguments to personalize plotting style
linewidth=2,linestyle='-',marker=None,
)
# Move x-axis ticks down slightly to avoid overlap with y ticks in ax[1]
ax[-1].tick_params(axis='x', which='minor', pad=10)
# Adjust xaxis tick locations of UTC time axis
ax2.xaxis.set_major_locator(mpl.dates.AutoDateLocator(minticks=2,maxticks=3))
###Output
_____no_output_____
###Markdown
Compare instantaneous profiles with 10-min averaged profiles.
###Code
fig,ax = plot_profile(datasets={'Instantaneous data':df,'10-min averaged data':df10},
fields=['wspd','wdir','w','theta'],
times=['2013-11-08 18:00:00','2013-11-09 06:00:00'],
# Specify field limits
fieldlimits={'wspd':(0,20),'wdir':(180,240),'w':(-1,1)},
# Specify height limits
heightlimits=(0,200),
# Stack results by dataset instead of times
stack_by_datasets=True,
# Change field order to have different fields correspond to different columns instead of rows
fieldorder='F',
# Additional keyword arguments to personalize plotting style
linewidth=2,marker='o',markersize=8,mfc="none",
)
###Output
_____no_output_____
###Markdown
Demonstrate plotting library Load data Load a processed TTU dataset for demonstration purposes. The dataset can be obtained by running the notebook "process_TTU_tower.ipynb" which can be found in the [a2e-mmc/assessment repository](https://github.com/a2e-mmc/assessment) (currently only in the dev branch)
###Code
datadir = './'
TTUdata = 'TTU_tilt_corrected_20131108-09.csv'
df = pd.read_csv(os.path.join(datadir,TTUdata),parse_dates=True,index_col=['datetime','height'])
df.head()
###Output
_____no_output_____
###Markdown
Do some additional data processing
###Code
# Calculate wind speed and direction
df['wspd'], df['wdir'] = calc_wind(df)
df['theta'] = theta(df['T'],df['p'])
# Calculate 10min averages and recompute wind speed and wind direction
df10 = df.unstack().resample('10min').mean().stack()
df10['wspd'], df10['wdir'] = calc_wind(df10)
###Output
_____no_output_____
###Markdown
Default plotting tools
###Code
fig,ax = plot_timehistory_at_height(df10,
fields = ['wspd','wdir'],
heights = [40,80,120]
)
fig,ax = plot_profile(df10,
fields = ['wspd','wdir'],
times = ['2013-11-08 18:00:00','2013-11-08 22:00:00','2013-11-09 6:00:00'],
)
fig,ax,cbar = plot_timeheight(df10,fields = ['wspd','wdir'])
# Calculate spectra at a height of 74.7 m
df_spectra = power_spectral_density(df.xs(74.7,level='height'),
tstart=pd.to_datetime('2013-11-08 12:00:00'),
interval='1h')
fig,ax = plot_spectrum(df_spectra,fields='u')
###Output
_____no_output_____
###Markdown
Advanced plotting examples Plot timehistory at all TTU heights using a custom colormap
###Code
fig,ax,ax2 = plot_timehistory_at_height(df10, fields = ['wspd','wdir'], heights = 'all',
# Specify field limits
fieldlimits={'wspd':(0,20),'wdir':(180,240)},
# Specify time limits
timelimits=('2013-11-08 12:00:00','2013-11-09 12:00:00'),
# Specify colormap
cmap='copper',
# Plot local time axis
plot_local_time=True, local_time_offset=-6,
# Additional keyword arguments to personalize plotting style
linewidth=2,linestyle='-',marker=None,
)
# Move x-axis ticks down slightly to avoid overlap with y ticks in ax[1]
ax[-1].tick_params(axis='x', which='minor', pad=10)
# Adjust xaxis tick locations of UTC time axis
ax2.xaxis.set_major_locator(mpl.dates.AutoDateLocator(minticks=2,maxticks=3))
###Output
_____no_output_____
###Markdown
Compare instantaneous profiles with 10-min averaged profiles.
###Code
fig,ax = plot_profile(datasets={'Instantaneous data':df,'10-min averaged data':df10},
fields=['wspd','wdir','w','theta'],
times=['2013-11-08 18:00:00','2013-11-09 06:00:00'],
# Specify field limits
fieldlimits={'wspd':(0,20),'wdir':(180,240),'w':(-1,1)},
# Specify height limits
heightlimits=(0,200),
# Stack results by dataset instead of times
stack_by_datasets=True,
# Change field order to have different fields correspond to different columns instead of rows
fieldorder='F',
# Additional keyword arguments to personalize plotting style
linewidth=2,marker='o',markersize=8,mfc="none",
)
###Output
_____no_output_____
###Markdown
Note: the following cell should not be needed if a `pip install [-e]` was performed
###Code
# #Make sure a2e-mmc repositories are in the pythonpath
# a2epath = '/home/equon/a2e-mmc'
# import sys
# if not a2epath in sys.path:
# sys.path.append(a2epath)
from mmctools.helper_functions import calc_wind, theta, power_spectral_density
from mmctools.plotting import plot_timeheight, plot_timehistory_at_height, plot_profile, plot_spectrum
mpl.rcParams['xtick.labelsize'] = 16
mpl.rcParams['ytick.labelsize'] = 16
mpl.rcParams['axes.labelsize'] = 16
###Output
_____no_output_____
###Markdown
Demonstrate plotting library Load data Load a processed TTU dataset for demonstration purposes. The dataset can be obtained by running the notebook "process_TTU_tower.ipynb" which can be found in the [a2e-mmc/assessment repository](https://github.com/a2e-mmc/assessment) (currently only in the dev branch)
###Code
datadir = '/home/equon/a2e-mmc/assessment/datasets/SWiFT/data'
TTUdata = 'TTU_tilt_corrected_20131108-09.csv'
df = pd.read_csv(os.path.join(datadir,TTUdata),parse_dates=True,index_col=['datetime','height'])
df.head()
###Output
_____no_output_____
###Markdown
Do some additional data processing
###Code
# Calculate wind speed and direction
df['wspd'], df['wdir'] = calc_wind(df)
df['theta'] = theta(df['t'],df['p'])
# Calculate 10min averages and recompute wind speed and wind direction
df10 = df.unstack().resample('10min').mean().stack()
df10['wspd'], df10['wdir'] = calc_wind(df10)
###Output
_____no_output_____
###Markdown
Default plotting tools
###Code
fig,ax = plot_timehistory_at_height(df10,
fields = ['wspd','wdir'],
heights = [40,80,120]
)
fig,ax = plot_profile(df10,
fields = ['wspd','wdir'],
times = ['2013-11-08 18:00:00','2013-11-08 22:00:00','2013-11-09 6:00:00'],
)
fig,ax,cbar = plot_timeheight(df10,fields = ['wspd','wdir'])
# Calculate spectra at a height of 74.7 m
df_spectra = power_spectral_density(df.xs(74.7,level='height'),
tstart=pd.to_datetime('2013-11-08 12:00:00'),
interval='1h')
fig,ax = plot_spectrum(df_spectra,fields='u')
###Output
_____no_output_____
###Markdown
Advanced plotting examples Plot timehistory at all TTU heights using a custom colormap
###Code
fig,ax,ax2 = plot_timehistory_at_height(df10, fields = ['wspd','wdir'], heights = 'all',
# Specify field limits
fieldlimits={'wspd':(0,20),'wdir':(180,240)},
# Specify time limits
timelimits=('2013-11-08 12:00:00','2013-11-09 12:00:00'),
# Specify colormap
cmap='copper',
# Plot local time axis
plot_local_time=True, local_time_offset=-6,
# Additional keyword arguments to personalize plotting style
linewidth=2,linestyle='-',marker=None,
)
# Move x-axis ticks down slightly to avoid overlap with y ticks in ax[1]
ax[-1].tick_params(axis='x', which='minor', pad=10)
# Adjust xaxis tick locations of UTC time axis
ax2.xaxis.set_major_locator(mpl.dates.AutoDateLocator(minticks=2,maxticks=3))
###Output
_____no_output_____
###Markdown
Compare instantaneous profiles with 10-min averaged profiles.
###Code
fig,ax = plot_profile(datasets={'Instantaneous data':df,'10-min averaged data':df10},
fields=['wspd','wdir','w','theta'],
times=['2013-11-08 18:00:00','2013-11-09 06:00:00'],
# Specify field limits
fieldlimits={'wspd':(0,20),'wdir':(180,240),'w':(-1,1)},
# Specify height limits
heightlimits=(0,200),
# Stack results by dataset instead of times
stack_by_datasets=True,
# Change field order to have different fields correspond to different columns instead of rows
fieldorder='F',
# Additional keyword arguments to personalize plotting style
linewidth=2,marker='o',markersize=8,mfc="none",
)
###Output
_____no_output_____ |
docs/_static/notebooks/assign-r-code-question.ipynb | ###Markdown
**Question 1.** Write a function called `sieve` that takes in a positive integer `n` and returns a sorted vector of the prime numbers less than or equal to `n`. Use the Sieve of Eratosthenes to find the primes.```BEGIN QUESTIONname: q1points: 2```
###Code
# BEGIN SOLUTION NO PROMPT
sieve = function(n) {
    # Start by assuming every integer from 1 to n is prime
    is_prime = rep(TRUE, n)
    p = 2
    # Only candidates up to sqrt(n) need to be checked as factors
    while (p^2 <= n) {
        if (is_prime[p]) {
            # Mark every multiple of p from p^2 onwards as composite
            is_prime[seq(p^2, n, p)] = FALSE
        }
        p = p + 1
    }
    # 1 is not a prime number
    is_prime[1] = FALSE
    # Return the (already sorted) indices still marked as prime
    return(seq(n)[is_prime])
}
# END SOLUTION
. = " # BEGIN PROMPT
sieve = function(n) {
...
}
" # END PROMPT
## Test ##
testthat::expect_equal(length(sieve(1)), 0)
## Test ##
testthat::expect_equal(sieve(2), c(2))
## Test ##
testthat::expect_equal(sieve(3), c(2, 3))
## Hidden Test ##
testthat::expect_equal(sieve(20), c(2, 3, 5, 7, 11, 13, 17, 19))
. = " # BEGIN TEST CONFIG
points: 1
hidden: true
" # END TEST CONFIG
testthat::expect_equal(sieve(100), c(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97))
###Output
_____no_output_____ |
LS_DS_432_Convolution_Neural_Networks_Assignment.ipynb | ###Markdown
*Data Science Unit 4 Sprint 3 Assignment 2* Convolutional Neural Networks (CNNs) AssignmentLoad a pretrained network from Keras, [ResNet50](https://tfhub.dev/google/imagenet/resnet_v1_50/classification/1) - a 50 layer deep network trained to recognize [1000 objects](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt). Starting usage:```pythonimport numpy as npfrom tensorflow.keras.applications.resnet50 import ResNet50from tensorflow.keras.preprocessing import imagefrom tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictionsResNet50 = ResNet50(weights='imagenet')features = model.predict(x)```Next you will need to remove the last layer from the ResNet model. Here, we loop over the layers to use the sequential API. There are easier ways to add and remove layers using the Keras functional API, but doing so introduces other complexities. ```python Remove the Last Layer of ResNetResNet50._layers.pop(0) Our New Modelmodel = Sequential() Add Pre-trained layers of Old Model to New Modelfor layer in ResNet50.layers: model.add(layer) Turn off additional training of ResNet Layers for speed of assignmentfor layer in model.layers: layer.trainable = False Add New Output Layer to Modelmodel.add(Dense(1, activation='sigmoid'))```Your assignment is to apply the transfer learning above to classify images of Mountains (`./data/mountain/*`) and images of forests (`./data/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative (zero). Steps to complete assignment: 1. Load in Image Data into numpy arrays (`X`) 2. Create a `y` for the labels3. Train your model with pretrained layers from resnet4. Report your model's accuracy
###Code
### YOUR CODE HERE
# mount drive to colab
from google.colab import drive
drive.mount('/content/drive')
# there's a db file in them mountains! (manually removed)
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.layers import Input, Dense, GlobalAveragePooling2D, Dropout
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.python.keras import optimizers
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.utils import plot_model
from sklearn.model_selection import train_test_split
import random
import os
import cv2
import matplotlib.pyplot as plt
# Load images into numpy arrays/ create y for labels
FILEPATH = '/content/drive/My Drive/Colab Notebooks/module2-convolutional-neural-networks/data'
CATEGORIES = ['forest', 'mountain']
training_data = []
IMG_SIZE = 224
def create_training_data():
for category in CATEGORIES:
path = os.path.join(FILEPATH, category) # path to forest/mountains dir
class_num = CATEGORIES.index(category)
print(path, class_num)
for img in os.listdir(path):
print(img)
try:
                if img.lower().endswith(".jpg"):  # only load .jpg image files (assumed intent of the original startswith check)
img_array = cv2.imread(os.path.join(path, img))
new_img_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
training_data.append([new_img_array, class_num])
except Exception as e:
pass
X = []
y = []
for features, label in training_data:
X.append(features)
y.append(label)
X = np.array(X)
y = to_categorical(np.array(y))
print('------- Shape of X, y data -------')
print(X.shape, y.shape)
return X, y
X, y = create_training_data()
# Train-test split the data
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.33)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# Load pre-trained resnet from keras
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense
res = ResNet50(input_shape=(224, 224, 3), weights='imagenet', include_top=False)
for layer in res.layers:
layer.trainable = False
# add your head on top
x = res.output
x = GlobalAveragePooling2D()(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.25)(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(0.25)(x)
predictions = Dense(2, activation='softmax')(x)
model = Model(inputs=res.input, outputs=predictions)
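# At this point the frozen ResNet50 base acts as a fixed feature extractor; only the newly
# added head (GlobalAveragePooling2D -> Dense -> Dropout -> Dense -> Dropout -> softmax)
# is updated during training.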
# Compile model
model.compile(loss="categorical_crossentropy",
optimizer='adam',
metrics=["accuracy"])
EPOCHS = 5
BATCH_SIZE = 10
history = model.fit(X_train, y_train, epochs=EPOCHS, validation_split=0.15, batch_size=BATCH_SIZE, verbose=1)
score = model.evaluate(X_test, y_test, verbose=False)
print('---------------Validation Metrics---------------')
print('Loss:', score[0])
print('Accuracy:', score[1])
y_pred = model.predict(X_test)
y_pred = (y_pred > .5).astype(int)
y_pred[:25]
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
*Data Science Unit 4 Sprint 3 Assignment 2* Convolutional Neural Networks (CNNs) Assignment- Part 1: Pre-Trained Model- Part 2: Custom CNN Model- Part 3: CNN with Data AugmentationYou will apply three different CNN models to a binary image classification model using Keras. Classify images of Mountains (`./data/train/mountain/*`) and images of forests (`./data/train/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative (zero). |Mountain (+)|Forest (-)||---|---||||The problem is relatively difficult given that the sample is tiny: there are about 350 observations per class. This sample size might be something that you can expect with prototyping an image classification problem/solution at work. Get accustomed to evaluating several different possible models. Pre - Trained ModelLoad a pretrained network from Keras, [ResNet50](https://tfhub.dev/google/imagenet/resnet_v1_50/classification/1) - a 50 layer deep network trained to recognize [1000 objects](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt). Starting usage:```pythonimport numpy as npfrom tensorflow.keras.applications.resnet50 import ResNet50from tensorflow.keras.preprocessing import imagefrom tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictionsfrom tensorflow.keras.layers import Dense, GlobalAveragePooling2Dfrom tensorflow.keras.models import Model This is the functional APIresnet = ResNet50(weights='imagenet', include_top=False)```The `include_top` parameter in `ResNet50` will remove the fully connected layers from the ResNet model. The next step is to turn off the training of the ResNet layers. We want to use the learned parameters without updating them in future training passes. ```pythonfor layer in resnet.layers: layer.trainable = False```Using the Keras functional API, we will need to add additional fully connected layers to our model. When we removed the top layers, we removed all previous fully connected layers. In other words, we kept only the feature processing portions of our network. You can experiment with additional layers beyond what's listed here. The `GlobalAveragePooling2D` layer functions as a really fancy flatten function by taking the average of each of the last convolutional layer outputs (which is two dimensional still). ```pythonx = resnet.outputx = GlobalAveragePooling2D()(x) This layer is a really fancy flattenx = Dense(1024, activation='relu')(x)predictions = Dense(1, activation='sigmoid')(x)model = Model(resnet.input, predictions)```Your assignment is to apply the transfer learning above to classify images of Mountains (`./data/train/mountain/*`) and images of forests (`./data/train/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative (zero). Steps to complete assignment: 1. Load in Image Data into numpy arrays (`X`) 2. Create a `y` for the labels3. Train your model with pre-trained layers from resnet4. Report your model's accuracy Load in DataThis is surprisingly more difficult than it seems, because you are working with directories of images instead of a single file. This boilerplate will help you download a zipped version of the directory of images. The directory is organized into "train" and "validation" which you can use inside an `ImageGenerator` class to stream batches of images through your model. Download & Summarize the DataThis step is completed for you. Just run the cells and review the results.
###Code
import tensorflow as tf
import os
# could not get the zip file like this. had to manually upload the zip file from github and unzip it
#_URL = 'https://github.com/LambdaSchool/DS-Unit-4-Sprint-3-Deep-Learning/blob/master/module2-convolutional-neural-networks/data.zip?raw=true'
#path_to_zip = tf.keras.utils.get_file('/data.zip', origin=_URL, extract=True)
#unzipping the file
!unzip /data-2.zip
PATH = os.path.join('/content', 'data')  # path_to_zip is not defined above (download commented out); assuming the archive was unzipped into /content
PATH
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_dir
validation_dir
train_mountain_dir = os.path.join(train_dir, 'mountain') # directory with our training cat pictures
train_forest_dir = os.path.join(train_dir, 'forest') # directory with our training dog pictures
validation_mountain_dir = os.path.join(validation_dir, 'mountain') # directory with our validation cat pictures
validation_forest_dir = os.path.join(validation_dir, 'forest') # directory with our validation dog pictures
train_mountain_dir
#train_forest_dir
num_mountain_tr = len(os.listdir(train_mountain_dir))
num_forest_tr = len(os.listdir(train_forest_dir))
num_mountain_val = len(os.listdir(validation_mountain_dir))
num_forest_val = len(os.listdir(validation_forest_dir))
total_train = num_mountain_tr + num_forest_tr
total_val = num_mountain_val + num_forest_val
print('total training mountain images:', num_mountain_tr)
print('total training forest images:', num_forest_tr)
print('total validation mountain images:', num_mountain_val)
print('total validation forest images:', num_forest_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
total training mountain images: 254
total training forest images: 270
total validation mountain images: 125
total validation forest images: 62
--
Total training images: 524
Total validation images: 187
###Markdown
Keras `ImageGenerator` to Process the DataThis step is completed for you, but please review the code. The `ImageGenerator` class reads in batches of data from a directory and passes them to the model one batch at a time. Just like with large text files, this approach is advantageous, because it avoids the need to load a bunch of images into memory. Check out the documentation for this class method: [Keras `ImageGenerator` Class](https://keras.io/preprocessing/image/imagedatagenerator-class). You'll expand its use in the third assignment objective.
###Code
batch_size = 16
epochs = 50
IMG_HEIGHT = 224
IMG_WIDTH = 224
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
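# Optional sketch (not part of the original scaffold): pull a single batch from the
# training generator to confirm the shapes it will feed the model.
sample_images, sample_labels = next(train_data_gen)
print(sample_images.shape)  # expected (16, 224, 224, 3) given batch_size and target_size
print(sample_labels.shape)  # expected (16,) of 0/1 labels with class_mode='binary'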
###Output
Found 195 images belonging to 2 classes.
###Markdown
Instantiate Model
###Code
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model # This is the functional API
resnet = ResNet50(weights='imagenet', include_top=False)
for layer in resnet.layers:
layer.trainable = False
x = resnet.output
x = GlobalAveragePooling2D()(x) # This layer is a really fancy flatten
x = Dense(1024, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
model = Model(resnet.input, predictions)
model.compile(optimizer='adam',
              loss='binary_crossentropy',  # single sigmoid output with class_mode='binary'
              metrics=['accuracy'])
model.summary()
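# Minimal sketch of how this model could be trained with the generators defined above
# (reusing the earlier batch_size/epochs variables); this is an assumed next step, not
# part of the provided scaffold.
history = model.fit(train_data_gen,
                    steps_per_epoch=total_train // batch_size,
                    epochs=epochs,
                    validation_data=val_data_gen,
                    validation_steps=total_val // batch_size)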
###Output
Model: "functional_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, None, None, 0
__________________________________________________________________________________________________
conv1_pad (ZeroPadding2D) (None, None, None, 3 0 input_1[0][0]
__________________________________________________________________________________________________
conv1_conv (Conv2D) (None, None, None, 6 9472 conv1_pad[0][0]
__________________________________________________________________________________________________
conv1_bn (BatchNormalization) (None, None, None, 6 256 conv1_conv[0][0]
__________________________________________________________________________________________________
conv1_relu (Activation) (None, None, None, 6 0 conv1_bn[0][0]
__________________________________________________________________________________________________
pool1_pad (ZeroPadding2D) (None, None, None, 6 0 conv1_relu[0][0]
__________________________________________________________________________________________________
pool1_pool (MaxPooling2D) (None, None, None, 6 0 pool1_pad[0][0]
__________________________________________________________________________________________________
conv2_block1_1_conv (Conv2D) (None, None, None, 6 4160 pool1_pool[0][0]
__________________________________________________________________________________________________
conv2_block1_1_bn (BatchNormali (None, None, None, 6 256 conv2_block1_1_conv[0][0]
__________________________________________________________________________________________________
conv2_block1_1_relu (Activation (None, None, None, 6 0 conv2_block1_1_bn[0][0]
__________________________________________________________________________________________________
conv2_block1_2_conv (Conv2D) (None, None, None, 6 36928 conv2_block1_1_relu[0][0]
__________________________________________________________________________________________________
conv2_block1_2_bn (BatchNormali (None, None, None, 6 256 conv2_block1_2_conv[0][0]
__________________________________________________________________________________________________
conv2_block1_2_relu (Activation (None, None, None, 6 0 conv2_block1_2_bn[0][0]
__________________________________________________________________________________________________
conv2_block1_0_conv (Conv2D) (None, None, None, 2 16640 pool1_pool[0][0]
__________________________________________________________________________________________________
conv2_block1_3_conv (Conv2D) (None, None, None, 2 16640 conv2_block1_2_relu[0][0]
__________________________________________________________________________________________________
conv2_block1_0_bn (BatchNormali (None, None, None, 2 1024 conv2_block1_0_conv[0][0]
__________________________________________________________________________________________________
conv2_block1_3_bn (BatchNormali (None, None, None, 2 1024 conv2_block1_3_conv[0][0]
__________________________________________________________________________________________________
conv2_block1_add (Add) (None, None, None, 2 0 conv2_block1_0_bn[0][0]
conv2_block1_3_bn[0][0]
__________________________________________________________________________________________________
conv2_block1_out (Activation) (None, None, None, 2 0 conv2_block1_add[0][0]
__________________________________________________________________________________________________
conv2_block2_1_conv (Conv2D) (None, None, None, 6 16448 conv2_block1_out[0][0]
__________________________________________________________________________________________________
conv2_block2_1_bn (BatchNormali (None, None, None, 6 256 conv2_block2_1_conv[0][0]
__________________________________________________________________________________________________
conv2_block2_1_relu (Activation (None, None, None, 6 0 conv2_block2_1_bn[0][0]
__________________________________________________________________________________________________
conv2_block2_2_conv (Conv2D) (None, None, None, 6 36928 conv2_block2_1_relu[0][0]
__________________________________________________________________________________________________
conv2_block2_2_bn (BatchNormali (None, None, None, 6 256 conv2_block2_2_conv[0][0]
__________________________________________________________________________________________________
conv2_block2_2_relu (Activation (None, None, None, 6 0 conv2_block2_2_bn[0][0]
__________________________________________________________________________________________________
conv2_block2_3_conv (Conv2D) (None, None, None, 2 16640 conv2_block2_2_relu[0][0]
__________________________________________________________________________________________________
conv2_block2_3_bn (BatchNormali (None, None, None, 2 1024 conv2_block2_3_conv[0][0]
__________________________________________________________________________________________________
conv2_block2_add (Add) (None, None, None, 2 0 conv2_block1_out[0][0]
conv2_block2_3_bn[0][0]
__________________________________________________________________________________________________
conv2_block2_out (Activation) (None, None, None, 2 0 conv2_block2_add[0][0]
__________________________________________________________________________________________________
conv2_block3_1_conv (Conv2D) (None, None, None, 6 16448 conv2_block2_out[0][0]
__________________________________________________________________________________________________
conv2_block3_1_bn (BatchNormali (None, None, None, 6 256 conv2_block3_1_conv[0][0]
__________________________________________________________________________________________________
conv2_block3_1_relu (Activation (None, None, None, 6 0 conv2_block3_1_bn[0][0]
__________________________________________________________________________________________________
conv2_block3_2_conv (Conv2D) (None, None, None, 6 36928 conv2_block3_1_relu[0][0]
__________________________________________________________________________________________________
conv2_block3_2_bn (BatchNormali (None, None, None, 6 256 conv2_block3_2_conv[0][0]
__________________________________________________________________________________________________
conv2_block3_2_relu (Activation (None, None, None, 6 0 conv2_block3_2_bn[0][0]
__________________________________________________________________________________________________
conv2_block3_3_conv (Conv2D) (None, None, None, 2 16640 conv2_block3_2_relu[0][0]
__________________________________________________________________________________________________
conv2_block3_3_bn (BatchNormali (None, None, None, 2 1024 conv2_block3_3_conv[0][0]
__________________________________________________________________________________________________
conv2_block3_add (Add) (None, None, None, 2 0 conv2_block2_out[0][0]
conv2_block3_3_bn[0][0]
__________________________________________________________________________________________________
conv2_block3_out (Activation) (None, None, None, 2 0 conv2_block3_add[0][0]
__________________________________________________________________________________________________
conv3_block1_1_conv (Conv2D) (None, None, None, 1 32896 conv2_block3_out[0][0]
__________________________________________________________________________________________________
conv3_block1_1_bn (BatchNormali (None, None, None, 1 512 conv3_block1_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block1_1_relu (Activation (None, None, None, 1 0 conv3_block1_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block1_2_conv (Conv2D) (None, None, None, 1 147584 conv3_block1_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block1_2_bn (BatchNormali (None, None, None, 1 512 conv3_block1_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block1_2_relu (Activation (None, None, None, 1 0 conv3_block1_2_bn[0][0]
__________________________________________________________________________________________________
conv3_block1_0_conv (Conv2D) (None, None, None, 5 131584 conv2_block3_out[0][0]
__________________________________________________________________________________________________
conv3_block1_3_conv (Conv2D) (None, None, None, 5 66048 conv3_block1_2_relu[0][0]
__________________________________________________________________________________________________
conv3_block1_0_bn (BatchNormali (None, None, None, 5 2048 conv3_block1_0_conv[0][0]
__________________________________________________________________________________________________
conv3_block1_3_bn (BatchNormali (None, None, None, 5 2048 conv3_block1_3_conv[0][0]
__________________________________________________________________________________________________
conv3_block1_add (Add) (None, None, None, 5 0 conv3_block1_0_bn[0][0]
conv3_block1_3_bn[0][0]
__________________________________________________________________________________________________
conv3_block1_out (Activation) (None, None, None, 5 0 conv3_block1_add[0][0]
__________________________________________________________________________________________________
conv3_block2_1_conv (Conv2D) (None, None, None, 1 65664 conv3_block1_out[0][0]
__________________________________________________________________________________________________
conv3_block2_1_bn (BatchNormali (None, None, None, 1 512 conv3_block2_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block2_1_relu (Activation (None, None, None, 1 0 conv3_block2_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block2_2_conv (Conv2D) (None, None, None, 1 147584 conv3_block2_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block2_2_bn (BatchNormali (None, None, None, 1 512 conv3_block2_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block2_2_relu (Activation (None, None, None, 1 0 conv3_block2_2_bn[0][0]
__________________________________________________________________________________________________
conv3_block2_3_conv (Conv2D) (None, None, None, 5 66048 conv3_block2_2_relu[0][0]
__________________________________________________________________________________________________
conv3_block2_3_bn (BatchNormali (None, None, None, 5 2048 conv3_block2_3_conv[0][0]
__________________________________________________________________________________________________
conv3_block2_add (Add) (None, None, None, 5 0 conv3_block1_out[0][0]
conv3_block2_3_bn[0][0]
__________________________________________________________________________________________________
conv3_block2_out (Activation) (None, None, None, 5 0 conv3_block2_add[0][0]
__________________________________________________________________________________________________
conv3_block3_1_conv (Conv2D) (None, None, None, 1 65664 conv3_block2_out[0][0]
__________________________________________________________________________________________________
conv3_block3_1_bn (BatchNormali (None, None, None, 1 512 conv3_block3_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block3_1_relu (Activation (None, None, None, 1 0 conv3_block3_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block3_2_conv (Conv2D) (None, None, None, 1 147584 conv3_block3_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block3_2_bn (BatchNormali (None, None, None, 1 512 conv3_block3_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block3_2_relu (Activation (None, None, None, 1 0 conv3_block3_2_bn[0][0]
__________________________________________________________________________________________________
conv3_block3_3_conv (Conv2D) (None, None, None, 5 66048 conv3_block3_2_relu[0][0]
__________________________________________________________________________________________________
conv3_block3_3_bn (BatchNormali (None, None, None, 5 2048 conv3_block3_3_conv[0][0]
__________________________________________________________________________________________________
conv3_block3_add (Add) (None, None, None, 5 0 conv3_block2_out[0][0]
conv3_block3_3_bn[0][0]
__________________________________________________________________________________________________
conv3_block3_out (Activation) (None, None, None, 5 0 conv3_block3_add[0][0]
__________________________________________________________________________________________________
conv3_block4_1_conv (Conv2D) (None, None, None, 1 65664 conv3_block3_out[0][0]
__________________________________________________________________________________________________
conv3_block4_1_bn (BatchNormali (None, None, None, 1 512 conv3_block4_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block4_1_relu (Activation (None, None, None, 1 0 conv3_block4_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block4_2_conv (Conv2D) (None, None, None, 1 147584 conv3_block4_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block4_2_bn (BatchNormali (None, None, None, 1 512 conv3_block4_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block4_2_relu (Activation (None, None, None, 1 0 conv3_block4_2_bn[0][0]
__________________________________________________________________________________________________
conv3_block4_3_conv (Conv2D) (None, None, None, 5 66048 conv3_block4_2_relu[0][0]
__________________________________________________________________________________________________
conv3_block4_3_bn (BatchNormali (None, None, None, 5 2048 conv3_block4_3_conv[0][0]
__________________________________________________________________________________________________
conv3_block4_add (Add) (None, None, None, 5 0 conv3_block3_out[0][0]
conv3_block4_3_bn[0][0]
__________________________________________________________________________________________________
conv3_block4_out (Activation) (None, None, None, 5 0 conv3_block4_add[0][0]
__________________________________________________________________________________________________
conv4_block1_1_conv (Conv2D) (None, None, None, 2 131328 conv3_block4_out[0][0]
__________________________________________________________________________________________________
conv4_block1_1_bn (BatchNormali (None, None, None, 2 1024 conv4_block1_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block1_1_relu (Activation (None, None, None, 2 0 conv4_block1_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block1_2_conv (Conv2D) (None, None, None, 2 590080 conv4_block1_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block1_2_bn (BatchNormali (None, None, None, 2 1024 conv4_block1_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block1_2_relu (Activation (None, None, None, 2 0 conv4_block1_2_bn[0][0]
__________________________________________________________________________________________________
conv4_block1_0_conv (Conv2D) (None, None, None, 1 525312 conv3_block4_out[0][0]
__________________________________________________________________________________________________
conv4_block1_3_conv (Conv2D) (None, None, None, 1 263168 conv4_block1_2_relu[0][0]
__________________________________________________________________________________________________
conv4_block1_0_bn (BatchNormali (None, None, None, 1 4096 conv4_block1_0_conv[0][0]
__________________________________________________________________________________________________
conv4_block1_3_bn (BatchNormali (None, None, None, 1 4096 conv4_block1_3_conv[0][0]
__________________________________________________________________________________________________
conv4_block1_add (Add) (None, None, None, 1 0 conv4_block1_0_bn[0][0]
conv4_block1_3_bn[0][0]
__________________________________________________________________________________________________
conv4_block1_out (Activation) (None, None, None, 1 0 conv4_block1_add[0][0]
__________________________________________________________________________________________________
conv4_block2_1_conv (Conv2D) (None, None, None, 2 262400 conv4_block1_out[0][0]
__________________________________________________________________________________________________
conv4_block2_1_bn (BatchNormali (None, None, None, 2 1024 conv4_block2_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block2_1_relu (Activation (None, None, None, 2 0 conv4_block2_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block2_2_conv (Conv2D) (None, None, None, 2 590080 conv4_block2_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block2_2_bn (BatchNormali (None, None, None, 2 1024 conv4_block2_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block2_2_relu (Activation (None, None, None, 2 0 conv4_block2_2_bn[0][0]
__________________________________________________________________________________________________
conv4_block2_3_conv (Conv2D) (None, None, None, 1 263168 conv4_block2_2_relu[0][0]
__________________________________________________________________________________________________
conv4_block2_3_bn (BatchNormali (None, None, None, 1 4096 conv4_block2_3_conv[0][0]
__________________________________________________________________________________________________
conv4_block2_add (Add) (None, None, None, 1 0 conv4_block1_out[0][0]
conv4_block2_3_bn[0][0]
__________________________________________________________________________________________________
conv4_block2_out (Activation) (None, None, None, 1 0 conv4_block2_add[0][0]
__________________________________________________________________________________________________
conv4_block3_1_conv (Conv2D) (None, None, None, 2 262400 conv4_block2_out[0][0]
__________________________________________________________________________________________________
conv4_block3_1_bn (BatchNormali (None, None, None, 2 1024 conv4_block3_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block3_1_relu (Activation (None, None, None, 2 0 conv4_block3_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block3_2_conv (Conv2D) (None, None, None, 2 590080 conv4_block3_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block3_2_bn (BatchNormali (None, None, None, 2 1024 conv4_block3_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block3_2_relu (Activation (None, None, None, 2 0 conv4_block3_2_bn[0][0]
__________________________________________________________________________________________________
conv4_block3_3_conv (Conv2D) (None, None, None, 1 263168 conv4_block3_2_relu[0][0]
__________________________________________________________________________________________________
conv4_block3_3_bn (BatchNormali (None, None, None, 1 4096 conv4_block3_3_conv[0][0]
__________________________________________________________________________________________________
conv4_block3_add (Add) (None, None, None, 1 0 conv4_block2_out[0][0]
conv4_block3_3_bn[0][0]
__________________________________________________________________________________________________
conv4_block3_out (Activation) (None, None, None, 1 0 conv4_block3_add[0][0]
__________________________________________________________________________________________________
conv4_block4_1_conv (Conv2D) (None, None, None, 2 262400 conv4_block3_out[0][0]
__________________________________________________________________________________________________
conv4_block4_1_bn (BatchNormali (None, None, None, 2 1024 conv4_block4_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block4_1_relu (Activation (None, None, None, 2 0 conv4_block4_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block4_2_conv (Conv2D) (None, None, None, 2 590080 conv4_block4_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block4_2_bn (BatchNormali (None, None, None, 2 1024 conv4_block4_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block4_2_relu (Activation (None, None, None, 2 0 conv4_block4_2_bn[0][0]
__________________________________________________________________________________________________
conv4_block4_3_conv (Conv2D) (None, None, None, 1 263168 conv4_block4_2_relu[0][0]
__________________________________________________________________________________________________
conv4_block4_3_bn (BatchNormali (None, None, None, 1 4096 conv4_block4_3_conv[0][0]
__________________________________________________________________________________________________
conv4_block4_add (Add) (None, None, None, 1 0 conv4_block3_out[0][0]
conv4_block4_3_bn[0][0]
__________________________________________________________________________________________________
conv4_block4_out (Activation) (None, None, None, 1 0 conv4_block4_add[0][0]
__________________________________________________________________________________________________
conv4_block5_1_conv (Conv2D) (None, None, None, 2 262400 conv4_block4_out[0][0]
__________________________________________________________________________________________________
conv4_block5_1_bn (BatchNormali (None, None, None, 2 1024 conv4_block5_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block5_1_relu (Activation (None, None, None, 2 0 conv4_block5_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block5_2_conv (Conv2D) (None, None, None, 2 590080 conv4_block5_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block5_2_bn (BatchNormali (None, None, None, 2 1024 conv4_block5_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block5_2_relu (Activation (None, None, None, 2 0 conv4_block5_2_bn[0][0]
__________________________________________________________________________________________________
conv4_block5_3_conv (Conv2D) (None, None, None, 1 263168 conv4_block5_2_relu[0][0]
__________________________________________________________________________________________________
conv4_block5_3_bn (BatchNormali (None, None, None, 1 4096 conv4_block5_3_conv[0][0]
__________________________________________________________________________________________________
conv4_block5_add (Add) (None, None, None, 1 0 conv4_block4_out[0][0]
conv4_block5_3_bn[0][0]
__________________________________________________________________________________________________
conv4_block5_out (Activation) (None, None, None, 1 0 conv4_block5_add[0][0]
__________________________________________________________________________________________________
conv4_block6_1_conv (Conv2D) (None, None, None, 2 262400 conv4_block5_out[0][0]
__________________________________________________________________________________________________
conv4_block6_1_bn (BatchNormali (None, None, None, 2 1024 conv4_block6_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block6_1_relu (Activation (None, None, None, 2 0 conv4_block6_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block6_2_conv (Conv2D) (None, None, None, 2 590080 conv4_block6_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block6_2_bn (BatchNormali (None, None, None, 2 1024 conv4_block6_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block6_2_relu (Activation (None, None, None, 2 0 conv4_block6_2_bn[0][0]
__________________________________________________________________________________________________
conv4_block6_3_conv (Conv2D) (None, None, None, 1 263168 conv4_block6_2_relu[0][0]
__________________________________________________________________________________________________
conv4_block6_3_bn (BatchNormali (None, None, None, 1 4096 conv4_block6_3_conv[0][0]
__________________________________________________________________________________________________
conv4_block6_add (Add) (None, None, None, 1 0 conv4_block5_out[0][0]
conv4_block6_3_bn[0][0]
__________________________________________________________________________________________________
conv4_block6_out (Activation) (None, None, None, 1 0 conv4_block6_add[0][0]
__________________________________________________________________________________________________
conv5_block1_1_conv (Conv2D) (None, None, None, 5 524800 conv4_block6_out[0][0]
__________________________________________________________________________________________________
conv5_block1_1_bn (BatchNormali (None, None, None, 5 2048 conv5_block1_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block1_1_relu (Activation (None, None, None, 5 0 conv5_block1_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block1_2_conv (Conv2D) (None, None, None, 5 2359808 conv5_block1_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block1_2_bn (BatchNormali (None, None, None, 5 2048 conv5_block1_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block1_2_relu (Activation (None, None, None, 5 0 conv5_block1_2_bn[0][0]
__________________________________________________________________________________________________
conv5_block1_0_conv (Conv2D) (None, None, None, 2 2099200 conv4_block6_out[0][0]
__________________________________________________________________________________________________
conv5_block1_3_conv (Conv2D) (None, None, None, 2 1050624 conv5_block1_2_relu[0][0]
__________________________________________________________________________________________________
conv5_block1_0_bn (BatchNormali (None, None, None, 2 8192 conv5_block1_0_conv[0][0]
__________________________________________________________________________________________________
conv5_block1_3_bn (BatchNormali (None, None, None, 2 8192 conv5_block1_3_conv[0][0]
__________________________________________________________________________________________________
conv5_block1_add (Add) (None, None, None, 2 0 conv5_block1_0_bn[0][0]
conv5_block1_3_bn[0][0]
__________________________________________________________________________________________________
conv5_block1_out (Activation) (None, None, None, 2 0 conv5_block1_add[0][0]
__________________________________________________________________________________________________
conv5_block2_1_conv (Conv2D) (None, None, None, 5 1049088 conv5_block1_out[0][0]
__________________________________________________________________________________________________
conv5_block2_1_bn (BatchNormali (None, None, None, 5 2048 conv5_block2_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block2_1_relu (Activation (None, None, None, 5 0 conv5_block2_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block2_2_conv (Conv2D) (None, None, None, 5 2359808 conv5_block2_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block2_2_bn (BatchNormali (None, None, None, 5 2048 conv5_block2_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block2_2_relu (Activation (None, None, None, 5 0 conv5_block2_2_bn[0][0]
__________________________________________________________________________________________________
conv5_block2_3_conv (Conv2D) (None, None, None, 2 1050624 conv5_block2_2_relu[0][0]
__________________________________________________________________________________________________
conv5_block2_3_bn (BatchNormali (None, None, None, 2 8192 conv5_block2_3_conv[0][0]
__________________________________________________________________________________________________
conv5_block2_add (Add) (None, None, None, 2 0 conv5_block1_out[0][0]
conv5_block2_3_bn[0][0]
__________________________________________________________________________________________________
conv5_block2_out (Activation) (None, None, None, 2 0 conv5_block2_add[0][0]
__________________________________________________________________________________________________
conv5_block3_1_conv (Conv2D) (None, None, None, 5 1049088 conv5_block2_out[0][0]
__________________________________________________________________________________________________
conv5_block3_1_bn (BatchNormali (None, None, None, 5 2048 conv5_block3_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_1_relu (Activation (None, None, None, 5 0 conv5_block3_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_2_conv (Conv2D) (None, None, None, 5 2359808 conv5_block3_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block3_2_bn (BatchNormali (None, None, None, 5 2048 conv5_block3_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_2_relu (Activation (None, None, None, 5 0 conv5_block3_2_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_3_conv (Conv2D) (None, None, None, 2 1050624 conv5_block3_2_relu[0][0]
__________________________________________________________________________________________________
conv5_block3_3_bn (BatchNormali (None, None, None, 2 8192 conv5_block3_3_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_add (Add) (None, None, None, 2 0 conv5_block2_out[0][0]
conv5_block3_3_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_out (Activation) (None, None, None, 2 0 conv5_block3_add[0][0]
__________________________________________________________________________________________________
global_average_pooling2d (Globa (None, 2048) 0 conv5_block3_out[0][0]
__________________________________________________________________________________________________
dense (Dense) (None, 1024) 2098176 global_average_pooling2d[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 1) 1025 dense[0][0]
==================================================================================================
Total params: 25,686,913
Trainable params: 2,099,201
Non-trainable params: 23,587,712
__________________________________________________________________________________________________
###Markdown
Fit Model
###Code
history = model.fit(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
Epoch 1/50
32/32 [==============================] - 98s 3s/step - loss: 5.7820e-08 - accuracy: 0.5150 - val_loss: 7.7215e-08 - val_accuracy: 0.3523
Epoch 2/50
32/32 [==============================] - 98s 3s/step - loss: 5.7106e-08 - accuracy: 0.5210 - val_loss: 8.0602e-08 - val_accuracy: 0.3239
Epoch 3/50
32/32 [==============================] - 98s 3s/step - loss: 5.7820e-08 - accuracy: 0.5150 - val_loss: 7.7215e-08 - val_accuracy: 0.3523
Epoch 4/50
32/32 [==============================] - 97s 3s/step - loss: 5.8296e-08 - accuracy: 0.5110 - val_loss: 7.6538e-08 - val_accuracy: 0.3580
Epoch 5/50
32/32 [==============================] - 97s 3s/step - loss: 5.7582e-08 - accuracy: 0.5170 - val_loss: 7.8570e-08 - val_accuracy: 0.3409
Epoch 6/50
32/32 [==============================] - 97s 3s/step - loss: 5.7820e-08 - accuracy: 0.5150 - val_loss: 7.7215e-08 - val_accuracy: 0.3523
Epoch 7/50
32/32 [==============================] - 97s 3s/step - loss: 5.9248e-08 - accuracy: 0.5030 - val_loss: 7.5860e-08 - val_accuracy: 0.3636
Epoch 8/50
32/32 [==============================] - 97s 3s/step - loss: 5.8534e-08 - accuracy: 0.5090 - val_loss: 7.7215e-08 - val_accuracy: 0.3523
Epoch 9/50
32/32 [==============================] - 97s 3s/step - loss: 5.8058e-08 - accuracy: 0.5130 - val_loss: 7.8570e-08 - val_accuracy: 0.3409
Epoch 10/50
32/32 [==============================] - 97s 3s/step - loss: 5.7344e-08 - accuracy: 0.5190 - val_loss: 7.7892e-08 - val_accuracy: 0.3466
Epoch 11/50
32/32 [==============================] - 97s 3s/step - loss: 5.7106e-08 - accuracy: 0.5210 - val_loss: 7.8570e-08 - val_accuracy: 0.3409
Epoch 12/50
32/32 [==============================] - 97s 3s/step - loss: 5.9486e-08 - accuracy: 0.5010 - val_loss: 7.7892e-08 - val_accuracy: 0.3466
Epoch 13/50
32/32 [==============================] - 97s 3s/step - loss: 5.8296e-08 - accuracy: 0.5110 - val_loss: 8.1279e-08 - val_accuracy: 0.3182
Epoch 14/50
32/32 [==============================] - 97s 3s/step - loss: 5.7582e-08 - accuracy: 0.5170 - val_loss: 7.7892e-08 - val_accuracy: 0.3466
Epoch 15/50
32/32 [==============================] - 97s 3s/step - loss: 5.8296e-08 - accuracy: 0.5110 - val_loss: 7.8570e-08 - val_accuracy: 0.3409
Epoch 16/50
32/32 [==============================] - 97s 3s/step - loss: 5.8058e-08 - accuracy: 0.5130 - val_loss: 7.9924e-08 - val_accuracy: 0.3295
Epoch 17/50
32/32 [==============================] - 97s 3s/step - loss: 5.8058e-08 - accuracy: 0.5130 - val_loss: 7.8570e-08 - val_accuracy: 0.3409
Epoch 18/50
32/32 [==============================] - 97s 3s/step - loss: 5.7820e-08 - accuracy: 0.5150 - val_loss: 8.1279e-08 - val_accuracy: 0.3182
Epoch 19/50
32/32 [==============================] - 97s 3s/step - loss: 5.7582e-08 - accuracy: 0.5170 - val_loss: 7.8570e-08 - val_accuracy: 0.3409
Epoch 20/50
32/32 [==============================] - 97s 3s/step - loss: 5.6868e-08 - accuracy: 0.5230 - val_loss: 7.7892e-08 - val_accuracy: 0.3466
Epoch 21/50
32/32 [==============================] - 97s 3s/step - loss: 5.7820e-08 - accuracy: 0.5150 - val_loss: 7.9247e-08 - val_accuracy: 0.3352
Epoch 22/50
32/32 [==============================] - 97s 3s/step - loss: 5.7582e-08 - accuracy: 0.5170 - val_loss: 7.9924e-08 - val_accuracy: 0.3295
Epoch 23/50
32/32 [==============================] - 97s 3s/step - loss: 5.8058e-08 - accuracy: 0.5130 - val_loss: 7.7892e-08 - val_accuracy: 0.3466
Epoch 24/50
32/32 [==============================] - 97s 3s/step - loss: 5.8772e-08 - accuracy: 0.5070 - val_loss: 7.7892e-08 - val_accuracy: 0.3466
Epoch 25/50
32/32 [==============================] - 97s 3s/step - loss: 5.7820e-08 - accuracy: 0.5150 - val_loss: 7.7215e-08 - val_accuracy: 0.3523
Epoch 26/50
32/32 [==============================] - 97s 3s/step - loss: 5.6868e-08 - accuracy: 0.5230 - val_loss: 7.8570e-08 - val_accuracy: 0.3409
Epoch 27/50
32/32 [==============================] - 97s 3s/step - loss: 5.8296e-08 - accuracy: 0.5110 - val_loss: 7.7892e-08 - val_accuracy: 0.3466
Epoch 28/50
32/32 [==============================] - 97s 3s/step - loss: 5.9010e-08 - accuracy: 0.5050 - val_loss: 7.7215e-08 - val_accuracy: 0.3523
Epoch 29/50
32/32 [==============================] - 97s 3s/step - loss: 5.7344e-08 - accuracy: 0.5190 - val_loss: 7.7892e-08 - val_accuracy: 0.3466
Epoch 30/50
32/32 [==============================] - 97s 3s/step - loss: 5.8058e-08 - accuracy: 0.5130 - val_loss: 7.5860e-08 - val_accuracy: 0.3636
Epoch 31/50
32/32 [==============================] - 97s 3s/step - loss: 5.6868e-08 - accuracy: 0.5230 - val_loss: 8.1279e-08 - val_accuracy: 0.3182
Epoch 32/50
32/32 [==============================] - 97s 3s/step - loss: 5.7582e-08 - accuracy: 0.5170 - val_loss: 7.9247e-08 - val_accuracy: 0.3352
Epoch 33/50
32/32 [==============================] - 97s 3s/step - loss: 5.7106e-08 - accuracy: 0.5210 - val_loss: 7.9247e-08 - val_accuracy: 0.3352
Epoch 34/50
32/32 [==============================] - 97s 3s/step - loss: 5.7106e-08 - accuracy: 0.5210 - val_loss: 7.5860e-08 - val_accuracy: 0.3636
Epoch 35/50
32/32 [==============================] - 97s 3s/step - loss: 5.7582e-08 - accuracy: 0.5170 - val_loss: 7.8570e-08 - val_accuracy: 0.3409
Epoch 36/50
32/32 [==============================] - 97s 3s/step - loss: 5.8296e-08 - accuracy: 0.5110 - val_loss: 7.7215e-08 - val_accuracy: 0.3523
Epoch 37/50
32/32 [==============================] - 97s 3s/step - loss: 5.7106e-08 - accuracy: 0.5210 - val_loss: 7.9247e-08 - val_accuracy: 0.3352
Epoch 38/50
32/32 [==============================] - 97s 3s/step - loss: 5.9010e-08 - accuracy: 0.5050 - val_loss: 7.7215e-08 - val_accuracy: 0.3523
Epoch 39/50
32/32 [==============================] - 97s 3s/step - loss: 5.6630e-08 - accuracy: 0.5250 - val_loss: 7.9924e-08 - val_accuracy: 0.3295
Epoch 40/50
32/32 [==============================] - 97s 3s/step - loss: 5.7106e-08 - accuracy: 0.5210 - val_loss: 7.7892e-08 - val_accuracy: 0.3466
Epoch 41/50
32/32 [==============================] - 97s 3s/step - loss: 5.7582e-08 - accuracy: 0.5170 - val_loss: 7.6538e-08 - val_accuracy: 0.3580
Epoch 42/50
32/32 [==============================] - 99s 3s/step - loss: 5.7344e-08 - accuracy: 0.5190 - val_loss: 7.7892e-08 - val_accuracy: 0.3466
Epoch 43/50
32/32 [==============================] - 97s 3s/step - loss: 5.8058e-08 - accuracy: 0.5130 - val_loss: 7.7892e-08 - val_accuracy: 0.3466
Epoch 44/50
32/32 [==============================] - 97s 3s/step - loss: 5.7820e-08 - accuracy: 0.5150 - val_loss: 7.7215e-08 - val_accuracy: 0.3523
Epoch 45/50
32/32 [==============================] - 97s 3s/step - loss: 5.7820e-08 - accuracy: 0.5150 - val_loss: 7.7892e-08 - val_accuracy: 0.3466
Epoch 46/50
32/32 [==============================] - 97s 3s/step - loss: 5.8534e-08 - accuracy: 0.5090 - val_loss: 7.9247e-08 - val_accuracy: 0.3352
Epoch 47/50
32/32 [==============================] - 97s 3s/step - loss: 5.8058e-08 - accuracy: 0.5130 - val_loss: 7.7215e-08 - val_accuracy: 0.3523
Epoch 48/50
32/32 [==============================] - 97s 3s/step - loss: 5.7106e-08 - accuracy: 0.5210 - val_loss: 7.9924e-08 - val_accuracy: 0.3295
Epoch 49/50
32/32 [==============================] - 97s 3s/step - loss: 5.7344e-08 - accuracy: 0.5190 - val_loss: 7.7892e-08 - val_accuracy: 0.3466
Epoch 50/50
32/32 [==============================] - 97s 3s/step - loss: 5.7344e-08 - accuracy: 0.5190 - val_loss: 7.6538e-08 - val_accuracy: 0.3580
###Markdown
Custom CNN ModelIn this step, write and train your own convolutional neural network using Keras. You can use any architecture that suits you as long as it has at least one convolutional and one pooling layer at the beginning of the network - you can add more if you want.
###Code
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Flatten
train_data_gen[0][0][0].shape
train_data_gen[0][1]
# Define the Model
model = Sequential()
model.add(Conv2D(32, kernel_size=(3,3), activation='relu', input_shape=(224,224,3)))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile Model
model.compile(optimizer='nadam',
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
# Fit Model
model.fit(train_data_gen, epochs=10, validation_data=(val_data_gen))
###Output
Epoch 1/10
34/34 [==============================] - 27s 804ms/step - loss: 5.2051 - accuracy: 0.7261 - val_loss: 0.3281 - val_accuracy: 0.9026
Epoch 2/10
34/34 [==============================] - 27s 795ms/step - loss: 0.1801 - accuracy: 0.9287 - val_loss: 0.1752 - val_accuracy: 0.9333
Epoch 3/10
34/34 [==============================] - 27s 796ms/step - loss: 0.1264 - accuracy: 0.9493 - val_loss: 0.3436 - val_accuracy: 0.8615
Epoch 4/10
34/34 [==============================] - 27s 796ms/step - loss: 0.0629 - accuracy: 0.9775 - val_loss: 0.1464 - val_accuracy: 0.9385
Epoch 5/10
34/34 [==============================] - 27s 799ms/step - loss: 0.0309 - accuracy: 0.9887 - val_loss: 0.1604 - val_accuracy: 0.9282
Epoch 6/10
34/34 [==============================] - 27s 796ms/step - loss: 0.0134 - accuracy: 1.0000 - val_loss: 0.2989 - val_accuracy: 0.9282
Epoch 7/10
34/34 [==============================] - 27s 795ms/step - loss: 0.0117 - accuracy: 0.9981 - val_loss: 0.1433 - val_accuracy: 0.9436
Epoch 8/10
34/34 [==============================] - 27s 796ms/step - loss: 0.0062 - accuracy: 1.0000 - val_loss: 0.1824 - val_accuracy: 0.9282
Epoch 9/10
34/34 [==============================] - 27s 797ms/step - loss: 0.0045 - accuracy: 1.0000 - val_loss: 0.1826 - val_accuracy: 0.9282
Epoch 10/10
34/34 [==============================] - 27s 797ms/step - loss: 0.0026 - accuracy: 1.0000 - val_loss: 0.2014 - val_accuracy: 0.9282
###Markdown
Custom CNN Model with Image ManipulationsTo simulate an increase in the sample of images, you can apply image manipulation techniques: cropping, rotation, stretching, etc. Luckily, Keras has some handy functions for applying these techniques to our mountain and forest example; you should simply be able to modify the image generator for this problem. Check out these resources to help you get started: 1. [Keras `ImageGenerator` Class](https://keras.io/preprocessing/image/imagedatagenerator-class)2. [Building a powerful image classifier with very little data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html)
###Code
batch_size = 16
epochs = 50
IMG_HEIGHT = 224
IMG_WIDTH = 224
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_image_generator = ImageDataGenerator(rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
width_shift_range=0.2,
height_shift_range=0.2) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
from tensorflow.keras.callbacks import EarlyStopping
# Setup Architecture
model = Sequential()
model.add(Conv2D(32, kernel_size=(3,3), activation='relu', input_shape=(224,224,3)))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Conv2D(64, kernel_size=(3,3), activation='relu'))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
# Compile Model
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
# Print summary
model.summary()
model.fit(train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size,
callbacks=[EarlyStopping(min_delta=.02, monitor='val_loss', patience=5)])
###Output
_____no_output_____
###Markdown
*Data Science Unit 4 Sprint 3 Assignment 2* Convolutional Neural Networks (CNNs) Assignment- Part 1: Pre-Trained Model- Part 2: Custom CNN Model- Part 3: CNN with Data AugmentationYou will apply three different CNN models to a binary image classification model using Keras. Classify images of Mountains (`./data/train/mountain/*`) and images of forests (`./data/train/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative (zero). |Mountain (+)|Forest (-)||---|---||||The problem is relatively difficult given that the sample is tiny: there are about 350 observations per class. This sample size might be something that you can expect with prototyping an image classification problem/solution at work. Get accustomed to evaluating several different possible models. Pre - Trained ModelLoad a pretrained network from Keras, [ResNet50](https://tfhub.dev/google/imagenet/resnet_v1_50/classification/1) - a 50 layer deep network trained to recognize [1000 objects](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt). Starting usage:```pythonimport numpy as npfrom tensorflow.keras.applications.resnet50 import ResNet50from tensorflow.keras.preprocessing import imagefrom tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictionsfrom tensorflow.keras.layers import Dense, GlobalAveragePooling2Dfrom tensorflow.keras.models import Model This is the functional APIresnet = ResNet50(weights='imagenet', include_top=False)```The `include_top` parameter in `ResNet50` will remove the fully connected layers from the ResNet model. The next step is to turn off the training of the ResNet layers. We want to use the learned parameters without updating them in future training passes. ```pythonfor layer in resnet.layers: layer.trainable = False```Using the Keras functional API, we will need to add additional fully connected layers to our model. When we removed the top layers, we removed all previous fully connected layers. In other words, we kept only the feature processing portions of our network. You can experiment with additional layers beyond what's listed here. The `GlobalAveragePooling2D` layer functions as a really fancy flatten function by taking the average of each of the last convolutional layer outputs (which is two dimensional still). ```pythonx = resnet.outputx = GlobalAveragePooling2D()(x) This layer is a really fancy flattenx = Dense(1024, activation='relu')(x)predictions = Dense(1, activation='sigmoid')(x)model = Model(resnet.input, predictions)```Your assignment is to apply the transfer learning above to classify images of Mountains (`./data/train/mountain/*`) and images of forests (`./data/train/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative (zero). Steps to complete assignment: 1. Load in Image Data into numpy arrays (`X`) 2. Create a `y` for the labels3. Train your model with pre-trained layers from resnet4. Report your model's accuracy Load in DataThis is surprisingly more difficult than it seems, because you are working with directories of images instead of a single file. This boilerplate will help you download a zipped version of the directory of images. The directory is organized into "train" and "validation" which you can use inside an `ImageGenerator` class to stream batches of images through your model. Download & Summarize the DataThis step is completed for you. Just run the cells and review the results.
###Code
! git clone https://github.com/noah40povis/DS-Unit-4-Sprint-3-Deep-Learning.git
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model # This is the functional API
resnet = ResNet50(weights='imagenet', include_top=False)
import tensorflow as tf
import os
PATH = '/content/DS-Unit-4-Sprint-3-Deep-Learning/module2-convolutional-neural-networks/data'
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_mountain_dir = os.path.join(train_dir, 'mountain')  # directory with our training mountain pictures
train_forest_dir = os.path.join(train_dir, 'forest')  # directory with our training forest pictures
validation_mountain_dir = os.path.join(validation_dir, 'mountain')  # directory with our validation mountain pictures
validation_forest_dir = os.path.join(validation_dir, 'forest')  # directory with our validation forest pictures
train_mountain_dir
num_mountain_tr = len(os.listdir(train_mountain_dir))
num_forest_tr = len(os.listdir(train_forest_dir))
num_mountain_val = len(os.listdir(validation_mountain_dir))
num_forest_val = len(os.listdir(validation_forest_dir))
total_train = num_mountain_tr + num_forest_tr
total_val = num_mountain_val + num_forest_val
print('total training mountain images:', num_mountain_tr)
print('total training forest images:', num_forest_tr)
print('total validation mountain images:', num_mountain_val)
print('total validation forest images:', num_forest_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
total training mountain images: 253
total training forest images: 269
total validation mountain images: 124
total validation forest images: 61
--
Total training images: 522
Total validation images: 185
###Markdown
Keras `ImageGenerator` to Process the DataThis step is completed for you, but please review the code. The `ImageGenerator` class reads in batches of data from a directory and passes them to the model one batch at a time. Just as with large text files, this approach is advantageous because it avoids loading all of the images into memory at once. Check out the documentation for this class method: [Keras `ImageGenerator` Class](https://keras.io/preprocessing/image/imagedatagenerator-class). You'll expand its use in the third assignment objective.
###Code
batch_size = 16
epochs = 50
IMG_HEIGHT = 224
IMG_WIDTH = 224
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
###Output
Found 182 images belonging to 2 classes.
###Markdown
Instantiate Model
###Code
for layer in resnet.layers:
layer.trainable = False
x = resnet.output
x = GlobalAveragePooling2D()(x) # This layer is a really fancy flatten
x = Dense(1024, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
model = Model(resnet.input, predictions)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['binary_accuracy'])
###Output
_____no_output_____
###Markdown
Fit Model
###Code
history = model.fit(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size
)
###Output
Epoch 1/50
32/32 [==============================] - 3s 81ms/step - loss: 0.9911 - binary_accuracy: 0.4940 - val_loss: 0.5924 - val_binary_accuracy: 0.6818
Epoch 2/50
32/32 [==============================] - 2s 53ms/step - loss: 0.6616 - binary_accuracy: 0.5933 - val_loss: 0.5826 - val_binary_accuracy: 0.6761
Epoch 3/50
32/32 [==============================] - 2s 52ms/step - loss: 0.5442 - binary_accuracy: 0.7956 - val_loss: 0.6306 - val_binary_accuracy: 0.6193
Epoch 4/50
32/32 [==============================] - 2s 52ms/step - loss: 0.5266 - binary_accuracy: 0.7222 - val_loss: 0.7359 - val_binary_accuracy: 0.4602
Epoch 5/50
32/32 [==============================] - 2s 54ms/step - loss: 0.4837 - binary_accuracy: 0.8075 - val_loss: 0.4304 - val_binary_accuracy: 0.8750
Epoch 6/50
32/32 [==============================] - 2s 53ms/step - loss: 0.4368 - binary_accuracy: 0.8413 - val_loss: 0.3980 - val_binary_accuracy: 0.8750
Epoch 7/50
32/32 [==============================] - 2s 51ms/step - loss: 0.4395 - binary_accuracy: 0.8155 - val_loss: 0.3604 - val_binary_accuracy: 0.8523
Epoch 8/50
32/32 [==============================] - 2s 53ms/step - loss: 0.3894 - binary_accuracy: 0.8373 - val_loss: 0.3621 - val_binary_accuracy: 0.8864
Epoch 9/50
32/32 [==============================] - 2s 52ms/step - loss: 0.3596 - binary_accuracy: 0.8690 - val_loss: 0.4191 - val_binary_accuracy: 0.8409
Epoch 10/50
32/32 [==============================] - 2s 52ms/step - loss: 0.3878 - binary_accuracy: 0.8274 - val_loss: 0.5005 - val_binary_accuracy: 0.7955
Epoch 11/50
32/32 [==============================] - 2s 51ms/step - loss: 0.3252 - binary_accuracy: 0.8770 - val_loss: 0.3538 - val_binary_accuracy: 0.8750
Epoch 12/50
32/32 [==============================] - 2s 54ms/step - loss: 0.3175 - binary_accuracy: 0.8929 - val_loss: 0.4059 - val_binary_accuracy: 0.8523
Epoch 13/50
32/32 [==============================] - 2s 53ms/step - loss: 0.2866 - binary_accuracy: 0.8909 - val_loss: 0.6436 - val_binary_accuracy: 0.6705
Epoch 14/50
32/32 [==============================] - 2s 54ms/step - loss: 0.2757 - binary_accuracy: 0.8988 - val_loss: 0.3369 - val_binary_accuracy: 0.8523
Epoch 15/50
32/32 [==============================] - 2s 51ms/step - loss: 0.2561 - binary_accuracy: 0.9286 - val_loss: 0.3208 - val_binary_accuracy: 0.8523
Epoch 16/50
32/32 [==============================] - 2s 51ms/step - loss: 0.2218 - binary_accuracy: 0.9266 - val_loss: 0.2891 - val_binary_accuracy: 0.8977
Epoch 17/50
32/32 [==============================] - 2s 51ms/step - loss: 0.2088 - binary_accuracy: 0.9286 - val_loss: 0.4897 - val_binary_accuracy: 0.8125
Epoch 18/50
32/32 [==============================] - 2s 50ms/step - loss: 0.2682 - binary_accuracy: 0.9067 - val_loss: 0.3153 - val_binary_accuracy: 0.8636
Epoch 19/50
32/32 [==============================] - 2s 51ms/step - loss: 0.2196 - binary_accuracy: 0.9345 - val_loss: 0.4399 - val_binary_accuracy: 0.8352
Epoch 20/50
32/32 [==============================] - 2s 51ms/step - loss: 0.1904 - binary_accuracy: 0.9385 - val_loss: 0.2539 - val_binary_accuracy: 0.9205
Epoch 21/50
32/32 [==============================] - 2s 51ms/step - loss: 0.1694 - binary_accuracy: 0.9544 - val_loss: 0.2238 - val_binary_accuracy: 0.9148
Epoch 22/50
32/32 [==============================] - 2s 51ms/step - loss: 0.2023 - binary_accuracy: 0.9306 - val_loss: 0.3657 - val_binary_accuracy: 0.8466
Epoch 23/50
32/32 [==============================] - 2s 52ms/step - loss: 0.1762 - binary_accuracy: 0.9385 - val_loss: 0.3246 - val_binary_accuracy: 0.8580
Epoch 24/50
32/32 [==============================] - 2s 51ms/step - loss: 0.1610 - binary_accuracy: 0.9524 - val_loss: 0.2013 - val_binary_accuracy: 0.9261
Epoch 25/50
32/32 [==============================] - 2s 51ms/step - loss: 0.1668 - binary_accuracy: 0.9385 - val_loss: 0.2560 - val_binary_accuracy: 0.8920
Epoch 26/50
32/32 [==============================] - 2s 51ms/step - loss: 0.2086 - binary_accuracy: 0.9187 - val_loss: 0.6193 - val_binary_accuracy: 0.7443
Epoch 27/50
32/32 [==============================] - 2s 52ms/step - loss: 0.1462 - binary_accuracy: 0.9544 - val_loss: 0.2262 - val_binary_accuracy: 0.9375
Epoch 28/50
32/32 [==============================] - 2s 52ms/step - loss: 0.2026 - binary_accuracy: 0.9206 - val_loss: 0.5737 - val_binary_accuracy: 0.7727
Epoch 29/50
32/32 [==============================] - 2s 51ms/step - loss: 0.1414 - binary_accuracy: 0.9405 - val_loss: 0.2184 - val_binary_accuracy: 0.9375
Epoch 30/50
32/32 [==============================] - 2s 51ms/step - loss: 0.1385 - binary_accuracy: 0.9524 - val_loss: 0.5167 - val_binary_accuracy: 0.8068
Epoch 31/50
32/32 [==============================] - 2s 51ms/step - loss: 0.2651 - binary_accuracy: 0.8710 - val_loss: 0.2142 - val_binary_accuracy: 0.9375
Epoch 32/50
32/32 [==============================] - 2s 52ms/step - loss: 0.1162 - binary_accuracy: 0.9663 - val_loss: 0.2241 - val_binary_accuracy: 0.9375
Epoch 33/50
32/32 [==============================] - 2s 51ms/step - loss: 0.1970 - binary_accuracy: 0.9147 - val_loss: 0.6985 - val_binary_accuracy: 0.7216
Epoch 34/50
32/32 [==============================] - 2s 50ms/step - loss: 0.1510 - binary_accuracy: 0.9444 - val_loss: 0.3455 - val_binary_accuracy: 0.8466
Epoch 35/50
32/32 [==============================] - 2s 51ms/step - loss: 0.1021 - binary_accuracy: 0.9663 - val_loss: 0.2252 - val_binary_accuracy: 0.9091
Epoch 36/50
32/32 [==============================] - 2s 51ms/step - loss: 0.1227 - binary_accuracy: 0.9603 - val_loss: 0.3417 - val_binary_accuracy: 0.8636
Epoch 37/50
32/32 [==============================] - 2s 52ms/step - loss: 0.0906 - binary_accuracy: 0.9702 - val_loss: 0.2179 - val_binary_accuracy: 0.9375
Epoch 38/50
32/32 [==============================] - 2s 52ms/step - loss: 0.1445 - binary_accuracy: 0.9444 - val_loss: 0.5662 - val_binary_accuracy: 0.8011
Epoch 39/50
32/32 [==============================] - 2s 51ms/step - loss: 0.1733 - binary_accuracy: 0.9345 - val_loss: 0.1963 - val_binary_accuracy: 0.9261
Epoch 40/50
32/32 [==============================] - 2s 51ms/step - loss: 0.0941 - binary_accuracy: 0.9590 - val_loss: 0.1957 - val_binary_accuracy: 0.9375
Epoch 41/50
32/32 [==============================] - 2s 53ms/step - loss: 0.0982 - binary_accuracy: 0.9643 - val_loss: 0.3232 - val_binary_accuracy: 0.8807
Epoch 42/50
32/32 [==============================] - 2s 52ms/step - loss: 0.0794 - binary_accuracy: 0.9762 - val_loss: 0.2214 - val_binary_accuracy: 0.9148
Epoch 43/50
32/32 [==============================] - 2s 52ms/step - loss: 0.0821 - binary_accuracy: 0.9762 - val_loss: 0.3992 - val_binary_accuracy: 0.8466
Epoch 44/50
32/32 [==============================] - 2s 52ms/step - loss: 0.0785 - binary_accuracy: 0.9782 - val_loss: 0.2468 - val_binary_accuracy: 0.9148
Epoch 45/50
32/32 [==============================] - 2s 49ms/step - loss: 0.0937 - binary_accuracy: 0.9722 - val_loss: 0.1856 - val_binary_accuracy: 0.9432
Epoch 46/50
32/32 [==============================] - 2s 52ms/step - loss: 0.0757 - binary_accuracy: 0.9702 - val_loss: 0.1811 - val_binary_accuracy: 0.9375
Epoch 47/50
32/32 [==============================] - 2s 52ms/step - loss: 0.0795 - binary_accuracy: 0.9841 - val_loss: 0.1868 - val_binary_accuracy: 0.9432
Epoch 48/50
32/32 [==============================] - 2s 52ms/step - loss: 0.0795 - binary_accuracy: 0.9762 - val_loss: 0.2295 - val_binary_accuracy: 0.9205
Epoch 49/50
32/32 [==============================] - 2s 51ms/step - loss: 0.0705 - binary_accuracy: 0.9762 - val_loss: 0.1910 - val_binary_accuracy: 0.9432
Epoch 50/50
32/32 [==============================] - 2s 52ms/step - loss: 0.0792 - binary_accuracy: 0.9643 - val_loss: 0.5104 - val_binary_accuracy: 0.8182
###Markdown
Custom CNN ModelIn this step, write and train your own convolutional neural network using Keras. You can use any architecture that suits you as long as it has at least one convolutional and one pooling layer at the beginning of the network - you can add more if you want.
###Code
# Define the Model
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, Dropout
model = Sequential()
model.add(Conv2D(32, (3,3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)))
model.add(MaxPooling2D((2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(16, (3,3), activation='relu'))
model.add(MaxPooling2D((2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3,3), activation='relu'))
model.add(Flatten())
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
# Compile Model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['binary_accuracy'])
# Fit Model
model.fit(
train_data_gen,
epochs=50,
validation_data=val_data_gen
)
###Output
Epoch 1/50
33/33 [==============================] - 2s 51ms/step - loss: 0.6030 - binary_accuracy: 0.7096 - val_loss: 0.2120 - val_binary_accuracy: 0.9231
Epoch 2/50
33/33 [==============================] - 2s 46ms/step - loss: 0.2261 - binary_accuracy: 0.9192 - val_loss: 0.1770 - val_binary_accuracy: 0.9396
Epoch 3/50
33/33 [==============================] - 2s 47ms/step - loss: 0.1845 - binary_accuracy: 0.9288 - val_loss: 0.2339 - val_binary_accuracy: 0.8846
Epoch 4/50
33/33 [==============================] - 2s 48ms/step - loss: 0.1587 - binary_accuracy: 0.9462 - val_loss: 0.1853 - val_binary_accuracy: 0.9341
Epoch 5/50
33/33 [==============================] - 2s 53ms/step - loss: 0.1071 - binary_accuracy: 0.9615 - val_loss: 0.1704 - val_binary_accuracy: 0.9341
Epoch 6/50
33/33 [==============================] - 2s 48ms/step - loss: 0.0937 - binary_accuracy: 0.9577 - val_loss: 0.1978 - val_binary_accuracy: 0.9341
Epoch 7/50
33/33 [==============================] - 2s 46ms/step - loss: 0.1086 - binary_accuracy: 0.9615 - val_loss: 0.4861 - val_binary_accuracy: 0.7747
Epoch 8/50
33/33 [==============================] - 2s 47ms/step - loss: 0.1018 - binary_accuracy: 0.9635 - val_loss: 0.1990 - val_binary_accuracy: 0.9286
Epoch 9/50
33/33 [==============================] - 2s 47ms/step - loss: 0.0858 - binary_accuracy: 0.9731 - val_loss: 0.1989 - val_binary_accuracy: 0.9341
Epoch 10/50
33/33 [==============================] - 2s 48ms/step - loss: 0.0943 - binary_accuracy: 0.9769 - val_loss: 0.2037 - val_binary_accuracy: 0.9286
Epoch 11/50
33/33 [==============================] - 2s 48ms/step - loss: 0.0368 - binary_accuracy: 0.9865 - val_loss: 0.1815 - val_binary_accuracy: 0.9286
Epoch 12/50
33/33 [==============================] - 2s 49ms/step - loss: 0.0316 - binary_accuracy: 0.9904 - val_loss: 0.1995 - val_binary_accuracy: 0.9341
Epoch 13/50
33/33 [==============================] - 2s 48ms/step - loss: 0.0103 - binary_accuracy: 0.9981 - val_loss: 0.5248 - val_binary_accuracy: 0.8681
Epoch 14/50
33/33 [==============================] - 2s 48ms/step - loss: 0.1607 - binary_accuracy: 0.9462 - val_loss: 0.1920 - val_binary_accuracy: 0.9451
Epoch 15/50
33/33 [==============================] - 2s 48ms/step - loss: 0.1892 - binary_accuracy: 0.9288 - val_loss: 0.4871 - val_binary_accuracy: 0.7692
Epoch 16/50
33/33 [==============================] - 2s 47ms/step - loss: 0.0781 - binary_accuracy: 0.9712 - val_loss: 0.2139 - val_binary_accuracy: 0.9231
Epoch 17/50
33/33 [==============================] - 2s 48ms/step - loss: 0.0420 - binary_accuracy: 0.9808 - val_loss: 0.2802 - val_binary_accuracy: 0.9066
Epoch 18/50
33/33 [==============================] - 2s 47ms/step - loss: 0.0416 - binary_accuracy: 0.9788 - val_loss: 0.1866 - val_binary_accuracy: 0.9286
Epoch 19/50
33/33 [==============================] - 2s 47ms/step - loss: 0.0273 - binary_accuracy: 0.9885 - val_loss: 0.2188 - val_binary_accuracy: 0.9286
Epoch 20/50
33/33 [==============================] - 2s 46ms/step - loss: 0.0121 - binary_accuracy: 0.9962 - val_loss: 0.2625 - val_binary_accuracy: 0.9341
Epoch 21/50
33/33 [==============================] - 2s 47ms/step - loss: 0.0053 - binary_accuracy: 1.0000 - val_loss: 0.2467 - val_binary_accuracy: 0.9341
Epoch 22/50
33/33 [==============================] - 2s 47ms/step - loss: 0.0047 - binary_accuracy: 1.0000 - val_loss: 0.4112 - val_binary_accuracy: 0.9011
Epoch 23/50
33/33 [==============================] - 2s 47ms/step - loss: 0.0020 - binary_accuracy: 1.0000 - val_loss: 0.4174 - val_binary_accuracy: 0.9121
Epoch 24/50
33/33 [==============================] - 2s 47ms/step - loss: 0.0047 - binary_accuracy: 0.9981 - val_loss: 0.4618 - val_binary_accuracy: 0.8901
Epoch 25/50
33/33 [==============================] - 2s 47ms/step - loss: 0.0091 - binary_accuracy: 0.9962 - val_loss: 0.4528 - val_binary_accuracy: 0.8901
Epoch 26/50
33/33 [==============================] - 2s 47ms/step - loss: 0.0026 - binary_accuracy: 1.0000 - val_loss: 0.5493 - val_binary_accuracy: 0.9011
Epoch 27/50
33/33 [==============================] - 2s 47ms/step - loss: 5.3023e-04 - binary_accuracy: 1.0000 - val_loss: 0.5254 - val_binary_accuracy: 0.9121
Epoch 28/50
33/33 [==============================] - 2s 47ms/step - loss: 0.0024 - binary_accuracy: 1.0000 - val_loss: 0.4076 - val_binary_accuracy: 0.9121
Epoch 29/50
33/33 [==============================] - 2s 48ms/step - loss: 0.0027 - binary_accuracy: 1.0000 - val_loss: 0.3792 - val_binary_accuracy: 0.9231
Epoch 30/50
33/33 [==============================] - 2s 49ms/step - loss: 0.0021 - binary_accuracy: 1.0000 - val_loss: 0.3682 - val_binary_accuracy: 0.9121
Epoch 31/50
33/33 [==============================] - 2s 50ms/step - loss: 0.0078 - binary_accuracy: 0.9962 - val_loss: 0.2743 - val_binary_accuracy: 0.9176
Epoch 32/50
33/33 [==============================] - 2s 49ms/step - loss: 0.0100 - binary_accuracy: 0.9981 - val_loss: 0.3407 - val_binary_accuracy: 0.9286
Epoch 33/50
33/33 [==============================] - 2s 49ms/step - loss: 0.0025 - binary_accuracy: 1.0000 - val_loss: 0.2869 - val_binary_accuracy: 0.9286
Epoch 34/50
33/33 [==============================] - 2s 49ms/step - loss: 9.9163e-04 - binary_accuracy: 1.0000 - val_loss: 0.5542 - val_binary_accuracy: 0.9011
Epoch 35/50
33/33 [==============================] - 2s 49ms/step - loss: 2.7336e-04 - binary_accuracy: 1.0000 - val_loss: 0.4618 - val_binary_accuracy: 0.9176
Epoch 36/50
33/33 [==============================] - 2s 47ms/step - loss: 4.9672e-04 - binary_accuracy: 1.0000 - val_loss: 0.6965 - val_binary_accuracy: 0.8681
Epoch 37/50
33/33 [==============================] - 2s 48ms/step - loss: 1.8629e-04 - binary_accuracy: 1.0000 - val_loss: 0.6010 - val_binary_accuracy: 0.9011
Epoch 38/50
33/33 [==============================] - 2s 48ms/step - loss: 3.0239e-04 - binary_accuracy: 1.0000 - val_loss: 0.4298 - val_binary_accuracy: 0.9121
Epoch 39/50
33/33 [==============================] - 2s 47ms/step - loss: 2.7331e-04 - binary_accuracy: 1.0000 - val_loss: 0.6392 - val_binary_accuracy: 0.8956
Epoch 40/50
33/33 [==============================] - 2s 47ms/step - loss: 3.8646e-04 - binary_accuracy: 1.0000 - val_loss: 0.5027 - val_binary_accuracy: 0.9121
Epoch 41/50
33/33 [==============================] - 2s 47ms/step - loss: 1.0202e-04 - binary_accuracy: 1.0000 - val_loss: 0.4992 - val_binary_accuracy: 0.9121
Epoch 42/50
33/33 [==============================] - 2s 47ms/step - loss: 9.3407e-05 - binary_accuracy: 1.0000 - val_loss: 0.5892 - val_binary_accuracy: 0.9121
Epoch 43/50
33/33 [==============================] - 2s 48ms/step - loss: 1.8462e-04 - binary_accuracy: 1.0000 - val_loss: 0.5105 - val_binary_accuracy: 0.9121
Epoch 44/50
33/33 [==============================] - 2s 47ms/step - loss: 3.2220e-04 - binary_accuracy: 1.0000 - val_loss: 0.7467 - val_binary_accuracy: 0.8846
Epoch 45/50
33/33 [==============================] - 2s 47ms/step - loss: 8.9869e-05 - binary_accuracy: 1.0000 - val_loss: 0.7730 - val_binary_accuracy: 0.8956
Epoch 46/50
33/33 [==============================] - 2s 47ms/step - loss: 3.5590e-04 - binary_accuracy: 1.0000 - val_loss: 1.0908 - val_binary_accuracy: 0.8681
Epoch 47/50
33/33 [==============================] - 2s 48ms/step - loss: 1.9287e-04 - binary_accuracy: 1.0000 - val_loss: 0.5829 - val_binary_accuracy: 0.9066
Epoch 48/50
33/33 [==============================] - 2s 47ms/step - loss: 3.2824e-04 - binary_accuracy: 1.0000 - val_loss: 0.4595 - val_binary_accuracy: 0.9231
Epoch 49/50
33/33 [==============================] - 2s 48ms/step - loss: 1.1844e-04 - binary_accuracy: 1.0000 - val_loss: 0.6372 - val_binary_accuracy: 0.9121
Epoch 50/50
33/33 [==============================] - 2s 48ms/step - loss: 5.9334e-05 - binary_accuracy: 1.0000 - val_loss: 0.6802 - val_binary_accuracy: 0.9066
###Markdown
Custom CNN Model with Image ManipulationsTo simulate an increase in the sample of images, you can apply image manipulation techniques: cropping, rotation, stretching, etc. Luckily, Keras has some handy functions for applying these techniques to our mountain and forest example; you should simply be able to modify the image generator for this problem. Check out these resources to help you get started: 1. [Keras `ImageGenerator` Class](https://keras.io/preprocessing/image/imagedatagenerator-class)2. [Building a powerful image classifier with very little data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html)
###Code
train_image_generator = ImageDataGenerator(rescale=1./255, rotation_range=90, width_shift_range=.5, height_shift_range=.5)
validation_image_generator = ImageDataGenerator(rescale=1./255, rotation_range=90, width_shift_range=.5, height_shift_range=.5)
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
model = Sequential()
model.add(Conv2D(32, (3,3), activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH, 3)))
model.add(MaxPooling2D((2,2)))
model.add(Dropout(0.2))
model.add(Conv2D(32, (3,3), activation='relu'))
model.add(Flatten())
model.add(Dense(16, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['binary_accuracy'])
model.fit(
train_data_gen,
epochs=30,
validation_data=val_data_gen
)
###Output
Epoch 1/30
33/33 [==============================] - 8s 244ms/step - loss: 1.0208 - binary_accuracy: 0.7404 - val_loss: 0.4560 - val_binary_accuracy: 0.7967
Epoch 2/30
33/33 [==============================] - 8s 239ms/step - loss: 0.4125 - binary_accuracy: 0.8519 - val_loss: 0.4879 - val_binary_accuracy: 0.7912
Epoch 3/30
33/33 [==============================] - 8s 237ms/step - loss: 0.3766 - binary_accuracy: 0.8596 - val_loss: 0.3422 - val_binary_accuracy: 0.8516
Epoch 4/30
33/33 [==============================] - 8s 236ms/step - loss: 0.3601 - binary_accuracy: 0.8846 - val_loss: 0.4648 - val_binary_accuracy: 0.7912
Epoch 5/30
33/33 [==============================] - 8s 237ms/step - loss: 0.3012 - binary_accuracy: 0.8769 - val_loss: 0.4205 - val_binary_accuracy: 0.7692
Epoch 6/30
33/33 [==============================] - 8s 237ms/step - loss: 0.3589 - binary_accuracy: 0.8462 - val_loss: 0.2952 - val_binary_accuracy: 0.8571
Epoch 7/30
33/33 [==============================] - 8s 237ms/step - loss: 0.3842 - binary_accuracy: 0.8442 - val_loss: 0.3543 - val_binary_accuracy: 0.8681
Epoch 8/30
33/33 [==============================] - 8s 237ms/step - loss: 0.3487 - binary_accuracy: 0.8731 - val_loss: 0.4123 - val_binary_accuracy: 0.8626
Epoch 9/30
33/33 [==============================] - 8s 240ms/step - loss: 0.2957 - binary_accuracy: 0.8731 - val_loss: 0.4818 - val_binary_accuracy: 0.7967
Epoch 10/30
33/33 [==============================] - 8s 238ms/step - loss: 0.4365 - binary_accuracy: 0.8058 - val_loss: 0.3334 - val_binary_accuracy: 0.8736
Epoch 11/30
33/33 [==============================] - 8s 237ms/step - loss: 0.3534 - binary_accuracy: 0.8712 - val_loss: 0.4295 - val_binary_accuracy: 0.8022
Epoch 12/30
33/33 [==============================] - 8s 238ms/step - loss: 0.2905 - binary_accuracy: 0.8942 - val_loss: 0.3389 - val_binary_accuracy: 0.8407
Epoch 13/30
33/33 [==============================] - 8s 238ms/step - loss: 0.3083 - binary_accuracy: 0.8942 - val_loss: 0.4375 - val_binary_accuracy: 0.8516
Epoch 14/30
33/33 [==============================] - 8s 237ms/step - loss: 0.3534 - binary_accuracy: 0.8750 - val_loss: 0.3506 - val_binary_accuracy: 0.8626
Epoch 15/30
33/33 [==============================] - 8s 238ms/step - loss: 0.3175 - binary_accuracy: 0.8827 - val_loss: 0.3015 - val_binary_accuracy: 0.8681
Epoch 16/30
33/33 [==============================] - 8s 243ms/step - loss: 0.2990 - binary_accuracy: 0.8865 - val_loss: 0.2701 - val_binary_accuracy: 0.8901
Epoch 17/30
33/33 [==============================] - 8s 238ms/step - loss: 0.2589 - binary_accuracy: 0.9154 - val_loss: 0.3590 - val_binary_accuracy: 0.8571
Epoch 18/30
33/33 [==============================] - 8s 241ms/step - loss: 0.2896 - binary_accuracy: 0.8654 - val_loss: 0.5332 - val_binary_accuracy: 0.8297
Epoch 19/30
33/33 [==============================] - 8s 240ms/step - loss: 0.3304 - binary_accuracy: 0.8519 - val_loss: 0.4603 - val_binary_accuracy: 0.8242
Epoch 20/30
33/33 [==============================] - 8s 244ms/step - loss: 0.3097 - binary_accuracy: 0.8904 - val_loss: 0.2969 - val_binary_accuracy: 0.8791
Epoch 21/30
33/33 [==============================] - 8s 246ms/step - loss: 0.2928 - binary_accuracy: 0.9019 - val_loss: 0.3788 - val_binary_accuracy: 0.8462
Epoch 22/30
33/33 [==============================] - 8s 238ms/step - loss: 0.2941 - binary_accuracy: 0.8885 - val_loss: 0.3163 - val_binary_accuracy: 0.8681
Epoch 23/30
33/33 [==============================] - 8s 237ms/step - loss: 0.3152 - binary_accuracy: 0.9019 - val_loss: 0.3602 - val_binary_accuracy: 0.8791
Epoch 24/30
33/33 [==============================] - 8s 238ms/step - loss: 0.2869 - binary_accuracy: 0.8885 - val_loss: 0.3071 - val_binary_accuracy: 0.8736
Epoch 25/30
33/33 [==============================] - 8s 238ms/step - loss: 0.3083 - binary_accuracy: 0.8558 - val_loss: 0.3575 - val_binary_accuracy: 0.8242
Epoch 26/30
33/33 [==============================] - 8s 238ms/step - loss: 0.3470 - binary_accuracy: 0.8500 - val_loss: 0.3486 - val_binary_accuracy: 0.8736
Epoch 27/30
33/33 [==============================] - 8s 238ms/step - loss: 0.2629 - binary_accuracy: 0.9115 - val_loss: 0.3170 - val_binary_accuracy: 0.8901
Epoch 28/30
33/33 [==============================] - 8s 240ms/step - loss: 0.3272 - binary_accuracy: 0.8808 - val_loss: 0.5631 - val_binary_accuracy: 0.7527
Epoch 29/30
33/33 [==============================] - 8s 240ms/step - loss: 0.2697 - binary_accuracy: 0.8962 - val_loss: 0.2943 - val_binary_accuracy: 0.8571
Epoch 30/30
33/33 [==============================] - 8s 238ms/step - loss: 0.2842 - binary_accuracy: 0.9019 - val_loss: 0.3013 - val_binary_accuracy: 0.8901
###Markdown
*Data Science Unit 4 Sprint 3 Assignment 2* Convolutional Neural Networks (CNNs) Assignment- Part 1: Pre-Trained Model- Part 2: Custom CNN Model- Part 3: CNN with Data AugmentationYou will apply three different CNN models to a binary image classification model using Keras. Classify images of Mountains (`./data/mountain/*`) and images of forests (`./data/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative (zero). |Mountain (+)|Forest (-)||---|---||||The problem is relatively difficult given that the sample is tiny: there are about 350 observations per class. This sample size might be something that you can expect with prototyping an image classification problem/solution at work. Get accustomed to evaluating several different possible models. Pre - Trained ModelLoad a pretrained network from Keras, [ResNet50](https://tfhub.dev/google/imagenet/resnet_v1_50/classification/1) - a 50 layer deep network trained to recognize [1000 objects](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt). Starting usage:```pythonimport numpy as npfrom tensorflow.keras.applications.resnet50 import ResNet50from tensorflow.keras.preprocessing import imagefrom tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictionsfrom tensorflow.keras.layers import Dense, GlobalAveragePooling2Dfrom tensorflow.keras.models import Model This is the functional APIresnet = ResNet50(weights='imagenet', include_top=False)```The `include_top` parameter in `ResNet50` will remove the fully connected layers from the ResNet model. The next step is to turn off the training of the ResNet layers. We want to use the learned parameters without updating them in future training passes. ```pythonfor layer in resnet.layers: layer.trainable = False```Using the Keras functional API, we will need to add additional fully connected layers to our model. When we removed the top layers, we removed all previous fully connected layers. In other words, we kept only the feature processing portions of our network. You can experiment with additional layers beyond what's listed here. The `GlobalAveragePooling2D` layer functions as a really fancy flatten function by taking the average of each of the last convolutional layer outputs (which is two dimensional still). ```pythonx = resnet.outputx = GlobalAveragePooling2D()(x) This layer is a really fancy flattenx = Dense(1024, activation='relu')(x)predictions = Dense(1, activation='sigmoid')(x)model = Model(resnet.input, predictions)```Your assignment is to apply the transfer learning above to classify images of Mountains (`./data/mountain/*`) and images of forests (`./data/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative (zero). Steps to complete assignment: 1. Load in Image Data into numpy arrays (`X`) 2. Create a `y` for the labels3. Train your model with pretrained layers from resnet4. Report your model's accuracy Load in DataCheck out [`skimage`](https://scikit-image.org/) for useful functions related to processing the images. In particular, check out the documentation for `skimage.io.imread_collection` and `skimage.transform.resize`.
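As a hedged sketch of the `skimage` route suggested above (assuming the images live under `./data/mountain/` and `./data/forest/` as RGB `.jpg` files and that everything is resized to the 224x224 input ResNet50 expects), the loading step could look roughly like this:
```python
import numpy as np
from skimage.io import imread_collection
from skimage.transform import resize

# Read every .jpg for each class into an image collection
mountains = imread_collection('./data/mountain/*.jpg')
forests = imread_collection('./data/forest/*.jpg')

# Resize each image to 224x224 (channels are preserved) and stack into X
X = np.asarray([resize(img, (224, 224)) for img in list(mountains) + list(forests)])

# Mountains are the positive class (1), forests the negative class (0)
y = np.concatenate([np.ones(len(mountains)), np.zeros(len(forests))])
```
The cells below build the ResNet model first and then load the images with Keras's `image.load_img`; either route produces the same kind of `X`/`y` arrays.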
###Code
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model # This is the functional API
resnet = ResNet50(weights='imagenet', include_top=False)
for layer in resnet.layers:
layer.trainable = False
x = resnet.output
x = GlobalAveragePooling2D()(x) # This layer is a really fancy flatten
x = Dense(1024, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
model = Model(resnet.input, predictions)
###Output
_____no_output_____
###Markdown
Instantiate Model
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from tensorflow.keras.preprocessing import image

IMG_SIZE = (224, 224)

def load_images(directory):
    """Read every .jpg in `directory` into an (n, 224, 224, 3) float array."""
    images = []
    for file in os.listdir(directory):
        if file.endswith('.jpg'):
            img = image.load_img(os.path.join(directory, file), target_size=IMG_SIZE)
            images.append(image.img_to_array(img))
    return np.asarray(images)

# Read in Data: mountains are the positive class (1), forests the negative class (0)
data_mountain = load_images('./data/mountain')
data_forest = load_images('./data/forest')

X = np.concatenate([data_mountain, data_forest])
y = np.concatenate([np.ones(len(data_mountain)), np.zeros(len(data_forest))])

# Sanity check: display the first forest image
plt.figure(figsize=(10, 10))
plt.imshow(data_forest[0].astype('uint8'))
plt.show()

# Normalize pixel values to be between 0 and 1
X = X / 255.0

# Hold out a validation split
train_images, test_images, train_labels, test_labels = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Compile Model (the previous cell builds the transfer model but does not compile it)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Fit Model
model.fit(train_images, train_labels, epochs=10,
          validation_data=(test_images, test_labels))
###Output
_____no_output_____
###Markdown
Fit Model Custom CNN ModelIn this step, write and train your own convolutional neural network using Keras. You can use any architecture that suits you as long as it has at least one convolutional and one pooling layer at the beginning of the network - you can add more if you want.
###Code
# Compile Model
# Fit Model
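# A minimal, hedged sketch of what this cell could contain, assuming `X` and `y`
# are the image and label arrays built in the loading step above; the exact
# architecture that produced the output below is not recorded in this notebook.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

cnn = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(224, 224, 3)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(64, activation='relu'),
    Dense(1, activation='sigmoid'),
])

# Compile with binary cross-entropy for the mountain-vs-forest task
cnn.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# A validation_split of 0.2 roughly matches the 561/141 split shown in the output below
cnn.fit(X, y, epochs=5, validation_split=0.2)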
###Output
Train on 561 samples, validate on 141 samples
Epoch 1/5
561/561 [==============================] - 18s 32ms/sample - loss: 0.2667 - accuracy: 0.9073 - val_loss: 0.1186 - val_accuracy: 0.9858
Epoch 2/5
561/561 [==============================] - 18s 32ms/sample - loss: 0.2046 - accuracy: 0.9073 - val_loss: 0.3342 - val_accuracy: 0.8511
Epoch 3/5
561/561 [==============================] - 18s 32ms/sample - loss: 0.1778 - accuracy: 0.9287 - val_loss: 0.2746 - val_accuracy: 0.8723
Epoch 4/5
561/561 [==============================] - 18s 32ms/sample - loss: 0.1681 - accuracy: 0.9323 - val_loss: 0.8487 - val_accuracy: 0.5957
Epoch 5/5
561/561 [==============================] - 18s 32ms/sample - loss: 0.1606 - accuracy: 0.9394 - val_loss: 0.3903 - val_accuracy: 0.8582
###Markdown
Custom CNN Model with Image Manipulations *This is a stretch goal, and it's relatively difficult*To simulate an increase in the sample of images, you can apply image manipulation techniques: cropping, rotation, stretching, etc. Luckily Keras has some handy functions for us to apply these techniques to our mountain and forest example. Check out these resources to help you get started: 1. [Keras `ImageGenerator` Class](https://keras.io/preprocessing/image/imagedatagenerator-class)2. [Building a powerful image classifier with very little data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html)
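A rough sketch of how the `ImageDataGenerator` class linked above could be wired up for this dataset; it assumes `X` and `y` from the earlier cells, and the augmentation parameters are illustrative rather than tuned.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=20,       # random rotations up to 20 degrees
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
    zoom_range=0.2,          # random zooming in/out
    horizontal_flip=True,    # mirror images left/right
    validation_split=0.2)    # hold out 20% for validation

train_gen = datagen.flow(X, y, batch_size=32, subset='training')
val_gen = datagen.flow(X, y, batch_size=32, subset='validation')
# model.fit(train_gen, validation_data=val_gen, epochs=10)
```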
###Code
# State Code for Image Manipulation Here
###Output
_____no_output_____
###Markdown
*Data Science Unit 4 Sprint 3 Assignment 2*

Convolutional Neural Networks (CNNs) Assignment

- Part 1: Pre-Trained Model
- Part 2: Custom CNN Model
- Part 3: CNN with Data Augmentation

You will apply three different CNN models to a binary image classification model using Keras. Classify images of Mountains (`./data/mountain/*`) and images of forests (`./data/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative (zero).

|Mountain (+)|Forest (-)|
|---|---|
|||

The problem is relatively difficult given that the sample is tiny: there are about 350 observations per class. This sample size might be something that you can expect with prototyping an image classification problem/solution at work. Get accustomed to evaluating several different possible models.

Pre-Trained Model

Load a pretrained network from Keras, [ResNet50](https://tfhub.dev/google/imagenet/resnet_v1_50/classification/1) - a 50 layer deep network trained to recognize [1000 objects](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt). Starting usage:

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model  # This is the functional API

resnet = ResNet50(weights='imagenet', include_top=False)
```

The `include_top` parameter in `ResNet50` will remove the fully connected layers from the ResNet model. The next step is to turn off the training of the ResNet layers. We want to use the learned parameters without updating them in future training passes.

```python
for layer in resnet.layers:
    layer.trainable = False
```

Using the Keras functional API, we will need to add additional fully connected layers to our model. When we removed the top layers, we removed all previous fully connected layers. In other words, we kept only the feature processing portions of our network. You can experiment with additional layers beyond what's listed here. The `GlobalAveragePooling2D` layer functions as a really fancy flatten function by taking the average of each of the last convolutional layer outputs (which is still two dimensional).

```python
x = resnet.output
x = GlobalAveragePooling2D()(x)  # This layer is a really fancy flatten
x = Dense(1024, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
model = Model(resnet.input, predictions)
```

Your assignment is to apply the transfer learning above to classify images of Mountains (`./data/mountain/*`) and images of forests (`./data/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative (zero).

Steps to complete assignment:
1. Load in Image Data into numpy arrays (`X`)
2. Create a `y` for the labels
3. Train your model with pretrained layers from resnet
4. Report your model's accuracy

Load in Data

Check out [`skimage`](https://scikit-image.org/) for useful functions related to processing the images. In particular check out the documentation for `skimage.io.imread_collection` and `skimage.transform.resize`.
###Code
import pandas as pd
import numpy as np
import glob
#setting the filepaths for all the images for forests and mountains
mountains = glob.glob("data/mountain/*.jpg")
forests = glob.glob("data/forest/*.jpg")
!conda install -c conda-forge -y tensorflow
#adding the image processing function from keras
from tensorflow.keras.preprocessing import image
#resnet uses 224 by 224 images, so we need to resize images to that spec
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
###Output
/home/rob/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/rob/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/rob/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/rob/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/rob/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/rob/anaconda3/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/home/rob/anaconda3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/home/rob/anaconda3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/home/rob/anaconda3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/home/rob/anaconda3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/home/rob/anaconda3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/home/rob/anaconda3/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
###Markdown
 Instantiate Model
###Code
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Dropout
from tensorflow.keras.models import Model # This is the functional API
from skimage import io, transform
resnet = ResNet50(input_shape=(224, 224, 3),weights='imagenet', include_top=False)
#preventing the model that is being imported from being retrainable
for layer in resnet.layers:
layer.trainable = False
#taking the output layer and setting it to variable X
#then applying the GlobalAveragePooling flattening function to it
#then applying a dense layer on top of that
#then creating a predictions layer with 1 node as our output
#finally instantiating a model using our previous imported model as an input, and the prediction variable as the output
x = resnet.output
x = GlobalAveragePooling2D()(x) # This layer is a really fancy flatten
x = Dense(1024, activation='relu')(x)
predictions = Dense(1, activation='sigmoid')(x)
model = Model(inputs=resnet.input, outputs=predictions)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
def process_img_to_array(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
return x
import os
#creating a list of filepaths for all the images of mountains and forests
forest = [x for x in os.listdir('./data/forest') if x[-3:] == 'jpg']
mountains = [x for x in os.listdir('./data/mountain') if x[-3:] == 'jpg']
#assigning forrests to 0, and mountains to 1.
zeros = np.zeros(len(forest))
ones = np.ones(len(mountains))
#appending the two variables to a y variable
y = np.append(zeros, ones)
y.shape
#reshaping the array so there is one column
y = y.reshape(-1,1)
y.shape
#creating X variables of numpy arrays for each image in the filepath.
data = []
for i in ['forest', 'mountain']:
for file in os.listdir('./data/'+i):
#only images
if file[-3:] == 'jpg':
#filepath for images
path = os.path.join(f'./data/{i}/' + file)
#transforming the image
img = process_img_path(path)
#converting to array
x = image.img_to_array(img)
#expanding dimensions, preprocessing, and appending to the data variable
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
data.append(x)
#reshaping the data array to input into the developed model
X =np.asarray(data).reshape(len(data),224,224,3)
###Output
_____no_output_____
###Markdown
Fit Model
###Code
model.fit(X, y, epochs=10, batch_size=10, validation_split=0.1)
#lets test an image:
###Output
_____no_output_____
###Markdown
###Code
!wget https://ca.slack-edge.com/T4JUEB3ME-ULJ9DTDKL-246bfe8730a9-512
test = 'T4JUEB3ME-ULJ9DTDKL-246bfe8730a9-512'
# The downloaded file has no .jpg extension, so skip the extension check
# (the `file` variable at this point is a stale value from the earlier loop)
# and load the image directly.
img = process_img_path("./" + test)
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
# predict expects an array batch rather than a Python list
model.predict(x)
model.evaluate(X, y)
###Output
702/702 [==============================] - 3s 4ms/sample - loss: 0.0128 - acc: 0.9943
###Markdown
Custom CNN Model
###Code
# Compile Model
# Fit Model
###Output
Train on 561 samples, validate on 141 samples
Epoch 1/5
561/561 [==============================] - 18s 32ms/sample - loss: 0.2667 - accuracy: 0.9073 - val_loss: 0.1186 - val_accuracy: 0.9858
Epoch 2/5
561/561 [==============================] - 18s 32ms/sample - loss: 0.2046 - accuracy: 0.9073 - val_loss: 0.3342 - val_accuracy: 0.8511
Epoch 3/5
561/561 [==============================] - 18s 32ms/sample - loss: 0.1778 - accuracy: 0.9287 - val_loss: 0.2746 - val_accuracy: 0.8723
Epoch 4/5
561/561 [==============================] - 18s 32ms/sample - loss: 0.1681 - accuracy: 0.9323 - val_loss: 0.8487 - val_accuracy: 0.5957
Epoch 5/5
561/561 [==============================] - 18s 32ms/sample - loss: 0.1606 - accuracy: 0.9394 - val_loss: 0.3903 - val_accuracy: 0.8582
|
Atividade_02_JoaoDenilson.ipynb | ###Markdown
 FIC Course - Data Science with Python - Basic Python Types Activity Exercise List I
###Code
# Exercise 1 - Display the numbers from 1 to 10 on the screen. Create a list to store these numbers.
numeros = []
for x in range(1,11):
print(x)
numeros.append(x)
print(numeros)
# Exercise 2 - Create a list of 5 objects and display it on the screen.
veiculos = ["moto", "carro", "onibus", "caminhao", "trem"]
for x in veiculos:
print(x)
# Exercise 3 - Create three strings and concatenate the three into a fourth string
str1 = 'João'
str2 = 'Denilson'
str3 = 'Santos'
str4 = str1+' '+str2+' '+str3
print(str4)
# Exercise 4 - Create a tuple with the following elements: 1, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5, 6 and then use the count function of the
# tuple object to check how many times the number 3 appears in the tuple
tupla = (1, 1, 1, 2, 2, 3, 3, 3, 4, 5, 5, 6)
count = tupla.count(3)
print(count)
# Exercise 5 - Create a dictionary without values and then display it on the screen.
dicionario = {}
print(dicionario)
# Exercise 6 - Create a dictionary containing the following information: 3 keys and 3 values. After creating it, display this dictionary on the screen.
dicionario = {"veiculo": "moto", "Dono": "Joao Denilson", "rodas": 2 }
print("Dicionário:",dicionario)
# Exercise 7 - Add two more elements to the dictionary created in Exercise 6 and display it on the screen.
update_dic = {"cidade": "Cedro-CE"}
dicionario.update(update_dic)
print("Update dicionario",dicionario)
# Exercise 8 - Create a dictionary with 4 keys and 4 values. One of the values must be a list of 3 numeric elements.
# Display the dictionary on the screen.
list_Numeros =[1,2,3]
discionario2 = {"animal": "cobra", "espécie": "réptil", "venenosa": "sim", "numeros": list_Numeros
}
print(discionario2)
# Exercise 9 - Create a list of 5 elements. The first element must be a string,
# the second a tuple of 3 elements, the third a dictionary with 3 keys and 3 values,
# the fourth element a value of type float,
# and the fifth element a value of type integer.
# Display the list on the screen.
tupla = (1,2,3)
discionario = {"mora: ": "no Brasil",
"estado": "do Ceara",
"municipio": " de Cedro"}
lista = ["joao", tupla, discionario, 2.2, 8]
print(lista)
# Exercise 10 - Analyze the string shown below and print only the characters from position 1 to 18.
frase = 'Infelizmente esse ano não haverá são joão. :('
tam = len(frase)
for x in range(1,tam):
if(x < 18):
print(frase[x])
###Output
n
f
e
l
i
z
m
e
n
t
e
e
s
s
e
|
examples/notebooks/WWW/optimal_power_gaussian_channel_BV4.62.ipynb | ###Markdown
Optimal Power and Bandwidth Allocation in a Gaussian Channel

by Robert Gowers, Roger Hill, Sami Al-Izzi, Timothy Pollington and Keith Briggs

from Boyd and Vandenberghe, Convex Optimization, exercise 4.62 page 210

Consider a system in which a central node transmits messages to $n$ receivers. Each receiver channel $i \in \{1,...,n\}$ has a transmit power $P_i$ and bandwidth $W_i$. A fraction of the total power and bandwidth is allocated to each channel, such that $\sum_{i=1}^{n}P_i = P_{tot}$ and $\sum_{i=1}^{n}W_i = W_{tot}$. Given some utility function of the bit rate of each channel, $u_i(R_i)$, the objective is to maximise the total utility $U = \sum_{i=1}^{n}u_i(R_i)$.

Assuming that each channel is corrupted by Gaussian white noise, the signal to noise ratio is given by $\beta_i P_i/W_i$. This means that the bit rate is given by:

$R_i = \alpha_i W_i \log_2(1+\beta_iP_i/W_i)$

where $\alpha_i$ and $\beta_i$ are known positive constants.

One of the simplest utility functions is the data rate itself, which also gives a convex objective function.

The optimisation problem can thus be formulated as:

minimise $\sum_{i=1}^{n}-\alpha_i W_i \log_2(1+\beta_iP_i/W_i)$

subject to $\sum_{i=1}^{n}P_i = P_{tot} \quad \sum_{i=1}^{n}W_i = W_{tot} \quad P \succeq 0 \quad W \succeq 0$

Although this is a convex optimisation problem, it must be rewritten in DCP form since $P_i$ and $W_i$ are variables and DCP prohibits dividing one variable by another directly. In order to rewrite the problem in DCP format, we utilise the $\texttt{kl_div}$ function in CVXPY, which calculates the Kullback-Leibler divergence.

$\text{kl_div}(x,y) = x\log(x/y)-x+y$

$-R_i = \text{kl_div}(\alpha_i W_i, \alpha_i(W_i+\beta_iP_i)) - \alpha_i\beta_iP_i$

Now that the objective function is in DCP form, the problem can be solved using CVXPY.
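As a quick check on the last identity (an added derivation, with $x=\alpha_i W_i$ and $y=\alpha_i(W_i+\beta_iP_i)$): $\text{kl_div}(\alpha_i W_i, \alpha_i(W_i+\beta_iP_i)) = \alpha_i W_i\log\frac{W_i}{W_i+\beta_iP_i} + \alpha_i\beta_iP_i$, so subtracting $\alpha_i\beta_iP_i$ leaves $-\alpha_i W_i\log(1+\beta_iP_i/W_i)$. Note that kl_div uses the natural logarithm while $R_i$ above is defined with $\log_2$, so this objective equals $-R_i\ln 2$; the constant factor $\ln 2$ does not change the optimal allocation, only the scale of the reported utility value.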
###Code
#!/usr/bin/env python3
# @author: R. Gowers, S. Al-Izzi, T. Pollington, R. Hill & K. Briggs
import numpy as np
import cvxpy as cp
def optimal_power(n, a_val, b_val, P_tot=1.0, W_tot=1.0):
# Input parameters: α and β are constants from R_i equation
n = len(a_val)
if n != len(b_val):
print('alpha and beta vectors must have same length!')
return 'failed', np.nan, np.nan, np.nan
P = cp.Variable(shape=n)
W = cp.Variable(shape=n)
alpha = cp.Parameter(shape=n)
beta = cp.Parameter(shape=n)
alpha.value = np.array(a_val)
beta.value = np.array(b_val)
# This function will be used as the objective so must be DCP;
# i.e. elementwise multiplication must occur inside kl_div,
# not outside otherwise the solver does not know if it is DCP...
R = cp.kl_div(cp.multiply(alpha, W),
cp.multiply(alpha, W + cp.multiply(beta, P))) - \
cp.multiply(alpha, cp.multiply(beta, P))
objective = cp.Minimize(cp.sum(R))
constraints = [P>=0.0,
W>=0.0,
cp.sum(P)-P_tot==0.0,
cp.sum(W)-W_tot==0.0]
prob = cp.Problem(objective, constraints)
prob.solve()
return prob.status, -prob.value, P.value, W.value
###Output
_____no_output_____
###Markdown
ExampleConsider the case where there are 5 channels, $n=5$, $\alpha = \beta = (2.0,2.2,2.4,2.6,2.8)$, $P_{\text{tot}} = 0.5$ and $W_{\text{tot}}=1$.
###Code
np.set_printoptions(precision=3)
n = 5 # number of receivers in the system
a_val = np.arange(10,n+10)/(1.0*n) # α
b_val = np.arange(10,n+10)/(1.0*n) # β
P_tot = 0.5
W_tot = 1.0
status, utility, power, bandwidth = optimal_power(n, a_val, b_val, P_tot, W_tot)
print('Status: {}'.format(status))
print('Optimal utility value = {:.4g}'.format(utility))
print('Optimal power level:\n{}'.format(power))
print('Optimal bandwidth:\n{}'.format(bandwidth))
###Output
Status: optimal
Optimal utility value = 2.451
Optimal power level:
[1.151e-09 1.708e-09 2.756e-09 5.788e-09 5.000e-01]
Optimal bandwidth:
[3.091e-09 3.955e-09 5.908e-09 1.193e-08 1.000e+00]
###Markdown
Optimal Power and Bandwidth Allocation in a Gaussian Channel

by Robert Gowers, Roger Hill, Sami Al-Izzi, Timothy Pollington and Keith Briggs

from Boyd and Vandenberghe, Convex Optimization, exercise 4.62 page 210

Consider a system in which a central node transmits messages to $n$ receivers. Each receiver channel $i \in \{1,...,n\}$ has a transmit power $P_i$ and bandwidth $W_i$. A fraction of the total power and bandwidth is allocated to each channel, such that $\sum_{i=1}^{n}P_i = P_{tot}$ and $\sum_{i=1}^{n}W_i = W_{tot}$. Given some utility function of the bit rate of each channel, $u_i(R_i)$, the objective is to maximise the total utility $U = \sum_{i=1}^{n}u_i(R_i)$.

Assuming that each channel is corrupted by Gaussian white noise, the signal to noise ratio is given by $\beta_i P_i/W_i$. This means that the bit rate is given by:

$R_i = \alpha_i W_i \log_2(1+\beta_iP_i/W_i)$

where $\alpha_i$ and $\beta_i$ are known positive constants.

One of the simplest utility functions is the data rate itself, which also gives a convex objective function.

The optimisation problem can thus be formulated as:

minimise $\sum_{i=1}^{n}-\alpha_i W_i \log_2(1+\beta_iP_i/W_i)$

subject to $\sum_{i=1}^{n}P_i = P_{tot} \quad \sum_{i=1}^{n}W_i = W_{tot} \quad P \succeq 0 \quad W \succeq 0$

Although this is a convex optimisation problem, it must be rewritten in DCP form since $P_i$ and $W_i$ are variables and DCP prohibits dividing one variable by another directly. In order to rewrite the problem in DCP format, we utilise the $\texttt{kl_div}$ function in CVXPY, which calculates the Kullback-Leibler divergence.

$\text{kl_div}(x,y) = x\log(x/y)-x+y$

$-R_i = \text{kl_div}(\alpha_i W_i, \alpha_i(W_i+\beta_iP_i)) - \alpha_i\beta_iP_i$

Now that the objective function is in DCP form, the problem can be solved using CVXPY.
###Code
#!/usr/bin/env python3
# @author: R. Gowers, S. Al-Izzi, T. Pollington, R. Hill & K. Briggs
import numpy as np
import cvxpy as cvx
def optimal_power(n, a_val, b_val, P_tot=1.0, W_tot=1.0):
# Input parameters: α and β are constants from R_i equation
n=len(a_val)
if n!=len(b_val):
print('alpha and beta vectors must have same length!')
return 'failed',np.nan,np.nan,np.nan
P=cvx.Variable(n)
W=cvx.Variable(n)
alpha=cvx.Parameter(n)
beta =cvx.Parameter(n)
alpha.value=np.array(a_val)
beta.value =np.array(b_val)
# This function will be used as the objective so must be DCP;
# i.e. elementwise multiplication must occur inside kl_div, not outside otherwise the solver does not know if it is DCP...
R=cvx.kl_div(cvx.mul_elemwise(alpha, W),
cvx.mul_elemwise(alpha, W + cvx.mul_elemwise(beta, P))) - \
cvx.mul_elemwise(alpha, cvx.mul_elemwise(beta, P))
objective=cvx.Minimize(cvx.sum_entries(R))
constraints=[P>=0.0,
W>=0.0,
cvx.sum_entries(P)-P_tot==0.0,
cvx.sum_entries(W)-W_tot==0.0]
prob=cvx.Problem(objective, constraints)
prob.solve()
return prob.status,-prob.value,P.value,W.value
###Output
_____no_output_____
###Markdown
ExampleConsider the case where there are 5 channels, $n=5$, $\alpha = \beta = (2.0,2.2,2.4,2.6,2.8)$, $P_{\text{tot}} = 0.5$ and $W_{\text{tot}}=1$.
###Code
np.set_printoptions(precision=3)
n=5 # number of receivers in the system
a_val=np.arange(10,n+10)/(1.0*n) # α
b_val=np.arange(10,n+10)/(1.0*n) # β
P_tot=0.5
W_tot=1.0
status,utility,power,bandwidth=optimal_power(n,a_val,b_val,P_tot,W_tot)
print('Status: ',status)
print('Optimal utility value = %.4g '%utility)
print('Optimal power level:\n', power)
print('Optimal bandwidth:\n', bandwidth)
###Output
Status = optimal
Optimal utility value = 2.451
Optimal power level:
[[ 1.150e-09]
[ 1.706e-09]
[ 2.754e-09]
[ 5.785e-09]
[ 5.000e-01]]
Optimal bandwidth:
[[ 3.091e-09]
[ 3.956e-09]
[ 5.910e-09]
[ 1.193e-08]
[ 1.000e+00]]
###Markdown
Optimal Power and Bandwidth Allocation in a Gaussian Channel

by Robert Gowers, Roger Hill, Sami Al-Izzi, Timothy Pollington and Keith Briggs

from Boyd and Vandenberghe, Convex Optimization, exercise 4.62 page 210

Consider a system in which a central node transmits messages to $n$ receivers. Each receiver channel $i \in \{1,...,n\}$ has a transmit power $P_i$ and bandwidth $W_i$. A fraction of the total power and bandwidth is allocated to each channel, such that $\sum_{i=1}^{n}P_i = P_{tot}$ and $\sum_{i=1}^{n}W_i = W_{tot}$. Given some utility function of the bit rate of each channel, $u_i(R_i)$, the objective is to maximise the total utility $U = \sum_{i=1}^{n}u_i(R_i)$.

Assuming that each channel is corrupted by Gaussian white noise, the signal to noise ratio is given by $\beta_i P_i/W_i$. This means that the bit rate is given by:

$R_i = \alpha_i W_i \log_2(1+\beta_iP_i/W_i)$

where $\alpha_i$ and $\beta_i$ are known positive constants.

One of the simplest utility functions is the data rate itself, which also gives a convex objective function.

The optimisation problem can thus be formulated as:

minimise $\sum_{i=1}^{n}-\alpha_i W_i \log_2(1+\beta_iP_i/W_i)$

subject to $\sum_{i=1}^{n}P_i = P_{tot} \quad \sum_{i=1}^{n}W_i = W_{tot} \quad P \succeq 0 \quad W \succeq 0$

Although this is a convex optimisation problem, it must be rewritten in DCP form since $P_i$ and $W_i$ are variables and DCP prohibits dividing one variable by another directly. In order to rewrite the problem in DCP format, we utilise the $\texttt{kl_div}$ function in CVXPY, which calculates the Kullback-Leibler divergence.

$\text{kl_div}(x,y) = x\log(x/y)-x+y$

$-R_i = \text{kl_div}(\alpha_i W_i, \alpha_i(W_i+\beta_iP_i)) - \alpha_i\beta_iP_i$

Now that the objective function is in DCP form, the problem can be solved using CVXPY.
###Code
#!/usr/bin/env python3
# @author: R. Gowers, S. Al-Izzi, T. Pollington, R. Hill & K. Briggs
import numpy as np
import cvxpy as cvx
def optimal_power(n, a_val, b_val, P_tot=1.0, W_tot=1.0):
# Input parameters: α and β are constants from R_i equation
n=len(a_val)
if n!=len(b_val):
print('alpha and beta vectors must have same length!')
return 'failed',np.nan,np.nan,np.nan
P=cvx.Variable(shape=(n,1))
W=cvx.Variable(shape=(n,1))
alpha=cvx.Parameter(shape=(n,1))
beta =cvx.Parameter(shape=(n,1))
alpha.value=np.array(a_val)
beta.value =np.array(b_val)
# This function will be used as the objective so must be DCP;
# i.e. elementwise multiplication must occur inside kl_div, not outside otherwise the solver does not know if it is DCP...
R=cvx.kl_div(cvx.multiply(alpha, W),
cvx.multiply(alpha, W + cvx.multiply(beta, P))) - \
cvx.multiply(alpha, cvx.multiply(beta, P))
objective=cvx.Minimize(cvx.sum(R))
constraints=[P>=0.0,
W>=0.0,
cvx.sum(P)-P_tot==0.0,
cvx.sum(W)-W_tot==0.0]
prob=cvx.Problem(objective, constraints)
prob.solve()
return prob.status,-prob.value,P.value,W.value
###Output
_____no_output_____
###Markdown
ExampleConsider the case where there are 5 channels, $n=5$, $\alpha = \beta = (2.0,2.2,2.4,2.6,2.8)$, $P_{\text{tot}} = 0.5$ and $W_{\text{tot}}=1$.
###Code
np.set_printoptions(precision=3)
n=5 # number of receivers in the system
a_val=np.arange(10,n+10)/(1.0*n) # α
b_val=np.arange(10,n+10)/(1.0*n) # β
P_tot=0.5
W_tot=1.0
status,utility,power,bandwidth=optimal_power(n,a_val,b_val,P_tot,W_tot)
print('Status: ',status)
print('Optimal utility value = %.4g '%utility)
print('Optimal power level:\n', power)
print('Optimal bandwidth:\n', bandwidth)
###Output
Status = optimal
Optimal utility value = 2.451
Optimal power level:
[[ 1.150e-09]
[ 1.706e-09]
[ 2.754e-09]
[ 5.785e-09]
[ 5.000e-01]]
Optimal bandwidth:
[[ 3.091e-09]
[ 3.956e-09]
[ 5.910e-09]
[ 1.193e-08]
[ 1.000e+00]]
|
docs/notebooks/examples/2D_simulation(crystalline)/plot_9_shifting-d.ipynb | ###Markdown
MCl₂.2D₂O, ²H (I=1) Shifting-d echo²H (I=1) 2D NMR CSA-Quad 1st order correlation spectrum simulation. The following is a static shifting-*d* echo NMR correlation simulation of $\text{MCl}_2\cdot 2\text{D}_2\text{O}$ crystalline solid, where $M \in [\text{Cu}, \text{Ni}, \text{Co}, \text{Fe}, \text{Mn}]$. The tensor parameters for the simulation and the corresponding spectrum are reported by Walder `et al.` [f1]_.
###Code
import matplotlib.pyplot as plt
from mrsimulator import Simulator, SpinSystem, Site
from mrsimulator.methods import Method2D
from mrsimulator import signal_processing as sp
from mrsimulator.spin_system.tensors import SymmetricTensor
from mrsimulator.method.event import SpectralEvent
from mrsimulator.method.spectral_dimension import SpectralDimension
###Output
_____no_output_____
###Markdown
Generate the site and spin system objects.
###Code
site_Ni = Site(
isotope="2H",
isotropic_chemical_shift=-97, # in ppm
shielding_symmetric=SymmetricTensor(
zeta=-551, # in ppm
eta=0.12,
alpha=62 * 3.14159 / 180, # in rads
beta=114 * 3.14159 / 180, # in rads
gamma=171 * 3.14159 / 180, # in rads
),
quadrupolar=SymmetricTensor(Cq=77.2e3, eta=0.9), # Cq in Hz
)
site_Cu = Site(
isotope="2H",
isotropic_chemical_shift=51, # in ppm
shielding_symmetric=SymmetricTensor(
zeta=146, # in ppm
eta=0.84,
alpha=95 * 3.14159 / 180, # in rads
beta=90 * 3.14159 / 180, # in rads
gamma=0 * 3.14159 / 180, # in rads
),
quadrupolar=SymmetricTensor(Cq=118.2e3, eta=0.8), # Cq in Hz
)
site_Co = Site(
isotope="2H",
isotropic_chemical_shift=215, # in ppm
shielding_symmetric=SymmetricTensor(
zeta=-1310, # in ppm
eta=0.23,
alpha=180 * 3.14159 / 180, # in rads
beta=90 * 3.14159 / 180, # in rads
gamma=90 * 3.14159 / 180, # in rads
),
quadrupolar=SymmetricTensor(Cq=114.6e3, eta=0.95), # Cq in Hz
)
site_Fe = Site(
isotope="2H",
isotropic_chemical_shift=101, # in ppm
shielding_symmetric=SymmetricTensor(
zeta=-1187, # in ppm
eta=0.4,
alpha=122 * 3.14159 / 180, # in rads
beta=90 * 3.14159 / 180, # in rads
gamma=90 * 3.14159 / 180, # in rads
),
quadrupolar=SymmetricTensor(Cq=114.2e3, eta=0.98), # Cq in Hz
)
site_Mn = Site(
isotope="2H",
isotropic_chemical_shift=145, # in ppm
shielding_symmetric=SymmetricTensor(
zeta=-1236, # in ppm
eta=0.23,
alpha=136 * 3.14159 / 180, # in rads
beta=90 * 3.14159 / 180, # in rads
gamma=90 * 3.14159 / 180, # in rads
),
quadrupolar=SymmetricTensor(Cq=1.114e5, eta=1.0), # Cq in Hz
)
spin_systems = [
SpinSystem(sites=[s], name=f"{n}Cl$_2$.2D$_2$O")
for s, n in zip(
[site_Ni, site_Cu, site_Co, site_Fe, site_Mn], ["Ni", "Cu", "Co", "Fe", "Mn"]
)
]
###Output
_____no_output_____
###Markdown
Use the generic 2D method, `Method2D`, to generate a shifting-d echo method. Thereported shifting-d 2D sequence is a correlation of the shielding frequencies to thefirst-order quadrupolar frequencies. Here, we create a correlation method using the:attr:`~mrsimulator.method.event.freq_contrib` attribute, which acts as a switchfor including the frequency contributions from interaction during the event.In the following method, we assign the ``["Quad1_2"]`` and``["Shielding1_0", "Shielding1_2"]`` as the value to the ``freq_contrib`` key. The*Quad1_2* is an enumeration for selecting the first-order second-rank quadrupolarfrequency contributions. *Shielding1_0* and *Shielding1_2* are enumerations forthe first-order shielding with zeroth and second-rank tensor contributions,respectively. See `freq_contrib_api` for details.
###Code
shifting_d = Method2D(
name="Shifting-d",
channels=["2H"],
magnetic_flux_density=9.395, # in T
spectral_dimensions=[
SpectralDimension(
count=512,
spectral_width=2.5e5, # in Hz
label="Quadrupolar frequency",
events=[
SpectralEvent(
rotor_frequency=0,
transition_query={"P": [-1]},
freq_contrib=["Quad1_2"],
)
],
),
SpectralDimension(
count=256,
spectral_width=2e5, # in Hz
reference_offset=2e4, # in Hz
label="Paramagnetic shift",
events=[
SpectralEvent(
rotor_frequency=0,
transition_query={"P": [-1]},
freq_contrib=["Shielding1_0", "Shielding1_2"],
)
],
),
],
)
# A graphical representation of the method object.
plt.figure(figsize=(5, 2.5))
shifting_d.plot()
plt.show()
###Output
_____no_output_____
###Markdown
Create the Simulator object, add the method and spin system objects, and run the simulation.
###Code
sim = Simulator(spin_systems=spin_systems, methods=[shifting_d])
# Configure the simulator object. For non-coincidental tensors, set the value of the
# `integration_volume` attribute to `hemisphere`.
sim.config.integration_volume = "hemisphere"
sim.config.decompose_spectrum = "spin_system" # simulate spectra per spin system
sim.run()
###Output
_____no_output_____
###Markdown
Add post-simulation signal processing.
###Code
data = sim.methods[0].simulation
processor = sp.SignalProcessor(
operations=[
# Gaussian convolution along both dimensions.
sp.IFFT(dim_index=(0, 1)),
sp.apodization.Gaussian(FWHM="9 kHz", dim_index=0), # along dimension 0
sp.apodization.Gaussian(FWHM="9 kHz", dim_index=1), # along dimension 1
sp.FFT(dim_index=(0, 1)),
]
)
processed_data = processor.apply_operations(data=data).real
###Output
_____no_output_____
###Markdown
The plot of the simulation. Because we configured the simulator object to simulate spectrum per spin system, the following data is a CSDM object containing five simulations (dependent variables). Let's visualize the first data corresponding to $\text{NiCl}_2\cdot 2 \text{D}_2\text{O}$.
###Code
data_Ni = data.split()[0]
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
cb = ax.imshow(data_Ni / data_Ni.max(), aspect="auto", cmap="gist_ncar_r")
plt.title(None)
plt.colorbar(cb)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
The plot of the simulation after signal processing.
###Code
proc_data_Ni = processed_data.split()[0]
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
cb = ax.imshow(proc_data_Ni / proc_data_Ni.max(), cmap="gist_ncar_r", aspect="auto")
plt.title(None)
plt.colorbar(cb)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Let's plot all the simulated datasets.
###Code
fig, ax = plt.subplots(
2, 5, sharex=True, sharey=True, figsize=(12, 5.5), subplot_kw={"projection": "csdm"}
)
for i, data_obj in enumerate([data, processed_data]):
for j, datum in enumerate(data_obj.split()):
ax[i, j].imshow(datum / datum.max(), aspect="auto", cmap="gist_ncar_r")
ax[i, j].invert_xaxis()
ax[i, j].invert_yaxis()
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
MCl₂.2D₂O, ²H (I=1) Shifting-d echo²H (I=1) 2D NMR CSA-Quad 1st order correlation spectrum simulation. The following is a static shifting-*d* echo NMR correlation simulation of $\text{MCl}_2\cdot 2\text{D}_2\text{O}$ crystalline solid, where $M \in [\text{Cu}, \text{Ni}, \text{Co}, \text{Fe}, \text{Mn}]$. The tensor parameters for the simulation and the corresponding spectrum are reported by Walder `et al.` [f1]_.
###Code
import matplotlib.pyplot as plt
from mrsimulator import Simulator, SpinSystem, Site
from mrsimulator.methods import Method2D
from mrsimulator import signal_processing as sp
###Output
_____no_output_____
###Markdown
Generate the site and spin system objects.
###Code
site_Ni = Site(
isotope="2H",
isotropic_chemical_shift=-97, # in ppm
shielding_symmetric={
"zeta": -551, # in ppm
"eta": 0.12,
"alpha": 62 * 3.14159 / 180, # in rads
"beta": 114 * 3.14159 / 180, # in rads
"gamma": 171 * 3.14159 / 180, # in rads
},
quadrupolar={"Cq": 77.2e3, "eta": 0.9}, # Cq in Hz
)
site_Cu = Site(
isotope="2H",
isotropic_chemical_shift=51, # in ppm
shielding_symmetric={
"zeta": 146, # in ppm
"eta": 0.84,
"alpha": 95 * 3.14159 / 180, # in rads
"beta": 90 * 3.14159 / 180, # in rads
"gamma": 0 * 3.14159 / 180, # in rads
},
quadrupolar={"Cq": 118.2e3, "eta": 0.86}, # Cq in Hz
)
site_Co = Site(
isotope="2H",
isotropic_chemical_shift=215, # in ppm
shielding_symmetric={
"zeta": -1310, # in ppm
"eta": 0.23,
"alpha": 180 * 3.14159 / 180, # in rads
"beta": 90 * 3.14159 / 180, # in rads
"gamma": 90 * 3.14159 / 180, # in rads
},
quadrupolar={"Cq": 114.6e3, "eta": 0.95}, # Cq in Hz
)
site_Fe = Site(
isotope="2H",
isotropic_chemical_shift=101, # in ppm
shielding_symmetric={
"zeta": -1187, # in ppm
"eta": 0.4,
"alpha": 122 * 3.14159 / 180, # in rads
"beta": 90 * 3.14159 / 180, # in rads
"gamma": 90 * 3.14159 / 180, # in rads
},
quadrupolar={"Cq": 114.2e3, "eta": 0.98}, # Cq in Hz
)
site_Mn = Site(
isotope="2H",
isotropic_chemical_shift=145, # in ppm
shielding_symmetric={
"zeta": -1236, # in ppm
"eta": 0.23,
"alpha": 136 * 3.14159 / 180, # in rads
"beta": 90 * 3.14159 / 180, # in rads
"gamma": 90 * 3.14159 / 180, # in rads
},
quadrupolar={"Cq": 1.114e5, "eta": 1.0}, # Cq in Hz
)
spin_systems = [
SpinSystem(sites=[s], name=f"{n}Cl$_2$.2D$_2$O")
for s, n in zip(
[site_Ni, site_Cu, site_Co, site_Fe, site_Mn], ["Ni", "Cu", "Co", "Fe", "Mn"]
)
]
###Output
_____no_output_____
###Markdown
Use the generic 2D method, `Method2D`, to generate a shifting-d echo method. Thereported shifting-d 2D sequence is a correlation of the shielding frequencies to thefirst-order quadrupolar frequencies. Here, we create a correlation method using the:attr:`~mrsimulator.method.event.freq_contrib` attribute, which acts as a switchfor including the frequency contributions from interaction during the event.In the following method, we assign the ``["Quad1_2"]`` and``["Shielding1_0", "Shielding1_2"]`` as the value to the ``freq_contrib`` key. The*Quad1_2* is an enumeration for selecting the first-order second-rank quadrupolarfrequency contributions. *Shielding1_0* and *Shielding1_2* are enumerations forthe first-order shielding with zeroth and second-rank tensor contributions,respectively. See `freq_contrib_api` for details.
###Code
shifting_d = Method2D(
name="Shifting-d",
channels=["2H"],
magnetic_flux_density=9.395, # in T
spectral_dimensions=[
{
"count": 512,
"spectral_width": 2.5e5, # in Hz
"label": "Quadrupolar frequency",
"events": [
{
"rotor_frequency": 0,
"transition_query": {"P": [-1]},
"freq_contrib": ["Quad1_2"],
}
],
},
{
"count": 256,
"spectral_width": 2e5, # in Hz
"reference_offset": 2e4, # in Hz
"label": "Paramagnetic shift",
"events": [
{
"rotor_frequency": 0,
"transition_query": {"P": [-1]},
"freq_contrib": ["Shielding1_0", "Shielding1_2"],
}
],
},
],
)
# A graphical representation of the method object.
plt.figure(figsize=(5, 2.5))
shifting_d.plot()
plt.show()
###Output
_____no_output_____
###Markdown
Create the Simulator object, add the method and spin system objects, and run the simulation.
###Code
sim = Simulator(spin_systems=spin_systems, methods=[shifting_d])
# Configure the simulator object. For non-coincidental tensors, set the value of the
# `integration_volume` attribute to `hemisphere`.
sim.config.integration_volume = "hemisphere"
sim.config.decompose_spectrum = "spin_system" # simulate spectra per spin system
sim.run()
###Output
_____no_output_____
###Markdown
Add post-simulation signal processing.
###Code
data = sim.methods[0].simulation
processor = sp.SignalProcessor(
operations=[
# Gaussian convolution along both dimensions.
sp.IFFT(dim_index=(0, 1)),
sp.apodization.Gaussian(FWHM="9 kHz", dim_index=0), # along dimension 0
sp.apodization.Gaussian(FWHM="9 kHz", dim_index=1), # along dimension 1
sp.FFT(dim_index=(0, 1)),
]
)
processed_data = processor.apply_operations(data=data).real
###Output
_____no_output_____
###Markdown
The plot of the simulation. Because we configured the simulator object to simulate spectrum per spin system, the following data is a CSDM object containing five simulations (dependent variables). Let's visualize the first data corresponding to $\text{NiCl}_2\cdot 2 \text{D}_2\text{O}$.
###Code
data_Ni = data.split()[0]
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
cb = ax.imshow(data_Ni / data_Ni.max(), aspect="auto", cmap="gist_ncar_r")
plt.title(None)
plt.colorbar(cb)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
The plot of the simulation after signal processing.
###Code
proc_data_Ni = processed_data.split()[0]
plt.figure(figsize=(4.25, 3.0))
ax = plt.subplot(projection="csdm")
cb = ax.imshow(proc_data_Ni / proc_data_Ni.max(), cmap="gist_ncar_r", aspect="auto")
plt.title(None)
plt.colorbar(cb)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Let's plot all the simulated datasets.
###Code
fig, ax = plt.subplots(
2, 5, sharex=True, sharey=True, figsize=(12, 5.5), subplot_kw={"projection": "csdm"}
)
for i, data_obj in enumerate([data, processed_data]):
for j, datum in enumerate(data_obj.split()):
ax[i, j].imshow(datum / datum.max(), aspect="auto", cmap="gist_ncar_r")
ax[i, j].invert_xaxis()
ax[i, j].invert_yaxis()
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
Customer segments/customer_segments.ipynb | ###Markdown
Machine Learning Engineer Nanodegree Unsupervised Learning Project: Creating Customer Segments Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `'TODO'` statement. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting StartedIn this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in *monetary units*) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.The dataset for this project can be found on the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Wholesale+customers). For the purposes of this project, the features `'Channel'` and `'Region'` will be excluded in the analysis — with focus instead on the six product categories recorded for customers.Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
###Code
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
print "Dataset could not be loaded. Is the dataset missing?"
data.head()
###Output
_____no_output_____
###Markdown
Data ExplorationIn this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: **'Fresh'**, **'Milk'**, **'Grocery'**, **'Frozen'**, **'Detergents_Paper'**, and **'Delicatessen'**. Consider what each category represents in terms of products you could purchase.
###Code
# Display a description of the dataset
display(data.describe(np.linspace(0.9,1,11)))
###Output
_____no_output_____
###Markdown
Implementation: Selecting SamplesTo get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add **three** indices of your choice to the `indices` list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
###Code
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [95,181,0]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)
###Output
Chosen samples of wholesale customers dataset:
###Markdown
Question 1Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers. * What kind of establishment (customer) could each of the three samples you've chosen represent?**Hint:** Examples of establishments include places like markets, cafes, delis, wholesale retailers, among many others. Avoid using names for establishments, such as saying *"McDonalds"* when describing a sample customer as a restaurant. You can use the mean values for reference to compare your samples with. The mean values are as follows:* Fresh: 12000.2977* Milk: 5796.2* Grocery: 3071.9* Detergents_paper: 2881.4* Delicatessen: 1524.8Knowing this, how do your samples compare? Does that help in driving your insight into what kind of establishments they might be? **Answer:****For cust 0**:* Fresh, Frozen, and detergent paper products are very low that it was below Q1, even the fresh products purchase was the minimum of all Fresh products purchases* Delicat. and milk purchases values are above Q1 * Only the grocery that is below Q3 * From these data, I think it must be a very small market that sells grocery. **For cust 1**:* All it's purchases are above Q3, even it's purchases of fresh is the maximum * I think it's something like a grand market that has hundreds of customers every day (to sell all these fresh products) **For cust 2**:* only Frozen products are below the Q1 * milk purchases are above Q3 * All other products are below Q3 * I think it's a cafe, where it uses a lot of milk, it serves some food but it's not a restaurant as it consumes a small number of frozen products Implementation: Feature RelevanceOne interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.In the code block below, you will need to implement the following: - Assign `new_data` a copy of the data by removing a feature of your choice using the `DataFrame.drop` function. - Use `sklearn.cross_validation.train_test_split` to split the dataset into training and testing sets. - Use the removed feature as your target label. Set a `test_size` of `0.25` and set a `random_state`. - Import a decision tree regressor, set a `random_state`, and fit the learner to the training data. - Report the prediction score of the testing set using the regressor's `score` function.
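The next cell scores a single dropped feature; to compare relevance across all six categories, the same procedure can be repeated in a loop. The sketch below reuses the same split and regressor settings.

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

# Repeat the drop-one-feature experiment for every column:
# a high R^2 means that feature is largely predictable from the others.
for feature in data.columns:
    features = data.drop([feature], axis=1)
    X_tr, X_te, y_tr, y_te = train_test_split(features, data[feature],
                                              test_size=0.25, random_state=0)
    r2 = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr).score(X_te, y_te)
    print "{}: R^2 = {:.3f}".format(feature, r2)
```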
###Code
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = data.drop(['Detergents_Paper'],axis =1)
target= data['Detergents_Paper']
# TODO: Split the data into training and testing sets(0.25) using the given feature as the target
# Set a random state.
X_train, X_test, y_train, y_test = train_test_split(new_data,target, test_size=0.25, random_state = 0)
# TODO: Create a decision tree regressor and fit it to the training set
regressor = DecisionTreeRegressor(random_state=0).fit(X_train,y_train)
# TODO: Report the score of the prediction using the testing set
score = regressor.score(X_test,y_test)
score
###Output
_____no_output_____
###Markdown
Question 2* Which feature did you attempt to predict? * What was the reported prediction score? * Is this feature necessary for identifying customers' spending habits?**Hint:** The coefficient of determination, `R^2`, is scored between 0 and 1, with 1 being a perfect fit. A negative `R^2` implies the model fails to fit the data. If you get a low score for a particular feature, that lends us to believe that that feature point is hard to predict using the other features, thereby making it an important feature to consider when considering relevance. **Answer:*** Detergents_Paper feature* The reported prediction score was 0.73* I think so. It has a high R2 score, so there's a correlation between the model and the predicted feature Visualize Feature DistributionsTo get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
###Code
# Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha = 0.3, figsize = (20,10), diagonal = 'kde');
###Output
C:\Python27\lib\site-packages\ipykernel_launcher.py:2: FutureWarning: pandas.scatter_matrix is deprecated. Use pandas.plotting.scatter_matrix instead
###Markdown
Question 3* Using the scatter matrix as a reference, discuss the distribution of the dataset, specifically talk about the normality, outliers, large number of data points near 0 among others. If you need to separate out some of the plots individually to further accentuate your point, you may do so as well.* Are there any pairs of features which exhibit some degree of correlation? * Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? * How is the data for those features distributed?**Hint:** Is the data normally distributed? Where do most of the data points lie? You can use [corr()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corr.html) to get the feature correlations and then visualize them using a [heatmap](http://seaborn.pydata.org/generated/seaborn.heatmap.html)(the data that would be fed into the heatmap would be the correlation values, for eg: `data.corr()`) to gain further insight.
###Code
import seaborn as sns
import matplotlib.pyplot as plt
hmap=data.corr()
_,ax=plt.subplots(figsize=(12,10))
cmap=sns.diverging_palette(220,10,as_cmap=True)
sns.heatmap(hmap,cmap=cmap,ax=ax,square=True,annot=True)
fig=plt.figure()
for i, var_name in enumerate(data.columns):
ax=fig.add_subplot(2,3,i+1)
data[var_name].hist(bins=10,ax=ax,figsize=(15,8))
ax.set_title(var_name+" Distribution")
fig.tight_layout() # Improves appearance a bit.
plt.show()
###Output
_____no_output_____
###Markdown
**Answer:*** By looking at the histogram of every feature and their distribution with each other on the scatter grid, also by revising the statistics summary table we have made before, all the data are right-skewed since most of the values above the third quartile is very large and if we searched for outliers locating the values above $Q3+1.5*IQR$ and below $Q1-1.5*IQR$ without normalization we would find that |-|Fresh|Milk|Grocery|Frozen|Detergents_paper|Delicatessen||-|-|-|--|-----|---|---||Q1|3127|1533|2153|742|256|408||Q3|16933|7190|10655|3554|3922|1820||IQR|13806|5657|8502|2812|3666|1412||1.5*IQR|20709|8485.5|12753|4218|5499|2118||outliers|above 37642|above 15675.5|above 23408|above 7772|above 9421|above 3938||amount of data|about 5%|about 6%|about 5%|about 10%|about 7%|about 6%| * from the table above we can figure there are outliers in the right end of the distribution and that's why there are a lot of points around the zero, the scale of the distribution is very large so all points below the third quartile appear to be close to zero * It's clear from the scatter grid and clearer from the heatmap that there is a strong correlation between grocery and detergents_paper, and high correlation between Milk Vs. Grocery and Milk Vs. Detergents_paper* It makes me suspicious as it seems that there aren't a strong correlation between it and any other product, I thought that a special product like Delicatessen is something that used in restaurants and cafes, but I was wrong* Those features have a distribution that almost linear Data PreprocessingIn this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful. Implementation: Feature ScalingIf data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most [often appropriate](http://econbrowser.com/archives/2014/02/use-of-logarithms-in-economics) to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a [Box-Cox test](http://scipy.github.io/devdocs/generated/scipy.stats.boxcox.html), which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.In the code block below, you will need to implement the following: - Assign a copy of the data to `log_data` after applying logarithmic scaling. Use the `np.log` function for this. - Assign a copy of the sample data to `log_samples` after applying logarithmic scaling. Again, use `np.log`.
###Code
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)
# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)
# Produce a scatter matrix for each pair of newly-transformed features
pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
###Output
C:\Python27\lib\site-packages\ipykernel_launcher.py:8: FutureWarning: pandas.scatter_matrix is deprecated. Use pandas.plotting.scatter_matrix instead
###Markdown
ObservationAfter applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
###Code
# Display the log-transformed sample data
display(log_samples)
###Output
_____no_output_____
###Markdown
Implementation: Outlier DetectionDetecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use [Tukey's Method for identifying outliers](http://datapigtechnologies.com/blog/index.php/highlighting-outliers-in-your-data-with-the-tukey-method/): An *outlier step* is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.In the code block below, you will need to implement the following: - Assign the value of the 25th percentile for the given feature to `Q1`. Use `np.percentile` for this. - Assign the value of the 75th percentile for the given feature to `Q3`. Again, use `np.percentile`. - Assign the calculation of an outlier step for the given feature to `step`. - Optionally remove data points from the dataset by adding indices to the `outliers` list.**NOTE:** If you choose to remove any outliers, ensure that the sample data does not contain any of these points! Once you have performed this implementation, the dataset will be stored in the variable `good_data`.
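Question 4 below asks which points are outliers in more than one feature; rather than collecting them by eye, a small helper along these lines (a sketch reusing the same Tukey fences on `log_data`) can count them programmatically.

```python
from collections import Counter

# Count, for every data point, how many features flag it with the Tukey fences
outlier_counts = Counter()
for feature in log_data.keys():
    Q1, Q3 = np.percentile(log_data[feature], [25, 75])
    step = 1.5 * (Q3 - Q1)
    flagged = log_data[~((log_data[feature] >= Q1 - step) &
                         (log_data[feature] <= Q3 + step))]
    outlier_counts.update(flagged.index)

multi_feature = sorted(idx for idx, n in outlier_counts.items() if n > 1)
print "Outliers in more than one feature:", multi_feature
```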
###Code
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data[feature],25)
# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data[feature],75)
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = (Q3-Q1)*1.5
# Display the outliers
print "Data points considered outliers for the feature '{}':".format(feature)
#display(step,Q1,Q3)
display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
# OPTIONAL: Select the indices for data points you wish to remove
outliers = [65, 66, 75, 128, 154]  # indices flagged as outliers in more than one feature (see the answer below)
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
###Output
Data points considered outliers for the feature 'Fresh':
###Markdown
Question 4
* Are there any data points considered outliers for more than one feature based on the definition above?
* Should these data points be removed from the dataset?
* If any data points were added to the `outliers` list to be removed, explain why.

** Hint: ** If you have datapoints that are outliers in multiple categories think about why that may be and if they warrant removal. Also note how k-means is affected by outliers and whether or not this plays a factor in your analysis of whether or not to remove them.

**After normalization**

|-|Step|Q1|Q3|Lower edge|Higher edge|outliers|percentage|
|--|---|--|---|--|---|---|---|
|Fresh|2.533|8.05|9.74|5.52|12.27|16|3.6%|
|Milk|2.32|7.33|8.88|5.01|11.2|4|0.9%|
|Grocery|2.4|7.67|9.27|5.27|11.67|2|0.45%|
|Frozen|2.35|6.61|8.18|4.26|10.53|10|2.27%|
|Detergents_paper|4.09|5.55|8.27|1.46|12.36|2|0.45%|
|Delicatessen|2.24|6.01|7.51|3.77|9.75|14|3.18%|

**Answer:**
* There are some data points that are considered outliers in more than one feature, and these were removed:

|no.|category|
|----|---|
|65|Fresh & Frozen|
|66|Fresh & Delicatessen|
|75|Grocery & Detergents_paper|
|128|Fresh & Delicatessen|
|154|Milk & Grocery & Delicatessen|

* Every data point is a valuable piece of information even if it's an outlier: here we can determine customer bands, where the lower outliers could represent the band of customers who usually don't buy a lot (something to keep in mind when dealing with them again), and the higher outliers represent the grand customers that buy a lot and form the highest customer band in purchasing.
* I started with the idea of eliminating the outliers only in the features with a low outlier percentage (less than 1%), thinking that as the percentage of outliers increases their effect would decrease.
* Another idea was to eliminate only the non-redundant outliers (those that aren't outliers in more than one feature).
* But when I recalled how K-means works and how it is affected by outliers (K-means will move its centroids toward the outliers instead of the mean of the cluster), I concluded that outliers falling under more than one category will have a more powerful effect on the centroids than the single-feature outliers.
* So I removed those outliers, trading the information they give for accuracy.

**note**: can someone explain this point further, because I'm not 100% sure of my answer.

Feature Transformation
In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.
###Code
log_samples.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3 entries, 0 to 2
Data columns (total 6 columns):
Fresh 3 non-null float64
Milk 3 non-null float64
Grocery 3 non-null float64
Frozen 3 non-null float64
Detergents_Paper 3 non-null float64
Delicatessen 3 non-null float64
dtypes: float64(6)
memory usage: 180.0 bytes
###Markdown
Implementation: PCANow that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the `good_data` to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the *explained variance ratio* of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.In the code block below, you will need to implement the following: - Import `sklearn.decomposition.PCA` and assign the results of fitting PCA in six dimensions with `good_data` to `pca`. - Apply a PCA transformation of `log_samples` using `pca.transform`, and assign the results to `pca_samples`.
###Code
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
from sklearn.decomposition import PCA
pca = PCA()
pca.fit(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = vs.pca_results(good_data, pca)
###Output
_____no_output_____
###Markdown
Question 5* How much variance in the data is explained* **in total** *by the first and second principal component? * How much variance in the data is explained by the first four principal components? * Using the visualization provided above, talk about each dimension and the cumulative variance explained by each, stressing upon which features are well represented by each dimension(both in terms of positive and negative variance explained). Discuss what the first four dimensions best represent in terms of customer spending.**Hint:** A positive increase in a specific dimension corresponds with an *increase* of the *positive-weighted* features and a *decrease* of the *negative-weighted* features. The rate of increase or decrease is based on the individual feature weights.
###Code
display(pca_results)
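# Optional check (not part of the original template): print the cumulative
# explained variance ratio reported by the fitted PCA object.
print "Cumulative explained variance:", np.cumsum(pca.explained_variance_ratio_)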
###Output
_____no_output_____
###Markdown
**Answer:**
* The first two principal components explain 0.7101 of the variance in total.
* The first four principal components explain 0.9319 of the variance in total.
* **First dim.**: the first dimension explains about 44% of the variation in the data, nearly half. It is inversely correlated with Detergents_Paper; second in correlation strength is Grocery, then Milk, which also have negative weights and so are inversely correlated. These three features are therefore correlated with each other, and when one of them decreases, all of them tend to decrease.
* **Second dim.**: the second dimension explains about 27% of the variation, nearly a quarter, so by using both dimensions we have a cumulative explained variance of 71%. Fresh products are most negatively correlated with the variance represented by the second dimension, followed by Frozen and Delicatessen; those three features drive the variation explained by the second dimension, so when they increase, the second dimension decreases.
* **Third dim.**: the third dimension explains another 12%, bringing the explained variance so far to 83%. There is a high negative correlation of Fresh products with this dimension, a high positive correlation of Delicatessen, a positive correlation of Frozen, and a negative one of Detergents_Paper, so it represents those four features in different proportions.
* **Fourth dim.**: finally, this dimension adds another 10%, making the cumulative explained variance 93%. There is a very high positive correlation of Frozen, a high negative correlation of Delicatessen, a positive correlation of Detergents_Paper, and a negative one of Fresh.

Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
###Code
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
###Output
_____no_output_____
###Markdown
Implementation: Dimensionality ReductionWhen using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the *cumulative explained variance ratio* is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.In the code block below, you will need to implement the following: - Assign the results of fitting PCA in two dimensions with `good_data` to `pca`. - Apply a PCA transformation of `good_data` using `pca.transform`, and assign the results to `reduced_data`. - Apply a PCA transformation of `log_samples` using `pca.transform`, and assign the results to `pca_samples`.
###Code
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2).fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
###Output
_____no_output_____
###Markdown
ObservationRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
###Code
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
###Output
_____no_output_____
###Markdown
Visualizing a BiplotA biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case `Dimension 1` and `Dimension 2`). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.Run the code cell below to produce a biplot of the reduced-dimension data.
###Code
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
###Output
_____no_output_____
###Markdown
Observation
Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point in the lower right corner of the figure will likely correspond to a customer that spends a lot on `'Milk'`, `'Grocery'` and `'Detergents_Paper'`, but not so much on the other product categories. From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?

Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.

Question 6
* What are the advantages to using a K-Means clustering algorithm?
* What are the advantages to using a Gaussian Mixture Model clustering algorithm?
* Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?

** Hint: ** Think about the differences between hard clustering and soft clustering and which would be appropriate for our dataset.

**Answer:**

**K-Means main advantages**
* It's very simple to implement and it uses relatively few computing resources.
* If the data is normally distributed it will be very effective, since that is the setting in which K-means works best.

**GMM main advantages**
* It's a soft clustering technique that works with the probability it assigns to each point of belonging to a certain cluster, so points in the middle that we aren't sure about keep a probabilistic assignment instead of being forced hard into one cluster.
* It's more efficient in dealing with non-normal data.
* It has less sensitivity to outliers.

**Observation**
* The data has outliers and there isn't a single criterion to cluster the data: every customer is interested in different products in different proportions, so we need to figure out the probability of each data point belonging to each cluster.

**I will go with GMM**

Implementation: Creating Clusters
Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known *a priori*, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's *silhouette coefficient*. The [silhouette coefficient](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html) for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the *mean* silhouette coefficient provides for a simple scoring method of a given clustering.
In the code block below, you will need to implement the following:
 - Fit a clustering algorithm to the `reduced_data` and assign it to `clusterer`.
 - Predict the cluster for each data point in `reduced_data` using `clusterer.predict` and assign them to `preds`.
 - Find the cluster centers using the algorithm's respective attribute and assign them to `centers`.
 - Predict the cluster for each sample data point in `pca_samples` and assign them `sample_preds`.
- Import `sklearn.metrics.silhouette_score` and calculate the silhouette score of `reduced_data` against `preds`. - Assign the silhouette score to `score` and print the result.
###Code
# TODO: Apply your clustering algorithm of choice to the reduced data
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score
from sklearn.cluster import KMeans
clusterer = GaussianMixture(n_components=2,random_state=0).fit(reduced_data)
# TODO: Predict the cluster for each data point
preds = clusterer.predict(reduced_data)
# TODO: Find the cluster centers
centers = clusterer.means_
# TODO: Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data,preds)
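# Optional sketch (not part of the original template): repeat the fit for several
# cluster counts and compare the mean silhouette coefficients, as reported in the
# answer below.
for n in range(2, 10):
    gmm = GaussianMixture(n_components=n, random_state=0).fit(reduced_data)
    print silhouette_score(reduced_data, gmm.predict(reduced_data)), "number of clusters is", n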
###Output
_____no_output_____
###Markdown
Question 7
* Report the silhouette score for several cluster numbers you tried.
* Of these, which number of clusters has the best silhouette score?

**Answer:**

|Number of clusters|random_state = 0|random_state = 1|random_state = 2|
|---|---|---|---|
|2|0.419498932943|0.419498932943|0.419498932943|
|3|0.299649783025|0.407239079648|0.404207647731|
|4|0.326175272763|0.296633680381|0.268366007617|
|5|0.264592980821|0.302078577041|0.298377814088|
|6|0.307539334736|0.290372137931|0.30755522733|
|7|0.3335068059|0.312967837138|0.337041406946|
|8|0.3314659256|0.323622138892|0.328205028225|
|9|0.257957166318|0.305100787162|0.300977126895|

**The best is when we used 2 clusters**

Cluster Visualization
Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
###Code
# Display the results of the clustering from implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples)
###Output
_____no_output_____
###Markdown
Implementation: Data RecoveryEach cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the *averages* of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to *the average customer of that segment*. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.In the code block below, you will need to implement the following: - Apply the inverse transform to `centers` using `pca.inverse_transform` and assign the new centers to `log_centers`. - Apply the inverse function of `np.log` to `log_centers` using `np.exp` and assign the true centers to `true_centers`.
###Code
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
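# Optional sketch (not part of the original template): show the overall data mean
# next to the segment centers, as referenced in the answer below.
display(np.round(true_centers.append(data.describe().loc[['mean']])))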
###Output
_____no_output_____
###Markdown
Question 8
* Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project (specifically looking at the mean values for the various feature points). What set of establishments could each of the customer segments represent?

**Hint:** A customer who is assigned to `'Cluster X'` should best identify with the establishments represented by the feature set of `'Segment X'`. Think about what each segment represents in terms of their values for the feature points chosen. Reference these values with the mean values to get some perspective into what kind of establishment they represent.

**Answer:**
For more insight we should put the "mean" row next to segments 0 and 1:

|-|Fresh|Milk|Grocery|Frozen|Detergents_Paper|Delicatessen|
|--|---|--|---|---|----|---|
|**Segment 0**|3398|7658|12029|850|4626|920|
|**Segment 1**|9041|2128|2780|2083|356|739|
|**Mean**|12000|5796|7951|3071|2881|1524|

* For the Fresh column, it's obvious that it has outliers, since the overall mean is larger than both segment centers; we can say that Segment 1 takes more Fresh products than S0.
* For the Milk column, S0 consumes more than S1.
* For the Grocery column, S0 buys far more grocery products than S1.
* For the Frozen column, there are outliers here too that push the mean up; also, S1 buys more than S0.
* For Detergents_Paper, S0 buys more than S1, and the gap between the two segments is very large.
* For Delicatessen, there are a lot of outliers, and S0 buys more than S1, but not by much.

**Conclusion**: we can say that S0 customers are supermarkets, since they buy a lot of Grocery (by far the most), Detergents_Paper and also Milk, while S1 could be cafes and restaurants or any place that serves Fresh and Frozen products and uses delicatessen in the food.

Question 9
* For each sample point, which customer segment from **Question 8** best represents it?
* Are the predictions for each sample point consistent with this?

Run the code block below to find which cluster each sample point is predicted to be.
###Code
# Display the predictions
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred
###Output
Sample point 0 predicted to be in Cluster 0
Sample point 1 predicted to be in Cluster 0
Sample point 2 predicted to be in Cluster 0
###Markdown
**Answer:**
* Point 0 is an outlier in Fresh products (it bought 3), so I think this will dominate and place it in cluster 0 (which we will name supermarkets); it also bought a lot of Grocery and Milk.
* Point 1 is a bit odd: it bought a lot of everything, but its largest purchases were Fresh products, which distinguish the second cluster (call it restaurants), and its next-largest purchases were Frozen and Delicatessen, which also point to the restaurants cluster. _I don't know why it is predicted as cluster 0; maybe a mistake._
* Point 2 also made moderate purchases of all products except Frozen, and its largest purchases were Milk, so it is supposed to be a supermarket, yes.

Conclusion
In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the ***customer segments***, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which *segment* that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the ***customer segments*** to a hidden variable present in the data, to see whether the clustering identified certain relationships.

Question 10
Companies will often run [A/B tests](https://en.wikipedia.org/wiki/A/B_testing) when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively.
* How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?

**Hint:** Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?

**Answer:**
The distributor can use the segments above to determine which of them need more frequent service: the restaurants cluster needs more frequent deliveries because Fresh products can't simply be stockpiled, while the supermarkets cluster doesn't need such frequent service, so 3 days a week may be enough for it.

To apply an A/B test, we need two groups: a control group that doesn't face any change, and a variation group to which the change is applied. We can't just split the whole dataset in two, because the customers are different and we don't know how each kind would be represented in the groups. To solve this, we split every segment (we have only 2 here) in two, and each group takes half from each segment (we could use random sampling or any other sampling technique; it won't matter much, since we know half of each group comes from one cluster and half from the other). We leave the control group with the usual 5-days-a-week service, give the variation group 3 days a week, and compare the feedback from the two groups. To see which cluster is happy with the new service, we could run 2 A/B tests, one for each cluster, so every cluster gets its own pair of groups and the tests run independently as usual.
Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a ***customer segment*** to determine the most appropriate delivery service.
* How can the wholesale distributor label the new customers using only their estimated product spending and the **customer segment** data?

**Hint:** A supervised learner could be used to train on the original customers. What would be the target variable?

**Answer:**
By using the data together with the cluster labels produced by our clustering as the target variable, we can train a supervised model, such as logistic regression, on the original customers and then classify the new customers into one of our predefined segments (a minimal sketch follows after this cell).

Visualizing Underlying Distributions
At the beginning of this project, it was discussed that the `'Channel'` and `'Region'` features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the `'Channel'` feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.
Run the code block below to see how each data point is labeled either `'HoReCa'` (Hotel/Restaurant/Cafe) or `'Retail'` in the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
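Before running that visualization, here is a minimal sketch of the supervised-labeling idea from the answer above. It assumes scikit-learn's `LogisticRegression` (the choice of classifier is illustrative, not prescribed by the project), uses the cluster assignments `preds` as the target variable, and the PCA-reduced data as the input features.
###Code
# Hedged sketch (not part of the original template): train a supervised
# classifier on the engineered 'customer segment' labels so that new
# customers can later be assigned to a segment.
from sklearn.linear_model import LogisticRegression
segment_classifier = LogisticRegression()
segment_classifier.fit(reduced_data, preds)
# A new customer's estimated spending would first be log-scaled with np.log
# and projected with the fitted `pca` before calling segment_classifier.predict.
###Output
_____no_output_____
###Markdown
With such a classifier in place, we can now look at the underlying distributions: the next cell reintroduces the `'Channel'` labels described above.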
###Code
# Display the clustering results based on 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples)
###Output
_____no_output_____ |
sphinx/datascience/source/gradient-descent.ipynb | ###Markdown
Gradient descent
[Gradient descent](https://en.wikipedia.org/wiki/Gradient_descent) is an optimization algorithm to find the minimum of some function. Typically, in machine learning, the function is a [loss function](https://en.wikipedia.org/wiki/Loss_function), which essentially captures the difference between the true and predicted values. Gradient descent has many applications in machine learning and may be applied to (or is the heart and soul of) many machine learning approaches such as finding weights for
* [regression](https://en.wikipedia.org/wiki/Regression_analysis),
* [support vector machines](https://en.wikipedia.org/wiki/Support_vector_machine), and
* [deep learning (artificial neural networks)](https://en.wikipedia.org/wiki/Artificial_neural_network).

This notebook aims to show the mechanics of gradient descent with no tears (in an easy way).

Simple linear regression
Let's say we have a simple linear regression.

$y = b + wx$

where,
* $b$ is the y-intercept,
* $w$ is the coefficient,
* $x$ is an independent variable value, and
* $y$ is the predicted, dependent variable value.

Now, we want to estimate $w$. There are many ways to estimate $w$, however, we want to use gradient descent to do so (we will not go into the other ways to estimate $w$). The first thing we have to do is to be able to formulate a loss function. Let's introduce some convenience notation. Assume $\hat{y}$ is what the model predicts as follows.

$\hat{y} = f(x) = b + wx$

Note that $\hat{y}$ is just an approximation of the true value $y$. We can define the loss function as follows.

$L(\hat{y}, y) = (y - \hat{y})^2 = (y - (b + wx))^2$

The loss function essentially measures the error of the model; the difference between what it predicts $\hat{y}$ and the true value $y$. Note that we square the difference between $y$ and $\hat{y}$ as a convenience to get rid of the influence of negative differences. This loss function tells us how much error there is in each of our predictions given our model (the model includes the linear relationship and weight). Since typically we are making several predictions, we want an overall estimation of the error.

$L(\hat{Y}, Y) = \frac{1}{N} \sum{(y - \hat{y})^2} = \frac{1}{N} \sum{(y - (b + wx))^2}$

But how does this loss function really guide us to learn or estimate $w$? The best way to understand how the loss function guides us in estimating or learning the weight $w$ is visually. The loss function, in this case, is convex (U-shaped). Notice that the functional form of the loss function is just a squared function not unlike the following.

$y = f(x) = x^2$

If we are asked to find the minimum of such a function, we already know that the lowest point for $y = x^2$ is $y = 0$, and substituting $y = 0$ into the equation, $x = 0$ is the input for which we find the minimum for the function. Another way would be to take the derivative of $f(x)$, $f'(x) = 2x$, and find the value $x$ for which $f'(x) = 0$.
However, our situation is slightly different because we need to find $b$ and $w$ to minimize the loss function. The simplest way to find the minimum of the loss function would be to exhaustively iterate through every combination of $b$ and $w$ and see which pair gives us the minimum value. But such an approach is computationally expensive.
A smart way would be to take the first order partial derivatives of $L$ with respect to $b$ and $w$, and search for values that simultaneously drive both partial derivatives to zero.

$\frac{\partial L}{\partial b} = \frac{2}{N} \sum{-(y - (b + wx))}$

$\frac{\partial L}{\partial w} = \frac{2}{N} \sum{-x (y - (b + wx))}$

Remember that the first order derivative gives us the slope of the tangent line to a point on the curve. At this point, the gradient descent algorithm comes into play to help us by using those slopes to move towards the minimum. We already have the training data composed of $N$ pairs of $(y, x)$, but we need to find a pair $b$ and $w$ that minimizes the loss when plugged into the partial derivative functions. The gradient descent algorithm is as follows.
* given
  * $(X, Y)$ data of $N$ observations,
  * $b$ initial guess,
  * $w$ initial guess, and
  * $\alpha$ learning rate
* repeat until convergence
  * $\nabla_b = 0$
  * $\nabla_w = 0$
  * for each $(x, y)$ in $(X, Y)$
    * $\nabla_b = \nabla_b - \frac{2}{N} (y - (b + wx))$
    * $\nabla_w = \nabla_w - \frac{2}{N} x (y - (b + wx))$
  * $b = b - \alpha \nabla_b$
  * $w = w - \alpha \nabla_w$

Batch gradient descent
Batch gradient descent learns the parameters by looking at all the data for each iteration.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import networkx as nx
np.random.seed(37)
num_samples = 100
x = 2.0 + np.random.standard_normal(num_samples)
y = 5.0 + 2.0 * x + np.random.standard_normal(num_samples)
data = np.column_stack((x, y))
print('data shape {}'.format(data.shape))
plt.figure(figsize=(10, 5))
plt.plot(x, y, '.', color='blue', markersize=2.5)
plt.plot(x, 5. + 2. * x, '*', color='red', markersize=1.5)
def batch_step(data, b, w, alpha=0.005):
b_grad = 0
w_grad = 0
N = data.shape[0]
for i in range(N):
x = data[i][0]
y = data[i][1]
b_grad += -(2./float(N)) * (y - (b + w * x))
w_grad += -(2./float(N)) * x * (y - (b + w * x))
b_new = b - alpha * b_grad
w_new = w - alpha * w_grad
return b_new, w_new
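# Optional helper (a sketch, not part of the original text): the mean squared error
# loss defined above, handy for monitoring convergence of the loops below,
# e.g. print(i, mse_loss(data, b, w)).
def mse_loss(data, b, w):
    x, y = data[:, 0], data[:, 1]
    return np.mean((y - (b + w * x)) ** 2)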
b = 0.
w = 0.
alpha = 0.01
for i in range(10000):
b_new, w_new = batch_step(data, b, w, alpha=alpha)
b = b_new
w = w_new
if i % 1000 == 0:
print('{}: b = {}, w = {}'.format(i, b_new, w_new))
print('final: b = {}, w = {}'.format(b, w))
plt.figure(figsize=(10, 5))
plt.plot(x, y, '.', color='blue', markersize=2.5)
plt.plot(x, 5. + 2. * x, '*', color='red', markersize=1.5)
plt.plot(x, b + w * x, 'v', color='green', markersize=1.5)
###Output
_____no_output_____
###Markdown
Stochastic gradient descentStochastic gradient descent shuffles the data and looks at one data point at a time to learn/update the parameters.
###Code
def stochastic_step(x, y, b, w, N, alpha=0.005):
b_grad = -(2./N) * (y - (b + w * x))
w_grad = -(2./N) * x * (y - (b + w * x))
b_new = b - alpha * b_grad
w_new = w - alpha * w_grad
return b_new, w_new
from random import shuffle
b = 0.
w = 0.
alpha = 0.01
N = float(data.shape[0])
for i in range(2000):
indices = list(range(data.shape[0]))
shuffle(indices)
for j in indices:
b_new, w_new = stochastic_step(data[j][0], data[j][1], b, w, N, alpha=alpha)
b = b_new
w = w_new
if i % 1000 == 0:
print('{}: b = {}, w = {}'.format(i, b_new, w_new))
print('final: b = {}, w = {}'.format(b, w))
###Output
0: b = 0.1722709914399821, w = 0.3940699436533831
1000: b = 4.712535292062745, w = 2.1222815300547304
final: b = 4.8219485582693515, w = 2.079108647996962
###Markdown
scikit-learnAs you can see below, the intercept and coefficient are nearly identical to those found by the batch and stochastic gradient descent algorithms.
###Code
from sklearn.linear_model import LinearRegression
lr = LinearRegression(fit_intercept=True, normalize=False)
lr.fit(data[:, 0].reshape(-1, 1), data[:, 1])
print(lr.intercept_)
print(lr.coef_)
###Output
4.825522182175062
[2.07713235]
###Markdown
Multiple linear regressionThis time we apply the gradient descent algorithm to a multiple linear regression problem.$y = 5.0 + 2.0 x_0 + 1.0 x_1 + 3.0 x_2 + 0.5 x_3 + 1.5 x_4$
###Code
x0 = 2.0 + np.random.standard_normal(num_samples)
x1 = 1.0 + np.random.standard_normal(num_samples)
x2 = -1.0 + np.random.standard_normal(num_samples)
x3 = -2.0 + np.random.standard_normal(num_samples)
x4 = 0.5 + np.random.standard_normal(num_samples)
y = 5.0 + 2.0 * x0 + 1.0 * x1 + 3.0 * x2 + 0.5 * x3 + 1.5 * x4 + np.random.standard_normal(num_samples)
data = np.column_stack((x0, x1, x2, x3, x4, y))
print('data shape {}'.format(data.shape))
###Output
data shape (100, 6)
###Markdown
Batch gradient descent
###Code
def multi_batch_step(data, b, w, alpha=0.005):
num_x = data.shape[1] - 1
b_grad = 0
w_grad = np.zeros(num_x)
N = data.shape[0]
for i in range(N):
y = data[i][num_x]
x = data[i, 0:num_x]
b_grad += -(2./float(N)) * (y - (b + w.dot(x)))
for j in range(num_x):
x_ij = data[i][j]
w_grad[j] += -(2./float(N)) * x_ij * (y - (b + w.dot(x)))
b_new = b - alpha * b_grad
w_new = np.array([w[i] - alpha * w_grad[i] for i in range(num_x)])
return b_new, w_new
b = 0.
w = np.zeros(data.shape[1] - 1)
alpha = 0.01
for i in range(10000):
b_new, w_new = multi_batch_step(data, b, w, alpha=alpha)
b = b_new
w = w_new
if i % 1000 == 0:
print('{}: b = {}, w = {}'.format(i, b_new, w_new))
print('final: b = {}, w = {}'.format(b, w))
###Output
0: b = 0.13632797883173225, w = [ 0.29275746 0.15943176 -0.06731627 -0.2838181 0.1087194 ]
1000: b = 3.690745585464014, w = [ 2.05046789 0.99662839 2.91470927 -0.01336945 1.51371104]
2000: b = 4.51136474574727, w = [1.89258252 0.96694568 2.9696926 0.15595645 1.47558119]
3000: b = 4.7282819202927415, w = [1.8508481 0.95909955 2.98422654 0.20071495 1.46550219]
4000: b = 4.785620406833327, w = [1.83981629 0.95702555 2.98806834 0.21254612 1.46283797]
5000: b = 4.800776892462706, w = [1.83690022 0.95647732 2.98908386 0.2156735 1.46213373]
6000: b = 4.804783260096269, w = [1.8361294 0.95633241 2.9893523 0.21650017 1.46194757]
7000: b = 4.8058422774706, w = [1.83592565 0.9562941 2.98942325 0.21671869 1.46189837]
8000: b = 4.806122211291387, w = [1.83587179 0.95628398 2.98944201 0.21677645 1.46188536]
9000: b = 4.8061962071916655, w = [1.83585755 0.9562813 2.98944697 0.21679172 1.46188192]
final: b = 4.806215757433297, w = [1.83585379 0.95628059 2.98944828 0.21679575 1.46188101]
###Markdown
scikit-learn
###Code
lr = LinearRegression(fit_intercept=True, normalize=False)
lr.fit(data[:, 0:data.shape[1] - 1], data[:, data.shape[1] - 1])
print(lr.intercept_)
print(lr.coef_)
###Output
4.806222794782926
[1.83585244 0.95628034 2.98944875 0.2167972 1.46188068]
|
notebooks/semisupervised/plot-results/plot-all-ssl-results-table.ipynb | ###Markdown
plot naive
###Code
color_list = [
{
"mask": results_df.augmented == "not_augmented",
"color": pal[16],
"ls": "solid",
"marker": "o",
"label": "Baseline",
},
{
"mask": results_df.augmented == "umap_euclidean",
"color": pal[0],
"ls": "solid",
"marker": "o",
"label": "+ UMAP (Euclidean)",
},
]
alpha = 0.75
linewidth = 2
for dataset in datasets:
fig, (ax, ax2) = plt.subplots(
1,
2,
figsize=(5, 2.5),
dpi=100,
sharey=True,
gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
mask = col_dict["mask"] & (results_df.dataset == dataset)
color = col_dict["color"]
ls = col_dict["ls"]
label = col_dict["label"]
marker = col_dict["marker"]
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
ax.scatter(
nex, acc, color=color, s=50, alpha=1, marker=marker
) # , facecolors="none")
ax.plot(
nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls
) # , label = label
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
# display(subset_ds)
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
nex = (
nex + li / 100 - len(color_list) / 2 / 100
) # +(np.random.rand(1)-0.5)*.025
ax2.scatter(
nex, acc, color=color, s=50, alpha=1, marker=marker
) # , facecolors="none")
ax.plot(
[],
[],
"-" + marker,
color=color,
linewidth=linewidth,
label=label,
alpha=alpha,
markersize=7,
# markerfacecolor="none",
ls=ls,
)
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
# ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
ymin, ymax = ax.get_ylim()
if ymax > 1:
ymax = 1
ax.set_ylim([ymin, ymax])
ax.set_title(dataset.upper(), x=0.605)
ax.set_ylabel("Accuracy")
ax.set_xlabel("# Training Examples", x=0.605)
ensure_dir(FIGURE_DIR / "ssl_results")
save_fig(FIGURE_DIR / 'ssl_results' /(dataset + '_umap_euclidean'), save_pdf = True)
plt.show()
###Output
_____no_output_____
###Markdown
plot consistency-euclidean
###Code
color_list = [
{
"mask": results_df.augmented == 'not_augmented',
"color": pal[16],
"ls": 'solid',
"marker": 'o',
"label": "Baseline"
},
{
"mask": results_df.augmented == 'augmented',
"color": pal[16],
"ls": 'dashed',
"marker": 'X',
"label": "+ Aug."
},
{
"mask": results_df.augmented == 'umap_euclidean_augmented',
"color": pal[0],
"ls": 'dashed',
"marker": 'X',
"label": "+ Aug. + UMAP (Euclidean)"
},
]
for dataset in datasets:
fig, (ax, ax2) = plt.subplots(
1,
2,
figsize=(5, 2.5),
dpi=100,
sharey=True,
gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
mask = col_dict["mask"] & (results_df.dataset == dataset)
color = col_dict["color"]
ls = col_dict["ls"]
label = col_dict["label"]
marker = col_dict["marker"]
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
ax.scatter(
nex, acc, color=color, s=50, alpha=1, marker=marker
) # , facecolors="none")
ax.plot(
nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls
) # , label = label
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
# display(subset_ds)
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
nex = (
nex + li / 100 - len(color_list) / 2 / 100
) # +(np.random.rand(1)-0.5)*.025
ax2.scatter(
nex, acc, color=color, s=50, alpha=1, marker=marker
) # , facecolors="none")
ax.plot(
[],
[],
"-" + marker,
color=color,
linewidth=linewidth,
label=label,
alpha=alpha,
markersize=7,
# markerfacecolor="none",
ls=ls,
)
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
# ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
ymin, ymax = ax.get_ylim()
if ymax > 1:
ymax = 1
ax.set_ylim([ymin, ymax])
ax.set_title(dataset.upper(), x=0.605)
ax.set_ylabel("Accuracy")
ax.set_xlabel("# Training Examples", x=0.605)
ensure_dir(FIGURE_DIR / "ssl_results")
save_fig(FIGURE_DIR / 'ssl_results' /(dataset + '_umap_euclidean_consistency'), save_pdf = True)
plt.show()
###Output
_____no_output_____
###Markdown
plot learned metric
###Code
color_list = [
{
"mask": results_df.augmented == "not_augmented",
"color": pal[16],
"ls": "solid",
"marker": "o",
"label": "Baseline",
},
{
"mask": results_df.augmented == "augmented",
"color": pal[16],
"ls": "dashed",
"marker": "X",
"label": "+ Aug.",
},
{
"mask": results_df.augmented == "umap_learned",
"color": pal[4],
"ls": "solid",
"marker": "o",
"label": "+ UMAP (learned)",
},
{
"mask": results_df.augmented == "umap_augmented_learned",
"color": pal[4],
"ls": "dashed",
"marker": "X",
"label": "+Aug + UMAP (learned)",
},
]
for dataset in datasets:
fig, (ax, ax2) = plt.subplots(
1,
2,
figsize=(5, 2.5),
dpi=100,
sharey=True,
gridspec_kw={"width_ratios": [5, 1], "wspace": 0.05},
)
for li, col_dict in enumerate(color_list):
mask = col_dict["mask"] & (results_df.dataset == dataset)
color = col_dict["color"]
ls = col_dict["ls"]
label = col_dict["label"]
marker = col_dict["marker"]
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title != "full"]
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
ax.scatter(
nex, acc, color=color, s=50, alpha=1, marker=marker
) # , facecolors="none")
ax.plot(
nex, acc, linewidth=linewidth, alpha=alpha, color=color, ls=ls
) # , label = label
subset_ds = results_df[mask]
subset_ds = subset_ds[subset_ds.dset_size_title == "full"]
# display(subset_ds)
nex = subset_ds.labels_per_class.values.astype("int")
acc = subset_ds.test_acc.values
nex = (
nex + li / 100 - len(color_list) / 2 / 100
) # +(np.random.rand(1)-0.5)*.025
ax2.scatter(
nex, acc, color=color, s=50, alpha=1, marker=marker
) # , facecolors="none")
ax.plot(
[],
[],
"-" + marker,
color=color,
linewidth=linewidth,
label=label,
alpha=alpha,
markersize=7,
# markerfacecolor="none",
ls=ls,
)
ax.set_xscale("log")
ax.set_xticks([4, 16, 64, 256, 1024])
ax.set_xticklabels([4, 16, 64, 256, 1024])
# ax.set_ylim([0, 1])
ax.spines["right"].set_visible(False)
ax.legend()
ax.set_xlim([2, 2048])
# ax2.set_xscale('log')
ax2.set_xticks([4096])
ax2.set_xticklabels(["full"])
ax2.spines["left"].set_visible(False)
ax2.yaxis.tick_right()
d = 0.015 # how big to make the diagonal lines in axes coordinates
# arguments to pass plot, just so we don't keep repeating them
kwargs = dict(transform=ax.transAxes, color="k", clip_on=False)
ax.plot((1 - d, 1 + d), (-d, +d), **kwargs)
ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs)
d = 0.015
offset = 5
kwargs.update(transform=ax2.transAxes) # switch to the bottom axes
ax2.plot((-d * offset, +d * offset), (1 - d, 1 + d), **kwargs)
ax2.plot((-d * offset, +d * offset), (-d, +d), **kwargs)
ax.minorticks_on()
ax.tick_params(axis="y", which="minor", direction="out")
ymin, ymax = ax.get_ylim()
if ymax > 1:
ymax = 1
ax.set_ylim([ymin, ymax])
ax.set_title(dataset.upper(), x=0.605)
ax.set_ylabel("Accuracy")
ax.set_xlabel("# Training Examples", x=0.605)
ensure_dir(FIGURE_DIR / "ssl_results")
save_fig(FIGURE_DIR / 'ssl_results' /(dataset + '_umap_learned_consistency'), save_pdf = True)
plt.show()
### create tables
results_df[:3]
"""results_only = results_df[['dataset', 'labels_per_class', 'augmented', 'test_acc']]
r_only_cols = results_only.assign(key=results_only.groupby('augmented').cumcount()).pivot('key','augmented','test_acc')
r_only_addtl = results_only.assign(key=results_only.groupby('augmented').cumcount())[['dataset', 'labels_per_class', 'key']]
results_only = r_only_addtl.merge(r_only_cols, on = 'key').set_index(['dataset', 'labels_per_class']).drop_duplicates()
results_only = results_only.drop(columns='key')
results_only"""
results_only = results_df[['dataset', 'labels_per_class', 'augmented', 'test_acc']]
r_only_cols = results_only.assign(key=results_only.groupby('labels_per_class').cumcount()).pivot('key','labels_per_class','test_acc')
r_only_addtl = results_only.assign(key=results_only.groupby('labels_per_class').cumcount())[['dataset', 'augmented', 'key']]
results_only = r_only_addtl.merge(r_only_cols, on = 'key').set_index(['dataset', 'augmented']).drop_duplicates()
results_only = results_only.drop(columns='key')
results_only = results_only[['4', '64', '256', '1024',4096]]
results_only
print(
results_only.to_latex(bold_rows=True)
.replace("umap\_learned", "+ UMAP (learned)")
.replace("umap\_augmented\_learned", "+Aug. + UMAP (learned)")
.replace("umap\_euclidean\_augmented", "Aug. + UMAP (Euclidean)")
.replace("4096", "full")
.replace("not\_augmented", "Baseline")
.replace("augmented", "+ Aug.")
.replace("umap\_euclidean", "+ UMAP (Euclidean)")
.replace("cifar10", "CIFAR10")
.replace("mnist", "MNIST")
.replace("fmnist", "FMNIST")
)
###Output
\begin{tabular}{llrrrrr}
\toprule
& & 4 & 64 & 256 & 1024 & full \\
\textbf{dataset} & \textbf{+ Aug.} & & & & & \\
\midrule
\textbf{MNIST} & \textbf{Baseline} & 0.8143 & 0.9787 & 0.9896 & 0.9941 & 0.9965 \\
& \textbf{+ Aug.} & 0.9280 & 0.9860 & 0.9905 & 0.9939 & 0.9963 \\
& \textbf{+ UMAP (Euclidean)} & 0.9785 & 0.9855 & 0.9895 & 0.9933 & 0.9964 \\
& \textbf{+ UMAP (learned)} & 0.8325 & 0.9788 & 0.9905 & 0.9938 & 0.9957 \\
& \textbf{+Aug. + UMAP (learned)} & 0.9550 & 0.9907 & 0.9944 & 0.9960 & 0.9960 \\
& \textbf{Aug. + UMAP (Euclidean)} & 0.9779 & 0.9925 & 0.9930 & 0.9951 & 0.9967 \\
\textbf{fMNIST} & \textbf{Baseline} & 0.6068 & 0.8351 & 0.8890 & 0.9205 & 0.9427 \\
& \textbf{+ Aug.} & 0.6920 & 0.8598 & 0.9009 & 0.9322 & 0.9488 \\
& \textbf{+ UMAP (Euclidean)} & 0.7144 & 0.8410 & 0.8846 & 0.9165 & 0.9466 \\
& \textbf{+ UMAP (learned)} & 0.6286 & 0.8352 & 0.8887 & 0.9196 & 0.9443 \\
& \textbf{+Aug. + UMAP (learned)} & 0.7470 & 0.8797 & 0.9081 & 0.9318 & 0.9525 \\
& \textbf{Aug. + UMAP (Euclidean)} & 0.7373 & 0.8640 & 0.9003 & 0.9299 & 0.9521 \\
\textbf{CIFAR10} & \textbf{Baseline} & 0.2170 & 0.4992 & 0.7220 & 0.8380 & 0.9049 \\
& \textbf{+ Aug.} & 0.2814 & 0.5993 & 0.7664 & 0.8667 & 0.9332 \\
& \textbf{+ UMAP (Euclidean)} & 0.1895 & 0.4503 & 0.6737 & 0.8289 & 0.9129 \\
& \textbf{+ UMAP (learned)} & 0.1988 & 0.5148 & 0.7475 & 0.8505 & 0.9118 \\
& \textbf{+Aug. + UMAP (learned)} & 0.3509 & 0.6742 & 0.8190 & 0.8907 & 0.9324 \\
& \textbf{Aug. + UMAP (Euclidean)} & 0.2427 & 0.5596 & 0.7476 & 0.8524 & 0.9319 \\
\bottomrule
\end{tabular}
|
GoogleMaps.ipynb | ###Markdown
Import data
Data structure requirement:
- Has a column named "Lat" for latitude
- Has a column named "Long" for longitude
- Has filtered out all non-relevant rows
###Code
filename ='data/airbnb.csv'
encoding = None
cols = None # Specify if need to consider a subset of columns
df = pd.read_csv(filename,encoding=encoding)
df['Weight'] = np.random.rand(len(df))
###Output
_____no_output_____
###Markdown
Google Maps
###Code
class GMAPS():
def __init__(self, figure_layout = None):
self.fig = gmaps.figure(layout = figure_layout, display_toolbar = False,
map_type = "TERRAIN") # Could be HYBRID
def add_heatmap(self, data, latcol = 'Lat', loncol = 'Long', weightcol = None, point_radius = 20, **kwargs):
"""
Creates a heatmap
data: pandas dataframe. Has columns: Lat, Long, Weight. Must be cleaned beforehand
latcol, loncol: name of latitude & longitude cols
weightcol: name of the numerical column used for weighting
**kwargs:
max_intensity
point_radius
opacity
gradient
"""
        if weightcol is not None:
            heatmap = gmaps.heatmap_layer(locations = data[[latcol, loncol]], weights = data[weightcol],
point_radius = point_radius,**kwargs)
else:
heatmap = gmaps.heatmap_layer(locations = data[[latcol, loncol]],
point_radius = point_radius, **kwargs)
self.fig.add_layer(heatmap)
def add_symbols(self, symbols, latcol = 'Lat', loncol = 'Long', fill_color = 'red', stroke_color = 'red', **kwargs):
"""
Add individual points
symbols: pandas dataframe. Has columns: Lat, Long. Must be cleaned beforehand
**kwargs:
fill_color
fill_opacity
stroke_color
stroke_opacity
scale
"""
symbol_layer = gmaps.symbol_layer(locations = symbols[[latcol, loncol]], fill_color = fill_color,
stroke_color = stroke_color, **kwargs)
self.fig.add_layer(symbol_layer)
def add_json(self, filename, fill_opacity = 0, stroke_weight = 1, **kwargs):
"""
Add geojson layer. Useful for districts, neighborhoods, US states etc
**kwargs:
fill_opacity
fill_color
stroke_color
stroke_opacity
stroke_weight = 3, range 0 to 20
"""
with open(filename) as f:
geojson_file = json.load(f)
f.close
geojson = gmaps.geojson_layer(geojson_file,
fill_opacity = fill_opacity, stroke_weight = stroke_weight, **kwargs)
self.fig.add_layer(geojson)
def display(self):
display(self.fig)
###Output
_____no_output_____
###Markdown
Example
###Code
latcol = 'latitude'
loncol = 'longitude'
jsonpath = 'Boston_Neighborhoods.geojson'
catcol = 'room_type'
layout={
'width': '600px',
'height': '600px',
'padding': '3px',
'border': '1px solid black'
}
df = df[df.city == 'Boston']
for category in df[catcol].unique():
mymap = GMAPS(layout)
mymap.add_heatmap(df[df[catcol] == category], latcol = latcol, loncol = loncol, point_radius = 5)
#mymap.add_symbols(df[df['room_type'] == category].iloc[:5], latcol = latcol, loncol = loncol)
mymap.add_json(filename = jsonpath)
mymap.display()
mymap = GMAPS(layout)
mymap.add_heatmap(df, latcol = latcol, loncol = loncol, point_radius = 5)
mymap.add_symbols(df.iloc[:5], latcol = latcol, loncol = loncol)
mymap.add_json('Boston_Neighborhoods.geojson')
mymap.display()
###Output
_____no_output_____
###Markdown
Bournemouth venues
###Code
venues = pd.read_csv("Datasets/bournemouth_venues.csv")
venues.rename(columns = {'Venue Latitude':'Lat', 'Venue Longitude':'Long'}, inplace = True)
layout={
'width': '600px',
'height': '600px',
'padding': '3px',
'border': '1px solid black'
}
mymap = GMAPS(layout)
mymap.add_heatmap(venues, point_radius = 20)
mymap.display()
mymap = GMAPS(layout)
mymap.add_symbols(venues)
mymap.display()
df = pd.read_csv("airbnb.csv")
df.rename(columns = {'latitude':'Lat', 'longitude': 'Long'}, inplace = True)
mymap = GMAPS(layout)
mymap.add_heatmap(df, point_radius = 20, weights = df['log_price'])
mymap.display()
###Output
_____no_output_____ |
ch11/tv_embeddings.ipynb | ###Markdown
Sentiment Analysis with Region Embeddings
###Code
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import tarfile
from matplotlib import pylab
from six.moves import range
from six.moves.urllib.request import urlretrieve
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
import nltk # standard preprocessing
import operator # sorting items in dictionary by value
from sklearn.utils import shuffle
from math import ceil
###Output
c:\users\thushan\documents\python_virtualenvs\tensorflow_venv\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
Download dataHere we download the sentiment data from this [website](http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz). These are movie reviews submitted by users, classified according to whether the sentiment is positive or negative.
###Code
url = 'http://ai.stanford.edu/~amaas/data/sentiment/'
def maybe_download(filename, expected_bytes):
"""Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified %s' % filename)
else:
print(statinfo.st_size)
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
filename = maybe_download('aclImdb_v1.tar.gz', 84125825)
###Output
Found and verified aclImdb_v1.tar.gz
###Markdown
Read dataHere the data is read into the program.
###Code
# Number of read files
files_read = 0
# Contains positive and negative sentiments
pos_members = []
neg_members = []
# Number of files to read
files_to_read = 400
# Creates a temporary directory to extract data to
if not os.path.exists('tmp_reviews'):
os.mkdir('tmp_reviews')
def read_data(filename):
"""Extract the first file enclosed in a tar.z file as a list of words"""
    files_read = 0 # local counter; without this, 'files_read += 1' below would raise UnboundLocalError
    # Check if the directory is empty or not
if os.listdir('tmp_reviews') == []:
# If not empty read both postive and negative files upto
# files_to_read many files and extract them to tmp_review folder
with tarfile.open("aclImdb_v1.tar.gz") as t:
for m in t.getmembers():
# Extract positive sentiments and update files_read
if 'aclImdb/train/pos' in m.name and '.txt' in m.name:
pos_members.append(m)
files_read += 1
if files_read >= files_to_read:
break
files_read = 0 # reset files_read
# Extract negative sentiments and update files_read
if 'aclImdb/train/neg' in m.name and '.txt' in m.name:
neg_members.append(m)
files_read += 1
if files_read >= files_to_read:
break
t.extractall(path='tmp_reviews',members=pos_members+neg_members)
print('Extracted (or already had) all data')
# These lists will contain all the postive and negative
# reviews we read above
data = []
data_sentiment, data_labels = [],[]
print('Reading positive data')
# Here we read all the postive data
for file in os.listdir(os.path.join('tmp_reviews',*('aclImdb','train','pos'))):
if file.endswith(".txt"):
with open(os.path.join('tmp_reviews',*('aclImdb','train','pos',file)),'r',encoding='utf-8') as f:
# Convert all the words to lower and tokenize
file_string = f.read().lower()
file_string = nltk.word_tokenize(file_string)
# Add the words to data list
data.extend(file_string)
# If a review has more than 100 words truncate it to 100
data_sentiment.append(file_string[:100])
# If a review has less than 100 words add </s> tokens to make it 100
if len(data_sentiment[-1])<100:
data_sentiment[-1].extend(['</s>' for _ in range(100-len(data_sentiment[-1]))])
data_labels.append(1)
print('Reading negative data')
# Here we read all the negative data
for file in os.listdir(os.path.join('tmp_reviews',*('aclImdb','train','neg'))):
if file.endswith(".txt"):
with open(os.path.join('tmp_reviews',*('aclImdb','train','neg',file)),'r',encoding='utf-8') as f:
# Convert all the words to lower and tokenize
file_string = f.read().lower()
file_string = nltk.word_tokenize(file_string)
# Add the words to data list
data.extend(file_string)
# If a review has more than 100 words truncate it to 100
data_sentiment.append(file_string[:100])
# If a review has less than 100 words add </s> tokens to make it 100
if len(data_sentiment[-1])<100:
data_sentiment[-1].extend(['</s>' for _ in range(100-len(data_sentiment[-1]))])
data_labels.append(0)
return data, data_sentiment, data_labels
words, sentiments_words, sentiment_labels = read_data(filename)
# Print some statistics of the dta
print('Data size %d' % len(words))
print('Example words (start): ',words[:10])
print('Example words (end): ',words[-10:])
###Output
Extracted (or already had) all data
Reading positive data
Reading negative data
Data size 7054759
Example words (start): ['bromwell', 'high', 'is', 'a', 'cartoon', 'comedy', '.', 'it', 'ran', 'at']
Example words (end): ['do', "n't", 'waste', 'your', 'time', ',', 'this', 'is', 'painful', '.']
###Markdown
Building the Dictionaries
Builds the following. To understand each of these elements, let us also assume the text "I like to go to school"
* `dictionary`: maps a string word to an ID (e.g. {I:0, like:1, to:2, go:3, school:4})
* `reverse_dictionary`: maps an ID to a string word (e.g. {0:I, 1:like, 2:to, 3:go, 4:school})
* `count`: list of (word, frequency) elements (e.g. [(I,1),(like,1),(to,2),(go,1),(school,1)])
* `data`: contains the text we read, where string words are replaced with word IDs (e.g. [0, 1, 2, 3, 2, 4])

It also introduces an additional special token `UNK` to denote words that are too rare to make use of.
###Code
# We set max vocabulary to this
vocabulary_size = 20000
def build_dataset(words):
global vocabulary_size
count = [['UNK', -1]]
# Sorts words by their frequency
count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
# Define IDs for special tokens
dictionary = dict({'<unk>':0, '</s>':1})
# Crude Vocabulary Control
# We ignore the most common words (e.g. a, the, ...)
# and the rarest ones (appearing 10 times or fewer)
# to reduce the size of the vocabulary
count_dict = collections.Counter(words)
for word in words:
# Add the word to the dictionary if not already encountered
if word not in dictionary:
if count_dict[word]<50000 and count_dict[word] > 10:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
# Replacing word strings with word IDs
for word in words:
if word in dictionary:
index = dictionary[word]
else:
index = 0 # dictionary['<unk>']
unk_count = unk_count + 1
data.append(index)
count[0][1] = unk_count
# Create a reverse dictionary with the above created dictionary
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
# Update the vocabulary
vocabulary_size = len(dictionary)
return data, count, dictionary, reverse_dictionary
data, count, dictionary, reverse_dictionary = build_dataset(words)
# Print some statistics about the data
print('Most common words (+UNK)', count[:25])
print('Sample data', data[:10])
print('Vocabulary size: ',vocabulary_size)
del words # Hint to reduce memory.
###Output
Most common words (+UNK) [['UNK', 2710699], ('the', 334680), (',', 275887), ('.', 235397), ('and', 163334), ('a', 162144), ('of', 145399), ('to', 135145), ('is', 110248), ('/', 102097), ('>', 102036), ('<', 101971), ('br', 101871), ('it', 94863), ('in', 93175), ('i', 86498), ('this', 75507), ('that', 72962), ("'s", 62159), ('was', 50367), ('as', 46818), ('for', 44050), ('with', 44001), ('movie', 42547), ('but', 42358)]
Sample data [0, 2, 0, 0, 3, 4, 0, 0, 5, 6]
Vocabulary size: 19908
###Markdown
Processing data for the Region Embedding Learning Processing Data for the Sentiment AnalysisHere we define and run a function that converts the words in the positive/negative reviews above into word IDs.
###Code
def build_sentiment_dataset(sentiment_words, sentiment_labels):
'''
This function takes in reviews and labels, and then replaces
all the words in the reviews with word IDs we assigned to each
word in our dictionary
'''
data = [[] for _ in range(len(sentiment_words))]
unk_count = 0
for sent_id,sent in enumerate(sentiment_words):
for word in sent:
if word in dictionary:
index = dictionary[word]
else:
index = 0 # dictionary['<unk>']
unk_count = unk_count + 1
data[sent_id].append(index)
return data, sentiment_labels
# Run the operation
sentiment_data, sentiment_labels = build_sentiment_dataset(sentiments_words, sentiment_labels)
print('Sample data')
for rev in sentiment_data[:10]:
print('\t',rev)
del sentiments_words # Hint to reduce memory.
###Output
Sample data
[0, 2, 0, 0, 3, 4, 0, 0, 5, 6, 0, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 16, 9, 17, 18, 19, 0, 20, 21, 22, 0, 0, 23, 24, 25, 26, 0, 27, 0, 0, 2, 0, 28, 0, 29, 30, 0, 31, 32, 0, 17, 18, 19, 0, 0, 0, 0, 33, 34, 0, 0, 35, 36, 37, 38, 39, 40, 41, 42, 43, 18, 44, 0, 0, 0, 0, 0, 0, 45, 46, 0, 47, 48, 26, 0, 0, 49, 0, 50, 0, 42, 36, 0, 51, 0, 52, 0, 53, 0, 54]
[0, 84, 85, 0, 9, 86, 87, 88, 89, 90, 91, 92, 93, 94, 22, 95, 96, 0, 97, 0, 98, 99, 100, 0, 101, 0, 102, 103, 104, 105, 37, 106, 107, 108, 109, 0, 14, 0, 110, 0, 85, 111, 94, 0, 112, 0, 113, 114, 77, 0, 0, 115, 9, 116, 0, 117, 118, 119, 120, 13, 121, 16, 9, 122, 0, 0, 123, 100, 124, 0, 0, 125, 0, 126, 0, 127, 0, 0, 0, 0, 0, 0, 85, 120, 128, 129, 130, 131, 132, 0, 133, 134, 100, 0, 0, 0, 0, 0, 0, 0]
[268, 269, 193, 0, 203, 204, 0, 270, 271, 0, 272, 0, 273, 274, 275, 0, 0, 229, 276, 0, 277, 278, 216, 279, 0, 280, 0, 0, 281, 100, 282, 0, 0, 63, 0, 9, 283, 9, 284, 0, 244, 245, 0, 0, 285, 100, 286, 0, 287, 288, 0, 194, 224, 289, 0, 224, 0, 0, 0, 290, 291, 183, 292, 0, 0, 224, 293, 0, 294, 0, 0, 153, 0, 0, 17, 0, 67, 0, 294, 19, 169, 295, 0, 296, 297, 298, 0, 299, 0, 0, 0, 300, 108, 0, 301, 302, 303, 248, 0, 0]
[0, 0, 332, 0, 113, 333, 264, 334, 0, 155, 335, 0, 336, 0, 337, 338, 0, 0, 339, 221, 259, 0, 340, 341, 0, 0, 84, 342, 0, 343, 0, 344, 345, 346, 347, 0, 340, 341, 0, 348, 349, 0, 85, 350, 347, 0, 340, 341, 0, 351, 135, 352, 89, 0, 74, 0, 0, 353, 354, 355, 0, 95, 356, 0, 0, 264, 0, 357, 358, 0, 0, 359, 74, 360, 216, 221, 0, 0, 0, 361, 0, 190, 0, 0, 362, 13, 10, 0, 0, 113, 363, 0, 364, 0, 365, 0, 366, 367, 0, 337]
[0, 0, 221, 0, 380, 154, 155, 264, 0, 0, 0, 29, 381, 243, 32, 113, 0, 183, 382, 0, 383, 142, 0, 233, 0, 0, 0, 0, 384, 203, 204, 385, 0, 326, 0, 386, 0, 16, 0, 304, 0, 387, 388, 0, 389, 102, 10, 390, 0, 391, 273, 91, 392, 291, 0, 393, 179, 0, 0, 10, 276, 0, 391, 394, 273, 91, 395, 0, 164, 0, 396, 0, 174, 379, 0, 95, 47, 0, 47, 0, 0, 0, 397, 0, 398, 0, 399, 0, 39, 0, 0, 0, 400, 0, 283, 401, 0, 155, 402, 106]
[0, 0, 82, 0, 414, 415, 416, 0, 417, 0, 0, 0, 0, 415, 416, 0, 418, 419, 420, 0, 0, 0, 0, 421, 0, 0, 63, 422, 141, 0, 0, 423, 165, 415, 0, 424, 229, 0, 0, 419, 0, 95, 0, 0, 82, 0, 419, 0, 425, 426, 0, 0, 0, 179, 0, 0, 427, 41, 54, 416, 428, 0, 429, 0, 430, 431, 0, 183, 0, 0, 0, 0, 0, 0, 0, 0, 0, 287, 432, 433, 434, 0, 435, 436, 0, 0, 437, 422, 158, 438, 29, 151, 0, 439, 440, 0, 162, 441, 0, 310]
[470, 337, 92, 471, 62, 0, 472, 164, 0, 473, 474, 475, 0, 0, 0, 0, 0, 0, 0, 0, 0, 362, 476, 0, 477, 478, 119, 135, 174, 82, 479, 480, 0, 481, 0, 0, 0, 0, 0, 0, 0, 0, 482, 483, 193, 415, 416, 0, 0, 484, 485, 0, 0, 264, 38, 486, 487, 0, 38, 394, 488, 344, 29, 135, 489, 0, 264, 0, 490, 0, 0, 491, 233, 0, 357, 492, 0, 0, 326, 339, 82, 493, 494, 135, 0, 356, 495, 135, 496, 0, 497, 135, 498, 0, 0, 499, 0, 283, 0, 500]
[0, 0, 17, 513, 514, 515, 419, 516, 100, 517, 518, 0, 519, 84, 415, 416, 89, 0, 0, 520, 378, 0, 521, 522, 248, 523, 0, 524, 525, 193, 0, 526, 362, 0, 0, 246, 527, 0, 183, 0, 528, 84, 529, 530, 89, 0, 51, 531, 532, 533, 13, 0, 431, 0, 84, 362, 89, 0, 490, 0, 519, 534, 402, 535, 536, 0, 0, 537, 0, 538, 0, 539, 540, 19, 541, 0, 264, 0, 542, 543, 0, 0, 0, 0, 0, 0, 0, 0, 135, 544, 258, 545, 546, 547, 548, 0, 0, 465, 0, 264]
[0, 467, 527, 84, 636, 89, 0, 415, 416, 0, 529, 530, 0, 577, 578, 0, 435, 436, 0, 598, 599, 0, 433, 434, 0, 637, 0, 0, 638, 0, 0, 639, 203, 640, 0, 84, 641, 65, 621, 0, 89, 0, 0, 0, 0, 0, 0, 0, 0, 642, 0, 643, 416, 0, 644, 645, 646, 0, 0, 0, 0, 0, 0, 0, 0, 80, 0, 0, 13, 647, 0, 648, 152, 80, 0, 0, 649, 650, 70, 651, 165, 0, 11, 0, 652, 0, 0, 653, 0, 654, 152, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[135, 255, 0, 415, 416, 0, 755, 756, 730, 0, 0, 757, 758, 402, 0, 0, 759, 760, 165, 47, 547, 761, 360, 169, 90, 762, 0, 763, 84, 165, 764, 0, 765, 0, 17, 766, 0, 0, 19, 0, 54, 767, 51, 0, 768, 291, 95, 0, 549, 0, 769, 63, 89, 0, 0, 770, 169, 90, 385, 760, 273, 91, 304, 0, 771, 17, 772, 19, 0, 17, 70, 773, 774, 19, 0, 17, 0, 467, 527, 19, 0, 745, 775, 776, 0, 0, 777, 778, 779, 0, 0, 780, 183, 270, 110, 0, 781, 0, 0, 0]
###Markdown
Data GeneratorsWe define two data generators:* Data generator for generating data for the classifiers* Data generator for generating data for the region embedding algorithm Data Generator for Training ClassifiersHere we define a data generator function that generates data to train the classifier that identifies whether a review is positive or negative
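As a rough illustration of what the classifier generator below produces, here is a hedged NumPy sketch with toy sizes (the real code uses the full `vocabulary_size` and the actual reviews): a 100-word review is split into regions of `region_size` word IDs, and each region becomes a bag-of-words count vector.

```python
# Illustration only: toy vocabulary, random word IDs.
import numpy as np

toy_vocab_size, region_size = 12, 10
toy_review = np.random.randint(2, toy_vocab_size, size=100)  # 100 word IDs (2+ skips <unk>/</s>)
regions = toy_review.reshape(-1, region_size)                # (10, 10): 10 regions of 10 words
bow = np.zeros((regions.shape[0], toy_vocab_size), dtype=np.int32)
for r, region in enumerate(regions):
    for wi in region:
        bow[r, wi] += 1                                      # accumulate counts per region
print(bow.shape)                                             # (10, 12): one BOW vector per region
print(bow.sum(axis=1))                                       # each region holds region_size counts
```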
###Code
# Shuffle the data
sentiment_data, sentiment_labels = shuffle(sentiment_data, sentiment_labels)
sentiment_data_index = -1
def generate_sentiment_batch(batch_size, region_size,is_train):
global sentiment_data_index
# Number of regions in a single review
# as a single review has 100 words after preprocessing
num_r = 100//region_size
# Contains input data and output data
batches = [np.ndarray(shape=(batch_size, vocabulary_size), dtype=np.int32) for _ in range(num_r)]
labels = np.ndarray(shape=(batch_size), dtype=np.int32)
# Populate each batch index
for i in range(batch_size):
# Choose a data point index, we use the last 300 reviews (after shuffling)
# as test data and rest as training data
if is_train:
sentiment_data_index = np.random.randint(len(sentiment_data)-300)
else:
sentiment_data_index = max(len(sentiment_data)-300, (sentiment_data_index + 1)%len(sentiment_data))
# for each region
for reg_i in range(num_r):
batches[reg_i][i,:] = np.zeros(shape=(1, vocabulary_size), dtype=np.float32) #input
# for each word in region
for wi in sentiment_data[sentiment_data_index][reg_i*region_size:(reg_i+1)*region_size]:
# if the current word is informative (not <unk> or </s>)
# Update the bow representation for that region
if wi != dictionary['<unk>'] and wi != dictionary['</s>']:
batches[reg_i][i,wi] += 1
labels[i] = sentiment_labels[sentiment_data_index]
return batches, labels
# Print some data batches to see what they look like
for _ in range(10):
batches, labels = generate_sentiment_batch(batch_size=8, region_size=10, is_train=True)
print(' batch: sum: ', np.sum(batches[0],axis=1), np.argmax(batches[0],axis=1))
print(' labels: ', labels)
print('\nValid data')
# Print some data batches to see what they look like
for _ in range(10):
batches, labels = generate_sentiment_batch(batch_size=8, region_size=10, is_train=False)
print(' batch: sum: ', np.sum(batches[0],axis=1), np.argmax(batches[0],axis=1))
print(' labels: ', labels)
sentiment_data_index = -1 # Reset the index
###Output
batch: sum: [4 9 9 5 6 6 5 7] [264 108 22 165 80 74 71 9]
labels: [0 0 1 0 0 0 0 1]
batch: sum: [7 6 5 5 6 6 8 4] [ 108 80 3955 1660 55 6 17 385]
labels: [1 1 1 1 0 0 1 1]
batch: sum: [6 7 5 8 5 8 7 8] [ 38 116 419 3 134 83 17 92]
labels: [0 0 1 1 0 0 1 1]
batch: sum: [6 7 9 4 5 8 7 8] [221 85 17 51 65 9 264 13]
labels: [0 1 0 1 1 0 1 0]
batch: sum: [4 6 7 8 7 8 6 7] [326 70 95 100 248 9 67 70]
labels: [0 1 1 1 0 1 0 1]
batch: sum: [10 7 9 6 9 4 8 6] [ 22 91 17 4 71 409 51 20]
labels: [0 0 1 0 1 0 1 0]
batch: sum: [6 6 6 7 9 6 7 4] [ 8 9 11 100 82 165 44 10]
labels: [0 1 0 0 0 0 0 0]
batch: sum: [9 8 6 5 8 6 6 7] [ 17 6 39 248 37 221 94 44]
labels: [0 0 0 1 0 0 1 0]
batch: sum: [8 7 7 8 8 7 6 3] [ 17 6 413 74 92 20 77 103]
labels: [0 1 1 1 1 1 1 1]
batch: sum: [9 7 4 5 5 7 8 5] [152 37 568 131 100 6 114 70]
labels: [1 0 0 1 0 0 0 0]
Valid data
batch: sum: [6 6 7 5 7 5 6 7] [ 70 6 92 165 65 131 19 20]
labels: [1 1 1 0 0 1 0 1]
batch: sum: [ 4 8 7 8 10 6 7 7] [824 92 39 8 17 17 8 100]
labels: [0 0 1 1 1 1 0 0]
batch: sum: [7 6 6 6 6 6 8 4] [ 17 22 50 1901 229 326 37 131]
labels: [0 0 1 1 1 1 1 0]
batch: sum: [6 7 8 6 8 9 6 6] [94 13 95 4 13 94 51 17]
labels: [1 1 0 1 1 0 0 1]
batch: sum: [7 4 7 5 6 6 8 5] [ 90 362 152 116 17 131 17 39]
labels: [1 1 1 0 0 0 0 1]
batch: sum: [8 6 7 5 6 5 6 4] [ 17 20 216 94 326 90 20 95]
labels: [1 1 1 0 1 1 0 1]
batch: sum: [6 8 4 8 8 4 6 5] [106 17 52 15 54 264 179 52]
labels: [0 1 1 1 0 1 1 0]
batch: sum: [5 6 6 5 6 7 7 8] [ 20 990 90 273 20 131 110 37]
labels: [0 1 1 0 1 1 1 1]
batch: sum: [6 6 8 7 6 7 6 6] [ 83 326 9 20 20 91 20 128]
labels: [0 0 1 1 0 0 1 1]
batch: sum: [6 4 7 8 7 4 6 4] [ 17 8717 65 17 27 13 762 92]
labels: [1 0 1 0 0 1 1 1]
###Markdown
Sentiment Analysis without Region EmbeddingsThis is a standard sentiment classifier. It starts with a convolution layer whose output feeds a fully connected classification layer.
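Before the TensorFlow code, here is a hedged NumPy walk-through of the tensor shapes (toy sizes; the real code uses `batch_size=50` and the full `vocabulary_size`). Because the convolution width and stride both equal the vocabulary size, the 1-D convolution collapses each region's BOW vector into a single activation:

```python
# Shape sketch only -- not the actual model.
import numpy as np

batch_size, vocab, num_r = 2, 5, 3
regions = [np.random.randint(0, 2, size=(batch_size, vocab)) for _ in range(num_r)]
flat = np.concatenate(regions, axis=1)   # [batch_size, num_r * vocab]
print(flat.shape)                        # (2, 15)
# width == stride == vocab  =>  one activation per region,
# i.e. a hidden output of shape [batch_size, num_r] feeding the [num_r, 1] linear layer
print((batch_size, num_r))
```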
###Code
batch_size = 50
tf.reset_default_graph()
graph = tf.Graph()
region_size = 10
conv_width = vocabulary_size
conv_stride = vocabulary_size
num_r = 100//region_size
with graph.as_default():
# Input/output data.
train_dataset = [tf.placeholder(tf.float32, shape=[batch_size, vocabulary_size]) for _ in range(num_r)]
train_labels = tf.placeholder(tf.float32, shape=[batch_size])
# Testing input/output data
valid_dataset = [tf.placeholder(tf.float32, shape=[batch_size, vocabulary_size]) for _ in range(num_r)]
valid_labels = tf.placeholder(tf.int32, shape=[batch_size])
with tf.variable_scope('sentiment_analysis'):
# First convolution layer weights/bias
sent_w1 = tf.get_variable('conv_w1', shape=[conv_width,1,1], initializer = tf.contrib.layers.xavier_initializer_conv2d())
sent_b1 = tf.get_variable('conv_b1',shape=[1], initializer = tf.random_normal_initializer(stddev=0.05))
# Concat all the train data and create a tensor of [batch_size, num_r, vocabulary_size]
concat_train_dataset = tf.concat([tf.expand_dims(t,0) for t in train_dataset],axis=0)
concat_train_dataset = tf.transpose(concat_train_dataset, [1,0,2]) # make batch-major (axis)
concat_train_dataset = tf.reshape(concat_train_dataset, [batch_size, -1])
# Compute the convolution output on the above transformation of inputs
sent_h = tf.nn.relu(
tf.nn.conv1d(tf.expand_dims(concat_train_dataset,-1),filters=sent_w1,stride=conv_stride, padding='SAME') + sent_b1
)
# Do the same for validation data
concat_valid_dataset = tf.concat([tf.expand_dims(t,0) for t in valid_dataset],axis=0)
concat_valid_dataset = tf.transpose(concat_valid_dataset, [1,0,2]) # make batch-major (axis)
concat_valid_dataset = tf.reshape(concat_valid_dataset, [batch_size, -1])
# Compute the validation output
sent_h_valid = tf.nn.relu(
tf.nn.conv1d(tf.expand_dims(concat_valid_dataset,-1),filters=sent_w1,stride=conv_stride, padding='SAME') + sent_b1
)
sent_h = tf.reshape(sent_h, [batch_size, -1])
sent_h_valid = tf.reshape(sent_h_valid, [batch_size, -1])
# Linear Layer
sent_w = tf.get_variable('linear_w', shape=[num_r, 1], initializer= tf.contrib.layers.xavier_initializer())
sent_b = tf.get_variable('linear_b', shape=[1], initializer= tf.random_normal_initializer(stddev=0.05))
# Compute the final output with the linear layer defined above
sent_out = tf.matmul(sent_h,sent_w)+sent_b
tr_train_predictions = tf.nn.sigmoid(tf.matmul(sent_h, sent_w) + sent_b)
tf_valid_predictions = tf.nn.sigmoid(tf.matmul(sent_h_valid, sent_w) + sent_b)
# Calculate valid accuracy
valid_pred_classes = tf.cast(tf.reshape(tf.greater(tf_valid_predictions, 0.5),[-1]),tf.int32)
# Loss computation and optimization
naive_sent_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.expand_dims(train_labels,-1), logits=sent_out))
naive_sent_optimizer = tf.train.AdamOptimizer(learning_rate = 0.0005).minimize(naive_sent_loss)
num_steps = 10001
naive_valid_ot = []
with tf.Session(graph=graph,config=tf.ConfigProto(allow_soft_placement=True)) as session:
tf.global_variables_initializer().run()
print('Initialized')
average_loss = 0
for step in range(num_steps):
if (step+1)%100==0:
print('.',end='')
if (step+1)%1000==0:
print('')
batches_data, batch_labels = generate_sentiment_batch(batch_size, region_size,is_train=True)
feed_dict = {}
#print(len(batches_data))
for ri, batch in enumerate(batches_data):
feed_dict[train_dataset[ri]] = batch
feed_dict.update({train_labels : batch_labels})
_, l, tr_batch_preds = session.run([naive_sent_optimizer, naive_sent_loss, tr_train_predictions], feed_dict=feed_dict)
if np.random.random()<0.002:
print('\nTrain Predictions:')
print(tr_batch_preds.reshape(-1))
print(batch_labels.reshape(-1))
average_loss += l
if (step+1) % 500 == 0:
sentiment_data_index = -1
if step > 0:
average_loss = average_loss / 500
# The average loss is an estimate of the loss over the last 500 batches.
print('Average loss at step %d: %f' % (step+1, average_loss))
average_loss = 0
valid_accuracy = []
for vi in range(2):
batches_data, batch_labels = generate_sentiment_batch(batch_size, region_size,is_train=False)
feed_dict = {}
#print(len(batches_data))
for ri, batch in enumerate(batches_data):
feed_dict[valid_dataset[ri]] = batch
feed_dict.update({valid_labels : batch_labels})
batch_pred_classes, batch_preds = session.run([valid_pred_classes,tf_valid_predictions], feed_dict=feed_dict)
valid_accuracy.append(np.mean(batch_pred_classes==batch_labels)*100.0)
print(batch_pred_classes.reshape(-1))
print(batch_labels)
print()
print('Valid accuracy: %.5f'%np.mean(valid_accuracy))
naive_valid_ot.append(np.mean(valid_accuracy))
###Output
Initialized
.....Average loss at step 500: 0.692977
[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1]
[1 1 1 0 0 1 0 1 0 0 1 1 1 1 0 0 0 0 1 1 1 1 1 0 1 1 0 1 1 0 0 1 1 1 1 0 0
0 0 1 1 1 1 0 1 1 0 1 0 1]
[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1]
[1 1 0 1 1 0 0 1 1 0 1 1 1 1 0 0 1 1 0 0 1 1 1 0 1 0 0 1 1 1 1 0 1 1 0 0 0
0 0 1 1 0 0 0 0 0 0 0 0 0]
Valid accuracy: 53.00000
.
Train Predictions:
[0.5398299 0.52148247 0.57564086 0.6057088 0.5066216 0.4995854
0.52885103 0.5014624 0.5738266 0.51268613 0.5864872 0.5437006
0.5601032 0.49563897 0.5208909 0.54059374 0.5325987 0.56131095
0.5542273 0.54287297 0.6145708 0.56996554 0.53303754 0.5173336
0.54148394 0.59661555 0.501636 0.50293607 0.50782865 0.5669592
0.5849927 0.5362609 0.5282925 0.5504024 0.5769378 0.4991507
0.51285356 0.5656118 0.5401868 0.5354448 0.5168529 0.51380587
0.57529366 0.49920115 0.49638954 0.52626103 0.6057066 0.5653592
0.5580887 0.5387444 ]
[1 0 1 1 1 0 0 0 0 0 1 1 1 0 0 1 1 1 1 0 1 1 0 0 1 1 0 0 0 1 1 0 0 1 0 0 0
1 0 1 0 0 1 0 0 0 1 1 1 0]
.
Train Predictions:
[0.53347856 0.6433935 0.53555137 0.52461207 0.5592669 0.55867225
0.48304006 0.5134755 0.58634436 0.55636543 0.5513616 0.5296205
0.55686915 0.55245316 0.6191605 0.57981074 0.5266938 0.50180537
0.58336717 0.5536292 0.58773464 0.6358548 0.5962003 0.55425674
0.55611014 0.532857 0.54292274 0.59357536 0.54953194 0.6075142
0.53021115 0.59705454 0.52734005 0.51539934 0.61214507 0.5445286
0.53471303 0.5680835 0.5015975 0.5434597 0.51566607 0.5103009
0.52286553 0.576073 0.5658488 0.55518544 0.56711227 0.50235504
0.62453496 0.5030525 ]
[0 1 1 0 1 1 0 0 1 1 1 0 1 1 1 1 1 0 1 1 1 1 1 1 0 0 1 1 1 1 0 1 0 1 1 1 0
1 0 1 0 1 0 1 0 1 0 0 1 0]
..
Train Predictions:
[0.6281669 0.5693149 0.55835557 0.58671427 0.5954251 0.528519
0.544164 0.5538292 0.53524363 0.56180054 0.57146865 0.6600864
0.5950348 0.70271945 0.51135105 0.50162274 0.61257565 0.5346548
0.640032 0.47915387 0.57746637 0.55718285 0.6125209 0.5111741
0.55690384 0.48271057 0.4808032 0.5042626 0.55664563 0.49220365
0.5063578 0.616078 0.5226268 0.4923374 0.52179915 0.49335942
0.6447095 0.664009 0.56390184 0.6743909 0.5994003 0.5214779
0.6158631 0.49849156 0.68111086 0.6745049 0.5391151 0.63389707
0.4776263 0.62716454]
[1 1 0 1 0 0 0 0 0 0 0 1 1 1 0 0 1 0 1 0 0 1 0 1 1 0 0 0 0 0 0 0 0 0 0 0 1
1 1 1 1 0 1 0 1 1 0 1 0 1]
.
Average loss at step 1000: 0.662195
[1 1 1 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 0 1]
[1 1 1 0 0 1 0 1 0 0 1 1 1 1 0 0 0 0 1 1 1 1 1 0 1 1 0 1 1 0 0 1 1 1 1 0 0
0 0 1 1 1 1 0 1 1 0 1 0 1]
[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 0 0 1
1 1 1 1 1 1 1 0 1 1 1 1 0]
[1 1 0 1 1 0 0 1 1 0 1 1 1 1 0 0 1 1 0 0 1 1 1 0 1 0 0 1 1 1 1 0 1 1 0 0 0
0 0 1 1 0 0 0 0 0 0 0 0 0]
Valid accuracy: 65.00000
.
Train Predictions:
[0.48791286 0.4814993 0.520895 0.5006049 0.49282727 0.5358579
0.5450098 0.67067295 0.55139095 0.49364725 0.7318095 0.47335815
0.52904165 0.7087658 0.46180406 0.54840136 0.5043462 0.65554774
0.7049975 0.6000343 0.47732767 0.54674435 0.65389025 0.5104879
0.58158284 0.4831877 0.5272517 0.57363915 0.6496185 0.5163783
0.51789886 0.6816802 0.5373244 0.49532205 0.5136856 0.5419302
0.5756462 0.50959605 0.61040175 0.6322758 0.6349979 0.51941353
0.5465167 0.6184086 0.51248395 0.7269664 0.7450346 0.6091433
0.47902462 0.55470395]
[0 0 0 0 0 0 0 1 0 0 1 0 1 1 0 1 0 1 1 1 0 0 1 0 0 0 0 1 1 0 1 1 0 1 0 0 1
0 1 1 1 0 1 1 0 1 1 1 0 0]
..
Train Predictions:
[0.5098728 0.5194385 0.6875274 0.61823547 0.7548512 0.44482762
0.44358504 0.7519445 0.6688327 0.50066286 0.59024185 0.5307202
0.47687507 0.53811574 0.63883483 0.6302862 0.49293384 0.5256194
0.7115141 0.658766 0.69207716 0.5598784 0.5788378 0.50958776
0.58766043 0.5487388 0.5533414 0.485756 0.5760571 0.602522
0.6970989 0.72705626 0.6940404 0.47365338 0.6532596 0.7173378
0.6458795 0.4766092 0.51574093 0.5136968 0.6480589 0.4484929
0.5416534 0.6618099 0.7148603 0.47062722 0.5329655 0.5864267
0.4752485 0.4735495 ]
[0 0 1 1 1 0 0 1 1 0 1 1 0 0 1 0 0 0 1 1 1 0 1 0 1 1 1 0 1 1 1 1 1 0 1 0 1
0 0 0 1 0 0 1 1 0 1 1 0 0]
..Average loss at step 1500: 0.604544
[1 1 1 0 0 1 0 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 0 1 1 0 1 1 1 1 1 0 1
0 0 1 1 1 1 1 1 1 1 1 0 0]
[1 1 1 0 0 1 0 1 0 0 1 1 1 1 0 0 0 0 1 1 1 1 1 0 1 1 0 1 1 0 0 1 1 1 1 0 0
0 0 1 1 1 1 0 1 1 0 1 0 1]
[1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 1 1 0 1 1 1 1 1 0 0 0 1 1 1 0 1 1 1 0 0 0
1 1 1 1 1 1 1 0 1 1 0 1 0]
[1 1 0 1 1 0 0 1 1 0 1 1 1 1 0 0 1 1 0 0 1 1 1 0 1 0 0 1 1 1 1 0 1 1 0 0 0
0 0 1 1 0 0 0 0 0 0 0 0 0]
Valid accuracy: 72.00000
.
Train Predictions:
[0.4315178 0.45389757 0.64561284 0.47607657 0.46056148 0.5454631
0.423389 0.7420383 0.5089923 0.48817888 0.6571946 0.48824877
0.6685424 0.53417534 0.568409 0.8927229 0.5833828 0.43795457
0.69632477 0.45559096 0.46451446 0.52789664 0.7403449 0.6548991
0.42082536 0.55061114 0.4865282 0.46559292 0.53477305 0.5916949
0.6541726 0.69069886 0.49422452 0.45782846 0.49949333 0.43304244
0.6285547 0.62949055 0.49287277 0.6041156 0.6176573 0.70044106
0.5232215 0.7203313 0.51045394 0.4905259 0.6457035 0.50709075
0.42574662 0.62621313]
[0 0 1 0 0 1 0 1 0 0 0 0 1 1 0 1 0 0 1 0 0 0 1 1 0 1 0 0 1 1 1 1 1 0 0 0 1
1 0 1 1 1 0 1 1 1 1 0 0 0]
....
Average loss at step 2000: 0.553805
[1 1 1 0 0 1 0 1 0 1 0 1 1 1 1 1 1 0 1 1 1 1 0 1 1 1 0 1 1 0 1 1 1 1 1 0 1
0 0 1 1 0 1 0 1 1 1 1 0 0]
[1 1 1 0 0 1 0 1 0 0 1 1 1 1 0 0 0 0 1 1 1 1 1 0 1 1 0 1 1 0 0 1 1 1 1 0 0
0 0 1 1 1 1 0 1 1 0 1 0 1]
[1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 1 1 0 1 1 1 1 1 0 0 0 1 1 1 1 0 1 1 0 0 0
1 1 1 1 0 1 0 0 0 1 0 1 0]
[1 1 0 1 1 0 0 1 1 0 1 1 1 1 0 0 1 1 0 0 1 1 1 0 1 0 0 1 1 1 1 0 1 1 0 0 0
0 0 1 1 0 0 0 0 0 0 0 0 0]
Valid accuracy: 77.00000
..
Train Predictions:
[0.41134104 0.61209303 0.56754607 0.5942014 0.3931868 0.39387563
0.34629035 0.888453 0.35071236 0.37173945 0.60314333 0.7663671
0.7520451 0.6520177 0.3991184 0.69260484 0.76425534 0.45837578
0.59188765 0.6158909 0.46639478 0.85858035 0.5178756 0.7395688
0.49670693 0.5715366 0.4865879 0.53280157 0.36719364 0.40153322
0.77727467 0.79807955 0.55325514 0.4443964 0.8349032 0.47773615
0.7588399 0.8458654 0.36635584 0.58734226 0.8620448 0.3692674
0.4402245 0.8446156 0.40765733 0.83284456 0.3977775 0.46450433
0.40592468 0.72534925]
[1 1 0 1 0 0 0 1 0 0 1 1 0 1 0 0 1 0 1 1 0 1 0 1 0 1 0 1 0 1 1 0 0 0 1 1 1
1 0 0 1 0 1 1 0 1 0 0 0 1]
...Average loss at step 2500: 0.506576
[1 1 1 0 0 1 0 1 0 1 0 1 1 1 0 1 0 0 1 1 1 1 0 1 1 1 0 1 1 0 1 1 1 1 1 0 1
0 0 1 1 0 1 0 1 1 1 1 0 0]
[1 1 1 0 0 1 0 1 0 0 1 1 1 1 0 0 0 0 1 1 1 1 1 0 1 1 0 1 1 0 0 1 1 1 1 0 0
0 0 1 1 1 1 0 1 1 0 1 0 1]
[1 1 0 1 1 1 1 1 1 0 1 1 1 1 0 0 1 1 0 1 1 1 1 1 0 0 0 1 1 1 1 0 1 1 0 0 0
1 1 1 1 1 1 0 0 0 1 0 1 0]
[1 1 0 1 1 0 0 1 1 0 1 1 1 1 0 0 1 1 0 0 1 1 1 0 1 0 0 1 1 1 1 0 1 1 0 0 0
0 0 1 1 0 0 0 0 0 0 0 0 0]
Valid accuracy: 79.00000
.
Train Predictions:
[0.33002415 0.4768433 0.71964574 0.36330324 0.38838214 0.5400787
0.9264936 0.8705246 0.36026055 0.5957628 0.31326458 0.5235184
0.40750557 0.52223855 0.35754454 0.7959072 0.88170767 0.5885287
0.86543477 0.88090783 0.8721879 0.847802 0.6520115 0.70028365
0.44879344 0.39379698 0.7983082 0.36156422 0.42037308 0.7572798
0.95379436 0.36944956 0.50819314 0.64749444 0.6819545 0.708875
0.3632382 0.3539295 0.34058285 0.42926428 0.61869305 0.42673722
0.6231301 0.47047475 0.7055723 0.40796205 0.72998905 0.51705253
0.47094813 0.89795476]
[0 1 1 0 0 0 1 1 0 1 0 0 0 1 0 1 1 1 1 1 1 1 1 1 0 0 1 0 0 1 1 0 0 1 1 1 0
0 0 0 1 0 1 0 1 0 1 1 0 1]
....
Average loss at step 3000: 0.470163
[1 1 1 0 0 1 0 1 0 1 0 1 1 1 0 1 0 0 1 1 1 1 0 1 1 1 0 1 1 0 0 1 1 1 0 0 1
0 0 1 1 0 1 0 0 1 1 1 0 0]
[1 1 1 0 0 1 0 1 0 0 1 1 1 1 0 0 0 0 1 1 1 1 1 0 1 1 0 1 1 0 0 1 1 1 1 0 0
0 0 1 1 1 1 0 1 1 0 1 0 1]
[1 1 0 1 1 1 1 1 1 0 1 1 1 1 0 0 1 1 0 1 1 1 1 1 0 0 0 1 1 1 1 0 1 1 0 0 0
1 0 1 1 0 1 0 0 0 1 0 1 0]
[1 1 0 1 1 0 0 1 1 0 1 1 1 1 0 0 1 1 0 0 1 1 1 0 1 0 0 1 1 1 1 0 1 1 0 0 0
0 0 1 1 0 0 0 0 0 0 0 0 0]
Valid accuracy: 80.00000
###Markdown
Generating Data Batches for Training Region Embedding LearnerWe define a function that takes in a `batch_size` and `region_size` and outputs a batch of data using the `data` list of word IDs we created above.
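To clarify what counts as the context of an input region, here is a toy sketch with word strings instead of IDs, assuming `region_size=4`; it mirrors the indexing logic used below, where the context is `region_size//2` words on each side (borrowing extra words from the right when the left side is too short):

```python
# Toy illustration only; the real generator works on word IDs and accumulates BOW vectors.
toy = ['the', 'movie', 'was', 'great', 'and', 'the', 'acting', 'was', 'superb', 'too']
region_size = 4
start = 2                                                   # start of the current input region
input_region = toy[start:start + region_size]               # ['was', 'great', 'and', 'the']
left = toy[max(start - region_size // 2, 0):start]          # ['the', 'movie']
right = toy[start + region_size:start + region_size + region_size // 2]  # ['acting', 'was']
print(input_region, left + right)                           # context has region_size words
```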
###Code
data_index = 0
def generate_region_batch(batch_size, region_size):
'''
Generates a batch of data to train the region embedding learner
'''
global data_index
# Holds the data inputs of the batch (BOW)
batch = np.ndarray(shape=(batch_size, vocabulary_size), dtype=np.int32)
# Holds the data outputs of the batch (BOW)
labels = np.ndarray(shape=(batch_size, vocabulary_size), dtype=np.int32)
span = 2 * region_size + batch_size
# Sample a random index from data
data_index = np.random.randint(len(data)- span)
# Define a buffer that contains all the data within the current span
buffer = collections.deque(maxlen=span)
# Update the buffer
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
current_input_start_idx = 0
# Populate each batch index
for i in range(batch_size):
batch[i,:] = np.zeros(shape=(1,vocabulary_size), dtype=np.float32) #input
# Accumulating the BOW vector for the input
for j in range(region_size):
# If the word is <unk> we ignore that word from BOW representation
# as that adds no value
if buffer[current_input_start_idx + j] != dictionary['<unk>']:
batch[i,buffer[current_input_start_idx + j]] += 1
# We collect context words from both left and right
# The following logic takes care of that
if current_input_start_idx > 0:
ids_to_left_of_input = list(range(max(current_input_start_idx - (region_size//2),0), current_input_start_idx))
else:
ids_to_left_of_input = []
# > 0 if there are not enough words on the left side of current input region
amount_flow_from_left_side = (region_size//2)-len(ids_to_left_of_input)
ids_to_right_of_input = list(range(current_input_start_idx+region_size, current_input_start_idx+region_size+(region_size//2)+amount_flow_from_left_side))
assert len(ids_to_left_of_input + ids_to_right_of_input) == region_size
labels[i,:] = np.zeros(shape=(1,vocabulary_size), dtype=np.float32) #output (context BOW)
# Accumulates BOW vector for output
for k in ids_to_left_of_input + ids_to_right_of_input:
# If the word is <unk> we ignore that word from BOW representation
# as that adds no value
if buffer[k] != dictionary['<unk>']:
labels[i,buffer[k]] += 1
current_input_start_idx += 1
# Update the buffer
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
print('data:', [reverse_dictionary[di] for di in data[:50]])
data_index = 0
# Print a few batches
for _ in range(10):
batch, labels = generate_region_batch(batch_size=8, region_size=4)
print(' batch: sum: ', np.sum(batch,axis=1), np.argmax(batch,axis=1))
print(' labels: sum: ', np.sum(labels,axis=1), np.argmax(labels,axis=1))
###Output
_____no_output_____
###Markdown
Defining Region Embeddings AlgorithmHere we define the algorithm for learning region embeddings. This is quite straightforward: we use the BOW representation of a region as the input and ask the algorithm to predict the BOW representation of its context region.
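A hedged NumPy sketch of the forward pass and the squared-error objective defined below (toy sizes, random weights, and a trivial all-ones mask), just to illustrate the shapes and the loss form:

```python
# Illustration only: BOW of a region -> hidden embedding -> predicted context BOW.
import numpy as np

vocab, hidden = 50, 8                                              # real code: vocabulary_size, 500
x = np.random.randint(0, 2, size=(4, vocab)).astype(np.float32)    # input region BOW
y = np.random.randint(0, 2, size=(4, vocab)).astype(np.float32)    # context region BOW (target)
w1, b1 = np.random.randn(vocab, hidden), np.zeros(hidden)
w2, b2 = np.random.randn(hidden, vocab), np.zeros(vocab)
h = np.maximum(x @ w1 + b1, 0)                                     # ReLU hidden layer: the region embedding
pred = h @ w2 + b2                                                 # predicted context BOW
mask = np.ones_like(y)                                             # the real code down-weights the zeros here
loss = np.mean(np.sum(mask * (pred - y) ** 2, axis=1))
print(h.shape, pred.shape, float(loss))
```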
###Code
batch_size = 128
tf.reset_default_graph()
# Input/output data.
train_dataset = tf.placeholder(tf.float32, shape=[batch_size, vocabulary_size])
train_labels = tf.placeholder(tf.float32, shape=[batch_size, vocabulary_size])
# Used to mask uninformative tokens
train_mask = tf.placeholder(tf.float32, shape=[batch_size, vocabulary_size])
# Embedding learning layer
with tf.variable_scope('region_embeddings'):
# This is the first hidden layer and is of size vocabulary_size, 500
w1 = tf.get_variable('w1', shape=[vocabulary_size,500], initializer = tf.contrib.layers.xavier_initializer_conv2d())
b1 = tf.get_variable('b1',shape=[500], initializer = tf.random_normal_initializer(stddev=0.05))
# Compute the hidden output
h = tf.nn.relu(
tf.matmul(train_dataset,w1) + b1
)
# Linear Layer that outputs the predicted BOW representation
w = tf.get_variable('linear_w', shape=[500, vocabulary_size], initializer= tf.contrib.layers.xavier_initializer())
b = tf.get_variable('linear_b', shape=[vocabulary_size], initializer= tf.random_normal_initializer(stddev=0.05))
# Output
out =tf.matmul(h,w)+b
# Loss is the mean squared error
loss = tf.reduce_mean(tf.reduce_sum(train_mask*(out - train_labels)**2,axis=1))
# Minimizes the loss
optimizer = tf.train.AdamOptimizer(learning_rate = 0.0005).minimize(loss)
###Output
_____no_output_____
###Markdown
Running Region Embedding Learning AlgorithmHere, using the above defined operations, we run the region embedding learning algorithm for a predefined number of steps.
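The training loop below also weights the squared errors so that the rare non-zero entries of the target BOW are not swamped by the zeros. A quick sketch of the two weight values that the mask formula produces with the sizes used here (before clipping):

```python
# Rough arithmetic for the mask used below (assuming vocabulary_size ~ 20000, region_size = 10).
V, R = 20000, 10
w_zero = R / V                 # weight applied where the target BOW entry is 0
w_one = (V - R) / V + R / V    # weight applied where the target BOW entry is 1
print(w_zero, w_one)           # 0.0005 1.0
```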
###Code
num_steps = 6001
region_size = 10
test_results = []
session = tf.InteractiveSession(config=tf.ConfigProto(allow_soft_placement=True))
# Initialize TensorFlow variables
tf.global_variables_initializer().run()
print('Initialized')
average_loss = 0
# Run the algorithm for several steps
for step in range(num_steps):
if (step+1)%100==0:
print('.',end='')
if (step+1)%1000==0:
print('')
# Generate a batch of data
batch_data, batch_labels = generate_region_batch(batch_size, region_size)
# We perform this to reduce the effect of 0s in the batch labels during loss computations
# if we compute the loss naively with equal weight, the algorithm will perform poorly as
# there are more than 100 times more zeros than ones
# So we normalize the loss by giving large weight to 1s and smaller weight to 0s
mask = ((vocabulary_size-region_size)*1.0/vocabulary_size) *np.array(batch_labels) + \
(region_size*1.0/vocabulary_size)*np.ones(shape=(batch_size, vocabulary_size),dtype=np.float32)
mask = np.clip(mask,0,1.0)
feed_dict = {train_dataset : batch_data,
train_labels : batch_labels,
train_mask : mask}
# Run an optimization step
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if (step+1) % 1000 == 0:
if step > 0:
average_loss = average_loss / 1000
# The average loss is an estimate of the loss over the last 1000 batches.
print('Average loss at step %d: %f' % (step+1, average_loss))
average_loss = 0
# Save the weights, as these will be later used to
# initialize a lower layer of the classifier.
w1_arr = session.run(w1)
b1_arr = session.run(b1)
###Output
_____no_output_____
###Markdown
Sentiment Analysis with Region EmbeddingsHere we define a sentiment classifier that uses the region embeddings to output better classification results. There are three important components:* Convolution network performing convolutions on the standard BOW representation (`sentiment_analysis`)* Convolution network performing convolutions on the region embeddings (`region_embeddings`)* Final layer that combines the outputs of the above two networks to produce the final classification (`top_layer`)
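A hedged shape sketch of how the two branches are combined for the top layer (toy sizes; in the real code each branch yields one activation per region, and the region-embedding weights `w1`/`b1` are loaded from the arrays saved above and kept frozen):

```python
# Shape sketch only: concatenating the BOW-convolution features with the
# region-embedding-convolution features before the final linear layer.
import numpy as np

batch_size, num_r = 4, 10
bow_branch = np.random.rand(batch_size, num_r)     # from convolutions on the raw BOW inputs
region_branch = np.random.rand(batch_size, num_r)  # from convolutions on the frozen region embeddings
hybrid = np.concatenate([region_branch, bow_branch], axis=1)
print(hybrid.shape)                                # (4, 20) -> linear layer of shape [num_r*2, 1]
```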
###Code
tf.reset_default_graph()
# Hyperparameters
batch_size = 50
region_size = 10
# Convolution width/stride for the BOW branch and for the region-embedding branch
conv_width = vocabulary_size
reg_conv_width = 500
conv_stride = vocabulary_size
reg_conv_stride = 500
num_r = 100//region_size
# Input/output data.
train_dataset = [tf.placeholder(tf.float32, shape=[batch_size, vocabulary_size], name='train_data_%d'%ri) for ri in range(num_r)]
train_labels = tf.placeholder(tf.float32, shape=[batch_size], name='train_labels')
# Testing input/output data
valid_dataset = [tf.placeholder(tf.float32, shape=[batch_size, vocabulary_size], name='valid_data_%d'%ri) for ri in range(num_r)]
valid_labels = tf.placeholder(tf.int32, shape=[batch_size], name='valid_labels')
variables_to_init = []
with tf.variable_scope('region_embeddings', reuse=False):
# Getting the region embeddings weights
w1 = tf.get_variable('w1', shape=[vocabulary_size,500], trainable=False, initializer=tf.constant_initializer(w1_arr))
b1 = tf.get_variable('b1', shape=[500], trainable=False, initializer=tf.constant_initializer(b1_arr))
# Calculating region embeddings for all regions
concat_reg_emb = []
for t in train_dataset:
reg_emb = tf.nn.relu(
tf.matmul(t,w1) + b1
)
concat_reg_emb.append(tf.expand_dims(reg_emb,0))
# Reshaping the region embeddings to a shape [batch_size, regions, vocabulary_size]
concat_reg_emb = tf.concat(concat_reg_emb,axis=0)
concat_reg_emb = tf.transpose(concat_reg_emb, [1,0,2])
concat_reg_emb = tf.reshape(concat_reg_emb, [batch_size,-1])
# Region embeddings for valid dataset
concat_valid_reg_emb = []
for v in valid_dataset:
valid_reg_emb = tf.nn.relu(
tf.matmul(v,w1) + b1
)
concat_valid_reg_emb.append(tf.expand_dims(valid_reg_emb,0))
# Reshaping the valid region embeddings to a shape [batch_size, regions, vocabulary_size]
concat_valid_reg_emb = tf.concat(concat_valid_reg_emb,axis=0)
concat_valid_reg_emb = tf.transpose(concat_valid_reg_emb, [1,0,2]) # batch major region embeddings
concat_valid_reg_emb = tf.reshape(concat_valid_reg_emb, [batch_size,-1])
# Defining convolutions on regions (Weights and biases)
sentreg_w1 = tf.get_variable('reg_conv_w1', shape=[reg_conv_width,1,1], initializer = tf.contrib.layers.xavier_initializer_conv2d())
sentreg_b1 = tf.get_variable('reg_conv_b1',shape=[1], initializer = tf.random_normal_initializer(stddev=0.05))
variables_to_init.append(sentreg_w1)
variables_to_init.append(sentreg_b1)
# Doing convolutions on region embeddings
sentreg_h = tf.nn.relu(
tf.nn.conv1d(tf.expand_dims(concat_reg_emb,-1),filters=sentreg_w1,stride=reg_conv_stride, padding='SAME') + sentreg_b1
)
sentreg_h_valid = tf.nn.relu(
tf.nn.conv1d(tf.expand_dims(concat_valid_reg_emb,-1),filters=sentreg_w1,stride=reg_conv_stride, padding='SAME') + sentreg_b1
)
# reshape the outputs of the embeddings for the top linear layer
sentreg_h = tf.reshape(sentreg_h, [batch_size, -1])
sentreg_h_valid = tf.reshape(sentreg_h_valid, [batch_size, -1])
with tf.variable_scope('sentiment_analysis',reuse=False):
# Convolution with just BOW inputs
sent_w1 = tf.get_variable('conv_w1', shape=[conv_width,1,1], initializer = tf.contrib.layers.xavier_initializer_conv2d())
sent_b1 = tf.get_variable('conv_b1',shape=[1], initializer = tf.random_normal_initializer(stddev=0.05))
variables_to_init.append(sent_w1)
variables_to_init.append(sent_b1)
concat_train_dataset = tf.concat([tf.expand_dims(t,0) for t in train_dataset],axis=0)
concat_train_dataset = tf.transpose(concat_train_dataset, [1,0,2]) # make batch-major (axis)
concat_train_dataset = tf.reshape(concat_train_dataset, [batch_size, -1])
sent_h = tf.nn.relu(
tf.nn.conv1d(tf.expand_dims(concat_train_dataset,-1),filters=sent_w1,stride=conv_stride, padding='SAME') + sent_b1
)
# Valid data convolution
concat_valid_dataset = tf.concat([tf.expand_dims(v,0) for v in valid_dataset],axis=0)
concat_valid_dataset = tf.transpose(concat_valid_dataset, [1,0,2]) # make batch-major (axis)
concat_valid_dataset = tf.reshape(concat_valid_dataset, [batch_size, -1])
sent_h_valid = tf.nn.relu(
tf.nn.conv1d(tf.expand_dims(concat_valid_dataset,-1),filters=sent_w1,stride=conv_stride, padding='SAME') + sent_b1
)
# reshape the outputs of the embeddings for the top linear layer
sent_h = tf.reshape(sent_h, [batch_size, -1])
sent_h_valid = tf.reshape(sent_h_valid, [batch_size, -1])
with tf.variable_scope('top_layer', reuse=False):
# Linear Layer (output)
sent_w = tf.get_variable('linear_w', shape=[num_r*2, 1], initializer= tf.contrib.layers.xavier_initializer())
sent_b = tf.get_variable('linear_b', shape=[1], initializer= tf.random_normal_initializer(stddev=0.05))
variables_to_init.append(sent_w)
variables_to_init.append(sent_b)
# Here we feed in a combination of the BOW representation and region embedding
# related hidden outputs to the final classification layer
sent_hybrid_h = tf.concat([sentreg_h, sent_h],axis=1)
sent_hybrid_h_valid = tf.concat([sentreg_h_valid, sent_h_valid],axis=1)
# Output values
sent_out = tf.matmul(sent_hybrid_h,sent_w)+sent_b
tr_train_predictions = tf.nn.sigmoid(sent_out)
tf_valid_predictions = tf.nn.sigmoid(tf.matmul(sent_hybrid_h_valid, sent_w) + sent_b)
# Calculate valid accuracy
valid_pred_classes = tf.cast(tf.reshape(tf.greater(tf_valid_predictions, 0.5),[-1]),tf.int32)
# Loss computation and optimization
with tf.variable_scope('sentiment_with_region_embeddings'):
sent_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(labels=tf.expand_dims(train_labels,-1), logits=sent_out))
sent_optimizer = tf.train.AdamOptimizer(learning_rate = 0.0005).minimize(sent_loss)
num_steps = 10001
reg_valid_ot = []
with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as session:
tf.global_variables_initializer().run()
print('Initialized')
average_loss = 0
for step in range(num_steps):
print('.',end='')
if (step+1)%100==0:
print('')
batches_data, batch_labels = generate_sentiment_batch(batch_size, region_size,is_train=True)
feed_dict = {}
#print(len(batches_data))
for ri, batch in enumerate(batches_data):
feed_dict[train_dataset[ri]] = batch
feed_dict.update({train_labels : batch_labels})
_, l, tr_batch_preds = session.run([sent_optimizer, sent_loss, tr_train_predictions], feed_dict=feed_dict)
if np.random.random()<0.002:
print('\nTrain Predictions:')
print((tr_batch_preds>0.5).astype(np.int32).reshape(-1))
print(batch_labels.reshape(-1))
average_loss += l
if (step+1) % 500 == 0:
sentiment_data_index = -1
if step > 0:
average_loss = average_loss / 500
# The average loss is an estimate of the loss over the last 500 batches.
print('Average loss at step %d: %f' % (step+1, average_loss))
average_loss = 0
valid_accuracy = []
for vi in range(2):
batches_vdata, batch_vlabels = generate_sentiment_batch(batch_size, region_size,is_train=False)
feed_dict = {}
for ri, batch in enumerate(batches_vdata):
feed_dict[valid_dataset[ri]] = batch
feed_dict.update({valid_labels : batch_vlabels})
batch_pred_classes, batch_preds = session.run([valid_pred_classes,tf_valid_predictions], feed_dict=feed_dict)
valid_accuracy.append(np.mean(batch_pred_classes==batch_vlabels)*100.0)
print(batch_pred_classes.reshape(-1))
print(batch_vlabels)
print()
print('Valid accuracy: %.5f'%np.mean(valid_accuracy))
reg_valid_ot.append(np.mean(valid_accuracy))
###Output
_____no_output_____
###Markdown
Plot the ResultsHere we plot the accuracies for the standard sentiment classifier as well as the region-embedding classifier.
###Code
naive_test_accuracy = [68.0, 68.0, 72.0, 76.0, 75.0, 73.0, 76.0, 78.0, 81.0, 80.0, 80.0, 81.0, 82.0, 81.0, 80.0, 79.0, 81.0, 82.0, 80.0, 83.0]
reg_test_accuracy = [55.0, 65.0, 71.0, 72.0, 75.0, 78.0, 80.0, 81.0, 84.0, 84.0, 83.0, 84.0, 83.0, 85.0, 85.0, 86.0, 86.0, 83.0, 84.0, 85.0]
f = pylab.figure(figsize=(15,5))
pylab.plot(np.arange(500,10001,500),naive_test_accuracy, linestyle='--', linewidth = 2.0, label='BOW')
pylab.plot(np.arange(500,10001,500),reg_test_accuracy, linewidth = 2.0, label='BOW + Region Embeddings')
pylab.legend(fontsize=18)
pylab.xlabel('Iteration', fontsize=18)
pylab.ylabel('Test Accuracy', fontsize=18)
pylab.show()
###Output
_____no_output_____ |
python-asyncio/python_asyncio.ipynb | ###Markdown
Coroutines for IO-bound tasksIn this notebook, we'll weave together our new [Tweet Parser](https://github.com/tw-ddis/tweet_parser) and some python asyncio magic.Let's set up the environment and demonstrate a motivating example.
###Code
from IPython.display import HTML
HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/dD9NgzLhbBM" frameborder="0" allowfullscreen></iframe>')
%load_ext autoreload
%autoreload 2
%matplotlib inline
import itertools as it
from functools import partial
import seaborn as sns
import pandas as pd
import requests
from tweet_parser.tweet import Tweet
import sec # you will not have this python file; I use it to keep `secrets` like passwords hidden
###Output
_____no_output_____
###Markdown
We can define a few constants here that will be used throughout our example.
###Code
username = "[email protected]"
AUTH = requests.auth.HTTPBasicAuth(username, sec.GNIP_API_PW)
GNIP_BASE_URL = "https://gnip-api.twitter.com/search/30day/accounts/shendrickson/peabody.json?"
###Output
_____no_output_____
###Markdown
This function is a little helper for programmatically generating valid term queries for the Gnip API.
###Code
def gen_query_url(url, terms, max_results=100):
if isinstance(terms, str):
terms = terms.split()
return ''.join([url,
"query=",
"%20".join(terms),
"&maxResults={}".format(max_results)])
###Output
_____no_output_____
###Markdown
Let's say you want to get a collection of tweets matching some criteria - this is an extremely common task. The process might look something like this:
###Code
query = gen_query_url(GNIP_BASE_URL, ["just", "bought", "a", "house"])
print(query)
import requests
def sync_tweets(query):
return requests.get(url=query, auth=AUTH).json()['results']
%%time
tweets = [Tweet(i) for i in sync_tweets(query)]
print(tweets[0].text)
###Output
_____no_output_____
###Markdown
Easy peasy. What if you have a bunch of queries to run (this is a bit contrived, but it serves a purpose)? You might define all your queries as follows and run a for loop to query each of them.
###Code
formed_query = partial(gen_query_url, url=GNIP_BASE_URL, max_results=100)
queries = [formed_query(terms=[i]) for i in ["eclipse", "nuclear", "korea", "cats", "ai", "memes", "googlebro"]]
queries
%%time
tweets = [Tweet(i) for i in it.chain.from_iterable([sync_tweets(query) for query in queries])]
###Output
_____no_output_____
###Markdown
Works great, but notice that the run time scales roughly linearly with the number of queries. Given that this is a trivial amount of _computation_ and a task that is almost entirely taken up by system calls / IO, it's a perfect opportunity to add parallelism to the mix and speed it up.IO-bound parallelism is commonly handled with a technique called asynchronous programming, which introduces the semantics of _coroutine_, _event loop_, _user-level thread_, _task_, _future_, etc. In modern python (>3.5), the language has builtins for using coroutines, exposed via the `asyncio` module and the keywords `async` and `await`. Several libraries make use of coroutines internally, such as `aiohttp`, which is essentially a coroutine version of `requests`.Let's look at what the basic coroutine version of our simple example above would look like in aiohttp:
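Before the aiohttp version, here is a minimal, self-contained sketch of the `async`/`await` mechanics using only the standard library (the sleeps stand in for network latency; this is not the Gnip code):

```python
# Three "requests" of ~1 second each complete concurrently in ~1 second total.
import asyncio

async def fake_request(i):
    await asyncio.sleep(1)              # stands in for waiting on the network
    return "response {}".format(i)

async def main():
    return await asyncio.gather(*(fake_request(i) for i in range(3)))

loop = asyncio.get_event_loop()
print(loop.run_until_complete(main()))
# Note: in environments that already run an event loop (e.g. newer Jupyter kernels),
# use `await main()` directly instead of run_until_complete.
```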
###Code
import asyncio
import aiohttp
import async_timeout
async def fetch_tweets_coroutine(url):
async with aiohttp.ClientSession() as session:
async with session.get(url, auth=aiohttp.BasicAuth(AUTH.username, AUTH.password)) as response:
return await response.json()
%%time
loop = asyncio.get_event_loop()
tweets = [Tweet(i) for i in loop.run_until_complete(fetch_tweets_coroutine(query))['results']]
print(tweets[0].user_id, tweets[0].text)
###Output
_____no_output_____
###Markdown
It's a lot more code than our simple requests example and isn't any faster, though this is expected since the time is dominated by the round-trip response time to and from Gnip. Let's try again with our longer set of queries, redefining the methods to handle this more naturally.
###Code
async def fetch_tweets_fancy(session, url):
async with session.get(url, auth=aiohttp.BasicAuth(AUTH.username, AUTH.password)) as response:
# print("collecting query: {}".format(url))
_json = await response.json()
return [Tweet(t) for t in _json["results"]]
async def collect_queries(queries):
tasks = []
async with aiohttp.ClientSession() as session:
for query in queries:
task = asyncio.ensure_future(fetch_tweets_fancy(session, query))
tasks.append(task)
responses = await asyncio.gather(*tasks)
return responses
formed_query = partial(gen_query_url, url=GNIP_BASE_URL, max_results=100)
queries = [formed_query(terms=[i]) for i in ["eclipse", "nuclear", "korea", "cats", "ai", "memes"]]
%%time
loop = asyncio.get_event_loop()
future = asyncio.ensure_future(collect_queries(queries))
res = list(it.chain.from_iterable(loop.run_until_complete(future)))
print(res[0].text)
print(len(res))
###Output
_____no_output_____ |
Redshift_Efficiency_Study/BGS_z-efficiency_uniform-sampling.ipynb | ###Markdown
BGS Signal-to-Noise Ratio and Redshift EfficiencyThe goal of this notebook is to assess the signal-to-noise ratio and redshift efficiency of BGS targets observed in "nominal" observing conditions (which are defined [here](https://github.com/desihub/desisurvey/blob/master/py/desisurvey/data/config.yaml#L102) and discussed [here](https://github.com/desihub/desisurvey/issues/77), among other places). Specifically, the nominal BGS observing conditions we adopt (note the 5-minute exposure time is with the moon down!) are:```python{'AIRMASS': 1.0, 'EXPTIME': 300, 'SEEING': 1.1, 'MOONALT': -60, 'MOONFRAC': 0.0, 'MOONSEP': 180}```During the survey itself, observations with the moon up (i.e., during bright time) will be obtained with longer exposure times according to the bright-time exposure-time model (see [here](https://github.com/desihub/surveysim/tree/master/doc/nb)).Because we fix the observing conditions, we only consider how redshift efficiency depends on galaxy properties (apparent magnitude, redshift, 4000-Å break, etc.). However, note that the code is structured such that we *could* (now or in the future) explore variations in seeing, exposure time, and lunar parameters.For code to generate large numbers of spectra over significant patches of sky and to create a representative DESI dataset (with parallelism), see `desitarget/bin/select_mock_targets` and `desitarget.mock.build.targets_truth`.Finally, note that the various python Classes instantiated here (documented in `desitarget.mock.mockmaker`) are easily extensible to other mock catalogs and galaxy/QSO/stellar physics.
###Code
import os
import numpy as np
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt
from astropy.table import Table, vstack
from astropy.io import fits
from desispec.io.util import write_bintable
from desiutil.log import get_logger, DEBUG
log = get_logger()
from desitarget.cuts import isBGS_bright, isBGS_faint
## Following not yet available in the master branch
from desitarget.mock.mockmaker import BGSMaker
from desitarget.mock.mockmaker import SKYMaker
import multiprocessing
nproc = multiprocessing.cpu_count() // 2
import seaborn as sns
sns.set(style='white', font_scale=1.1, palette='deep')
# Specify if using this from command line as a .py or as an ipynb
using_py = False
class arg:
pass
simnames = ['sim46']#['sim13','sim14','sim16','sim17','sim18'] #'sim12',
if using_py:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--sim', type=int, default=None, help='Simulation number (see documentation)')
parser.add_argument('--part', type=str, default=None, help='Which part of the simulation to run. Options are all, newexp, group, zfit')
args = parser.parse_args()
if args.sim is None:
parser.print_help()
sys.exit(1)
else:
%matplotlib inline
%load_ext autoreload
%autoreload 2
args = arg()
args.sim = 1
args.part = 'all'
###Output
_____no_output_____
###Markdown
Establish the I/O path, random seed, and path to the dust maps and desired healpixel.
###Code
simdir = os.path.join(os.getenv('DESI_ROOT'), 'spectro', 'sim', 'bgs', 'kremin', 'flat_priors')
if not os.path.exists(simdir):
os.makedirs(simdir)
seed = 626
###Output
_____no_output_____
###Markdown
All or none of the output files can be overwritten using these keywords.
###Code
overwrite_spectra = True
#overwrite_templates = overwrite_spectra
overwrite_redshifts = True
overwrite_results = True
###Output
_____no_output_____
###Markdown
Initialize random state
###Code
rand = np.random.RandomState(seed)
###Output
_____no_output_____
###Markdown
Set up the simulation parameters.Here we use the mock to capture the correct distribution of apparent magnitudes, galaxy properties, and redshifts.Note that if `use_mock=False` then *rmagmin*, *rmagmax*, *zmin*, and *zmax* are required. For example, here's another possible simulation of 1000 spectra in which the magnitude (r=19.5) and redshift (z=0.2) are held fixed while moonfrac and moonsep are varied (as well as intrinsic galaxy properties):```pythonsim2 = dict(suffix='sim02', use_mock=False, nsim=10, nspec=100, seed=22, zmin=0.2, zmax=0.2, rmagmin=19.5, rmagmax=19.5, moonfracmin=0.0, moonfracmax=1.0, moonsepmin=0.0, moonsepmax=120.0, )```
###Code
from desistudy import get_predefined_sim_dict, get_predefined_obs_dict
all_sims = []
all_obsconds = []
for simname in simnames:
all_sims.append(get_predefined_sim_dict(simname))
all_obsconds.append(get_predefined_obs_dict(simname))
print(all_obsconds)
sims = np.atleast_1d(all_sims)
conditions = np.atleast_1d(all_obsconds)
###Output
[{'AIRMASS': 1.0, 'SEEING': 1.1, 'MOONALT': 90, 'MOONSEP': 20, 'EXPTIME': 300, 'MOONFRAC': 0.99}]
###Markdown
Generate Spectra
###Code
from desistudy import bgs_sim_spectra
if overwrite_spectra:
for sim,cond in zip(sims,conditions):
log.info("\n\n\n\nNow performing sim {}".format(sim['suffix']))
bgs_sim_spectra(sim, cond, simdir, verbose=False, overwrite=overwrite_spectra)
log.info("\n\nFinished simulating templates\n\n")
###Output
INFO:<ipython-input-16-068526d5d605>:5:<module>:
Now performing sim sim46
INFO:io.py:1013:read_basis_templates: Reading /global/project/projectdirs/desi/spectro/templates/basis_templates/v2.5/bgs_templates_v2.1.fits metadata.
INFO:io.py:1025:read_basis_templates: Reading /global/project/projectdirs/desi/spectro/templates/basis_templates/v2.5/bgs_templates_v2.1.fits
Writing /global/project/projectdirs/desi/spectro/sim/bgs/kremin/flat_priors/sim46/bgs-sim46-simdata.fits
{'AIRMASS': 1.0, 'EXPTIME': 300.0, 'MOONALT': 90.0, 'MOONFRAC': 0.99000001, 'MOONSEP': 20.0, 'SEEING': 1.1}
###Markdown
Fit the redshifts.This step took ~1.8 seconds per spectrum, ~3 minutes per 100 spectra, or ~30 minutes for all 1000 spectra with my 4-core laptop.
###Code
from desistudy import bgs_redshifts
if overwrite_redshifts:
for sim in sims:
log.info("\n\n\n\nNow performing sim {}".format(sim['suffix']))
bgs_redshifts(sim, simdir=simdir, overwrite=overwrite_redshifts)
log.info("\n\n\n\n\nFinished redshift fitting\n\n\n")
###Output
INFO:<ipython-input-13-3473eae6755c>:5:<module>:
Now performing sim sim46
Running on a NERSC login node- reducing number of processes to 4
Running with 4 processes
Loading targets...
Read and distribution of 800 targets: 14.4 seconds
DEBUG: Using default redshift range 0.0050-1.6988 for rrtemplate-galaxy.fits
DEBUG: Using default redshift range 0.5000-3.9956 for rrtemplate-qso.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-A.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-B.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-Carbon.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-F.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-G.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-K.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-Ldwarf.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-M.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-WD.fits
Read and broadcast of 11 templates: 0.1 seconds
Rebinning templates: 12.1 seconds
Computing redshifts
Scanning redshifts for template GALAXY
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 411.5 seconds
Scanning redshifts for template QSO
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 129.2 seconds
Scanning redshifts for template STAR:::A
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 15.9 seconds
Scanning redshifts for template STAR:::B
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 16.0 seconds
Scanning redshifts for template STAR:::CARBON
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 4.9 seconds
Scanning redshifts for template STAR:::F
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 15.8 seconds
Scanning redshifts for template STAR:::G
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 16.2 seconds
Scanning redshifts for template STAR:::K
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 15.8 seconds
Scanning redshifts for template STAR:::LDWARF
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 4.9 seconds
Scanning redshifts for template STAR:::M
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 15.5 seconds
Scanning redshifts for template STAR:::WD
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 15.7 seconds
Finding best fits for template GALAXY
Finished in: 75.5 seconds
Finding best fits for template QSO
Finished in: 17.8 seconds
Finding best fits for template STAR:::A
Finished in: 19.4 seconds
Finding best fits for template STAR:::B
Finished in: 18.9 seconds
Finding best fits for template STAR:::CARBON
Finished in: 4.4 seconds
Finding best fits for template STAR:::F
Finished in: 19.4 seconds
Finding best fits for template STAR:::G
Finished in: 19.4 seconds
Finding best fits for template STAR:::K
Finished in: 18.6 seconds
Finding best fits for template STAR:::LDWARF
Finished in: 4.4 seconds
Finding best fits for template STAR:::M
Finished in: 19.1 seconds
Finding best fits for template STAR:::WD
Finished in: 20.6 seconds
Computing redshifts took: 921.1 seconds
Writing zbest data took: 0.1 seconds
Total run time: 947.8 seconds
Running on a NERSC login node- reducing number of processes to 4
Running with 4 processes
Loading targets...
Read and distribution of 800 targets: 14.9 seconds
DEBUG: Using default redshift range 0.0050-1.6988 for rrtemplate-galaxy.fits
DEBUG: Using default redshift range 0.5000-3.9956 for rrtemplate-qso.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-A.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-B.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-Carbon.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-F.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-G.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-K.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-Ldwarf.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-M.fits
DEBUG: Using default redshift range -0.0020-0.0020 for rrtemplate-star-WD.fits
Read and broadcast of 11 templates: 0.1 seconds
Rebinning templates: 12.6 seconds
Computing redshifts
Scanning redshifts for template GALAXY
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 410.7 seconds
Scanning redshifts for template QSO
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 133.6 seconds
Scanning redshifts for template STAR:::A
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 15.3 seconds
Scanning redshifts for template STAR:::B
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 15.2 seconds
Scanning redshifts for template STAR:::CARBON
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 4.9 seconds
Scanning redshifts for template STAR:::F
Progress: 0 %
Progress: 10 %
Progress: 20 %
Progress: 30 %
Progress: 40 %
Progress: 50 %
Progress: 60 %
Progress: 70 %
Progress: 80 %
Progress: 90 %
Progress: 100 %
Finished in: 15.4 seconds
###Markdown
Gather the results.
###Code
from desistudy import bgs_gather_results
if overwrite_results:
for sim in sims:
log.info("\n\n\n\nNow performing sim {}".format(sim['suffix']))
bgs_gather_results(sim, simdir=simdir, overwrite=overwrite_results)
log.info("Finished gathering results")
###Output
INFO:<ipython-input-14-299839a56a8e>:5:<module>:
Now performing sim sim46
INFO:desistudy.py:209:bgs_gather_results: Reading /global/project/projectdirs/desi/spectro/sim/bgs/kremin/flat_priors/sim46/bgs-sim46-000-true.fits
INFO:desistudy.py:223:bgs_gather_results: Reading /global/project/projectdirs/desi/spectro/sim/bgs/kremin/flat_priors/sim46/bgs-sim46-000-zbest.fits
INFO:desistudy.py:235:bgs_gather_results: Reading /global/project/projectdirs/desi/spectro/sim/bgs/kremin/flat_priors/sim46/bgs-sim46-000.fits
###Markdown
Do everything in one cell
###Code
# from desistudy import bgs_sim_spectra
# from desistudy import bgs_redshifts
# from desistudy import bgs_gather_results
# for sim,cond in zip(sims,conditions):
# log.info("\n\n\n\nNow performing sim {}".format(sim['suffix']))
# if overwrite_spectra:
# bgs_sim_spectra(sim, cond, simdir, verbose=False, overwrite=overwrite_spectra)
# log.info("Finished simulating templates")
# if overwrite_redshifts:
# bgs_redshifts(sim, simdir=simdir, overwrite=overwrite_redshifts)
# log.info("Finished redshift fitting")
# if overwrite_results:
# bgs_gather_results(sim, simdir=simdir, overwrite=overwrite_results)
# log.info("Finished gathering results")
###Output
_____no_output_____ |
notebooks/community/neo4j/graph_paysim.ipynb | ###Markdown
Run in Colab View on GitHub OverviewIn this notebook, you will learn how to use Neo4j AuraDS to create graph features. You'll then use those new features to solve a classification problem with Vertex AI. DatasetThis notebook uses a version of the PaySim dataset that has been modified to work with Neo4j's graph database. PaySim is a synthetic fraud dataset. The goal is to identify whether or not a given transaction constitutes fraud. The [original version of the dataset](https://github.com/EdgarLopezPhD/PaySim) has tabular data.Neo4j has worked on a modified version that generates a graph dataset [here](https://github.com/voutilad/PaySim). We've pregenerated a copy of that dataset that you can grab [here](https://storage.googleapis.com/neo4j-datasets/paysim.dump). You'll want to download that dataset and then upload it to Neo4j AuraDS. AuraDS is a graph data science tool that is offered as a service on GCP. Instructions on signing up and uploading the dataset are available [here](https://github.com/neo4j-partners/aurads-paysim). CostsThis tutorial uses billable components of Google Cloud:* Cloud Storage* Vertex AILearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. Setup Set up your development environmentWe suggest you use Colab for this notebook. Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Install additional PackagesFirst off, you'll also need to install a few packages.
###Code
!pip install --quiet --upgrade neo4j
!pip install --quiet google-cloud-storage
!pip install --quiet google.cloud.aiplatform
###Output
_____no_output_____
###Markdown
(Colab only) Restart the kernelAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages. When you run this, you may get a notification that the kernel crashed. You can disregard that.
###Code
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Working with Neo4j Define Neo4J related variablesYou'll need to enter the credentials from your AuraDS instance below. You can get your credentials by following this [walkthrough](https://github.com/neo4j-partners/aurads-paysim).The "DB_NAME" is always neo4j for AuraDS. It is different from the name you gave your database tenant in the AuraDS console.
###Code
DB_URL = "neo4j+s://XXXXX.databases.neo4j.io"
DB_USER = "neo4j"
DB_PASS = "YOUR PASSWORD"
DB_NAME = "neo4j"
###Output
_____no_output_____
###Markdown
In this section we're going to connect to Neo4j and look around the database. We're going to generate some new features in the dataset using Neo4j's Graph Data Science library. Finally, we'll load the data into a Pandas dataframe so that it's all ready to put into GCP Feature Store. Exploring the database
###Code
import pandas as pd
from neo4j import GraphDatabase
driver = GraphDatabase.driver(DB_URL, auth=(DB_USER, DB_PASS))
###Output
_____no_output_____
###Markdown
Now, let's explore the data in the database a bit to understand what we have to work with.
###Code
# node labels
with driver.session(database=DB_NAME) as session:
result = session.read_transaction(
lambda tx: tx.run(
"""
CALL db.labels() YIELD label
CALL apoc.cypher.run('MATCH (:`'+label+'`) RETURN count(*) as freq', {})
YIELD value
RETURN label, value.freq AS freq
"""
).data()
)
df = pd.DataFrame(result)
display(df)
# relationship types
with driver.session(database=DB_NAME) as session:
result = session.read_transaction(
lambda tx: tx.run(
"""
CALL db.relationshipTypes() YIELD relationshipType as type
CALL apoc.cypher.run('MATCH ()-[:`'+type+'`]->() RETURN count(*) as freq', {})
YIELD value
RETURN type AS relationshipType, value.freq AS freq
ORDER by freq DESC
"""
).data()
)
df = pd.DataFrame(result)
display(df)
# transaction types
with driver.session(database=DB_NAME) as session:
result = session.read_transaction(
lambda tx: tx.run(
"""
MATCH (t:Transaction)
WITH sum(t.amount) AS globalSum, count(t) AS globalCnt
WITH *, 10^3 AS scaleFactor
UNWIND ['CashIn', 'CashOut', 'Payment', 'Debit', 'Transfer'] AS txType
CALL apoc.cypher.run('MATCH (t:' + txType + ')
RETURN sum(t.amount) as txAmount, count(t) AS txCnt', {})
YIELD value
RETURN txType,value.txAmount AS TotalMarketValue
"""
).data()
)
df = pd.DataFrame(result)
display(df)
###Output
_____no_output_____
###Markdown
Create a New Feature with a Graph Embedding using Neo4jFirst we're going to create an in-memory graph representation of the data in Neo4j Graph Data Science (GDS). Note: if you get an error saying the graph already exists, that's probably because you ran this code before. You can destroy it using the command in the cleanup section of this notebook.
###Code
with driver.session(database=DB_NAME) as session:
result = session.read_transaction(
lambda tx: tx.run(
"""
CALL gds.graph.create.cypher('client_graph',
'MATCH (c:Client) RETURN id(c) as id, c.num_transactions as num_transactions, c.total_transaction_amnt as total_transaction_amnt, c.is_fraudster as is_fraudster',
'MATCH (c:Client)-[:PERFORMED]->(t:Transaction)-[:TO]->(c2:Client) return id(c) as source, id(c2) as target, sum(t.amount) as amount, "TRANSACTED_WITH" as type ')
"""
).data()
)
df = pd.DataFrame(result)
display(df)
###Output
_____no_output_____
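###Markdown
As an optional sanity check (a small sketch added here, not part of the original walkthrough), you can ask the GDS graph catalog to list the projection and confirm `client_graph` was created with the node and relationship counts you expect. `gds.graph.list` is a standard catalog procedure; the call below follows the same driver pattern as the other cells.
###Code
with driver.session(database=DB_NAME) as session:
    graph_list = session.read_transaction(
        lambda tx: tx.run(
            """
            CALL gds.graph.list('client_graph')
            YIELD graphName, nodeCount, relationshipCount
            RETURN graphName, nodeCount, relationshipCount
            """
        ).data()
    )
# Show the projection's name and size.
display(pd.DataFrame(graph_list))
###Output
_____no_output_____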
###Markdown
Now we can generate an embedding from that graph. This is a new feature we can use in our predictions. We're using FastRP, which is a more fully featured, higher-performance alternative to Node2Vec. You can learn more about it [here](https://neo4j.com/docs/graph-data-science/current/algorithms/fastrp/).
###Code
with driver.session(database=DB_NAME) as session:
result = session.read_transaction(
lambda tx: tx.run(
"""
CALL gds.fastRP.mutate('client_graph',{
relationshipWeightProperty:'amount',
iterationWeights: [0.0, 1.00, 1.00, 0.80, 0.60],
featureProperties: ['num_transactions', 'total_transaction_amnt'],
propertyRatio: 0.25,
nodeSelfInfluence: 0.15,
embeddingDimension: 16,
randomSeed: 1,
mutateProperty:'embedding'
})
"""
).data()
)
df = pd.DataFrame(result)
display(df)
###Output
_____no_output_____
###Markdown
Finally, we dump that out to a dataframe.
###Code
with driver.session(database=DB_NAME) as session:
result = session.read_transaction(
lambda tx: tx.run(
"""
CALL gds.graph.streamNodeProperties
('client_graph', ['embedding', 'num_transactions', 'total_transaction_amnt', 'is_fraudster'])
YIELD nodeId, nodeProperty, propertyValue
RETURN nodeId, nodeProperty, propertyValue
"""
).data()
)
df = pd.DataFrame(result)
df.head()
###Output
_____no_output_____
###Markdown
Now we need to take that dataframe and shape it into something that better represents our classification problem.
###Code
x = df.pivot(index="nodeId", columns="nodeProperty", values="propertyValue")
x = x.reset_index()
x.columns.name = None
x.head()
###Output
_____no_output_____
###Markdown
is_fraudster will have a value of 0 or 1 if populated. If the value is -9223372036854775808, the row is unlabeled, so we're going to drop it.
###Code
x = x.loc[x["is_fraudster"] != -9223372036854775808]
x.head()
###Output
_____no_output_____
###Markdown
Note that the embedding column holds an array in each row. To make this dataset more consumable, we should flatten that out into multiple individual features: embedding_0, embedding_1, ... embedding_n.
###Code
FEATURES_FILENAME = "features.csv"
embeddings = pd.DataFrame(x["embedding"].values.tolist()).add_prefix("embedding_")
merged = x.drop(columns=["embedding"]).merge(
embeddings, left_index=True, right_index=True
)
features_df = merged.drop(
columns=["is_fraudster", "num_transactions", "total_transaction_amnt"]
)
train_df = merged.drop(columns=["nodeId"])
features_df.to_csv(FEATURES_FILENAME, index=False)
###Output
_____no_output_____
###Markdown
This dataset is too small to use with Vertex AI AutoML Tables. For the sake of demonstration, we're going to repeat it a few times. Don't do this in the real world.
###Code
TRAINING_FILENAME = "train.csv"
pd.concat([train_df for i in range(10)]).to_csv(TRAINING_FILENAME, index=False)
###Output
_____no_output_____
###Markdown
And that's it! We now have a tidy dataset that we can use with GCP Vertex AI. Using Vertex AI with Neo4j data Define Google Cloud variablesYou'll need to set a few variables for your GCP environment. PROJECT_ID and STORAGE_BUCKET are most critical. The others will probably work with the defaults given.
###Code
# Edit these variables!
PROJECT_ID = "YOUR-PROJECT-ID"
STORAGE_BUCKET = "YOUR-BUCKET-NAME"
# You can leave these defaults
REGION = "us-central1"
STORAGE_PATH = "paysim"
EMBEDDING_DIMENSION = 16
FEATURESTORE_ID = "paysim"
ENTITY_NAME = "payer"
import os
os.environ["GCLOUD_PROJECT"] = PROJECT_ID
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account
###Code
try:
from google.colab import auth as google_auth
google_auth.authenticate_user()
except:
pass
###Output
_____no_output_____
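###Markdown
If you are not running in Colab, the cell above silently does nothing. One common alternative for local environments (an assumption about your setup, not something this notebook requires) is to create Application Default Credentials with the gcloud CLI; the command is left commented out because it prompts interactively.
###Code
# Assumption: the gcloud CLI is installed locally. Skip this on Colab.
# !gcloud auth application-default login
###Output
_____no_output_____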
###Markdown
Upload to a GCP Cloud Storage BucketTo get the data into Vertex AI, we must first put it in a bucket as a CSV.
###Code
from google.cloud import storage
client = storage.Client()
bucket = client.bucket(STORAGE_BUCKET)
client.create_bucket(bucket)
# Upload our files to that bucket
for filename in [FEATURES_FILENAME, TRAINING_FILENAME]:
upload_path = os.path.join(STORAGE_PATH, filename)
blob = bucket.blob(upload_path)
blob.upload_from_filename(filename)
###Output
_____no_output_____
###Markdown
Train and deploy a model on GCPWe'll use the engineered features to train an AutoML Tables model, then deploy it to an endpoint.
###Code
from google.cloud import aiplatform
aiplatform.init(project=PROJECT_ID, location=REGION)
dataset = aiplatform.TabularDataset.create(
display_name="paysim",
gcs_source=os.path.join("gs://", STORAGE_BUCKET, STORAGE_PATH, TRAINING_FILENAME),
)
dataset.wait()
print(f'\tDataset: "{dataset.display_name}"')
print(f'\tname: "{dataset.resource_name}"')
embedding_column_names = ["embedding_{}".format(i) for i in range(EMBEDDING_DIMENSION)]
other_column_names = ["num_transactions", "total_transaction_amnt"]
all_columns = other_column_names + embedding_column_names
column_specs = {column: "numeric" for column in all_columns}
job = aiplatform.AutoMLTabularTrainingJob(
display_name="train-paysim-automl-1",
optimization_prediction_type="classification",
column_specs=column_specs,
)
model = job.run(
dataset=dataset,
target_column="is_fraudster",
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
model_display_name="paysim-prediction-model",
disable_early_stopping=False,
budget_milli_node_hours=1000,
)
endpoint = model.deploy(machine_type="n1-standard-4")
###Output
_____no_output_____
###Markdown
Loading Data into GCP Feature StoreIn this section, we'll take our dataframe with newly engineered features and load that into GCP feature store.
###Code
from google.cloud.aiplatform_v1 import FeaturestoreServiceClient
api_endpoint = "{}-aiplatform.googleapis.com".format(REGION)
fs_client = FeaturestoreServiceClient(client_options={"api_endpoint": api_endpoint})
resource_path = fs_client.common_location_path(PROJECT_ID, REGION)
fs_path = fs_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
entity_path = fs_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, ENTITY_NAME
)
###Output
_____no_output_____
###Markdown
First, let's check if the Feature Store already exists
###Code
from grpc import StatusCode
def check_has_resource(callable):
has_resource = False
try:
callable()
has_resource = True
except Exception as e:
if (
not hasattr(e, "grpc_status_code")
or e.grpc_status_code != StatusCode.NOT_FOUND
):
raise e
return has_resource
feature_store_exists = check_has_resource(
lambda: fs_client.get_featurestore(name=fs_path)
)
from google.cloud.aiplatform_v1.types import entity_type as entity_type_pb2
from google.cloud.aiplatform_v1.types import feature as feature_pb2
from google.cloud.aiplatform_v1.types import featurestore as featurestore_pb2
from google.cloud.aiplatform_v1.types import \
featurestore_service as featurestore_service_pb2
from google.cloud.aiplatform_v1.types import io as io_pb2
if not feature_store_exists:
create_lro = fs_client.create_featurestore(
featurestore_service_pb2.CreateFeaturestoreRequest(
parent=resource_path,
featurestore_id=FEATURESTORE_ID,
featurestore=featurestore_pb2.Featurestore(
online_serving_config=featurestore_pb2.Featurestore.OnlineServingConfig(
fixed_node_count=1
),
),
)
)
print(create_lro.result())
entity_type_exists = check_has_resource(
lambda: fs_client.get_entity_type(name=entity_path)
)
if not entity_type_exists:
users_entity_type_lro = fs_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=fs_path,
entity_type_id=ENTITY_NAME,
entity_type=entity_type_pb2.EntityType(
description="Main entity type",
),
)
)
print(users_entity_type_lro.result())
feature_requests = [
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.DOUBLE,
description="Embedding {} from Neo4j".format(i),
),
feature_id="embedding_{}".format(i),
)
for i in range(EMBEDDING_DIMENSION)
]
create_features_lro = fs_client.batch_create_features(
parent=entity_path,
requests=feature_requests,
)
print(create_features_lro.result())
feature_specs = [
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="embedding_{}".format(i)
)
for i in range(EMBEDDING_DIMENSION)
]
from google.protobuf.timestamp_pb2 import Timestamp
feature_time = Timestamp()
feature_time.GetCurrentTime()
feature_time.nanos = 0
import_request = fs_client.import_feature_values(
featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=entity_path,
csv_source=io_pb2.CsvSource(
gcs_source=io_pb2.GcsSource(
uris=[
os.path.join(
"gs://", STORAGE_BUCKET, STORAGE_PATH, FEATURES_FILENAME
)
]
)
),
entity_id_field="nodeId",
feature_specs=feature_specs,
worker_count=1,
feature_time=feature_time,
)
)
print(import_request.result())
###Output
_____no_output_____
###Markdown
Sending a prediction using features from the feature store
###Code
from google.cloud.aiplatform_v1 import FeaturestoreOnlineServingServiceClient
data_client = FeaturestoreOnlineServingServiceClient(
client_options={"api_endpoint": api_endpoint}
)
# Retrieve Neo4j embeddings from feature store
from google.cloud.aiplatform_v1.types import FeatureSelector, IdMatcher
from google.cloud.aiplatform_v1.types import \
featurestore_online_service as featurestore_online_service_pb2
feature_selector = FeatureSelector(
id_matcher=IdMatcher(
ids=["embedding_{}".format(i) for i in range(EMBEDDING_DIMENSION)]
)
)
fs_features = data_client.read_feature_values(
featurestore_online_service_pb2.ReadFeatureValuesRequest(
entity_type=entity_path,
entity_id="5",
feature_selector=feature_selector,
)
)
saved_embeddings = dict(
zip(
(fd.id for fd in fs_features.header.feature_descriptors),
(str(d.value.double_value) for d in fs_features.entity_view.data),
)
)
# Combine with other features. These might be sourced per transaction
all_features = {"num_transactions": "80", "total_dollar_amnt": "7484459.618641878"}
all_features.update(saved_embeddings)
instances = [{key: str(value) for key, value in all_features.items()}]
# Send a prediction
endpoint.predict(instances=instances)
###Output
_____no_output_____
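###Markdown
The call above returns a Prediction object from the Vertex AI SDK. Below is a minimal sketch of inspecting it; the per-instance dictionaries with 'classes' and 'scores' keys are the usual AutoML tabular classification response shape and are stated here as an assumption rather than something this notebook guarantees.
###Code
# Capture the response so we can look inside it.
response = endpoint.predict(instances=instances)
print("Deployed model:", response.deployed_model_id)
for pred in response.predictions:
    # Expected (assumed) shape: {'classes': [...], 'scores': [...]}
    print(pred)
###Output
_____no_output_____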
###Markdown
Cleanup Neo4j cleanupTo delete the Graph Data Science representation of the graph, run this:
###Code
with driver.session(database=DB_NAME) as session:
result = session.read_transaction(
lambda tx: tx.run(
"""
CALL gds.graph.drop('client_graph')
"""
).data()
)
###Output
_____no_output_____
###Markdown
Google Cloud cleanupDelete the feature store and turn down the endpoint
###Code
fs_client.delete_featurestore(
request=featurestore_service_pb2.DeleteFeaturestoreRequest(
name=fs_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
force=True,
)
).result()
endpoint.delete()
###Output
_____no_output_____
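###Markdown
The walkthrough also created a Cloud Storage bucket, a Vertex AI dataset, and a trained model. Below is a minimal cleanup sketch for those; it assumes you no longer need them and that the bucket only holds the two CSVs uploaded earlier.
###Code
# Delete the uploaded CSVs along with the bucket itself.
bucket.delete(force=True)
# With the endpoint already deleted above, the model and dataset can be removed too.
model.delete()
dataset.delete()
###Output
_____no_output_____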
###Markdown
Run in Colab View on GitHub OverviewIn this notebook, you will learn how to use Neo4j AuraDS to create graph features. You'll then use those new features to solve a classification problem with Vertex AI. DatasetThis notebook uses a version of the PaySim dataset that has been modified to work with Neo4j's graph database. PaySim is a synthetic fraud dataset. The goal is to identify whether or not a given transaction constitutes fraud. The [original version of the dataset](https://github.com/EdgarLopezPhD/PaySim) has tabular data.Neo4j has worked on a modified version that generates a graph dataset [here](https://github.com/voutilad/PaySim). We've pregenerated a copy of that dataset that you can grab [here](https://storage.googleapis.com/neo4j-datasets/paysim.dump). You'll want to download that dataset and then upload it to Neo4j AuraDS. AuraDS is a graph data science tool that is offered as a service on GCP. Instructions on signing up and uploading the dataset are available [here](https://github.com/neo4j-partners/aurads-paysim). CostsThis tutorial uses billable components of Google Cloud:* Cloud Storage* Vertex AILearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. Setup Set up your development environmentWe suggest you use Colab for this notebook. Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Install additional PackagesFirst off, you'll also need to install a few packages.
###Code
!pip install --quiet --upgrade neo4j
!pip install --quiet google-cloud-storage
!pip install --quiet google.cloud.aiplatform
###Output
_____no_output_____
###Markdown
(Colab only) Restart the kernelAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages. When you run this, you may get a notification that the kernel crashed. You can disregard that.
###Code
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Working with Neo4j Define Neo4J related variablesYou'll need to enter the credentials from your AuraDS instance below. You can get your credentials by following this [walkthrough](https://github.com/neo4j-partners/aurads-paysim).The "DB_NAME" is always neo4j for AuraDS. It is different from the name you gave your database tenant in the AuraDS console.
###Code
DB_URL = "neo4j+s://XXXXX.databases.neo4j.io"
DB_USER = "neo4j"
DB_PASS = "YOUR PASSWORD"
DB_NAME = "neo4j"
###Output
_____no_output_____
###Markdown
In this section we're going to connect to Neo4j and look around the database. We're going to generate some new features in the dataset using Neo4j's Graph Data Science library. Finally, we'll load the data into a Pandas dataframe so that it's all ready to put into GCP Feature Store. Exploring the database
###Code
import pandas as pd
from neo4j import GraphDatabase
driver = GraphDatabase.driver(DB_URL, auth=(DB_USER, DB_PASS))
###Output
_____no_output_____
###Markdown
Now, let's explore the data in the database a bit to understand what we have to work with.
###Code
# node labels
with driver.session(database=DB_NAME) as session:
result = session.read_transaction(
lambda tx: tx.run(
"""
CALL db.labels() YIELD label
CALL apoc.cypher.run('MATCH (:`'+label+'`) RETURN count(*) as freq', {})
YIELD value
RETURN label, value.freq AS freq
"""
).data()
)
df = pd.DataFrame(result)
display(df)
# relationship types
with driver.session(database=DB_NAME) as session:
result = session.read_transaction(
lambda tx: tx.run(
"""
CALL db.relationshipTypes() YIELD relationshipType as type
CALL apoc.cypher.run('MATCH ()-[:`'+type+'`]->() RETURN count(*) as freq', {})
YIELD value
RETURN type AS relationshipType, value.freq AS freq
ORDER by freq DESC
"""
).data()
)
df = pd.DataFrame(result)
display(df)
# transaction types
with driver.session(database=DB_NAME) as session:
result = session.read_transaction(
lambda tx: tx.run(
"""
MATCH (t:Transaction)
WITH sum(t.amount) AS globalSum, count(t) AS globalCnt
WITH *, 10^3 AS scaleFactor
UNWIND ['CashIn', 'CashOut', 'Payment', 'Debit', 'Transfer'] AS txType
CALL apoc.cypher.run('MATCH (t:' + txType + ')
RETURN sum(t.amount) as txAmount, count(t) AS txCnt', {})
YIELD value
RETURN txType,value.txAmount AS TotalMarketValue
"""
).data()
)
df = pd.DataFrame(result)
display(df)
###Output
_____no_output_____
###Markdown
Create a New Feature with a Graph Embedding using Neo4jFirst we're going to create an in-memory graph representation of the data in Neo4j Graph Data Science (GDS). Note: if you get an error saying the graph already exists, that's probably because you ran this code before. You can destroy it using the command in the cleanup section of this notebook.
###Code
with driver.session(database=DB_NAME) as session:
result = session.read_transaction(
lambda tx: tx.run(
"""
CALL gds.graph.create.cypher('client_graph',
'MATCH (c:Client) RETURN id(c) as id, c.num_transactions as num_transactions, c.total_transaction_amnt as total_transaction_amnt, c.is_fraudster as is_fraudster',
'MATCH (c:Client)-[:PERFORMED]->(t:Transaction)-[:TO]->(c2:Client) return id(c) as source, id(c2) as target, sum(t.amount) as amount, "TRANSACTED_WITH" as type ')
"""
).data()
)
df = pd.DataFrame(result)
display(df)
###Output
_____no_output_____
###Markdown
Now we can generate an embedding from that graph. This is a new feature we can use in our predictions. We're using FastRP, which is a more fully featured, higher-performance alternative to Node2Vec. You can learn more about it [here](https://neo4j.com/docs/graph-data-science/current/algorithms/fastrp/).
###Code
with driver.session(database=DB_NAME) as session:
result = session.read_transaction(
lambda tx: tx.run(
"""
CALL gds.fastRP.mutate('client_graph',{
relationshipWeightProperty:'amount',
iterationWeights: [0.0, 1.00, 1.00, 0.80, 0.60],
featureProperties: ['num_transactions', 'total_transaction_amnt'],
propertyRatio: 0.25,
nodeSelfInfluence: 0.15,
embeddingDimension: 16,
randomSeed: 1,
mutateProperty:'embedding'
})
"""
).data()
)
df = pd.DataFrame(result)
display(df)
###Output
_____no_output_____
###Markdown
Finally, we dump that out to a dataframe.
###Code
with driver.session(database=DB_NAME) as session:
result = session.read_transaction(
lambda tx: tx.run(
"""
CALL gds.graph.streamNodeProperties
('client_graph', ['embedding', 'num_transactions', 'total_transaction_amnt', 'is_fraudster'])
YIELD nodeId, nodeProperty, propertyValue
RETURN nodeId, nodeProperty, propertyValue
"""
).data()
)
df = pd.DataFrame(result)
df.head()
###Output
_____no_output_____
###Markdown
Now we need to take that dataframe and shape it into something that better represents our classification problem.
###Code
x = df.pivot(index="nodeId", columns="nodeProperty", values="propertyValue")
x = x.reset_index()
x.columns.name = None
x.head()
###Output
_____no_output_____
###Markdown
is_fraudster will have a value of 0 or 1 if populated. If the value is -9223372036854775808 then it's unlabeled, so we're going to drop it.
###Code
x = x.loc[x["is_fraudster"] != -9223372036854775808]
x.head()
###Output
_____no_output_____
###Markdown
Note that the embedding column holds an array in each row. To make this dataset more consumable, we should flatten that out into multiple individual features: embedding_0, embedding_1, ... embedding_n.
###Code
FEATURES_FILENAME = "features.csv"
embeddings = pd.DataFrame(x["embedding"].values.tolist()).add_prefix("embedding_")
merged = x.drop(columns=["embedding"]).merge(
embeddings, left_index=True, right_index=True
)
features_df = merged.drop(
columns=["is_fraudster", "num_transactions", "total_transaction_amnt"]
)
train_df = merged.drop(columns=["nodeId"])
features_df.to_csv(FEATURES_FILENAME, index=False)
###Output
_____no_output_____
###Markdown
This dataset is too small to use with Vertex AI for AutoML tabular data. For the sake of demonstration, we're going to repeat it a few times. Don't do this in the real world.
###Code
TRAINING_FILENAME = "train.csv"
pd.concat([train_df for i in range(10)]).to_csv(TRAINING_FILENAME, index=False)
###Output
_____no_output_____
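###Markdown
Before training, it can help to glance at the label balance; fraud labels are typically heavily skewed. This is an optional check on the dataframe built above, not a required step.
###Code
# Fraction of labeled clients in each class.
print(train_df["is_fraudster"].value_counts(normalize=True))
###Output
_____no_output_____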
###Markdown
And that's it! We now have a tidy dataset that we can use with GCP Vertex AI. Using Vertex AI with Neo4j data Define Google Cloud variablesYou'll need to set a few variables for your GCP environment. PROJECT_ID and STORAGE_BUCKET are most critical. The others will probably work with the defaults given.
###Code
# Edit these variables!
PROJECT_ID = "YOUR-PROJECT-ID"
STORAGE_BUCKET = "YOUR-BUCKET-NAME"
# You can leave these defaults
REGION = "us-central1"
STORAGE_PATH = "paysim"
EMBEDDING_DIMENSION = 16
FEATURESTORE_ID = "paysim"
ENTITY_NAME = "payer"
import os
os.environ["GCLOUD_PROJECT"] = PROJECT_ID
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account
###Code
try:
from google.colab import auth as google_auth
google_auth.authenticate_user()
except:
pass
###Output
_____no_output_____
###Markdown
Upload to a GCP Cloud Storage BucketTo get the data into Vertex AI, we must first put it in a bucket as a CSV.
###Code
from google.cloud import storage
client = storage.Client()
bucket = client.bucket(STORAGE_BUCKET)
client.create_bucket(bucket)
# Upload our files to that bucket
for filename in [FEATURES_FILENAME, TRAINING_FILENAME]:
upload_path = os.path.join(STORAGE_PATH, filename)
blob = bucket.blob(upload_path)
blob.upload_from_filename(filename)
###Output
_____no_output_____
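###Markdown
As an optional check (a small sketch, not required by the walkthrough), you can list the objects under the paysim prefix to confirm that both CSVs landed where the training job expects them.
###Code
# List uploaded objects and their sizes in bytes.
for blob in client.list_blobs(STORAGE_BUCKET, prefix=STORAGE_PATH):
    print(blob.name, blob.size)
###Output
_____no_output_____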
###Markdown
Train and deploy a model on GCPWe'll use the engineered features to train an AutoML Tables model, then deploy it to an endpoint.
###Code
from google.cloud import aiplatform
aiplatform.init(project=PROJECT_ID, location=REGION)
dataset = aiplatform.TabularDataset.create(
display_name="paysim",
gcs_source=os.path.join("gs://", STORAGE_BUCKET, STORAGE_PATH, TRAINING_FILENAME),
)
dataset.wait()
print(f'\tDataset: "{dataset.display_name}"')
print(f'\tname: "{dataset.resource_name}"')
embedding_column_names = ["embedding_{}".format(i) for i in range(EMBEDDING_DIMENSION)]
other_column_names = ["num_transactions", "total_transaction_amnt"]
all_columns = other_column_names + embedding_column_names
column_specs = {column: "numeric" for column in all_columns}
job = aiplatform.AutoMLTabularTrainingJob(
display_name="train-paysim-automl-1",
optimization_prediction_type="classification",
column_specs=column_specs,
)
model = job.run(
dataset=dataset,
target_column="is_fraudster",
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
model_display_name="paysim-prediction-model",
disable_early_stopping=False,
budget_milli_node_hours=1000,
)
endpoint = model.deploy(machine_type="n1-standard-4")
###Output
_____no_output_____
###Markdown
Loading Data into GCP Feature StoreIn this section, we'll take our dataframe with newly engineered features and load that into GCP feature store.
###Code
from google.cloud.aiplatform_v1 import FeaturestoreServiceClient
api_endpoint = "{}-aiplatform.googleapis.com".format(REGION)
fs_client = FeaturestoreServiceClient(client_options={"api_endpoint": api_endpoint})
resource_path = fs_client.common_location_path(PROJECT_ID, REGION)
fs_path = fs_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
entity_path = fs_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, ENTITY_NAME
)
###Output
_____no_output_____
###Markdown
First, let's check if the Feature Store already exists
###Code
from grpc import StatusCode
def check_has_resource(callable):
has_resource = False
try:
callable()
has_resource = True
except Exception as e:
if (
not hasattr(e, "grpc_status_code")
or e.grpc_status_code != StatusCode.NOT_FOUND
):
raise e
return has_resource
feature_store_exists = check_has_resource(
lambda: fs_client.get_featurestore(name=fs_path)
)
from google.cloud.aiplatform_v1.types import entity_type as entity_type_pb2
from google.cloud.aiplatform_v1.types import feature as feature_pb2
from google.cloud.aiplatform_v1.types import featurestore as featurestore_pb2
from google.cloud.aiplatform_v1.types import \
featurestore_service as featurestore_service_pb2
from google.cloud.aiplatform_v1.types import io as io_pb2
if not feature_store_exists:
create_lro = fs_client.create_featurestore(
featurestore_service_pb2.CreateFeaturestoreRequest(
parent=resource_path,
featurestore_id=FEATURESTORE_ID,
featurestore=featurestore_pb2.Featurestore(
online_serving_config=featurestore_pb2.Featurestore.OnlineServingConfig(
fixed_node_count=1
),
),
)
)
print(create_lro.result())
entity_type_exists = check_has_resource(
lambda: fs_client.get_entity_type(name=entity_path)
)
if not entity_type_exists:
users_entity_type_lro = fs_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=fs_path,
entity_type_id=ENTITY_NAME,
entity_type=entity_type_pb2.EntityType(
description="Main entity type",
),
)
)
print(users_entity_type_lro.result())
feature_requests = [
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.DOUBLE,
description="Embedding {} from Neo4j".format(i),
),
feature_id="embedding_{}".format(i),
)
for i in range(EMBEDDING_DIMENSION)
]
create_features_lro = fs_client.batch_create_features(
parent=entity_path,
requests=feature_requests,
)
print(create_features_lro.result())
feature_specs = [
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="embedding_{}".format(i)
)
for i in range(EMBEDDING_DIMENSION)
]
from google.protobuf.timestamp_pb2 import Timestamp
feature_time = Timestamp()
feature_time.GetCurrentTime()
feature_time.nanos = 0
import_request = fs_client.import_feature_values(
featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=entity_path,
csv_source=io_pb2.CsvSource(
gcs_source=io_pb2.GcsSource(
uris=[
os.path.join(
"gs://", STORAGE_BUCKET, STORAGE_PATH, FEATURES_FILENAME
)
]
)
),
entity_id_field="nodeId",
feature_specs=feature_specs,
worker_count=1,
feature_time=feature_time,
)
)
print(import_request.result())
###Output
_____no_output_____
###Markdown
Sending a prediction using features from the feature store
###Code
from google.cloud.aiplatform_v1 import FeaturestoreOnlineServingServiceClient
data_client = FeaturestoreOnlineServingServiceClient(
client_options={"api_endpoint": api_endpoint}
)
# Retrieve Neo4j embeddings from feature store
from google.cloud.aiplatform_v1.types import FeatureSelector, IdMatcher
from google.cloud.aiplatform_v1.types import \
featurestore_online_service as featurestore_online_service_pb2
feature_selector = FeatureSelector(
id_matcher=IdMatcher(
ids=["embedding_{}".format(i) for i in range(EMBEDDING_DIMENSION)]
)
)
fs_features = data_client.read_feature_values(
featurestore_online_service_pb2.ReadFeatureValuesRequest(
entity_type=entity_path,
entity_id="5",
feature_selector=feature_selector,
)
)
saved_embeddings = dict(
zip(
(fd.id for fd in fs_features.header.feature_descriptors),
(str(d.value.double_value) for d in fs_features.entity_view.data),
)
)
# Combine with other features. These might be sourced per transaction
all_features = {"num_transactions": "80", "total_dollar_amnt": "7484459.618641878"}
all_features.update(saved_embeddings)
instances = [{key: str(value) for key, value in all_features.items()}]
# Send a prediction
endpoint.predict(instances=instances)
###Output
_____no_output_____
###Markdown
Cleanup Neo4j cleanupTo delete the Graph Data Science representation of the graph, run this:
###Code
with driver.session(database=DB_NAME) as session:
result = session.read_transaction(
lambda tx: tx.run(
"""
CALL gds.graph.drop('client_graph')
"""
).data()
)
###Output
_____no_output_____
###Markdown
Google Cloud cleanupDelete the feature store and turn down the endpoint
###Code
fs_client.delete_featurestore(
request=featurestore_service_pb2.DeleteFeaturestoreRequest(
name=fs_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
force=True,
)
).result()
endpoint.delete()
###Output
_____no_output_____
###Markdown
Run in Colab View on GitHub Open in Vertex AI Workbench OverviewIn this notebook, you will learn how to use Neo4j AuraDS to create graph features. You'll then use those new features to solve a classification problem with Vertex AI. DatasetThis notebook uses a version of the PaySim dataset that has been modified to work with Neo4j's graph database. PaySim is a synthetic fraud dataset. The goal is to identify whether or not a given transaction constitutes fraud. The [original version of the dataset](https://github.com/EdgarLopezPhD/PaySim) has tabular data.Neo4j has worked on a modified version that generates a graph dataset [here](https://github.com/voutilad/PaySim). We've pregenerated a copy of that dataset that you can grab [here](https://storage.googleapis.com/neo4j-datasets/paysim.dump). You'll want to download that dataset and then upload it to Neo4j AuraDS. AuraDS is a graph data science tool that is offered as a service on GCP. Instructions on signing up and uploading the dataset are available [here](https://github.com/neo4j-partners/aurads-paysim). CostsThis tutorial uses billable components of Google Cloud:* Cloud Storage* Vertex AILearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. Setup Set up your development environmentWe suggest you use Colab for this notebook. Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Install additional PackagesFirst off, you'll also need to install a few packages.
###Code
!pip install --quiet --upgrade graphdatascience==1.0.0
!pip install --quiet google-cloud-storage
!pip install --quiet google.cloud.aiplatform
###Output
_____no_output_____
###Markdown
(Colab only) Restart the kernelAfter you install the additional packages, you need to restart the notebook kernel so it can find the packages. When you run this, you may get a notification that the kernel crashed. You can disregard that.
###Code
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Working with Neo4j Define Neo4J related variablesYou'll need to enter the credentials from your AuraDS instance below. You can get your credentials by following this [walkthrough](https://github.com/neo4j-partners/aurads-paysim).The "DB_NAME" is always neo4j for AuraDS. It is different from the name you gave your database tenant in the AuraDS console.
###Code
DB_URL = "neo4j+s://XXXXX.databases.neo4j.io"
DB_USER = "neo4j"
DB_PASS = "YOUR PASSWORD"
DB_NAME = "neo4j"
###Output
_____no_output_____
###Markdown
In this section we're going to connect to Neo4j and look around the database. We're going to generate some new features in the dataset using Neo4j's Graph Data Science library. Finally, we'll load the data into a Pandas dataframe so that it's all ready to put into GCP Feature Store. Exploring the database
###Code
import pandas as pd
from graphdatascience import GraphDataScience
# If you are connecting the client to an AuraDS instance, you can get the recommended non-default configuration settings of the Python Driver applied automatically. To achieve this, set the constructor argument aura_ds=True
gds = GraphDataScience(DB_URL, auth=(DB_USER, DB_PASS), aura_ds=True)
gds.set_database(DB_NAME)
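# Optional sanity check (assumption: the client exposes gds.version(), which
# reports the GDS library version running on the AuraDS instance).
print(gds.version())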
###Output
_____no_output_____
###Markdown
Now, let's explore the data in the database a bit to understand what we have to work with.
###Code
# node labels
result = gds.run_cypher(
"""
CALL db.labels() YIELD label
CALL apoc.cypher.run('MATCH (:`'+label+'`) RETURN count(*) as freq', {})
YIELD value
RETURN label, value.freq AS freq
"""
)
display(result)
# relationship types
result = gds.run_cypher(
"""
CALL db.relationshipTypes() YIELD relationshipType as type
CALL apoc.cypher.run('MATCH ()-[:`'+type+'`]->() RETURN count(*) as freq', {})
YIELD value
RETURN type AS relationshipType, value.freq AS freq
ORDER by freq DESC
"""
)
display(result)
# transaction types
result = gds.run_cypher(
"""
MATCH (t:Transaction)
WITH sum(t.amount) AS globalSum, count(t) AS globalCnt
WITH *, 10^3 AS scaleFactor
UNWIND ['CashIn', 'CashOut', 'Payment', 'Debit', 'Transfer'] AS txType
CALL apoc.cypher.run('MATCH (t:' + txType + ')
RETURN sum(t.amount) as txAmount, count(t) AS txCnt', {})
YIELD value
RETURN txType,value.txAmount AS TotalMarketValue
"""
)
display(result)
###Output
_____no_output_____
###Markdown
Create a New Feature with a Graph Embedding using Neo4jFirst we're going to create an in-memory graph representation of the data in Neo4j Graph Data Science (GDS).Note: if you get an error saying the graph already exists, that's probably because you ran this code before. You can destroy it using the command in the cleanup section of this notebook.
###Code
# We get a tuple back with an object that represents the graph projection and the results of the GDS call
G, results = gds.graph.project.cypher(
"client_graph",
"MATCH (c:Client) RETURN id(c) as id, c.num_transactions as num_transactions, c.total_transaction_amnt as total_transaction_amnt, c.is_fraudster as is_fraudster",
'MATCH (c:Client)-[:PERFORMED]->(t:Transaction)-[:TO]->(c2:Client) return id(c) as source, id(c2) as target, sum(t.amount) as amount, "TRANSACTED_WITH" as type ',
)
display(results)
###Output
_____no_output_____
###Markdown
Now we can generate an embedding from that graph. This is a new feature we can use in our predictions. We're using FastRP, which is a more fully featured, higher-performance alternative to Node2Vec. You can learn more about that [here](https://neo4j.com/docs/graph-data-science/current/algorithms/fastrp/).
###Code
results = gds.fastRP.mutate(
G,
relationshipWeightProperty="amount",
iterationWeights=[0.0, 1.00, 1.00, 0.80, 0.60],
featureProperties=["num_transactions", "total_transaction_amnt"],
propertyRatio=0.25,
nodeSelfInfluence=0.15,
embeddingDimension=16,
randomSeed=1,
mutateProperty="embedding",
)
display(results)  # show the results of this FastRP call, not the earlier query result
###Output
_____no_output_____
###Markdown
Finally we dump that out to a dataframe
###Code
node_properties = gds.graph.streamNodeProperties(
G, ["embedding", "num_transactions", "total_transaction_amnt", "is_fraudster"]
)
node_properties.head()
###Output
_____no_output_____
###Markdown
Now we need to take that dataframe and shape it into something that better represents our classification problem.
###Code
x = node_properties.pivot(
index="nodeId", columns="nodeProperty", values="propertyValue"
)
x = x.reset_index()
x.columns.name = None
x.head()
###Output
_____no_output_____
###Markdown
is_fraudster will have a value of 0 or 1 if populated. If the value is -9223372036854775808 then it's unlabeled, so we're going to drop it.
###Code
x = x.loc[x["is_fraudster"] != -9223372036854775808]
x.head()
###Output
_____no_output_____
###Markdown
Note that the embedding row is an array. To make this dataset more consumable, we should flatten that out into multiple individual features: embedding_0, embedding_1, ... embedding_n.
###Code
FEATURES_FILENAME = "features.csv"
embeddings = pd.DataFrame(x["embedding"].values.tolist()).add_prefix("embedding_")
merged = x.drop(columns=["embedding"]).merge(
embeddings, left_index=True, right_index=True
)
features_df = merged.drop(
columns=["is_fraudster", "num_transactions", "total_transaction_amnt"]
)
train_df = merged.drop(columns=["nodeId"])
features_df.to_csv(FEATURES_FILENAME, index=False)
###Output
_____no_output_____
###Markdown
This dataset is too small for Vertex AI AutoML tabular training. For the sake of demonstration, we're going to repeat it a few times. Don't do this in the real world.
###Code
TRAINING_FILENAME = "train.csv"
pd.concat([train_df for i in range(10)]).to_csv(TRAINING_FILENAME, index=False)
###Output
_____no_output_____
###Markdown
And that's it! The dataframe now has a nice dataset that we can use with GCP Vertex AI. Using Vertex AI with Neo4j data Define Google Cloud variablesYou'll need to set a few variables for your GCP environment. PROJECT_ID and STORAGE_BUCKET are most critical. The others will probably work with the defaults given.
###Code
# Edit these variables!
PROJECT_ID = "YOUR-PROJECT-ID"
STORAGE_BUCKET = "YOUR-BUCKET-NAME"
# You can leave these defaults
REGION = "us-central1"
STORAGE_PATH = "paysim"
EMBEDDING_DIMENSION = 16
FEATURESTORE_ID = "paysim"
ENTITY_NAME = "payer"
import os
os.environ["GCLOUD_PROJECT"] = PROJECT_ID
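# Optional sketch: Jupyter runs `!`-prefixed lines as shell commands and interpolates
# Python variables prefixed with `$`. Uncommenting the line below would point the
# Cloud SDK at the project configured above.
# !gcloud config set project $PROJECT_ID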
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account
###Code
try:
from google.colab import auth as google_auth
google_auth.authenticate_user()
except ImportError:  # not running in Colab; skip interactive auth
pass
###Output
_____no_output_____
###Markdown
Upload to a GCP Cloud Storage BucketTo get the data into Vertex AI, we must first put it in a bucket as a CSV.
###Code
from google.cloud import storage
client = storage.Client()
bucket = client.bucket(STORAGE_BUCKET)
client.create_bucket(bucket)
# Upload our files to that bucket
for filename in [FEATURES_FILENAME, TRAINING_FILENAME]:
upload_path = os.path.join(STORAGE_PATH, filename)
blob = bucket.blob(upload_path)
blob.upload_from_filename(filename)
###Output
_____no_output_____
###Markdown
Train and deploy a model with Vertex AIWe'll use the engineered features to train an AutoML tabular model, then deploy it to an endpoint
###Code
from google.cloud import aiplatform
aiplatform.init(project=PROJECT_ID, location=REGION)
dataset = aiplatform.TabularDataset.create(
display_name="paysim",
gcs_source=os.path.join("gs://", STORAGE_BUCKET, STORAGE_PATH, TRAINING_FILENAME),
)
dataset.wait()
print(f'\tDataset: "{dataset.display_name}"')
print(f'\tname: "{dataset.resource_name}"')
embedding_column_names = ["embedding_{}".format(i) for i in range(EMBEDDING_DIMENSION)]
other_column_names = ["num_transactions", "total_transaction_amnt"]
all_columns = other_column_names + embedding_column_names
column_specs = {column: "numeric" for column in all_columns}
job = aiplatform.AutoMLTabularTrainingJob(
display_name="train-paysim-automl-1",
optimization_prediction_type="classification",
column_specs=column_specs,
)
model = job.run(
dataset=dataset,
target_column="is_fraudster",
training_fraction_split=0.8,
validation_fraction_split=0.1,
test_fraction_split=0.1,
model_display_name="paysim-prediction-model",
disable_early_stopping=False,
budget_milli_node_hours=1000,
)
endpoint = model.deploy(machine_type="n1-standard-4")
###Output
_____no_output_____
###Markdown
Loading Data into Vertex AI Feature StoreIn this section, we'll take our dataframe with newly engineered features and load that into Vertex AI Feature Store.
###Code
from google.cloud.aiplatform_v1 import FeaturestoreServiceClient
api_endpoint = "{}-aiplatform.googleapis.com".format(REGION)
fs_client = FeaturestoreServiceClient(client_options={"api_endpoint": api_endpoint})
resource_path = fs_client.common_location_path(PROJECT_ID, REGION)
fs_path = fs_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
entity_path = fs_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, ENTITY_NAME
)
###Output
_____no_output_____
###Markdown
First, let's check if the Feature Store already exists
###Code
from grpc import StatusCode
def check_has_resource(callable):
has_resource = False
try:
callable()
has_resource = True
except Exception as e:
if (
not hasattr(e, "grpc_status_code")
or e.grpc_status_code != StatusCode.NOT_FOUND
):
raise e
return has_resource
feature_store_exists = check_has_resource(
lambda: fs_client.get_featurestore(name=fs_path)
)
from google.cloud.aiplatform_v1.types import entity_type as entity_type_pb2
from google.cloud.aiplatform_v1.types import feature as feature_pb2
from google.cloud.aiplatform_v1.types import featurestore as featurestore_pb2
from google.cloud.aiplatform_v1.types import \
featurestore_service as featurestore_service_pb2
from google.cloud.aiplatform_v1.types import io as io_pb2
if not feature_store_exists:
create_lro = fs_client.create_featurestore(
featurestore_service_pb2.CreateFeaturestoreRequest(
parent=resource_path,
featurestore_id=FEATURESTORE_ID,
featurestore=featurestore_pb2.Featurestore(
online_serving_config=featurestore_pb2.Featurestore.OnlineServingConfig(
fixed_node_count=1
),
),
)
)
print(create_lro.result())
entity_type_exists = check_has_resource(
lambda: fs_client.get_entity_type(name=entity_path)
)
if not entity_type_exists:
users_entity_type_lro = fs_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=fs_path,
entity_type_id=ENTITY_NAME,
entity_type=entity_type_pb2.EntityType(
description="Main entity type",
),
)
)
print(users_entity_type_lro.result())
feature_requests = [
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.DOUBLE,
description="Embedding {} from Neo4j".format(i),
),
feature_id="embedding_{}".format(i),
)
for i in range(EMBEDDING_DIMENSION)
]
create_features_lro = fs_client.batch_create_features(
parent=entity_path,
requests=feature_requests,
)
print(create_features_lro.result())
feature_specs = [
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="embedding_{}".format(i)
)
for i in range(EMBEDDING_DIMENSION)
]
from google.protobuf.timestamp_pb2 import Timestamp
feature_time = Timestamp()
feature_time.GetCurrentTime()
feature_time.nanos = 0
import_request = fs_client.import_feature_values(
featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=entity_path,
csv_source=io_pb2.CsvSource(
gcs_source=io_pb2.GcsSource(
uris=[
os.path.join(
"gs://", STORAGE_BUCKET, STORAGE_PATH, FEATURES_FILENAME
)
]
)
),
entity_id_field="nodeId",
feature_specs=feature_specs,
worker_count=1,
feature_time=feature_time,
)
)
print(import_request.result())
###Output
_____no_output_____
###Markdown
Sending a prediction using features from the feature store
###Code
from google.cloud.aiplatform_v1 import FeaturestoreOnlineServingServiceClient
data_client = FeaturestoreOnlineServingServiceClient(
client_options={"api_endpoint": api_endpoint}
)
# Retrieve Neo4j embeddings from feature store
from google.cloud.aiplatform_v1.types import FeatureSelector, IdMatcher
from google.cloud.aiplatform_v1.types import \
featurestore_online_service as featurestore_online_service_pb2
feature_selector = FeatureSelector(
id_matcher=IdMatcher(
ids=["embedding_{}".format(i) for i in range(EMBEDDING_DIMENSION)]
)
)
fs_features = data_client.read_feature_values(
featurestore_online_service_pb2.ReadFeatureValuesRequest(
entity_type=entity_path,
entity_id="5",
feature_selector=feature_selector,
)
)
saved_embeddings = dict(
zip(
(fd.id for fd in fs_features.header.feature_descriptors),
(str(d.value.double_value) for d in fs_features.entity_view.data),
)
)
# Combine with other features. These might be sourced per transaction
all_features = {"num_transactions": "80", "total_transaction_amnt": "7484459.618641878"}  # keys must match the training column names
all_features.update(saved_embeddings)
instances = [{key: str(value) for key, value in all_features.items()}]
# Send a prediction
endpoint.predict(instances=instances)
###Output
_____no_output_____
###Markdown
Cleanup Neo4j cleanupTo delete the Graph Data Science representation of the graph, run this:
###Code
gds.graph.drop(G)
###Output
_____no_output_____
###Markdown
Google Cloud cleanupDelete the feature store and turn down the endpoint
###Code
fs_client.delete_featurestore(
request=featurestore_service_pb2.DeleteFeaturestoreRequest(
name=fs_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
force=True,
)
).result()
endpoint.delete()
###Output
_____no_output_____ |
Python - basics.ipynb | ###Markdown
Python - Basics Function Libraries in Python:1. Scientific: - Pandas (for: data structures/DataFrames/tools) - Numpy (for: arrays and matrices) - Scipy (for: integrals, solving differential equations, optimization)2. Visualization: - Matplotlib (for: plots and graphs) - Seaborn (for: heat maps, time series, violin plots)3. Algorithmic: - Scikit-learn (for: machine learning) - Statsmodels (for: exploring data, estimating statistical models, and performing statistical tests) Import/Export data in PythonTo import data from a website, we can use the command `!wget https://'Path where the CSV file is stored\File name'`.Then use one of the following commands: Import: `pd.read_`:> **csv:** pd.read_csv('Path where the CSV file is stored\File name.csv')>> **json:** pd.read_json('Path where the CSV file is stored\File name.json')>> **excel:** pd.read_excel('Path where the CSV file is stored\File name.xlsx')>> **sql:** pd.read_sql('SQL query or table name', connection) Export: `df.to_`:> **csv:** df.to_csv('Path where the CSV file is stored\File name.csv')>> **json:** df.to_json('Path where the CSV file is stored\File name.json')>> **excel:** df.to_excel('Path where the CSV file is stored\File name.xlsx')>> **sql:** df.to_sql('table name', connection)Syntax:**`pd.read_csv('Path where the CSV file is stored\File name.csv', sep=';', header='infer', index_col=None)`**where: - 'Path where the CSV file is stored\File name.csv' : where the file is located.- sep = ';' : delimiter to use.- delimiter = None : alternative argument name for sep.- header = 'infer' : row number(s) to use as the column names, and the start of the data.- index_col = None : column to use as the row labels of the DataFrame.
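As a quick sketch of the export side (using the `df` read in the next cell and a hypothetical output file name):

```python
# Write the dataframe back out; sep and index mirror the read_csv options
df.to_csv('Pokemon_export.csv', sep=';', index=True)   # hypothetical output file name
df.to_json('Pokemon_export.json')
```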
###Code
# e.g.
import pandas as pd
df = pd.read_csv('Pokemon.csv', sep=';',header='infer', index_col = ['Name']) #import data
df
###Output
_____no_output_____
###Markdown
Basic 0. Convert into dataframeA Pandas DataFrame is a two-dimensional, size-mutable, potentially heterogeneous tabular data structure with labeled axes. It consists of three principal components: the data, the rows, and the columns.- data[ ] : 1 bracket --> pandas Series- data[[ ]] : 2 brackets --> pandas DataFrame
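A minimal sketch of that single- vs double-bracket distinction, using the `df` read in earlier:

```python
s = df['Year']     # one bracket  -> pandas Series
d = df[['Year']]   # two brackets -> pandas DataFrame with a single column
print(type(s), type(d))
```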
###Code
data = pd.DataFrame(df, index = None, columns = ['Console','Year']) #convert into dataframe
data
###Output
_____no_output_____
###Markdown
1. TypesPandas `dtypes` is used to view the data type of each column in a dataframe.
###Code
data.dtypes
###Output
_____no_output_____
###Markdown
2. DescribePandas `describe( )` is used to view some basic statistical details like percentile, mean, std etc. of a data frame or a series of numeric values.
###Code
data.describe()
###Output
_____no_output_____
###Markdown
3. Printing the dataframeTo show the top (`head( )` ) and the bottom (`tail( )` ) of the dataframe.
###Code
data.head() #Top
data.tail() #Bottom
###Output
_____no_output_____
###Markdown
4. Information of dataframe
###Code
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Index: 12 entries, Pokémon Rosso e Verde to Pokémon Nero e Bianco
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Console 12 non-null object
1 Year 12 non-null int64
dtypes: int64(1), object(1)
memory usage: 288.0+ bytes
###Markdown
5. Remove missing data`dropna( )` removes rows or columns that contain missing values (the column-wise case is sketched below).- rows: **axis = 0**- columns: **axis = 1**
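A sketch of the column-wise case (axis = 1), which drops any column containing at least one missing value:

```python
# Drop columns (rather than rows) that contain missing values
complete_columns = data.dropna(axis=1)
complete_columns.head()
```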
###Code
data.dropna(subset=['Console'], axis = 0, inplace = True)
data.head()
###Output
_____no_output_____
###Markdown
6. Remove column
###Code
data.drop(['Console'], axis = 1, inplace = True)  # pass column labels rather than a sub-dataframe
data.head()
###Output
_____no_output_____
###Markdown
7. Replace dataSyntax: `df.replace(old, new)` for a DataFrame, or `str.replace(old, new, count)` for a string (a DataFrame sketch follows; the cell after it shows the string method plus `rename` for column labels).
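A small sketch of the DataFrame form, assuming the `data` frame used above (the substituted value is hypothetical):

```python
# Replace one value with another everywhere it appears in the dataframe
data_replaced = data.replace({1996: 1998})   # hypothetical year substitution
data_replaced.head()
```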
###Code
txt = "I never play with Pokémon!"
x = txt.replace("never", "always")
print(x)
data.rename(columns={'Year':'years'}, inplace = True)
data
###Output
_____no_output_____
###Markdown
8. Evaluating for Missing Data
###Code
missing_data = data.notnull() #or missing_data = data.isnull()
missing_data
###Output
_____no_output_____
###Markdown
9. Count data
###Code
count = data["years"].value_counts()
count
###Output
_____no_output_____
###Markdown
10. Change Type
###Code
avg = data["years"].astype("float")
avg
###Output
_____no_output_____
###Markdown
11. Groupby
###Code
# groupby returns a GroupBy object; pair it with an aggregation to see a result
data.groupby('years').size()
###Output
_____no_output_____
###Markdown
12. DummiesPandas `get_dummies( )` separates a categorical feature into two or more indicator (dummy) columns, one per unique category.
###Code
import pandas as pd
# Create a dataframe
raw_data = {'first_name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'last_name': ['Miller', 'Jacobson', 'Ali', 'Milner', 'Cooze'],
'sex': ['male', 'female', 'male', 'female', 'female']}
df = pd.DataFrame(raw_data, columns = ['first_name', 'last_name', 'sex'])
df
# Create a set of dummy variables from the sex variable
pd.get_dummies(df, columns=['sex'])
###Output
_____no_output_____ |
plotter.ipynb | ###Markdown
4r robot exp* reward: task* model: SAC+HER* basic hyperparameter.* random init* task_ll = [-1, -1, 0], task_ul = [1, 1, 1]* joint_range = $2\pi * 3/4$
###Code
log_dir = "rl-trained-agents/sac/RxbotReach-v0_13/"
results_plotter.plot_results([log_dir], 1e5, results_plotter.X_TIMESTEPS, "SAC RxbotReach-v0")
###Output
_____no_output_____
###Markdown
exp* reward: task+action* model: SAC+HER* basic hyperparameter.* random init* task_ll = [-1, -1, 0], task_ul = [1, 1, 1]* joint_range = $2\pi * 3/4$
###Code
log_dir = "rl-trained-agents/sac/RxbotReach-v0_15/"
results_plotter.plot_results([log_dir], 1e5, results_plotter.X_TIMESTEPS, "SAC RxbotReach-v0")
import gym
import utils.rxbot.rxbot_reach
from stable_baselines3 import SAC
from sb3_contrib import TQC
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize
from sb3_contrib.common.wrappers import TimeFeatureWrapper
from stable_baselines3.common.env_util import make_vec_env
log_dir = "rl-trained-agents/sac/RxbotReach-v0_2/"
# import model
env = make_vec_env("RxbotReach-v0")
env = VecNormalize.load(log_dir+"RxbotReach-v0/vecnormalize.pkl", env)
# do not update them at test time
env.training = False
# reward normalization is not needed at test time
env.norm_reward = False
model = SAC.load("rl-trained-agents/sac/RxbotReach-v0_2/RxbotReach-v0.zip", env)
###Output
C:\Users\apple\anaconda3\lib\site-packages\gym\logger.py:34: UserWarning: [33mWARN: Box bound precision lowered by casting to float32[0m
warnings.warn(colorize("%s: %s" % ("WARN", msg % args), "yellow"))
###Markdown
Plotting Tools
###Code
import os
import csv
import numpy as np
from scipy.io import loadmat, savemat
import matplotlib.pyplot as plt
from matplotlib import rcParams
%matplotlib inline
# rcParams.update({'figure.autolayout': True})
###Output
_____no_output_____
###Markdown
Basic plotting example
###Code
log_dir = '/data/ShenShuo/workspace/handful-of-trials/play/log' # Directory specified in script, not including date+time
min_num_trials = 291 # Plots up to this many trials
returns = []
for subdir in os.listdir(log_dir):
data = loadmat(os.path.join(log_dir, subdir, "logs.mat"))
if data["returns"].shape[1] >= min_num_trials:
returns.append(data["returns"][0][:min_num_trials])
returns = np.array(returns)
returns = np.maximum.accumulate(returns, axis=-1)
# reduce useless first dim
mean = np.mean(returns, axis=0)
# Plot result
plt.figure(tight_layout=True)
plt.plot(np.arange(1, min_num_trials + 1), mean)
plt.title("Performance")
plt.xlabel("Iteration number")
plt.ylabel("Return")
plt.plot()
plt.savefig("test.png")
plt.show()
###Output
_____no_output_____
###Markdown
show returns
###Code
log_dir = '/data/ShenShuo/workspace/handful-of-trials/play/log' # Directory specified in script, not including date+time
min_num_trials = 291 # Plots up to this many trials
data = loadmat(os.path.join(log_dir, subdir, "logs.mat"))
print(data)
print(data['returns'].shape)
returns = []
if data["returns"].shape[1] >= min_num_trials:
returns.append(data["returns"][0][:min_num_trials])
returns = np.array(returns)
# returns = np.maximum.accumulate(returns, axis=-1)
mean = np.mean(returns, axis=0)
# Plot result
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(np.arange(1, min_num_trials + 1), mean)
plt.title("Performance")
plt.xlabel("Iteration number")
plt.ylabel("Return")
plt.axis('on')
fig.savefig("test.png")
plt.show()
###Output
_____no_output_____
###Markdown
show RS and CEM in four envs. The first plotting method plots every file in the log directory, so it can only plot up to the shortest run.
###Code
log_dir = '/data/ShenShuo/workspace/handful-of-trials/log' # Directory specified in script, not including date+time
min_num_trials = 50 # Plots up to this many trials
returns = []
for subdir in os.listdir(log_dir):
print(subdir)
for subdir_ in os.listdir(os.path.join(log_dir,subdir)):
data = loadmat(os.path.join(log_dir, subdir, subdir_, "logs.mat"))
print(data["returns"].shape)
if data["returns"].shape[1] >= min_num_trials:
returns.append(data["returns"][0][:min_num_trials])
returns = np.array(returns)
# print(returns)
# returns = np.maximum.accumulate(returns, axis=-1)
print(returns.shape)
# reduce useless first dim
# mean = np.mean(returns, axis=0)
# Plot result
# reacher
plt.figure()
plt.plot(np.arange(1, min_num_trials + 1), returns[0])
plt.plot(np.arange(1, min_num_trials + 1), returns[4])
plt.title("reacher_50_epoch")
plt.xlabel("Iteration number")
plt.ylabel("Return")
plt.show()
# pusher
plt.figure()
plt.plot(np.arange(1, min_num_trials + 1), returns[1])
plt.plot(np.arange(1, min_num_trials + 1), returns[3])
plt.title("pusher_50_epoch")
plt.xlabel("Iteration number")
plt.ylabel("Return")
plt.show()
# cartpole
plt.figure()
plt.plot(np.arange(1, min_num_trials + 1), returns[5])
plt.plot(np.arange(1, min_num_trials + 1), returns[6])
plt.title("cartpole_50_epoch")
plt.xlabel("Iteration number")
plt.ylabel("Return")
plt.show()
# halfcheetha
plt.figure()
plt.plot(np.arange(1, min_num_trials + 1), returns[2])
plt.plot(np.arange(1, min_num_trials + 1), returns[7])
plt.title("halfCheetah_50_epoch")
plt.xlabel("Iteration number")
plt.ylabel("Return")
plt.show()
###Output
reacher_PE_TSinf_Random
(1, 100)
pusher_PE_TSinf_Random
(1, 100)
halfcheetah_PE_TSinf_Random
(1, 84)
pusher_PE_TSinf_CEM
(1, 100)
reacher_PE_TSinf_CEM
(1, 100)
cartpole_PE_TSinf_Random
(1, 50)
cartpole_PE_TSinf_CEM
(1, 50)
halfcheetah_PE_TSinf_CEM
(1, 81)
(8, 50)
###Markdown
This plotting method is simpler: manually select which files to load and plot. reacher
###Code
log_dir = '/data/ShenShuo/workspace/handful-of-trials/log/reacher_PE_TSinf_CEM' # Directory specified in script, not including date+time
min_num_trials = 100 # Plots up to this many trials
returns = []
for subdir in os.listdir(log_dir):
data = loadmat(os.path.join(log_dir, subdir, "logs.mat"))
if data["returns"].shape[1] >= min_num_trials:
returns.append(data["returns"][0][:min_num_trials])
log_dir = '/data/ShenShuo/workspace/handful-of-trials/log/reacher_PE_TSinf_Random'
for subdir in os.listdir(log_dir):
data = loadmat(os.path.join(log_dir, subdir, "logs.mat"))
if data["returns"].shape[1] >= min_num_trials:
returns.append(data["returns"][0][:min_num_trials])
returns = np.array(returns)
returns = np.maximum.accumulate(returns, axis=-1)
print(returns.shape)
# reduce useless first dim
# mean = np.mean(returns, axis=0)
# plt.figure()
fig, ax = plt.subplots()
ax.plot(np.arange(1, min_num_trials + 1), returns[0], label="reacher_CEM")
ax.plot(np.arange(1, min_num_trials + 1), returns[1], label="reacher_Random")
ax.legend()
plt.title("reacher_PE_TFinf_100_epoch_maximum")
plt.xlabel("Iteration number")
plt.ylabel("Return")
ax.plot()
plt.savefig("log/picture/reacher_CEM_Random_max.jpg", bbox_inches="tight", pad_inches = 0)
plt.show()
plt.close()
###Output
(2, 100)
###Markdown
pusher
###Code
log_dir = '/data/ShenShuo/workspace/handful-of-trials/log/pusher_PE_TSinf_CEM' # Directory specified in script, not including date+time
min_num_trials = 100 # Plots up to this many trials
returns = []
for subdir in os.listdir(log_dir):
data = loadmat(os.path.join(log_dir, subdir, "logs.mat"))
if data["returns"].shape[1] >= min_num_trials:
returns.append(data["returns"][0][:min_num_trials])
log_dir = '/data/ShenShuo/workspace/handful-of-trials/log/pusher_PE_TSinf_Random'
for subdir in os.listdir(log_dir):
data = loadmat(os.path.join(log_dir, subdir, "logs.mat"))
if data["returns"].shape[1] >= min_num_trials:
returns.append(data["returns"][0][:min_num_trials])
returns = np.array(returns)
returns = np.maximum.accumulate(returns, axis=-1)
print(returns.shape)
# reduce useless first dim
# mean = np.mean(returns, axis=0)
plt.figure()
plt.plot(np.arange(1, min_num_trials + 1), returns[0], label="pusher_CEM")
plt.plot(np.arange(1, min_num_trials + 1), returns[1], label="pusher_Random")
plt.legend()
plt.title("pusher_PE_TFinf_100_epoch_maximum")
plt.xlabel("Iteration number")
plt.ylabel("Return")
plt.plot()
plt.savefig("log/picture/pusher_CEM_Random_max.jpg")
plt.show()
plt.close()
###Output
(2, 100)
###Markdown
cartpole
###Code
log_dir = '/data/ShenShuo/workspace/handful-of-trials/log/cartpole_PE_TSinf_CEM' # Directory specified in script, not including date+time
min_num_trials = 50 # Plots up to this many trials
returns = []
for subdir in os.listdir(log_dir):
data = loadmat(os.path.join(log_dir, subdir, "logs.mat"))
if data["returns"].shape[1] >= min_num_trials:
returns.append(data["returns"][0][:min_num_trials])
log_dir = '/data/ShenShuo/workspace/handful-of-trials/log/cartpole_PE_TSinf_Random'
for subdir in os.listdir(log_dir):
data = loadmat(os.path.join(log_dir, subdir, "logs.mat"))
if data["returns"].shape[1] >= min_num_trials:
returns.append(data["returns"][0][:min_num_trials])
returns = np.array(returns)
returns = np.maximum.accumulate(returns, axis=-1)
print(returns.shape)
# reduce useless first dim
# mean = np.mean(returns, axis=0)
plt.figure()
plt.plot(np.arange(1, min_num_trials + 1), returns[0], label="cartpole_CEM")
plt.plot(np.arange(1, min_num_trials + 1), returns[1], label="cartpole_Random")
plt.legend()
plt.title("cartpole_PE_TSinf_50_epoch_maximum")
plt.xlabel("Iteration number")
plt.ylabel("Return")
plt.savefig("log/picture/cartpole_CEM_Random_max.jpg")
plt.show()
###Output
(2, 50)
###Markdown
halfcheetah
###Code
log_dir = '/data/ShenShuo/workspace/handful-of-trials/log/halfcheetah_PE_TSinf_CEM' # Directory specified in script, not including date+time
min_num_trials = 225 # Plots up to this many trials
returns = []
for subdir in os.listdir(log_dir):
data = loadmat(os.path.join(log_dir, subdir, "logs.mat"))
print(data["returns"].shape[1])
if data["returns"].shape[1] >= min_num_trials:
returns.append(data["returns"][0][:min_num_trials])
log_dir = '/data/ShenShuo/workspace/handful-of-trials/log/halfcheetah_PE_TSinf_Random'
for subdir in os.listdir(log_dir):
data = loadmat(os.path.join(log_dir, subdir, "logs.mat"))
print(data["returns"].shape[1])
if data["returns"].shape[1] >= min_num_trials:
returns.append(data["returns"][0][:min_num_trials])
returns = np.array(returns)
# returns = np.maximum.accumulate(returns, axis=-1)
print(returns.shape)
# reduce useless first dim
# mean = np.mean(returns, axis=0)
plt.figure()
plt.plot(np.arange(1, min_num_trials + 1), returns[0], label="halfcheetah_CEM")
plt.plot(np.arange(1, min_num_trials + 1), returns[1], label="halfcheetah_Random")
plt.legend()
plt.title("halfcheetah_PE_TFinf_225_epoch_maximum")
plt.xlabel("Iteration number")
plt.ylabel("Return")
plt.savefig("log/picture/halfcheetah_CEM_Random.jpg")
plt.show()
###Output
227
231
(2, 225)
###Markdown
Plotting Tools
###Code
import os
import csv
import numpy as np
from scipy.io import loadmat, savemat
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Basic plotting example
###Code
log_dir = None # Directory specified in script, not including date+time
min_num_trials = None # Plots up to this many trials
returns = []
for subdir in os.listdir(log_dir):
data = loadmat(os.path.join(log_dir, subdir, "logs.mat"))
if data["returns"].shape[1] >= min_num_trials:
returns.append(data["returns"][0][:min_num_trials])
returns = np.array(returns)
returns = np.maximum.accumulate(returns, axis=-1)
mean = np.mean(returns, axis=0)
# Plot result
plt.figure()
plt.plot(np.arange(1, min_num_trials + 1), mean)
plt.title("Performance")
plt.xlabel("Iteration number")
plt.ylabel("Return")
plt.show()
###Output
_____no_output_____
###Markdown
Pretrained ResNet - Freeze FC only
###Code
train_loss, test_loss, train_acc, test_acc, train_epochs, test_epochs = read_train_test_loss_acc('log')
f = plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
plt.title(f'Triplet Loss (train: {round(train_loss[-1], 3)}, test: {round(test_loss[-1], 3)})')
plt.plot(train_epochs[213:], train_loss[213:], color = '#2c3e50', label = 'Train: VGGFace2')
plt.xlabel('Epoch', fontsize = 15)
plt.ylabel('Loss', fontsize = 15)
plt.ylim(0, max(train_loss)+0.1*max(train_loss))
plt.plot(test_epochs[213:], test_loss[213:], color = '#16a085', label = 'Test: LFW')
plt.legend(frameon=False)
plt.xticks(np.arange(min(train_epochs[213:]), max(train_epochs[213:])+1, len(train_epochs)//15))
plt.subplot(1,2,2)
plt.title(f'Accuracy (train: {round(train_acc[-1], 3)}, test: {round(test_acc[-1], 3)})')
plt.plot(train_epochs[213:], train_acc[213:], color = '#2c3e50', label = 'Train: VGGFace2')
plt.xlabel('Epoch', fontsize = 15)
plt.ylabel('Accuracy', fontsize = 14)
plt.ylim(min(train_acc)-.05*min(train_acc), 1)
plt.plot(test_epochs[213:], test_acc[213:], color = '#16a085', label = 'Test: LFW')
plt.legend(frameon=False, loc='lower right')
plt.xticks(np.arange(min(train_epochs[213:]), max(train_epochs[213:])+1, len(train_epochs)//15))
plt.savefig('log/a-graph-loss-accuracy.jpg', dpi=f.dpi)
print('Last test acc:', round(test_acc[-1], 3), 'max:', round(max(test_acc), 3))
train_loss, test_loss, train_acc, test_acc, train_epochs, test_epochs = read_train_test_loss_acc('log')
f = plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
plt.title('Facenet Loss')
plt.plot(train_epochs, train_loss, color = '#2c3e50', label = 'Train: VGGFace2')
plt.xlabel('Epoch', fontsize = 15)
plt.ylabel('Loss', fontsize = 15)
plt.ylim(0, max(train_loss)+0.1*max(train_loss))
plt.plot(test_epochs, test_loss, color = '#16a085', label = 'Test: LFW')
plt.legend(loc='lower left')
plt.xticks(np.arange(min(train_epochs), max(train_epochs)+1, 2.0))
plt.subplot(1,2,2)
plt.title('Facenet Acc')
plt.plot(train_epochs, train_acc, color = '#2c3e50', label = 'Train: VGGFace2')
plt.xlabel('Epoch', fontsize = 15)
plt.ylabel('Accuracy', fontsize = 14)
plt.ylim(min(train_acc)-.05*min(train_acc), 1)
plt.plot(test_epochs, test_acc, color = '#16a085', label = 'Test: LFW')
plt.legend(loc='upper left')
plt.xticks(np.arange(min(train_epochs), max(train_epochs)+1, 2.0))
plt.savefig('log/a-graph-loss-fc-only-accuracy.jpg', dpi=f.dpi)
print('Max test acc:', round(max(test_acc), 3))
###Output
Max test acc: 0.835
###Markdown
Plotting Tools
###Code
import os
import csv
import numpy as np
from scipy.io import loadmat, savemat
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Basic plotting example
###Code
log_dir = r"C:\Users\nikki\OneDrive\Research\Continuous RL\BAIR Handful of Trails Probabilistic Dynamics\handful-of-trials\scripts\CartpoleTestLog" # Directory specified in script, not including date+time
min_num_trials = 1 # Plots up to this many trials
returns = []
for subdir in os.listdir(log_dir):
print(subdir)
data = loadmat(os.path.join(log_dir, subdir, "logs.mat"))
if data["returns"].shape[1] >= min_num_trials:
returns.append(data["returns"][0][:min_num_trials])
returns = np.array(returns)
returns = np.maximum.accumulate(returns, axis=-1)
mean = np.mean(returns, axis=0)
print(mean)
# Plot result
plt.figure()
plt.plot(np.arange(1, min_num_trials + 1), mean)
plt.title("Performance")
plt.xlabel("Iteration number")
plt.ylabel("Return")
plt.show()
###Output
2019-07-17--21_21_15
[180.42398527]
2019-07-18--22_01_10
[179.7741936]
###Markdown
Robot configuration* state : joint, ee pos, ee goal* action : full joint
###Code
log_dir = "logs/sac/RxbotReach-v0_25/"
results_plotter.plot_results([log_dir], 1e5, results_plotter.X_TIMESTEPS, "TQC RxbotReach-v0")
b = np.load(log_dir+"evaluations.npz")
list(b.keys())
b['timesteps']
b['successes']
###Output
_____no_output_____
###Markdown
exp* reward: task* model: TQC+HER* basic hyperparameter.* random init* task_ll = [0, -1, 0], task_ul = [1, 1, 1]* joint_range = $2\pi$
###Code
log_dir = "rl-trained-agents/tqc/RxbotReach-v0_1/"
results_plotter.plot_results([log_dir], 1e5, results_plotter.X_TIMESTEPS, "TQC RxbotReach-v0")
###Output
_____no_output_____
###Markdown
exp* reward: task* model: SAC+HER* basic hyperparameter.* random init* task_ll = [0, -1, 0], task_ul = [1, 1, 1]* joint_range = $2\pi$
###Code
log_dir = "rl-trained-agents/sac/RxbotReach-v0_2/"
results_plotter.plot_results([log_dir], 1e5, results_plotter.X_TIMESTEPS, "SAC RxbotReach-v0")
###Output
_____no_output_____
###Markdown
exp* reward: task* model: SAC+HER* basic hyperparameter.* random init* task_ll = [-1, -1, 0], task_ul = [1, 1, 1]* joint_range = $2\pi$
###Code
log_dir = "rl-trained-agents/sac/RxbotReach-v0_3/"
results_plotter.plot_results([log_dir], 1e5, results_plotter.X_TIMESTEPS, "SAC RxbotReach-v0")
###Output
_____no_output_____
###Markdown
exp* reward: task + joint reward when too far* model: SAC+HER* basic hyperparameter.* random init* task_ll = [-1, -1, 0], task_ul = [1, 1, 1]* joint_range = $2\pi$
###Code
log_dir = "rl-trained-agents/sac/RxbotReach-v0_10/"
results_plotter.plot_results([log_dir], 1e5, results_plotter.X_TIMESTEPS, "SAC RxbotReach-v0")
###Output
_____no_output_____
###Markdown
exp* reward: task* model: SAC+HER* basic hyperparameter.* random init* task_ll = [-1, -1, 0], task_ul = [1, 1, 1]* joint_range = $2\pi * 3/4$
###Code
log_dir = "rl-trained-agents/sac/RxbotReach-v0_11/"
results_plotter.plot_results([log_dir], 1e5, results_plotter.X_TIMESTEPS, "SAC RxbotReach-v0")
###Output
_____no_output_____
###Markdown
PlotterThis notebook will be used to show the plotter options of history
###Code
from model_example import get_model_and_history_example
model, history = get_model_and_history_example()
history.history.keys()
import pandas as pd
import matplotlib.pyplot as plt
pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.grid(True)
plt.gca().set_ylim(0, 1)
plt.show()
###Output
_____no_output_____
###Markdown
Plotting Tools
###Code
import os
import csv
import glob
import numpy as np
from scipy.io import loadmat, savemat
import matplotlib.pyplot as plt
def plot_one_run(log_dir):
all_returns = []
for log_file in sorted(glob.glob(log_dir + "logs.mat")):
data = loadmat(log_file)
all_returns.append(data["returns"][0])
min_trial_length = min(map(len, all_returns))  # trim every run to the shortest one
trimmed_returns = np.array([r[:min_trial_length] for r in all_returns])
average_returns = np.mean(trimmed_returns, axis=0)
# Plot result
plt.figure()
plt.plot(np.arange(len(average_returns)), average_returns)
plt.title("Returns vs Iteration (averaged across seeds)")
plt.xlabel("Iteration number")
plt.ylabel("Return")
plt.show()
###Output
_____no_output_____
###Markdown
Cartpole
###Code
plot_one_run('/home/vitchyr/git/handful-of-trials/log/test/2019-07-01--16:18:38/')
###Output
_____no_output_____
###Markdown
Pointmass Pointmass Fixed Goal With squared exponential loss, it solves it perfectly
###Code
plot_one_run('/home/vitchyr/git/handful-of-trials/log/pointmass-reach-fixed-point-cartpole-settings/2019-07-01--17:22:52/')
###Output
_____no_output_____
###Markdown
With squared loss:
###Code
plot_one_run('/home/vitchyr/git/handful-of-trials/log/pointmass-reach-fixed-point-cartpole-settings-squared-loss/2019-07-01--17:28:09/')
###Output
_____no_output_____
###Markdown
Pointmass: No Walls (Varied goals)
###Code
plot_one_run('/home/vitchyr/git/handful-of-trials/log/pointmass-no-walls-cartpole-settings-squared-loss/2019-07-01--21:03:37/')
###Output
_____no_output_____
###Markdown
Pointmass U Wall
###Code
plot_one_run('/home/vitchyr/git/handful-of-trials/log/pointmass-u-wall-solve-in-8-ph10-run2/2019-07-01--22:41:41/')
###Output
_____no_output_____
###Markdown
Plotting Tools
###Code
import os
import csv
import numpy as np
from scipy.io import loadmat, savemat
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Basic plotting example
###Code
log_dir = "/home/archie/trail/baselines/handful-of-trials/log/" # Directory specified in script, not including date+time
min_num_trials = 50 # Plots up to this many trials
returns = []
for subdir in os.listdir(log_dir):
data = loadmat(os.path.join(log_dir, subdir, "logs.mat"))
if data["returns"].shape[1] >= min_num_trials:
returns.append(data["returns"][0][:min_num_trials])
returns = np.array(returns)
returns = np.maximum.accumulate(returns, axis=-1)
mean = np.mean(returns, axis=0)
# Plot result
plt.figure()
plt.plot(np.arange(1, min_num_trials + 1), mean)
plt.title("Performance")
plt.xlabel("Iteration number")
plt.ylabel("Return")
plt.show()
###Output
_____no_output_____ |
Data science and machine learning with python hands on/Standard Deviation & Variance.ipynb | ###Markdown
Standard Deviation and Variance
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
incomes = np.random.normal(100.0, 20.0, 10000)
plt.hist(incomes, 50)
plt.show()
incomes.std()
incomes.var()
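# Added check: the variance is the square of the standard deviation,
# so the difference below should be ~0 apart from floating-point error
print(incomes.std() ** 2 - incomes.var())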
###Output
_____no_output_____ |
tutorials/extragalactic_gcr_cluster_colors.ipynb | ###Markdown
Plotting Galaxy Cluster Member Colors in Extragalactic CatalogsOwners: **Dan Korytov [@dkorytov](https://github.com/LSSTDESC/DC2-analysis/issues/new?body=@dkorytov)**Last verified run: Nov 30, 2018 (by @yymao)This notebook demonstrates how to access the extragalactic catalogs through the Generic Catalog Reader (GCR, https://github.com/yymao/generic-catalog-reader) as well as how to filter on galaxy features and cluster membership.__Objectives__:After working through and studying this Notebook you should be able to1. Access extragalactic catalogs (protoDC2, cosmoDC2) through the GCR2. Filter on galaxy properties3. Select and plot cluster members__Logistics__: This notebook is intended to be run through the JupyterHub NERSC interface available here: https://jupyter-dev.nersc.gov. To set up your NERSC environment, please follow the instructions available here: https://confluence.slac.stanford.edu/display/LSSTDESC/Using+Jupyter-dev+at+NERSC
###Code
import GCRCatalogs
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as clr
%matplotlib inline
gc = GCRCatalogs.load_catalog('cosmoDC2_v1.1.4_small')
data = gc.get_quantities(['halo_mass', 'redshift',
'mag_u', 'mag_g', 'mag_r',
'mag_i', 'mag_z'], filters=['halo_mass > 3e13'])
###Output
_____no_output_____
###Markdown
Reading catalogWe load in the catalog with the "load_catalog" command, and then the values with the "get_quantities" command using filters to select sub-samples of the catalog. For this case we only need the magnitudes in several filters and the redshift. Galaxies are filtered on host halo mass to be at least 3e13 h$^{-1}$M$_\odot$. Help for error messages:If this fails to find the appropriate quantities, check that the desc-python kernel is being used and if this is not available source the kernels by running the following command on a terminal at nersc: "source /global/common/software/lsst/common/miniconda/setup_current_python.sh"We are loading in a smaller version of the full cosmoDC2 catalog - this contains the same information as the full catalog but with a smaller sky area.
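If `get_quantities` complains about missing quantity names, a quick way to check what the loaded catalog provides (assuming the standard GCR reader interface) is:

```python
# List the quantities the catalog exposes before requesting them
all_quantities = gc.list_all_quantities()
print(len(all_quantities))
print(sorted(q for q in all_quantities if q.startswith('mag_'))[:10])
```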
###Code
plt.figure()
h,xbins = np.histogram(np.log10(data['halo_mass']),bins=40)
xbins_avg = (xbins[1:]+xbins[:-1])/2.0
plt.semilogy(xbins_avg, h)
plt.ylabel(r'Galaxy Count')
plt.xlabel(r'log10( M$_{\rm{halo}}$ / M$_\odot)$')
plt.show()
###Output
_____no_output_____
###Markdown
As a sanity check, we made sure no galaxies have a host halo below 3e13 h$^{-1}$ M$_\odot$.
###Code
plt.figure()
gal_clr = data['mag_g']-data['mag_r']
plt.hist2d(data['redshift'], gal_clr, bins=100, cmap='PuBu', norm=clr.LogNorm())
plt.colorbar(label='population density')
plt.ylabel('Observed g-r')
plt.xlabel('redshift')
plt.title('Galaxy Colors in Clusters')
plt.tight_layout()
plt.figure()
gal_clr = data['mag_r']-data['mag_i']
plt.hist2d(data['redshift'], gal_clr, bins=100, cmap='PuBu',norm=clr.LogNorm())
plt.colorbar(label='population density')
plt.ylabel('r-i')
plt.xlabel('redshift')
plt.title('Galaxy Colors in Clusters')
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
class_materials/Loops_and_CustomFunctions/Custom_Functions/2017/lab6_exercises_ANSWERS_2017.ipynb | ###Markdown
Programming Bootcamp 2016 Lesson 6 Exercises -- ANSWERS--- ** Earning points (optional) **- Enter your name below.- Email your `.ipynb` file to raju ([email protected]) **before 9:00 am on 9/23**. - You do not need to complete all the problems to get points. - I will give partial credit for effort when possible.- At the end of the course, everyone who gets at least 90% of the total points will get a prize. **Name**: --- 1. Guess the output: scope practice (2pts)Refer to the code below to answer the following questions:
###Code
def fancy_calc(a, b, c):
x1 = basic_calc(a,b)
x2 = basic_calc(b,c)
x3 = basic_calc(c,a)
z = x1 * x2 * x3
return z
def basic_calc(x, y):
result = x + y
return result
x = 1
y = 2
z = 3
result = fancy_calc(x, y, z)
###Output
_____no_output_____
###Markdown
**(A)** List the line numbers of the code above in the order that they will be **executed**. If a line will be executed more than once, list it each time. **NOTE**: Select the cell above and hit "L" to activate line numbering! Answer:```1213141512891023891034891045615``` **(B)** Guess the output if you were to run each of the following pieces of code immediately after running the code above. Then run the code to see if you're right. (Remember to run the code above first)
###Code
print(x)
print(z)
print(x1)
print(result)
###Output
60
###Markdown
--- 2. Data structure woes (2pt)**(A) Passing a data structure to a function.** Guess the output of the following lines of code if you were to run them immediately following the code block below. Then run the code yourself to see if you're right.
###Code
# run this first!
def getMax(someList):
someList.sort()
x = someList[-1]
return x
scores = [9, 5, 7, 1, 8]
maxScore = getMax(scores)
print(maxScore)
print(someList)
print(scores)
###Output
[1, 5, 7, 8, 9]
###Markdown
> Why does scores get sorted? > When you pass a data structure as a parameter to a function, it's not a **copy** of the data structure that gets passed (as what happens with regular variables). What gets passed is a **direct reference** to the data structure itself. > The reason this is done is because data structures are typically expected to be fairly large, and copying/re-assigning the whole thing can be both time- and memory-consuming. So doing things this way is more efficient. It can also surprise you, though, if you're not aware it's happening. If you would like to learn more about this, look up "Pass by reference vs pass by value". **(B) Copying data structures.** Guess the output of the following code if you were to run them immediately following the code block below. Then run the code yourself to see if you're right.
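A minimal sketch of a side-effect-free variant (the `getMaxSorted` name is introduced here just for illustration): work on a copy, or simply use the built-in `max`, so the caller's list is left untouched.

```python
def getMaxSorted(someList):
    ordered = sorted(someList)   # sorted() returns a new list; the argument is not modified
    return ordered[-1]

scores = [9, 5, 7, 1, 8]
print(getMaxSorted(scores))      # 9
print(scores)                    # [9, 5, 7, 1, 8] -- original order preserved
```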
###Code
# run this first!
list1 = [1, 2, 3, 4]
list2 = list1
list2[0] = "HELLO"
print(list2)
print(list1)
###Output
['HELLO', 2, 3, 4]
###Markdown
> Yes, that's right--even when you try to make a new copy of a list, it's actually just a reference to the same list! This is called aliasing. The same thing will happen with a dictionary. This can really trip you up if you don't know it's happening. So what if we want to make a truly separate copy? Here's a way for lists:
###Code
# for lists
list1 = [1, 2, 3, 4]
list2 = list(list1) #make a true copy of the list
list2[0] = "HELLO"
print(list2)
print(list1)
###Output
['HELLO', 2, 3, 4]
[1, 2, 3, 4]
###Markdown
And here's a way for dictionaries:
###Code
# for dictionaries
dict1 = {'A':1, 'B':2, 'C':3}
dict2 = dict1.copy() #make a true copy of the dict
dict2['A'] = 99
print(dict2)
print(dict1)
###Output
{'A': 99, 'B': 2, 'C': 3}
{'A': 1, 'B': 2, 'C': 3}
###Markdown
--- 3. Writing custom functions (8pts)Complete the following. For some of these problems, you can use your code from previous labs as a starting point. (If you didn't finish those problems, feel free to use the code from the answer sheet, just make sure you understand how they work! Optionally, for extra practice you can try re-writing them using some of the new things we've learned since then.) **(A)** (1pt) Create a function called "gc" that takes a single sequence as a parameter and returns the GC content of the sequence (as a 2 decimal place float).
###Code
def gc(seq):
gcCount = seq.count("C") + seq.count("G")
gcFrac = float(gcCount) / len(seq)
return round(gcFrac,2)
###Output
_____no_output_____
###Markdown
**(B)** (1pt) Create a function called "reverse_compl" that takes a single sequence as a parameter and returns the reverse complement.
###Code
def reverse_compl(seq):
complements = {'A':'T', 'C':'G', 'G':'C', 'T':'A'}
compl = ""
for char in seq:
compl = complements[char] + compl
return compl
###Output
_____no_output_____
###Markdown
**(C)** (1pt) Create a function called "read_fasta" that takes a file name as a parameter (which is assumed to be in fasta format), puts each fasta entry into a dictionary (using the header line as a key and the sequence as a value), and then returns the dictionary.
###Code
def read_fasta(fileName):
ins = open(fileName, 'r')
seqDict = {}
activeID = ""
for line in ins:
line = line.rstrip('\r\n')
if line[0] == ">":
activeID = line[1:]
if activeID in seqDict:
print (">>> Warning: repeat id:", activeID, "-- overwriting previous ID.")
seqDict[activeID] = ""
else:
seqDict[activeID] += line
ins.close()
return seqDict
###Output
_____no_output_____
###Markdown
**(D)** (2pts) Create a function called "rand_seq" that takes an integer length as a parameter, and then returns a random DNA sequence of that length. *Hint: make a list of the possible nucleotides*
###Code
def rand_seq(length):
import random
nts = ['A','C','G','T']
seq = ""
for i in range(length):
seq += random.choice(nts)
return seq
###Output
_____no_output_____
###Markdown
**(E)** (2pts) Create a function called "shuffle_nt" that takes a single sequence as a parameter and returns a string that is a shuffled version of the sequence (i.e. the same nucleotides, but in a random order). *Hint: Look for Python functions that will make this easier. For example, the `random` module has some functions for shuffling. There may also be some built-in string functions that are useful. However, you can also do this just using things we've learned.*
###Code
def shuffle_nt(seq):
import random
strList = list(seq)
random.shuffle(strList)
shuffSeq = "".join(strList)
return shuffSeq
###Output
_____no_output_____
###Markdown
**(F)** (1pt) Run the code below to show that all of your functions work. Try to fix any that have problems.
###Code
##### testing gc
gcCont = gc("ATGGGCCCAATGG")
if type(gcCont) != float:
print(">> Problem with gc: answer is not a float, it is a %s." % type(gcCont))
elif gcCont != 0.62:
print(">> Problem with gc: incorrect answer (should be 0.62; your code gave", gcCont, ")")
else:
print("gc: Passed.")
##### testing reverse_compl
revCompl = reverse_compl("GGGGTCGATGCAAATTCAAA")
if type(revCompl) != str:
print (">> Problem with reverse_compl: answer is not a string, it is a %s." % type(revCompl))
elif revCompl != "TTTGAATTTGCATCGACCCC":
print (">> Problem with reverse_compl: answer (%s) does not match expected (%s)" % (revCompl, "TTTGAATTTGCATCGACCCC"))
else:
print ("reverse_compl: Passed.")
##### testing read_fasta
try:
ins = open("horrible.fasta", 'r')
except IOError:
print (">> Can not test read_fasta because horrible.fasta is missing. Please add it to the directory with this notebook.")
else:
seqDict = read_fasta("horrible.fasta")
if type(seqDict) != dict:
print (">> Problem with read_fasta: answer is not a dictionary, it is a %s." % type(seqDict))
elif len(seqDict) != 22:
print (">> Problem with read_fasta: # of keys in dictionary (%s) does not match expected (%s)" % (len(seqDict), 22))
else:
print ("read_fasta: Passed.")
##### testing rand_seq
randSeq1 = rand_seq(23)
randSeq2 = rand_seq(23)
if type(randSeq1) != str:
print (">> Problem with rand_seq: answer is not a string, it is a %s." % type(randSeq1))
elif len(randSeq1) != 23:
print (">> Problem with rand_seq: answer length (%s) does not match expected (%s)." % (len(randSeq1), 23))
elif randSeq1 == randSeq2:
print (">> Problem with rand_seq: generated the same sequence twice (%s) -- are you sure this is random?" % randSeq1)
else:
print ("rand_seq: Passed.")
##### testing shuffle_nt
shuffSeq = shuffle_nt("AAAAAAGTTTCCC")
if type(shuffSeq) != str:
print (">> Problem with shuffle_nt: answer is not a string, it is a %s." % type(shuffSeq))
elif len(shuffSeq) != 13:
print (">> Problem with shuffle_nt: answer length (%s) does not match expected (%s)." % (len(shuffSeq), 12))
elif shuffSeq == "AAAAAAGTTTCCC":
print (">> Problem with shuffle_nt: answer is exactly the same as the input. Are you sure this is shuffling?")
elif shuffSeq.count('A') != 6:
print (">> Problem with shuffle_nt: answer doesn't contain the same # of each nt as the input.")
else:
print ("shuff_seq: Passed.")
###Output
gc: Passed.
reverse_compl: Passed.
read_fasta: Passed.
rand_seq: Passed.
shuff_seq: Passed.
###Markdown
--- 4. Using your functions (5pts)Use the **functions you created above** to complete the following. **(A)** (1pt) Create 20 random nucleotide sequences of length 50 and print them to the screen.
###Code
for i in range(20):
print(rand_seq(50))
###Output
CCTTACATGCTGATAAGCAGTATCGAACCATGAGCTAGCGCCGCCTTTAA
GGTTCGTACAGGGTGGTTATCCGCCGTCGGGCGGTAGTATCGTCTTCTGG
CGCCGGGTGAGGGTAGGATTGAAACGTGATATTCAGGCCACCCGTTTGTA
TTTAATCATCGATTGATCTAACTCGAGTCAATTCCAGGGGGGGCCAAAGC
ACGAATCGATTGCAGACAGGGGTTGTTACCTGTCCTGACGCAACATAGTG
TCTATACGGTGAGGACCATTGGGCAGTTTTGAATATGCTTAGACTACCGG
GAACGCCCCCTTGTACGCGGCGGTTACGAAGCTCGTGATGAGGATCGCCT
AGGACCGGAGCAATTTAATCATGTTTTCTGACGGTTCACACCCTTCTGGA
ACAAGAGAGCCCCAACTCCGCCATTACCATTGTAGAAAACCCGTACTGCA
GGGCAAGTCGCTCGTTCTCTGCTGGGTTTTTTTGTTTACGTGTTTTGTGG
TGAAGGTTTGCAAATACGCCGTAGGGAATAACATACTTCTTTTAGTGCCT
CGCGAATGCCTTCCTAGTTTCGTATTAGCAAGAAAGTTGCTCACTACTCT
CGATTCTTACGAATGAACGTTTGGTTCGACTAAACTCTGCTCATAATGCC
ATACGTCGAGACCTCCACTCTTACAGTATGGACGGCGACGCTAGCGCCAA
AACATTCTTTGTAATTGAGTACATTCTCGTTGACTCAGGCCCTTTCCTTT
GAGCTGGGTTTGAAATGAGGCTCGACTACATTCGACAGAAACGATGCTCC
CAGTGAGCTCCCCACACGAACCGCAGCAAAATACGTTATGCATATCAGCT
GAAGGCCCTCCCAGGCCATCATTATTGGCGCCCGCTATGTGGAGTGGAGT
TAAATGAGGTCTGAGAGTGCCAAGGCAGGAATGGAAGCAGAGAGGGACGC
CAGTATGTCAACTACGCAGCCCTCGAGCCTCCTAGGATCCAGAAAAAAGA
###Markdown
**(B)** (1pt) Read in `horrible.fasta` into a dictionary. For each sequence, print its reverse complement to the screen.
###Code
seqDict = read_fasta("horrible.fasta")
for seqID in seqDict:
print (reverse_compl(seqDict[seqID]))
###Output
AACCTCCTGGGGAGGTGGTGGCGGCTCTTGCAGATGTGGAACCAGCAGAGGTTGTGCTTACAGCTGGGCCTGTGGTGCTGCCAGCTGTTTCAGCCGGTGT
CTGATCACTGAGCTGAAACTAAACGTTTTAGGTGGAAAAAAAGCGTCCGAAGGCACCGTGAAATGATTAAGGAACTAAAGAGCTTCTCGCCATGTGAGATCATGTCCTGTTCTCGCCAACATCACAAGATGTCCCCAGACACGCCGCGCCCCCAGCGCGCCGCCCCACACTGCCGGCCCGGAGCGAGGAAAGGGTAGGCGCTGCGCGG
ACCCCTAAGGAACGTCCCTCGCGTCGGTTTGAGGAGGAAGGCGCACTTCTCTTGATGACCGTTGG
GGTAAGCACAGGATCCAAGAAACAGAGATTACACACAGGAGAGAGGCCAAGCAAAGCTCTGTGATGAAAGGTATGAAGTATGCCCACGGAGCAGCCAGCTGAGACTGGAACAAGAGGATGTAGCACTCCATGCAGGAAAATTCCATGGAATCTAGCACTTTGGGACATCCAGGTGGGCG
AGCAATACTTTCACTGCTGCCAGCCCGAG
GTATCACCTTCAATTTCTTAAGAGCCATTCTTCT
ATTTTCTGAGCTTCTTCTCTCGCAAGGTCTTGTTCATTTGGCAATACTGATATTTGATCTTTGTACACA
CCATGGTTAGTTAAATTCCCTAGAGATGTAGCCGTGACTCTCCCAATACCTGAAGTGTGCCTCCCCTGACTCTGTGGCATCCTCTGGAAGAGATCATGGTTGTATTCATAATATCTGTAATCTTCTTGTGCACGATCTCCAAGTGGCCGCCTTCTCTGTCCATCAAAAAAGTTATCTGAGAAGAAGTATCGGGAGCCAGAGTCTCCATTCTCAACAGCAAAGTTAACTTCTGTCAAAAATGACTGTGATGAGCCACACTCTCGAGGGACATCTGCTAGGCTCCTGACAAGGTAAGAAGGGGCAGACAGTCTGTGGCTTTCTCTTCTCATTACTTCATGAGGTGTCCTTTGAATTGCAGTTCTCAGGAAACTCTGGTTTCTTGAAACTACACCATCTCCAGAAGCTGAGAAAGCAGTAGCACTTGAATCTGGAAGACAGAGGTCAGTCC
GTACCTTCTCGGAAGGCCAGAGTCAATTGTACCACCACAGATCCTGGCCTGAACTTAATATTGGAGAGGCCCAGAAAACCCCCTT
CAAAGCACACAGAGATTCTGTCAGGTGCTGAGACACCACAGCCTTCTCAATTTTGTCCTTAAGGGCTTTATCTTTCATCCAATTGAGCAGAGGCTCAAATTCTTTCTCAACTGCTTCATGACTCTCCTTAGTTTTCTCACTTTTATCAAACTTCATTCCTTCCTTGACAACATTCTGGAACCTCTTCCCATCAAATTTG
GCTTTGGAAACTGGAATGAGGATCACCAACAGGATCCTCATTTTACACAGGAGTTATGAGAGTTACATCCTCTAGCAGAGATGCTTGGTCATTACCTGTGGTACATGAGATTACCGAGCTAAAAGGGAAAAAAAACGATCTTAATGTTCTCCCATGAACTCAACTTAAGCTTTTTATGGAGGCACTGAGGCCATGCAGCTCCTTTTCCAAAAGACACAGATAAAAGCCAAATAAGGTAGAGGACTTTGGAAATTTTCTCTGAAAAGTTAAATTCCACATAATAGTAAGA
TTTTAATCTTCTTCCTTCCCGTCGACTGTCTTTCTTTAAAGCAACTGCAATTTCTTCCCTTACTTCCTCACTGTCTGTTGCTATAATTTGCCCATTGTGAACCATCTGTGAATTCTGTCTTAGGTATTCCATGAATCCATTCACATCTTCATTTAAGTACTCTTTTTTCTTTTTGTTCTTTTTATGTTTTGCTTGGGGTGCATCATTTTTGAGGGATAGCCTATTGGCTTCAAGTTGTTTACGCTTTGGTAGGTTTTGGCTTGTTCCCTCAAAGGATCCCTTCTTCATGTCCTCCCATGATGTTGCAGGCAAGGGTCTCTTGTTATATGTGGTACTAACTCGGGCCCACCTGGTCATAATTTCATCAGTGGTACCGCGCACGAATCCCCCAGAGCAGCCGAGTTGGCGAGCCGGGGAAGACCGCCCTCCTGCGGTATTGGAGACCGGAAGCACATAGTG
GGGCCCGGGACCCGGGTGGGGGGGACCGCCGAGAGGCCCAGCGCAGCGA
CTTCATATATATTTAATTTTCTCTTTGCTTCACTACTGCAAGGTAGGTGTTTATTATCTCCTTTTACAGATGTGGAAACTTAGGCTCAGAGGTGAAGTAACTTGCACAAGTTTCTACAGCTAGAATTTGAACCAGGTCTGACCCCCGAATTGTGCTCGTCCATAAAGGCCAGCATTTGCCAAATTATGGCACACAGTACCACCAGTGGTACGTGACTTCTTTGGTTGAAAACAGACAAATTTATTTTGTTTTGATAGTTATGTCTTTTAATATGTATTAGAAGAATACATAATTAGCACACATCAAACCTGTGATTTCACAGATATCACTACTTGGGATGAAAATGATATAGGATAACAATGTTAGACCTCAG
AAGATTTCCAGAGTGG
CCTTTCCGGGACTGGTTT
AAATTGACTTCTGCCATAATAAAATC
TGAACAGCTGCTGTGTAGCCCATACTGTGAAAAGTAAAACATCACCCCAGTTCTCGGTACACACAGAGCTCATGCTCCAGCGGGCTGAGCCT
GCTTAAGCCTAGGAGTTTGAGACCAGCCTGGGCAACACAGCAAGACCCCATCTCTACCAAAAAAAAAAAAAAATTAAAGAGTCCTATAGAGAATTCTTATACTCCAATGTGAAGACAACATTGGAAAGGGCCAAGTTTCTCATGCCCTCCAACTAAGAAACCCCTAATAAAAAATGAAGTGACACTTGAACAGGACTTAAGGATTCTACAGTTGGTCTTTGGCAGCAGTATGTTTTAGGAAATGTAATGCGGCGGGTGGGGCGGTGACTTAGCCAGTTATGCTTTTAAATGGAACTGCAATAATAAAAGTGATACTAGTGCAGAAAGTATCTGTATTAGAATTCTAGAGTAAGTCAAGAGCTCACATTCATTAAAATAATGACACAACTCCACGGGGGTGGGGAGAACAGCAGTAAAGCAACCACATACTATACTATTAGACTGGCAACATTGAGACTGAAAATATCCATGAGGAGAATACTGACATCTTA
TCAATGTTTTCTTCTTTAATCACAGATGATGTACAGACACCAGCATAATTTGCTGATGTAATTTCCTTATCCAAGG
GCATGGTTGGCCTGAAGGTATTAGTGCGCAGGAGATGATTCAAACTTCCATGGGTCCCATTATTAGGAGCTGGCTTCAATCCCAGGAGATCACACATAACATTGTAAAGTTCAATGTTTTCAAATGGAGGCACTTTAGTCTTGTACTTAAATGTTGAGCCATAACCTACAAAAACAGTCTGCATGCTGTTGACCTTGTTATCAAATCCGTGGTCTCCCTGGAAAAAGCATTTTCCTGATGG
TAGGTGAAAATTCCTTCTGCTGGTTCCCAGAGATACCTAGGAAGACTCTGGGGAACCCTTGGCTAATTATCCCAGGAAAACTGCTGCCTCGGCTGAAACTGGAAGCTCATGGTGGACCCCAAGATATCTTATCTTTGGGACACTTAAAAAAAAAAAGCTATTTTATTCCAATTAAGCCAGTCTTTTGAGAGACACCTAGAAAGAAAGGGCTTCTAAAACATGAACATGAGCTCTGATGTTAGCAACCCAACTTCCACTCCAAAATTACTGAAATATTTATGGGTAAAATTAACTCATAAAAACCTTCTTCT
###Markdown
**(C)** (3pts) Read in horrible.fasta into a dictionary. For each sequence, find the length and the gc content. Print the results to the screen in the following format:```SeqID Len GC... ... ...```That is, print the header shown above (separating each column's title by a tab (`\t`)), followed by the corresponding info about each sequence on a separate line. The "columns" should be separated by tabs. Remember that you can do this printing as you loop through the dictionary... that way you don't have to store the length and gc content.(In general, this is the sort of formatting you should use when printing data files!)
###Code
seqDict = read_fasta("horrible.fasta")
print ("SeqID\tLen\tGC")
for seqID in seqDict:
seq = seqDict[seqID]
seqLen = len(seq)
seqGC = gc(seq)
print (seqID + "\t" + str(seqLen) + "\t" + str(seqGC))
###Output
SeqID Len GC
varlen2_uc007xie.1_4456 100 0.61
varlen2_uc010mlp.1_79 208 0.57
varlen2_uc009div.2_242 65 0.58
varlen2_uc003its.2_2976 179 0.5
varlen2_uc003nvg.4_2466 29 0.55
varlen2_uc029ygd.1_73 34 0.35
varlen2_uc007kxx.1_2963 69 0.36
varlen2_uc007nte.2_374 448 0.46
varlen2_uc009wph.3_423 85 0.51
varlen2_uc010osx.2_1007 199 0.41
varlen2_uc001pmn.3_3476 289 0.39
varlen2_uc003khi.3_3 459 0.45
varlen2_uc001agr.3_7 49 0.84
varlen2_uc011moe.2_5914 373 0.36
varlen2_uc003hyy.2_273 16 0.44
varlen2_uc007fws.1_377 18 0.56
varlen2_uc003pij.1_129 26 0.27
varlen2_uc002wkt.1_1569 92 0.52
varlen2_uc010suq.2_3895 491 0.4
varlen2_uc021qfk.1>2_1472 76 0.34
varlen2_uc003yos.2_1634 241 0.42
varlen2_uc009bxt.1_1728 311 0.4
###Markdown
--- Bonus question: K-mer generation (+2 bonus points)This question is optional, but if you complete it, I'll give you two bonus points. You won't lose points if you skip it.Create a function called `get_kmers` that takes a single integer parameter, `k`, and returns a list of all possible k-mers of A/T/G/C. For example, if the supplied `k` was 2, you would generate all possible 2-mers, i.e. [AA, AT, AG, AC, TA, TT, TG, TC, GA, GT, GG, GC, CA, CT, CG, CC]. Notes:- This function must be *generic*, in the sense that it can take *any* integer value of `k` and produce the corresponding set of k-mers.- As there are $4^k$ possible k-mers for a given k, stick to smaller values of k for testing!!- I have not really taught you any particularly obvious way to solve this problem, so feel free to get creative in your solution!*There are many ways to do this, and plenty of examples online. Since the purpose of this question is to practice problem solving, don't directly look up "k-mer generation"... try to figure it out yourself. You're free to look up more generic things, though.*
###Code
# Method 1
# Generic kmer generation for any k and any alphabet (default is DNA nt)
# Pretty fast
def get_kmers1(k, letters=['A','C','G','T']):
kmers = []
choices = len(letters)
finalNum = choices ** k
# initialize to blank strings
for i in range(finalNum):
kmers.append("")
# imagining the kmers lined up vertically, generate one "column" at a time
for i in range(k):
consecReps = choices ** (k - (i + 1)) #number of times to consecutively repeat each letter
patternReps = choices ** i #number of times to repeat pattern of letters
# create the current column of letters
index = 0
for j in range(patternReps):
for m in range(choices):
for n in range(consecReps):
kmers[index] += letters[m]
index += 1
return kmers
get_kmers1(3)
# Method 2
# Generate numbers, discard any that aren't 1/2/3/4's, convert to letters.
# Super slow~
def get_kmers2(k):
discard = ["0", "5", "6", "7", "8", "9"]
convert = {"1": "A", "2": "T", "3": "G", "4": "C"}
min = int("1" * k)
max = int("4" * k)
kmers = []
tmp = []
for num in range(min, (max + 1)): # generate numerical kmers
good = True
for digit in str(num):
if digit in discard:
good = False
break
if good == True:
tmp.append(num)
for num in tmp: # convert numerical kmers to ATGC
result = ""
for digit in str(num):
result += convert[digit]
kmers.append(result)
return kmers
# Method 3 (by Nate)
# A recursive solution. Fast!
# (A recursive function is a function that calls itself)
def get_kmers3(k):
nt = ['A', 'T', 'G', 'C']
k_mers = []
if k == 1:
return nt
else:
for i in get_kmers3(k - 1):
for j in nt:
k_mers.append(i + j)
return k_mers
# Method 4 (by Nate)
# Fast
def get_kmers4(k):
nt = ['A', 'T', 'G', 'C']
k_mers = []
total_kmers = len(nt)**k
# make a list of size k with all zeroes.
# this keeps track of which base we need at each position
pointers = []
for p in range(k):
pointers.append(0)
for k in range(total_kmers):
# use the pointers to generate the next k-mer
k_mer = ""
for p in pointers:
k_mer += nt[p]
k_mers.append(k_mer)
# get the pointers ready for the next k-mer by updating them left to right
pointersUpdated = False
i = 0
while not pointersUpdated and i < len(pointers):
if pointers[i] < len(nt) - 1:
pointers[i] += 1
pointersUpdated = True
else:
pointers[i] = 0
i += 1
return k_mers
# Method 5 (by Justin Becker, bootcamp 2013)
# Fast!
def get_kmers5(k): #function requires int as an argument
kmers = [""]
for i in range(k): #after each loop, kmers will store the complete set of i-mers
currentNumSeqs = len(kmers)
for j in range(currentNumSeqs): #each loop takes one i-mer and converts it to 4 (i+1)=mers
currentSeq = kmers[j]
kmers.append(currentSeq + 'C')
kmers.append(currentSeq + 'T')
kmers.append(currentSeq + 'G')
kmers[j] += 'A'
return kmers
# Method 6 (by Nick)
# Convert to base-4
def get_kmers6(k):
bases = ['a', 'g', 'c', 't']
kmers = []
for i in range(4**k):
digits = to_base4(i, k)
mystr = ""
for baseidx in digits:
mystr += bases[baseidx]
kmers.append(mystr)
return kmers
# convert num to a k-digit base-4 int
def to_base4(num, k):
digits = []
while k > 0:
        digits.append(num // 4**(k-1))  # integer division so the result can index the bases list
num %= 4**(k-1)
k -= 1
return digits
# Below: more from Nate
import random
import time
alphabet = ['A', 'C', 'G', 'T']
## Modulus based
def k_mer_mod(k):
k_mers = []
for i in range(4**k):
k_mer = ''
for j in range(k):
            k_mer = alphabet[(i // 4**j) % 4] + k_mer
k_mers.append(k_mer)
return k_mers
## maybe the range operator slows things down by making a big tuple
def k_mer_mod_1(k):
k_mers = []
total = 4**k
i = 0
while i < total:
k_mer = ''
for j in range(k):
            k_mer = alphabet[(i // 4**j) % 4] + k_mer
k_mers.append(k_mer)
i += 1
return k_mers
## Does initializing the list of k_mers help?
def k_mer_mod_2(k):
k_mers = [''] * 4**k
for i in range(4**k):
k_mer = ''
for j in range(k):
            k_mer = alphabet[(i // 4**j) % 4] + k_mer
k_mers[i] = k_mer
return k_mers
## What's faster? element assignment or hashing?
def k_mer_mod_set(k):
k_mers = set()
for i in range(4**k):
k_mer = ''
for j in range(k):
            k_mer = alphabet[(i // 4**j) % 4] + k_mer
k_mers.add(k_mer)
return list(k_mers)
## does creating the string up front help?
#def k_mer_mod_3(k):
# k_mers = []
# k_mer = "N" * k
# for i in range(4**k):
# for j in range(k):
# k_mer[j] = alphabet[(i/4**j) % 4]
# k_mers.append(k_mer)
# return k_mers
# Nope! Strings are immutable, dummy!
# maybe we can do something tricky with string substitution
def k_mer_mod_ssub(k):
    template = "%s" * k              # one "%s" placeholder per position
    k_mers = []
    for i in range(4**k):
        k_mer = []
        for j in range(k):
            k_mer.append(alphabet[(i // 4**j) % 4])
        k_mers.append(template % tuple(k_mer))   # string formatting needs a tuple, not a list
return k_mers
# what about using a list?
def k_mer_mod_4(k):
k_mers = [''] * 4**k
k_mer = [''] * k
for i in range(4**k):
for j in range(k):
            k_mer[j] = alphabet[(i // 4**j) % 4]
k_mers[i] = "".join(k_mer)
return k_mers
## recursive version
def k_mer_recursive(k):
if k == 0:
return ['']
else:
k_mers = []
for k_mer in k_mer_recursive(k-1):
for n in alphabet:
k_mers.append("%s%s" % (k_mer, n))
return k_mers
## That works, but what I wanted to be like, really obnoxious about it
def k_mer_recursive_2(k):
if k == 0:
return ['']
else:
k_mers = []
[[k_mers.append("%s%s" % (k_mer, n)) for n in alphabet] for k_mer in k_mer_recursive_2(k-1)]
return k_mers
# using list instead of strings to store the k_mers
def k_mer_recursive_3(k, j = False):
if k == 0:
return [[]]
else:
k_mers = []
[[k_mers.append((k_mer + [n])) if j else k_mers.append("".join(k_mer + [n])) for n in alphabet] for k_mer in k_mer_recursive_3(k-1, True)]
return k_mers
## stochastic (I have a good feeling about this one!)
def k_mer_s(k):
s = set()
i = 0
while i < 4**k:
k_mer = ''
for j in range(k):
k_mer = k_mer + random.choice(alphabet)
if k_mer not in s:
s.add(k_mer)
i += 1
return list(s)
## I sure hope this works because now we're pretty much cheating
import array
def k_mer_mod_array(k):
k_mers = []
k_mer = array.array('c', ['N'] * k)
for i in range(4**k):
for j in range(k):
            k_mer[j] = alphabet[(i // 4**j) % 4]
k_mers.append("".join(k_mer))
return k_mers
## That could have gone better.
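# Method 7 (editor's sketch, not part of the original set): the standard library can also
# do this concisely. itertools.product enumerates every k-length combination of the alphabet.
from itertools import product
def get_kmers7(k, letters=['A', 'C', 'G', 'T']):
    return ["".join(p) for p in product(letters, repeat=k)]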
###Output
_____no_output_____
###Markdown
------ Extra problems (0pts) **(A)** Create a function that counts the number of occurrences of each nt in a specified string. Your function should accept a nucleotide string as a parameter, and should return a dictionary with the counts of each nucleotide (where the nt is the key and the count is the value).
###Code
def nt_counts(seq):
counts = {}
for nt in seq:
if nt not in counts:
counts[nt] = 1
else:
counts[nt] += 1
return counts
print(nt_counts("AAAAATTTTTTTGGGGC"))
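# Note (editor's sketch): the standard library offers an equivalent one-liner:
# from collections import Counter
# print(dict(Counter("AAAAATTTTTTTGGGGC")))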
###Output
{'A': 5, 'T': 7, 'G': 4, 'C': 1}
###Markdown
**(B)** Create a function that generates a random nt sequence of a specified length with specified nt frequencies. Your function should accept as parameters: - a length- a dictionary of nt frequencies, and should return the generated string. You'll need to figure out a way to use the supplied frequencies to generate the sequence.An example of the nt freq dictionary could be: {'A':0.60, 'G':0.10, 'C':0.25, 'T':0.05}
###Code
def generate_nucleotide(length, freqs):
import random
seq = ""
samplingStr = ""
# maybe not the best way to do this, but fun:
# create a list with the indicated freq of nt
for nt in freqs:
occurPer1000 = int(1000*freqs[nt])
samplingStr += nt*occurPer1000
samplingList = list(samplingStr)
# sample from the list
for i in range(length):
newChar = random.choice(samplingList)
seq += newChar
return seq
generate_nucleotide(100, {'A':0.60, 'G':0.10, 'C':0.25, 'T':0.05})
# let's check if it's really working
n = 10000
testSeq = generate_nucleotide(n, {'A':0.60, 'G':0.10, 'C':0.25, 'T':0.05})
obsCounts = nt_counts(testSeq)
for nt in obsCounts:
print ("%s %f" % (nt, float(obsCounts[nt]) / n))
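# Alternative sketch (assumes numpy is imported as np): sample directly with explicit probabilities
# import numpy as np
# "".join(np.random.choice(['A', 'G', 'C', 'T'], size=n, p=[0.60, 0.10, 0.25, 0.05]))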
###Output
G 0.100800
A 0.601700
C 0.248600
T 0.048900
|
week 3 - tensorflow tutorials/Tensorflow+Tutorial+v1.ipynb | ###Markdown
TensorFlow TutorialWelcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow: - Initialize variables- Start your own session- Train algorithms - Implement a Neural NetworkPrograming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code. 1 - Exploring the Tensorflow LibraryTo start, you will import the library:
###Code
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1)
###Output
_____no_output_____
###Markdown
Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
###Code
y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y') # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer() # When init is run later (session.run(init)),
# the loss variable will be initialized and ready to be computed
with tf.Session() as session: # Create a session and print the output
session.run(init) # Initializes the variables
print(session.run(loss)) # Prints the loss
###Output
_____no_output_____
###Markdown
Writing and running programs in TensorFlow has the following steps:1. Create Tensors (variables) that are not yet executed/evaluated. 2. Write operations between those Tensors.3. Initialize your Tensors. 4. Create a Session. 5. Run the Session. This will run the operations you'd written above. Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value.Now let us look at an easy example. Run the cell below:
###Code
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
###Output
_____no_output_____
###Markdown
As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
###Code
sess = tf.Session()
print(sess.run(c))
###Output
_____no_output_____
###Markdown
Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**. Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
###Code
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
###Output
_____no_output_____
###Markdown
When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session. Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph. 1.1 - Linear functionLets start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector. **Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):```pythonX = tf.constant(np.random.randn(3,1), name = "X")```You might find the following functions helpful: - tf.matmul(..., ...) to do a matrix multiplication- tf.add(..., ...) to do an addition- np.random.randn(...) to initialize randomly
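As a minimal sketch (not the graded solution), this is how `tf.matmul` and `tf.add` combine inside a session on small constant tensors; the names `A_ex`, `v_ex`, and `b_ex` are purely illustrative:
```python
A_ex = tf.constant(np.random.randn(2, 2), name="A_ex")
v_ex = tf.constant(np.random.randn(2, 1), name="v_ex")
b_ex = tf.constant(np.random.randn(2, 1), name="b_ex")
out = tf.add(tf.matmul(A_ex, v_ex), b_ex)   # out = A_ex @ v_ex + b_ex
with tf.Session() as s:
    print(s.run(out))                       # a (2, 1) array of values
```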
###Code
# GRADED FUNCTION: linear_function
def linear_function():
"""
Implements a linear function:
Initializes W to be a random tensor of shape (4,3)
Initializes X to be a random tensor of shape (3,1)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- runs the session for Y = WX + b
"""
np.random.seed(1)
### START CODE HERE ### (4 lines of code)
X = None
W = None
b = None
Y = None
### END CODE HERE ###
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
### START CODE HERE ###
sess = None
result = None
### END CODE HERE ###
# close the session
sess.close()
return result
print( "result = " + str(linear_function()))
###Output
_____no_output_____
###Markdown
*** Expected Output ***: **result**[[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]] 1.2 - Computing the sigmoid Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise lets compute the sigmoid function of an input. You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session. ** Exercise **: Implement the sigmoid function below. You should use the following: - `tf.placeholder(tf.float32, name = "...")`- `tf.sigmoid(...)`- `sess.run(..., feed_dict = {x: z})`Note that there are two typical ways to create and use sessions in tensorflow: **Method 1:**```pythonsess = tf.Session() Run the variables initialization (if needed), run the operationsresult = sess.run(..., feed_dict = {...})sess.close() Close the session```**Method 2:**```pythonwith tf.Session() as sess: run the variables initialization (if needed), run the operations result = sess.run(..., feed_dict = {...}) This takes care of closing the session for you :)```
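As a minimal sketch of "Method 2" on an unrelated op (`tf.square`, so it doesn't give away the exercise), note how the `with` block closes the session for you:
```python
t = tf.placeholder(tf.float32, name="t")
squared = tf.square(t)
with tf.Session() as sess:
    print(sess.run(squared, feed_dict={t: 4.0}))   # prints 16.0
```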
###Code
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
results -- the sigmoid of z
"""
### START CODE HERE ### ( approx. 4 lines of code)
# Create a placeholder for x. Name it 'x'.
x = None
# compute sigmoid(x)
sigmoid = None
# Create a session, and run it. Please use the method 2 explained above.
# You should use a feed_dict to pass z's value to x.
None
# Run session and call the output "result"
result = None
### END CODE HERE ###
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
###Output
_____no_output_____
###Markdown
*** Expected Output ***: **sigmoid(0)**0.5 **sigmoid(12)**0.999994 **To summarize, you now know how to**:1. Create placeholders2. Specify the computation graph corresponding to operations you want to compute3. Create the session4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. 1.3 - Computing the CostYou can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m: $$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$you can do it in one line of code in tensorflow!**Exercise**: Implement the cross entropy loss. The function you will use is: - `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)`Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$. All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)}))\large )\small\tag{2}$$
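As a hedged sketch of how `tf.nn.sigmoid_cross_entropy_with_logits` behaves on plain constant tensors (not the graded placeholder version; the values below are arbitrary):
```python
z_ex = tf.constant([0.2, 0.4, 0.7, 0.9])
y_ex = tf.constant([0., 0., 1., 1.])
loss_ex = tf.nn.sigmoid_cross_entropy_with_logits(logits=z_ex, labels=y_ex)
with tf.Session() as s:
    print(s.run(loss_ex))   # one loss value per example
```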
###Code
# GRADED FUNCTION: cost
def cost(logits, labels):
"""
Computes the cost using the sigmoid cross entropy
Arguments:
logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
labels -- vector of labels y (1 or 0)
Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
in the TensorFlow documentation. So logits will feed into z, and labels into y.
Returns:
cost -- runs the session of the cost (formula (2))
"""
### START CODE HERE ###
# Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
z = None
y = None
# Use the loss function (approx. 1 line)
cost = None
# Create a session (approx. 1 line). See method 1 above.
sess = None
# Run the session (approx. 1 line).
cost = None
# Close the session (approx. 1 line). See method 1 above.
None
### END CODE HERE ###
return cost
logits = sigmoid(np.array([0.2,0.4,0.7,0.9]))
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost))
###Output
_____no_output_____
###Markdown
** Expected Output** : **cost** [ 1.00538719 1.03664088 0.41385433 0.39956614] 1.4 - Using One Hot encodingsMany times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code: - tf.one_hot(labels, depth, axis) **Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this.
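A minimal sketch of the call (with made-up labels) shows why the `axis` argument matters: with `axis=0` the classes run along the rows, matching the figure above:
```python
with tf.Session() as s:
    print(s.run(tf.one_hot([1, 2, 0], depth=3, axis=0)))   # a 3x3 matrix with one "hot" entry per column
```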
###Code
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
"""
Creates a matrix where the i-th row corresponds to the ith class number and the jth column
corresponds to the jth training example. So if example j had a label i. Then entry (i,j)
will be 1.
Arguments:
labels -- vector containing the labels
C -- number of classes, the depth of the one hot dimension
Returns:
one_hot -- one hot matrix
"""
### START CODE HERE ###
# Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
C = None
# Use tf.one_hot, be careful with the axis (approx. 1 line)
one_hot_matrix = None
# Create the session (approx. 1 line)
sess = None
# Run the session (approx. 1 line)
one_hot = None
# Close the session (approx. 1 line). See method 1 above.
None
### END CODE HERE ###
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = " + str(one_hot))
###Output
_____no_output_____
###Markdown
**Expected Output**: **one_hot** [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]] 1.5 - Initialize with zeros and onesNow you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`. To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively. **Exercise:** Implement the function below to take in a shape and to return an array (of the shape's dimension of ones). - tf.ones(shape)
###Code
# GRADED FUNCTION: ones
def ones(shape):
"""
Creates an array of ones of dimension shape
Arguments:
shape -- shape of the array you want to create
Returns:
ones -- array containing only ones
"""
### START CODE HERE ###
# Create "ones" tensor using tf.ones(...). (approx. 1 line)
ones = None
# Create the session (approx. 1 line)
sess = None
# Run the session to compute 'ones' (approx. 1 line)
ones = None
# Close the session (approx. 1 line). See method 1 above.
None
### END CODE HERE ###
return ones
print ("ones = " + str(ones([3])))
###Output
_____no_output_____
###Markdown
**Expected Output:** **ones** [ 1. 1. 1.] 2 - Building your first neural network in tensorflowIn this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:- Create the computation graph- Run the graphLet's delve into the problem you'd like to solve! 2.0 - Problem statement: SIGNS DatasetOne afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.- **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).- **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.Here are examples for each number, and an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels. **Figure 1**: SIGNS dataset Run the following code to load the dataset.
###Code
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
###Output
_____no_output_____
###Markdown
Change the index below and run the cell to visualize some examples in the dataset.
###Code
# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
###Output
_____no_output_____
###Markdown
As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
###Code
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
###Output
_____no_output_____
###Markdown
**Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. **Your goal** is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. **The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. 2.1 - Create placeholdersYour first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session. **Exercise:** Implement the function below to create the placeholders in tensorflow.
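A minimal sketch of the kind of placeholder involved (the name `X_example` is illustrative only, not the graded function); leaving the second dimension as `None` keeps the number of examples flexible:
```python
X_example = tf.placeholder(tf.float32, shape=[12288, None], name="X_example")
print(X_example)   # Tensor("X_example:0", shape=(12288, ?), dtype=float32) -- the batch size stays flexible
```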
###Code
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
n_y -- scalar, number of classes (from 0 to 5, so -> 6)
Returns:
X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"
Tips:
    - You will use None because it lets us be flexible on the number of examples you will use for the placeholders.
In fact, the number of examples during test/train is different.
"""
### START CODE HERE ### (approx. 2 lines)
X = None
Y = None
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
###Output
_____no_output_____
###Markdown
**Expected Output**: **X** Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) **Y** Tensor("Placeholder_2:0", shape=(10, ?), dtype=float32) (not necessarily Placeholder_2) 2.2 - Initializing the parametersYour second task is to initialize the parameters in tensorflow.**Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use: ```pythonW1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())```Please use `seed = 1` to make sure your results match ours.
###Code
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes parameters to build a neural network with tensorflow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 6 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
W3 = None
b3 = None
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
tf.reset_default_graph()
with tf.Session() as sess:
parameters = initialize_parameters()
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
###Output
_____no_output_____
###Markdown
**Expected Output**: **W1** **b1** **W2** **b2** As expected, the parameters haven't been evaluated yet. 2.3 - Forward propagation in tensorflow You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are: - `tf.add(...,...)` to do an addition- `tf.matmul(...,...)` to do a matrix multiplication- `tf.nn.relu(...)` to apply the ReLU activation**Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`!
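Of these three functions, only `tf.nn.relu` is new; a one-line sketch of its behaviour (the input values are arbitrary):
```python
with tf.Session() as s:
    print(s.run(tf.nn.relu(tf.constant([-1.5, 0.0, 2.0]))))   # -> [0. 0. 2.], negatives are clipped to zero
```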
###Code
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:
Z1 = None # Z1 = np.dot(W1, X) + b1
A1 = None # A1 = relu(Z1)
Z2 = None # Z2 = np.dot(W2, a1) + b2
A2 = None # A2 = relu(Z2)
Z3 = None # Z3 = np.dot(W3,Z2) + b3
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
print("Z3 = " + str(Z3))
###Output
_____no_output_____
###Markdown
**Expected Output**: **Z3** Tensor("Add_2:0", shape=(6, ?), dtype=float32) You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation. 2.4 Compute costAs seen before, it is very easy to compute the cost using:```pythontf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))```**Question**: Implement the cost function below. - It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.- Besides, `tf.reduce_mean` averages the per-example losses over the examples.
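As a quick, hedged illustration of `tf.reduce_mean` on its own (the numbers are arbitrary):
```python
with tf.Session() as s:
    print(s.run(tf.reduce_mean(tf.constant([1., 2., 3., 6.]))))   # -> 3.0
```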
###Code
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
### START CODE HERE ### (1 line of code)
cost = None
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print("cost = " + str(cost))
###Output
_____no_output_____
###Markdown
**Expected Output**: **cost** Tensor("Mean:0", shape=(), dtype=float32) 2.5 - Backward propagation & parameter updatesThis is where you become grateful to programming frameworks. All the backpropagation and the parameter updates are taken care of in one line of code. It is very easy to incorporate this line in the model.After you compute the cost function, you will create an "`optimizer`" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.For instance, for gradient descent the optimizer would be:```pythonoptimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)```To make the optimization you would do:```python_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})```This computes the backpropagation by passing through the tensorflow graph in the reverse order. From cost to inputs.**Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable). 2.6 - Building the modelNow, you will bring it all together! **Exercise:** Implement the model. You will be calling the functions you had previously implemented.
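Before assembling the full model, here is a self-contained sketch of the optimizer/`sess.run` pattern from section 2.5 on a toy quadratic (the variable `w` and the cost below are made up purely for illustration):
```python
w = tf.Variable(5.0)
toy_cost = tf.square(w - 2.0)
step = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(toy_cost)
with tf.Session() as s:
    s.run(tf.global_variables_initializer())
    for _ in range(50):
        _, c = s.run([step, toy_cost])   # each run takes one gradient step
    print(s.run(w))                      # close to 2.0, the minimizer of the toy cost
```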
###Code
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
"""
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
Y_train -- test set, of shape (output size = 6, number of training examples = 1080)
X_test -- training set, of shape (input size = 12288, number of training examples = 120)
Y_test -- test set, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
X, Y = None
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = None
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = None
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = None
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
optimizer = None
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , minibatch_cost = None
### END CODE HERE ###
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters
###Output
_____no_output_____
###Markdown
Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
###Code
parameters = model(X_train, Y_train, X_test, Y_test)
###Output
_____no_output_____
###Markdown
**Expected Output**: **Train Accuracy** 0.999074 **Test Accuracy** 0.716667 Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.**Insights**:- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting. - Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters. 2.7 - Test with your own image (optional / ungraded exercise)Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right!
###Code
import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
###Output
_____no_output_____ |
10 Days of Statistics/Day_1. Standard Deviation.ipynb | ###Markdown
Day 1: Standard Deviation`Task `Given an array, `X`, of `N` integers, calculate and print the standard deviation. Your answer should be in decimal form, rounded to a scale of `1` decimal place (i.e., 12.3 format). An error margin of `+-0.1` will be tolerated for the standard deviation.`Input Format`The first line contains an integer, `N`, denoting the number of elements in the array. The second line contains `N` space-separated integers describing the respective elements of the array.`Output Format`Print the standard deviation on a new line, rounded to a scale of `1` decimal place (i.e., 12.3 format).`Sample Input````510 40 30 50 20````Sample Output````14.1```
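For reference, the (population) standard deviation computed below is $$\mu = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu\right)^2}.$$ For the sample input, $\mu = 30$ and $\sigma = \sqrt{(400 + 100 + 0 + 400 + 100)/5} = \sqrt{200} \approx 14.1$, matching the expected output.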
###Code
N = int(input())
elements = list(map(int, input().split()))
mu = sum(elements) / N
var = sum(map(lambda x: (x - mu) ** 2, elements)) / N
sigma = var ** (1 / 2)
print(f'{sigma:.1f}')
###Output
_____no_output_____ |
Segmenting and Clustering Neighborhoods in Toronto.ipynb | ###Markdown
Segmenting and Clustering Neighborhoods in TorontoCapstone Coursera **IBM Data Science** specialization, assignment week 3.Use data on Canadian postal codes from Wikipedia to distinguish boroughs and neighbourhoods in Toronto,use a geocoding service to assign coordinates to them, use Foursquare to determine what kinds of venuesare present in each neighbourhood, and finally apply clustering and visualization to explore distinctions and similarities between neighbourhoods.For the sake of the assignment, this document consists of three parts:1. [Get data on boroughs and neighbourhoods in Toronto](Part-1:-Get-data-on-boroughs-and-neighbourhoods-in-Toronto)2. [Add locations (latitude, longitude coordinates) to neighbourhoods](Part-2:-Add-locations)3. [Explore and cluster the neighbourhoods of Toronto](Part-3:-Explore-and-cluster-the-neighborhoods-in-Toronto) Basic importsBefore getting started we import a number of python modules that we will use later.
###Code
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
import numpy as np
import json # library to handle JSON files
#!conda install -c conda-forge geopy --yes # uncomment this line if you haven't completed the Foursquare API lab
from geopy.geocoders import Nominatim # convert an address into latitude and longitude values
# !pip3 install geocoder==0.6.0
import geocoder
import requests # library to handle requests
from pandas.io.json import json_normalize # tranform JSON file into a pandas dataframe
# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors
# import k-means from clustering stage
from sklearn.cluster import KMeans
#!conda install -c conda-forge folium=0.5.0 --yes # uncomment this line if you haven't completed the Foursquare API lab
import folium # map rendering library
print('Libraries imported.')
###Output
Libraries imported.
###Markdown
Part 1: Get data on boroughs and neighbourhoods in TorontoIn this part we:* Read the data on the relation between postal codes, boroughs, and neighbourhoods in Toronto* Clean the data up for further processing Read the dataThe postal codes of Toronto in the province of Ontario are those beginnig with M. They can be found at:https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M1. Use pandas's read_html to get a *list* of dataframes on the wikipedia page2. Check to see which is the one we are looking for and select this one for further processing
###Code
# Pandas needs LXML to read HTML. If it is not present, first install it.
# !pip3 install lxml
url = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
# The first row (index=0) of the table contains the headers
list_of_dataframes = pd.read_html(url, header=0)
# As read_html returns a *list* of dataframes, we should first see which is the one we are looking for
for i, df in enumerate(list_of_dataframes):
print('----- index', i, '-----')
print(df.head())
###Output
----- index 0 -----
Postcode Borough Neighbourhood
0 M1A Not assigned Not assigned
1 M2A Not assigned Not assigned
2 M3A North York Parkwoods
3 M4A North York Victoria Village
4 M5A Downtown Toronto Harbourfront
----- index 1 -----
Unnamed: 0 \
0 NL NS PE NB QC ON MB SK AB BC NU/NT YT A B C E...
1 NL
2 A
Canadian postal codes \
0 NL NS PE NB QC ON MB SK AB BC NU/NT YT A B C E...
1 NS
2 B
Unnamed: 2 Unnamed: 3 Unnamed: 4 \
0 NL NS PE NB QC ON MB SK AB BC NU/NT YT A B C E... NaN NaN
1 PE NB QC
2 C E G
Unnamed: 5 Unnamed: 6 Unnamed: 7 Unnamed: 8 Unnamed: 9 Unnamed: 10 \
0 NaN NaN NaN NaN NaN NaN
1 QC QC ON ON ON ON
2 H J K L M N
Unnamed: 11 Unnamed: 12 Unnamed: 13 Unnamed: 14 Unnamed: 15 Unnamed: 16 \
0 NaN NaN NaN NaN NaN NaN
1 ON MB SK AB BC NU/NT
2 P R S T V X
Unnamed: 17
0 NaN
1 YT
2 Y
----- index 2 -----
NL NS PE NB QC QC.1 QC.2 ON ON.1 ON.2 ON.3 ON.4 MB SK AB BC NU/NT YT
0 A B C E G H J K L M N P R S T V X Y
###Markdown
We see that the first dataframe in the list is the right one, so we select it.
###Code
toronto_neighbourhoods = list_of_dataframes[0]
toronto_neighbourhoods.head()
###Output
_____no_output_____
###Markdown
Ok, we now have the wikipedia page data in a dataframe. For the sake of the assignment, rename the 'Postcode' column to 'Postal Code'.
###Code
toronto_neighbourhoods.rename(columns={"Postcode": "Postal Code"}, inplace=True)
# For reference display its current shape
toronto_neighbourhoods.shape
###Output
_____no_output_____
###Markdown
Clean up the data* Only process the cells that have an assigned borough. Ignore cells with a borough that is Not assigned.* If a cell has a borough but a Not assigned neighborhood, then the neighborhood will be the same as the borough. So for the 9th cell in the table on the Wikipedia page, the value of the Borough and the Neighborhood columns will be Queen's Park.* More than one neighborhood can exist in one postal code area. For example, in the table on the Wikipedia page, you will notice that M5A is listed twice and has two neighborhoods: Harbourfront and Regent Park. These two rows will be combined into one row with the neighborhoods separated with a comma as shown in row 11 in the above table. Ignore cells with a borough that is 'Not assigned'
###Code
toronto_neighbourhoods.drop(
toronto_neighbourhoods[toronto_neighbourhoods['Borough'] == 'Not assigned'].index,
inplace=True
)
###Output
_____no_output_____
###Markdown
If a neighbourhooed is 'Not assigned', give it the value of the borough
###Code
mask = toronto_neighbourhoods['Neighbourhood'] == 'Not assigned'
toronto_neighbourhoods['Neighbourhood'] = np.where(
mask,
toronto_neighbourhoods['Borough'],
toronto_neighbourhoods['Neighbourhood'])
toronto_neighbourhoods.head(10)
###Output
_____no_output_____
###Markdown
Concatenate neighbourhoods that have the same PostcodeIf different neighbourhoods have the same Postcode, merge them into a single neighbourhood by concatening their names.
###Code
toronto_neighbourhoods = toronto_neighbourhoods.groupby(
['Postal Code','Borough'])['Neighbourhood'].apply(lambda x: ','.join(x)).reset_index()
toronto_neighbourhoods.head()
###Output
_____no_output_____
###Markdown
Check up: Toronto neighbourhoods dataframe shape
###Code
toronto_neighbourhoods.shape
###Output
_____no_output_____
###Markdown
Part 2: Add locations Plan A: Read locations of postal codes by geocoder serviceThis is an implementation according to the assignment instructions. It doesn't work as the service consistently returns a **REQUEST DENIED** error. The implementation is given here for completeness. Actual data is - per the assignment instructions - read from a provided CSV file.
###Code
def get_ll_geocode(postcodes):
    d = {'Postal Code': [], 'Latitude': [], 'Longitude': []}
    for postal_code in postcodes:
        # initialize to None for each postal code, then loop until we get its coordinates
        lat_lng_coords = None
        while lat_lng_coords is None:
# This call consistently gives me a REQUEST DENIED error
#g = geocoder.google('{}, Toronto, Ontario'.format(postal_code))
#lat_lng_coords = g.latlng
lat_lng_coords = (43.653963, -79.387207)
latitude = lat_lng_coords[0]
longitude = lat_lng_coords[1]
d['Postal Code'].append(postal_code)
d['Latitude'].append(latitude)
d['Longitude'].append(longitude)
return pd.DataFrame(d)
# Call the above method
# As it results in REQUEST DENIED errors it is here commented out
# postcodes_locations = get_ll_geocode(toronto_neighbourhoods['Postcode'])
###Output
_____no_output_____
###Markdown
Plan B: read locations of postal codes from online CSV fileUse data placed online to facilitate this course: https://cocl.us/Geospatial_data
###Code
postcodes_locations = pd.read_csv('https://cocl.us/Geospatial_data')
postcodes_locations.head()
###Output
_____no_output_____
###Markdown
Join the neighbourhood dataframe with the locations dataframe.
###Code
neighbourhoods = pd.merge(toronto_neighbourhoods, postcodes_locations, how='left', on='Postal Code')
neighbourhoods.head()
###Output
_____no_output_____
###Markdown
Finally we can drop the 'Postal Code' column, as we don't need it any more. And Americanize the spelling of neighbourhood.
###Code
neighbourhoods.drop('Postal Code', axis=1, inplace=True)
neighbourhoods.rename(columns={'Neighbourhood': 'Neighborhood'}, inplace=True)
neighborhoods = neighbourhoods
###Output
_____no_output_____
###Markdown
Part 3: Explore and cluster the neighborhoods in Toronto
###Code
neighbourhoods.head()
# Let's see what we have now
print('The dataframe has {} boroughs and {} neighborhoods.'.format(
len(neighborhoods['Borough'].unique()),
neighborhoods.shape[0]
)
)
address = 'Central Toronto, ON'
geolocator = Nominatim(user_agent="to_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of Toronto are {}, {}.'.format(latitude, longitude))
###Output
The geograpical coordinate of Toronto are 43.653963, -79.387207.
###Markdown
Let's create a method to create the map, so we can call it again to add markers to map.
###Code
def city_map(df, location, zoom_start):
map = folium.Map(location=location, zoom_start=zoom_start)
for lat, lng, borough, neighborhood in zip(df['Latitude'], df['Longitude'], df['Borough'], df['Neighborhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map)
return map
###Output
_____no_output_____
###Markdown
Create a map for all neighbourhoods in TorontoUsing the method defined above, we can now plot the neighbourhoods of Toronto
###Code
city_map(neighborhoods, location=[latitude,longitude], zoom_start=10)
###Output
_____no_output_____
###Markdown
Limit the data set we exploreThe assignment suggests limiting the explored boroughs to those with 'Toronto' in their name. Let's look at what boroughs we have now.
###Code
neighborhoods['Borough'].unique()
###Output
_____no_output_____
###Markdown
We see that several boroughs have a name containing 'Toronto'. Let's create a map displayingonly the neighbourhoods of those boroughs.
###Code
mask = neighborhoods['Borough'].str.contains('Toronto')
# recenter the map to contain all marks (based on experimenting with values )
latitude = latitude + 0.02
city_map(neighborhoods[mask], location=[latitude,longitude], zoom_start=12)
###Output
_____no_output_____
###Markdown
Compared to the earlier map, we see that the neighbourhoods are now located more in the center.
###Code
toronto_data = neighborhoods[mask]
toronto_data.shape
###Output
_____no_output_____
###Markdown
Explore venues per neighbourhoodWe use Foursquare to obtain an overview of venues per neighbourhood. > **Methodological note**> We have defined neighbourhoods by coordinates, that is, points on the map rather than areas with borders.> We will use Foursquare to find venues *within a radius* of these points.> > There are two consequences:> 1. When neighbourhoods are close together, as we see on the map in the center, the areas covered> by the radius may overlap, so the same venues can be counted as part of different neighbourhoods.> 2. For larger neighbourhoods, further from downtown Toronto, part of the neighbourhood might not be> covered by the radius.>> Exploring these consequences is *outside the scope* of this assignment. Define Foursquare credentialsAs this notebook is shared publicly, credentials are *not* included in the notebook itself, butrather in a text file residing in the same directory. This text file has the format: CLIENT_ID: *your client id* CLIENT_SECRET: *your client secret*
###Code
### Set Foursquare properties
foursquare_secret = {'CLIENT_ID': 'NA', 'CLIENT_SECRET': 'NA', 'VERSION': '20180605'}
with open('foursquare.secret', 'r') as file:
lines = file.readlines()
for l in lines:
ar = l.split(':')
foursquare_secret[ar[0]] = ar[1].strip()
CLIENT_ID = foursquare_secret['CLIENT_ID']
CLIENT_SECRET = foursquare_secret['CLIENT_SECRET']
VERSION = '20180605'
###Output
_____no_output_____
###Markdown
Define a method to get venues within a certain distance of the identified coordinates of a neighbourhood.
###Code
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
print('Now getting venues for:')
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
###Output
_____no_output_____
###Markdown
Retrieve data on nearby venuesBased on neighbourhoods as specific points, we look for venues in a circle around them. Per neighbourhood we consider a maximum of 100 venues.
###Code
radius = 500 # radius of venue locations is 500 meters
LIMIT = 100 # maximum of 100 venues in the query result
toronto_venues = getNearbyVenues(names=toronto_data['Neighborhood'],
latitudes=toronto_data['Latitude'],
longitudes=toronto_data['Longitude']
)
###Output
Now getting venues for:
The Beaches
The Danforth West,Riverdale
The Beaches West,India Bazaar
Studio District
Lawrence Park
Davisville North
North Toronto West
Davisville
Moore Park,Summerhill East
Deer Park,Forest Hill SE,Rathnelly,South Hill,Summerhill West
Rosedale
Cabbagetown,St. James Town
Church and Wellesley
Harbourfront
Ryerson,Garden District
St. James Town
Berczy Park
Central Bay Street
Adelaide,King,Richmond
Harbourfront East,Toronto Islands,Union Station
Design Exchange,Toronto Dominion Centre
Commerce Court,Victoria Hotel
Roselawn
Forest Hill North,Forest Hill West
The Annex,North Midtown,Yorkville
Harbord,University of Toronto
Chinatown,Grange Park,Kensington Market
CN Tower,Bathurst Quay,Island airport,Harbourfront West,King and Spadina,Railway Lands,South Niagara
Stn A PO Boxes 25 The Esplanade
First Canadian Place,Underground city
Christie
Dovercourt Village,Dufferin
Little Portugal,Trinity
Brockton,Exhibition Place,Parkdale Village
High Park,The Junction South
Parkdale,Roncesvalles
Runnymede,Swansea
Queen's Park
Business Reply Mail Processing Centre 969 Eastern
###Markdown
Check the results
###Code
print(toronto_venues.shape)
toronto_venues.head()
print('There are {} unique venue categories.'.format(len(toronto_venues['Venue Category'].unique())))
###Output
There are 232 unique venue categories.
###Markdown
Venue categories per neighbourhoodHow many venue categories do we have per neighbourhood? We will attempt to cluster neighbourhoods based on the categories of their venues. If there are few venues in a neighbourhood, the possibilities for clustering with other neighbourhoods are limited.
###Code
venues_per_neighbourhood = toronto_venues[['Neighborhood','Venue']].groupby('Neighborhood').count().sort_values(by="Venue")
venues_per_neighbourhood.head(10)
###Output
_____no_output_____
###Markdown
We see that six neighbourhoods have only four or fewer venues. We will keep this in mind when looking at the results of clustering. Check each neighbourhood
###Code
# one hot encoding
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
toronto_grouped = toronto_onehot.groupby('Neighborhood').mean().reset_index()
toronto_grouped.head()
###Output
_____no_output_____
###Markdown
Let's see what the current size is
###Code
toronto_grouped.shape
###Output
_____no_output_____
###Markdown
Let's print each neighborhood along with the top 5 most common venues
###Code
num_top_venues = 5
for hood in toronto_grouped['Neighborhood'][:5]:
print("----"+hood+"----")
temp = toronto_grouped[toronto_grouped['Neighborhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
temp = temp[temp['freq'] > 0.0] # filter out those venues categories with zero frequency
display(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
###Output
----Adelaide,King,Richmond----
###Markdown
Let's put that into a *pandas* dataframe. First, let's write a function to sort the venues in descending order.
###Code
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
###Output
_____no_output_____
###Markdown
Now let's create the new dataframe and display the top 10 venues for each neighborhood.
###Code
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
###Output
_____no_output_____
###Markdown
Cluster neighbourhoods Run k-means to cluster the neighbourhoods into 5 clusters. Preliminaries: cluster and presentBefore actually doing any clustering of neighbourhoods, let's define methods for:1. Clustering2. Presenting the result of clustering on a map3. Evaluating the clusters Method to cluster neighbourhoodsTakes the desired number of clusters as argument and returns two dataframes:1. 'toronto_merged': the neighbourhood data with a cluster label added2. 'cluster_counts': the number of neighbourhoods per cluster
###Code
def cluster_neighbourhoods(kclusters):
'''Cluster neighbourhoods in kcluster groups'''
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_
# add clustering labels
nvs = neighborhoods_venues_sorted.copy()
nvs.insert(0, 'Cluster Labels', kmeans.labels_)
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
toronto_merged = pd.merge(toronto_data, nvs.set_index('Neighborhood'), on='Neighborhood')
cluster_counts = toronto_merged[['Neighborhood', 'Cluster Labels']].groupby("Cluster Labels").count().sort_values(by='Neighborhood', ascending=False)
cluster_counts.reset_index()
return toronto_merged, cluster_counts
###Output
_____no_output_____
###Markdown
Let's create a new dataframe that includes the cluster as well as the top 10 venues for each neighborhood. Finally, let's visualize the resulting clusters
###Code
def show_clusters(kclusters, toronto_merged):
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=12)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighborhood'], toronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
return map_clusters
###Output
_____no_output_____
###Markdown
Characterizing the clusters What are the main characteristics of each cluster? Let's find out.
###Code
def evaluate_clusters(cluster_counts, toronto_onehot, toronto_merged):
num_top_venues = 5
import matplotlib.pyplot as plt
clustered_onehot = pd.merge(toronto_onehot, toronto_merged[['Neighborhood', 'Cluster Labels']],
how='left', on='Neighborhood')
toronto_grouped_clusters = clustered_onehot.groupby('Cluster Labels').mean().reset_index()
fig, axes = plt.subplots(nrows=kclusters, ncols=1, sharex=True, figsize=(5,4*kclusters))
i=0
for cluster_id, freq in cluster_counts.itertuples():
mask = toronto_grouped_clusters['Cluster Labels'] == cluster_id
temp = toronto_grouped_clusters[mask].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
showframe = temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues)
mask2 = showframe['freq'] > 0.0
# Remove all items with frequency zero
showframe = showframe[mask2]
# Reindex to reverse the order as we want top frequency shown at the top of the barchart
showframe = showframe.reindex(index=showframe.index[::-1])
        # Prevent autoscale from widening the bars in charts that have fewer of them
width = (0.8 * showframe.shape[0] / 5) - (5-showframe.shape[0])*0.01
title = "Cluster {}: {} neighbourhoods".format(cluster_id,freq)
c = 'blue'
if freq==1:
name = ''
m = toronto_merged['Cluster Labels'] == cluster_id
try:
name = toronto_merged[m]['Neighborhood'][0]
except:
name = toronto_merged[m]['Neighborhood'].values[0]
c = 'green'
title = "Cluster {}: outlier: {}".format(cluster_id, name)
showframe.plot(ax=axes[i], kind='barh',y='freq', x='venue', width=width, color=c)
axes[i].set_title(title)
i=i+1
###Output
_____no_output_____
###Markdown
Evaluate clustering for 5 clusters We set the number of clusters to 5 and use the methods defined above to cluster, display, and analyse the result.
###Code
kclusters = 5
toronto_merged, cluster_counts = cluster_neighbourhoods(kclusters)
show_clusters(kclusters, toronto_merged)
evaluate_clusters(cluster_counts, toronto_onehot, toronto_merged)
###Output
_____no_output_____
###Markdown
Visualize the number of neighbourhoods included in each cluster.
###Code
cluster_counts.plot(kind='bar', title='Number of neighbourhoods included in each cluster for {} clusters'.format(kclusters))
###Output
_____no_output_____
###Markdown
Conclusion for 5 clusters When we divide the neighbourhoods into 5 clusters, the result includes 3 single-neighbourhood outliers, one cluster of two neighbourhoods, and one cluster containing all the other neighbourhoods. Thus, this number of clusters helps us to identify outliers, but it shows little in the way of actual clusters and their possible characteristics. Variation: what happens with other numbers of clusters? I have tried several values for the number of clusters. Most of them resulted in one blob of neighbourhoods plus a number of outliers; only at kclusters=10 did I get a split into larger clusters.
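One way to make that scan systematic is to score a range of k values, for example with the silhouette score. This is only a sketch, assuming `toronto_grouped` from earlier is available and scikit-learn is installed; it is not part of the original analysis:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X = toronto_grouped.drop('Neighborhood', axis=1)
for k in range(2, 16):
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(X)
    # Higher silhouette scores indicate better-separated clusters
    print(k, round(silhouette_score(X, labels), 3))
```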
###Code
kclusters = 10
toronto_merged, cluster_counts = cluster_neighbourhoods(kclusters)
show_clusters(kclusters, toronto_merged)
evaluate_clusters(cluster_counts, toronto_onehot, toronto_merged)
###Output
_____no_output_____
###Markdown
Let's visualize the number of neighbourhoods in each cluster.
###Code
cluster_counts.plot(kind='bar', title='Number of neighbourhoods included in each cluster for {} clusters'.format(kclusters))
###Output
_____no_output_____
###Markdown
Segmenting and Clustering Neighborhoods in Toronto Section One Import required libraries
###Code
import pandas as pd
import requests
from IPython.display import display, HTML
###Output
_____no_output_____
###Markdown
Fetch "List of postal codeds of Canada: M" then parse it into Pandas DataFrame
###Code
url = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
r = requests.get(url)
wiki_table = pd.read_html(r.text, flavor='html5lib')
df = wiki_table[0]
df.columns = ['PostalCode', 'Borough', 'Neighborhood']
df
###Output
_____no_output_____
###Markdown
Drop unassigned Borough
###Code
df.drop(df[df['Borough'] == 'Not assigned'].index, inplace=True)
df.reset_index(drop=True, inplace=True)
df
###Output
_____no_output_____
###Markdown
Sort by PostalCode, Borough, and Neighborhood, then group by PostalCode and Borough, and aggregate the Neighborhood column by joining its values into a comma-separated string. Then check for any remaining "Not assigned" neighbourhood.
###Code
df.sort_values(['PostalCode', 'Borough', 'Neighborhood'], inplace=True)
df_grouped = df.groupby(['PostalCode', 'Borough'])['Neighborhood'].apply(', '.join).reset_index()
df_grouped[df_grouped['Neighborhood'] == 'Not assigned']
###Output
_____no_output_____
###Markdown
Final DataFrame
###Code
df_grouped
df_grouped.shape
###Output
_____no_output_____
###Markdown
Section Two Import required libraries
###Code
# !conda install -c conda-forge geopy --yes
from geopy.geocoders import Nominatim
geolocator = Nominatim(user_agent="Toronto Geolocator")
df_location = df_grouped.copy()
# Because the geopy is unreliable I won't add new column manually
# df_location['Latitude'] = ''
# df_location['Longitude'] = ''
df_location
###Output
_____no_output_____
###Markdown
__Note__: As proof of the unreliability, I limit the attempts to about 10 per postal code, because each attempt takes considerable time once you take into account the time needed to get the data for every postal code.
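A sketch of how that retry loop could be factored into a helper with a pause between attempts; the helper name and the one-second delay are assumptions, not part of the original code:

```python
import time

def geocode_with_retry(geolocator, query, attempts=10, pause=1.0):
    # Retry a geocoding query a limited number of times, sleeping between
    # attempts to stay within Nominatim's roughly one-request-per-second policy
    for _ in range(attempts):
        geo = geolocator.geocode(query)
        if geo:
            return geo
        time.sleep(pause)
    return None
```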
###Code
lat_lon = []
for idx, row in df_location.iterrows():
print(idx)
try:
postcode = df_location.at[idx, 'PostalCode']
geo = None
for i in range(10):
geo = geolocator.geocode(f'{postcode}, Toronto, Ontario')
if geo: break
print(idx, postcode, geo)
# Save
if geo:
            lat_lon.append((idx, geo.latitude, geo.longitude))  # store as a tuple
except:
continue
###Output
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
###Markdown
As stated on the assignment page, the package is very unreliable, so we fall back to the provided data.
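If `geo_data.csv` has not been downloaded yet, the same table can also be read straight from the URL; a sketch using the `https://cocl.us/Geospatial_data` link that appears elsewhere in this document:

```python
import pandas as pd

# Same data as geo_data.csv, fetched directly
df_geo = pd.read_csv('https://cocl.us/Geospatial_data')
```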
###Code
# !wget -q -O geo_data.csv https://cocl.us/Geospatial_data
###Output
_____no_output_____
###Markdown
Parse the geo data
###Code
df_geo = pd.read_csv('geo_data.csv')
df_geo.columns = ['PostalCode', 'Latitude', 'Longitude']
df_geo
df_toronto = df_location.merge(df_geo, left_on='PostalCode', right_on='PostalCode')
df_toronto
###Output
_____no_output_____
###Markdown
Section Three Set Foursquare variables
###Code
CLIENT_ID = 'EM0NULKILDUZUGSXYVR1TWWDQHMCB3CPMMB3CS0EWOSBDKML' # your Foursquare ID
CLIENT_SECRET = '4OMQKSEUD2IPNSM2WQZ144IHJNMDEDZG2GL1OHZ2YDRB5PWC' # your Foursquare Secret
VERSION = '20180605' # Foursquare API version
print('Your credentails:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET: ' + CLIENT_SECRET)
df_toronto['Borough'].value_counts()
def getNearbyVenues(names, latitudes, longitudes, radius=500, LIMIT=200):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
###Output
_____no_output_____
###Markdown
Get up to 200 venues for each neighborhood.
###Code
toronto_venues = getNearbyVenues(names=df_toronto['Neighborhood'],
latitudes=df_toronto['Latitude'],
longitudes=df_toronto['Longitude'])
###Output
Malvern / Rouge
Rouge Hill / Port Union / Highland Creek
Guildwood / Morningside / West Hill
Woburn
Cedarbrae
Scarborough Village
Kennedy Park / Ionview / East Birchmount Park
Golden Mile / Clairlea / Oakridge
Cliffside / Cliffcrest / Scarborough Village West
Birch Cliff / Cliffside West
Dorset Park / Wexford Heights / Scarborough Town Centre
Wexford / Maryvale
Agincourt
Clarks Corners / Tam O'Shanter / Sullivan
Milliken / Agincourt North / Steeles East / L'Amoreaux East
Steeles West / L'Amoreaux West
Upper Rouge
Hillcrest Village
Fairview / Henry Farm / Oriole
Bayview Village
York Mills / Silver Hills
Willowdale / Newtonbrook
Willowdale
York Mills West
Willowdale
Parkwoods
Don Mills
Don Mills
Bathurst Manor / Wilson Heights / Downsview North
Northwood Park / York University
Downsview
Downsview
Downsview
Downsview
Victoria Village
Parkview Hill / Woodbine Gardens
Woodbine Heights
The Beaches
Leaside
Thorncliffe Park
East Toronto
The Danforth West / Riverdale
India Bazaar / The Beaches West
Studio District
Lawrence Park
Davisville North
North Toronto West
Davisville
Moore Park / Summerhill East
Summerhill West / Rathnelly / South Hill / Forest Hill SE / Deer Park
Rosedale
St. James Town / Cabbagetown
Church and Wellesley
Regent Park / Harbourfront
Garden District, Ryerson
St. James Town
Berczy Park
Central Bay Street
Richmond / Adelaide / King
Harbourfront East / Union Station / Toronto Islands
Toronto Dominion Centre / Design Exchange
Commerce Court / Victoria Hotel
Bedford Park / Lawrence Manor East
Roselawn
Forest Hill North & West
The Annex / North Midtown / Yorkville
University of Toronto / Harbord
Kensington Market / Chinatown / Grange Park
CN Tower / King and Spadina / Railway Lands / Harbourfront West / Bathurst Quay / South Niagara / Island airport
Stn A PO Boxes
First Canadian Place / Underground city
Lawrence Manor / Lawrence Heights
Glencairn
Humewood-Cedarvale
Caledonia-Fairbanks
Christie
Dufferin / Dovercourt Village
Little Portugal / Trinity
Brockton / Parkdale Village / Exhibition Place
North Park / Maple Leaf Park / Upwood Park
Del Ray / Mount Dennis / Keelsdale and Silverthorn
Runnymede / The Junction North
High Park / The Junction South
Parkdale / Roncesvalles
Runnymede / Swansea
Queen's Park / Ontario Provincial Government
Canada Post Gateway Processing Centre
Business reply mail Processing CentrE
New Toronto / Mimico South / Humber Bay Shores
Alderwood / Long Branch
The Kingsway / Montgomery Road / Old Mill North
Old Mill South / King's Mill Park / Sunnylea / Humber Bay / Mimico NE / The Queensway East / Royal York South East / Kingsway Park South East
Mimico NW / The Queensway West / South of Bloor / Kingsway Park South West / Royal York South West
Islington Avenue
West Deane Park / Princess Gardens / Martin Grove / Islington / Cloverdale
Eringate / Bloordale Gardens / Old Burnhamthorpe / Markland Wood
Humber Summit
Humberlea / Emery
Weston
Westmount
Kingsview Village / St. Phillips / Martin Grove Gardens / Richview Gardens
South Steeles / Silverstone / Humbergate / Jamestown / Mount Olive / Beaumond Heights / Thistletown / Albion Gardens
Northwest
###Markdown
Save to CSV
###Code
toronto_venues.to_csv('toronto_venues.csv')
toronto_venues.groupby('Neighborhood').count()
len(toronto_venues['Venue Category'].unique())
###Output
_____no_output_____
###Markdown
In my case, venues whose `Venue Category` is `Neighborhood` must be removed, because after one-hot encoding that category produces a column with the same name as the real `Neighborhood` column and causes errors when transforming the DataFrame into one-hot form.
###Code
toronto_venues[toronto_venues['Venue Category'].str.contains('Nei')]
toronto_venues.drop(toronto_venues[toronto_venues['Venue Category'].str.contains('Nei')].index, inplace=True)
toronto_venues[toronto_venues['Venue Category'].str.contains('Nei')]
toronto_venues['Venue Category'].value_counts()[0:20]
###Output
_____no_output_____
###Markdown
Transform to one-hot form to make it easier to cluster later.
###Code
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
list_columns = list(filter(lambda x: x != 'Neighborhood', list(toronto_onehot.columns)))
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
new_columns = ['Neighborhood'] + list_columns
toronto_onehot = toronto_onehot[new_columns]
toronto_onehot
###Output
_____no_output_____
###Markdown
Group rows with the same neighborhood name, since the data is initially keyed by postal code and a neighborhood covering a large area may have several postal codes.
###Code
toronto_grouped = toronto_onehot.groupby('Neighborhood').mean().reset_index()
toronto_grouped
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
top_venues = 10
columns = ['1st', '2nd', '3rd', '4th', '5th', '6th', '7th', '8th', '9th', '10th']
columns = [i + ' most common' for i in columns]
columns = ['Neighborhood'] + columns
columns
toronto_venues_sorted = pd.DataFrame(columns=columns)
toronto_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
for idx, row in toronto_grouped.iterrows():
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
toronto_venues_sorted.loc[idx, 1:] = row_categories_sorted.index.values[:10]
toronto_venues_sorted
from sklearn.cluster import KMeans
toronto_cluster = toronto_grouped.drop('Neighborhood', axis=1)
cluster_size = 5
kmeans = KMeans(n_clusters=cluster_size, random_state=42).fit(toronto_cluster)
kmeans.labels_[:10]
toronto_data1 = df_toronto[['Neighborhood', 'Latitude', 'Longitude']].groupby('Neighborhood').mean()
toronto_data1
toronto_data2 = toronto_venues_sorted
toronto_data2
toronto_final_data = toronto_data1.merge(toronto_data2, left_on='Neighborhood', right_on='Neighborhood')
toronto_final_data['Cluster'] = kmeans.labels_
toronto_final_data
# !conda install -c conda-forge folium --yes
import folium
import numpy as np
import matplotlib.cm as cm
import matplotlib.colors as colors
latitude = 43.722365
longitude = -79.412422
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(cluster_size)
ys = [i + x + (i*x)**2 for i in range(cluster_size)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
for idx, row in toronto_final_data.iterrows():
poi = row[0]
lat = row[1]
lon = row[2]
most_common = row[3]
cluster = row[-1]
label = folium.Popup(f'{poi} cluster {cluster} most common {most_common}', parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7
).add_to(map_clusters)
map_clusters
map_clusters.save('toronto_cluster_map.html')
###Output
_____no_output_____
###Markdown
In case the map is not shown, it can be seen in [toronto_cluster_map.html](https://gpratama.github.io/toronto_cluster_map.html). Based on the clusters shown in the rendered map, the most dominant cluster, cluster 1, is centered on the city center and becomes less dense further away from it. There is also another dominant cluster, cluster 3, which seems to have no identifiable center. The remaining clusters are not dominant compared to the first two. In short, there are two interesting clusters: cluster 1 and cluster 3.
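A quick way to back up the observation about dominant clusters is to count the neighbourhoods per cluster; a sketch, assuming `toronto_final_data` from the cells above:

```python
# Number of neighbourhoods assigned to each cluster label
print(toronto_final_data['Cluster'].value_counts().sort_index())
```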
###Code
toronto_final_data[toronto_final_data['Cluster'] == 0]
toronto_final_data[toronto_final_data['Cluster'] == 1]
###Output
_____no_output_____
###Markdown
Cluster 1 appears to have the most varied set of common venues.
###Code
toronto_final_data[toronto_final_data['Cluster'] == 1]['1st most common'].value_counts()
###Output
_____no_output_____
###Markdown
However, counting the most common venues shows that the cluster is dominated by the Coffee Shop category.
###Code
toronto_final_data[toronto_final_data['Cluster'] == 2]
toronto_final_data[toronto_final_data['Cluster'] == 3]
###Output
_____no_output_____
###Markdown
Cluster 3 shows that the most common venue category there is Park.
###Code
toronto_final_data[toronto_final_data['Cluster'] == 4]
###Output
_____no_output_____
###Markdown
1) Extract data of Toronto neighborhoods from Wikipedia.

- Clean the data and display the top 10 rows along with the dataframe's shape
- Import libraries

Use pandas, or the BeautifulSoup package, or any other way you are comfortable with, to transform the data in the table on the Wikipedia page into the pandas dataframe described above.
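For reference, the plainest pandas-only route looks like the sketch below; it assumes an HTML parser such as lxml or html5lib is installed. The cells that follow take the BeautifulSoup plus `read_html` route instead:

```python
import pandas as pd

# The postal-code table is the first table on the Wikipedia page
tables = pd.read_html('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M')
df = tables[0]
```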
###Code
# importing necessary libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import folium
import requests
import json
from bs4 import BeautifulSoup
import matplotlib.cm as cm
import matplotlib.colors as colors
%matplotlib inline
print('Packages installed')
#Extracting data from the URL. Using the old Wiki revision (18:01, 8 March 2021) because the newer one has a different format.
url='https://en.wikipedia.org/w/index.php?title=List_of_postal_codes_of_Canada:_M&oldid=1011037969'
result = requests.get(url)
data_html = BeautifulSoup(result.content)
soup = BeautifulSoup(str(data_html))
neigh = soup.find('table')
table_str = str(neigh.extract())
df = pd.read_html(table_str)[0]
df.head()
df_dropna = df[df.Borough != 'Not assigned'].reset_index(drop=True)
#renaming the column for better readability
df_dropna.rename(columns={'Postal Code' : 'PostalCode'}, inplace=True)
#Dropping "Not assigned"
df = df_dropna
#Displaying first 5 rows
df.head()
#Grouping data based on "Borough"
df_grouped = df.groupby(['Borough', 'PostalCode'], as_index=False).agg(lambda x:','.join(x))
df_grouped.head()
# Checking if there are neighborhoods that are Not Assigned
df_grouped.loc[df_grouped['Borough'].isin(["Not assigned"])]
#adding the Latitude and Longitudes (LL) of each specific location
df = df_grouped
print('The DataFrame shape is', df.shape)
###Output
The DataFrame shape is (103, 3)
###Markdown
*The dataframe should be grouped by the postal code, ending with a dataframe of 103 rows.* 2) Latitudes and Longitudes corresponding to the different PostalCodes
###Code
geo_url = "https://cocl.us/Geospatial_data"
geo_df = pd.read_csv(geo_url)
geo_df.rename(columns={'Postal Code': 'PostalCode'}, inplace=True)
geo_df.head()
# Merging data from two tables
df = pd.merge(df, geo_df, on='PostalCode')
df.head()
# finding how many neighborhoods in each borough
df.groupby('Borough').count()['Neighbourhood']
#finding all the neighborhoods of Toronto
df_toronto = df
df_toronto.head()
#Create list with the boroughs
boroughs = df_toronto['Borough'].unique().tolist()
#Obtaining LL coordinates of Toronto itself
lat_toronto = df_toronto['Latitude'].mean()
lon_toronto = df_toronto['Longitude'].mean()
print('The geographical coordinates of Toronto are {}, {}'.format(lat_toronto, lon_toronto))
# color categorization of each borough
borough_color = {}
for borough in boroughs:
borough_color[borough]= '#%02X%02X%02X' % tuple(np.random.choice(range(256), size=3)) #Random color
map_toronto = folium.Map(location=[lat_toronto, lon_toronto], zoom_start=10.5)
# adding markers to map
for lat, lng, borough, neighborhood in zip(df_toronto['Latitude'],
df_toronto['Longitude'],
df_toronto['Borough'],
df_toronto['Neighbourhood']):
label_text = borough + ' - ' + neighborhood
label = folium.Popup(label_text)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color=borough_color[borough],
fill_color=borough_color[borough],
fill_opacity=0.8).add_to(map_toronto)
map_toronto
CLIENT_ID = '4510O2EFHUUWKW4WLHQJT2BUYYKD10YZ53DSL1XLQH2IIZES' # your Foursquare ID
CLIENT_SECRET = 'RTMAUAZW4Y0XDJA4PAUAHH32T5D5EHKWVT3VHTB0KG14M22O' # your Foursquare Secret
VERSION = 20200514 # Foursquare API version
print('Credentials Stored')
df.loc[3, 'Neighbourhood']
###Output
_____no_output_____
###Markdown
*We will analyze the fourth Neighborhood, Davisville*
###Code
law_lat = df.loc[3, 'Latitude']
law_long = df.loc[3, 'Longitude']
law_name = df.loc[3, 'Neighbourhood']
print('Latitude and longitude values of {} are {}, {}.'.format(law_name,
law_lat,
law_long))
###Output
Latitude and longitude values of Davisville are 43.7043244, -79.3887901.
###Markdown
*Now, let's get the top 100 venues that are in Davisville within a radius of 500 meters.*
###Code
LIMIT = 100 # limit of number of venues returned by Foursquare API
radius = 500 # define radius
# create URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
law_lat,
law_long,
radius,
LIMIT)
url
results = requests.get(url).json()
results
# extracting the category of the venue
def get_category_type(row):
try:
categories_list = row['categories']
except:
categories_list = row['venue.categories']
if len(categories_list) == 0:
return None
else:
return categories_list[0]['name']
#structuring json into pandas dataframe
venues = results['response']['groups'][0]['items']
nearby_venues = pd.json_normalize(venues) # flatten JSON
# filter columns
filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues =nearby_venues.loc[:, filtered_columns]
# filter the category for each row
nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)
# clean columns
nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]
nearby_venues.head()
#finding how many venues around Davisville were found
print('{} venues were returned by Foursquare.'.format(nearby_venues.shape[0]))
###Output
34 venues were returned by Foursquare.
###Markdown
*Exploring other neighborhoods of Toronto*
###Code
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighbourhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
#creating a new dataframe with toronto_venues (from the previous request)
toronto_venues = getNearbyVenues(names=df['Neighbourhood'],
latitudes=df['Latitude'],
longitudes=df['Longitude']
)
#getting the size and shape of the dataframe
print(toronto_venues.shape)
toronto_venues.head()
toronto_venues.groupby('Neighbourhood').count()
#Checking how many unique Venues there are that can be curated
print('There are {} uniques categories.'.format(len(toronto_venues['Venue Category'].unique())))
###Output
There are 276 uniques categories.
###Markdown
Analyzing each neighborhood
###Code
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# adding neighbourhood to DF
toronto_onehot['Neighbourhood'] = toronto_venues['Neighbourhood']
# move neighbourhood column to the first column
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
toronto_onehot.shape
toronto_grouped = toronto_onehot.groupby('Neighbourhood').mean().reset_index()
toronto_grouped
###Output
_____no_output_____
###Markdown
*Each neighborhood along with its top 3 most common venues:*
###Code
num_top_venues = 3
for hood in toronto_grouped['Neighbourhood']:
print("----"+hood+"----")
temp = toronto_grouped[toronto_grouped['Neighbourhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
#converting into pandas
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
#top 10 venues for each neighbourhood
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighbourhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighbourhood'] = toronto_grouped['Neighbourhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head(11)
###Output
_____no_output_____
###Markdown
Clustering neighbourhoods
###Code
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighbourhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighbourhood'] = toronto_grouped['Neighbourhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head(11)
from sklearn.cluster import KMeans  # k-means clustering from scikit-learn
kmeans = KMeans(n_clusters=3, init='k-means++', max_iter=15, random_state=8)
X = toronto_grouped.drop(['Neighbourhood'], axis=1)
kmeans.fit(X)
kmeans.labels_[0:10]
def get_inertia(n_clusters):
km = KMeans(n_clusters=n_clusters, init='k-means++', max_iter=15, random_state=8)
km.fit(X)
return km.inertia_
scores = [get_inertia(x) for x in range(2, 21)]
plt.figure(figsize=[10, 8])
sns.lineplot(x=range(2, 21), y=scores)
plt.title("K vs Error")
plt.xticks(range(2, 21))
plt.xlabel("K")
plt.ylabel("Error")
###Output
_____no_output_____
###Markdown
*From the plot we see that K=7 is the best choice.*
###Code
kclusters = 7
toronto_grouped_clustering = toronto_grouped.drop('Neighbourhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster_Labels', kmeans.labels_)
toronto_merged = df
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighbourhood'), on='Neighbourhood')
toronto_merged.head()
toronto_merged.tail()
#Invalid types of Clusters or Venues?
toronto_drop = toronto_merged[toronto_merged.Cluster_Labels.notna()].reset_index(drop=True)  # keep only rows with a valid cluster label
toronto_merged.dropna(axis=0, how='any', thresh=None, subset=None, inplace=True)
###Output
_____no_output_____
###Markdown
Visualization
###Code
# create map
map_clusters = folium.Map(location=[lat_toronto, lon_toronto], zoom_start=10.5)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighbourhood'], toronto_merged['Cluster_Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[int(cluster)-1],
fill=True,
fill_color=rainbow[int(cluster)-1],
fill_opacity=0.75).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Cluster 1
###Code
toronto_merged.loc[toronto_merged['Cluster_Labels'] == 0, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 2
###Code
toronto_merged.loc[toronto_merged['Cluster_Labels'] == 1, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 3
###Code
toronto_merged.loc[toronto_merged['Cluster_Labels'] == 2, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 4
###Code
toronto_merged.loc[toronto_merged['Cluster_Labels'] == 3, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 5
###Code
toronto_merged.loc[toronto_merged['Cluster_Labels'] == 5, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 6
###Code
toronto_merged.loc[toronto_merged['Cluster_Labels'] == 6, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Install the needed packages
###Code
#! pip install beautifulsoup4
#! pip3 install lxml
###Output
Requirement already satisfied: beautifulsoup4 in /opt/conda/lib/python3.8/site-packages (4.9.3)
Requirement already satisfied: soupsieve>1.2; python_version >= "3.0" in /opt/conda/lib/python3.8/site-packages (from beautifulsoup4) (2.0.1)
Requirement already satisfied: lxml in /opt/conda/lib/python3.8/site-packages (4.6.1)
###Markdown
Import the needed packages and import the page
###Code
import pandas as pd
#get the page
URL='https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
###Output
Total tables: 3
###Markdown
Transform the table to dataframes
###Code
page = pd.read_html(URL)
print(f'Total tables: {len(page)}')
###Output
_____no_output_____
###Markdown
Check which table is the correct df
###Code
for table in range(len(page)):
print(' -+-+- Table',table)
page_df = page[table]
print(page_df.head())
print('\n \n -+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-')
###Output
-+-+- Table 0
Postal Code Borough Neighbourhood
0 M1A Not assigned Not assigned
1 M2A Not assigned Not assigned
2 M3A North York Parkwoods
3 M4A North York Victoria Village
4 M5A Downtown Toronto Regent Park, Harbourfront
-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
-+-+- Table 1
0 \
0 NaN
1 NL NS PE NB QC ON MB SK AB BC NU/NT YT A B C E...
2 NL
3 A
1 \
0 Canadian postal codes
1 NL NS PE NB QC ON MB SK AB BC NU/NT YT A B C E...
2 NS
3 B
2 3 4 5 6 7 \
0 NaN NaN NaN NaN NaN NaN
1 NL NS PE NB QC ON MB SK AB BC NU/NT YT A B C E... NaN NaN NaN NaN NaN
2 PE NB QC QC QC ON
3 C E G H J K
8 9 10 11 12 13 14 15 16 17
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 ON ON ON ON MB SK AB BC NU/NT YT
3 L M N P R S T V X Y
-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
-+-+- Table 2
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
0 NL NS PE NB QC QC QC ON ON ON ON ON MB SK AB BC NU/NT YT
1 A B C E G H J K L M N P R S T V X Y
-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
###Markdown
Prepare the data Remove the "Not assigned" rows
###Code
page_df = page[0]
print(page_df.columns)
df=page_df[page_df.Borough != 'Not assigned']
print(df)
###Output
Index(['Postal Code', 'Borough', 'Neighbourhood'], dtype='object')
Postal Code Borough \
2 M3A North York
3 M4A North York
4 M5A Downtown Toronto
5 M6A North York
6 M7A Downtown Toronto
.. ... ...
160 M8X Etobicoke
165 M4Y Downtown Toronto
168 M7Y East Toronto
169 M8Y Etobicoke
178 M8Z Etobicoke
Neighbourhood
2 Parkwoods
3 Victoria Village
4 Regent Park, Harbourfront
5 Lawrence Manor, Lawrence Heights
6 Queen's Park, Ontario Provincial Government
.. ...
160 The Kingsway, Montgomery Road, Old Mill North
165 Church and Wellesley
168 Business reply mail Processing Centre, South C...
169 Old Mill South, King's Mill Park, Sunnylea, Hu...
178 Mimico NW, The Queensway West, South of Bloor,...
[103 rows x 3 columns]
###Markdown
The shape is:
###Code
df.shape
###Output
_____no_output_____
###Markdown
Import Dependencies
###Code
!pip install geopy
import numpy as np
from geopy.geocoders import Nominatim # module to convert an address into latitude and longitude
import folium # map rendering library
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
import json # library to handle JSON files
import requests # library to handle requests
from pandas.io.json import json_normalize # tranform JSON file into a pandas dataframe
# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors
# import k-means from clustering stage
from sklearn.cluster import KMeans
print('all dependencies imported')
###Output
Requirement already satisfied: geopy in /usr/local/lib/python3.6/dist-packages (1.17.0)
Requirement already satisfied: geographiclib<2,>=1.49 in /usr/local/lib/python3.6/dist-packages (from geopy) (1.50)
all dependencies imported
###Markdown
Web scraping the data
###Code
url='https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
dfs=pd.read_html(url)
dfs=dfs[0]
###Output
_____no_output_____
###Markdown
Process only the cells that have an assigned borough, so ignore cells whose Borough or Neighborhood is "Not assigned".
###Code
dfs=dfs[dfs['Borough']!='Not assigned']
dfs=dfs[dfs['Neighborhood']!='Not assigned']
dfs.reset_index(drop=True, inplace=True)
print('DataFrame Shape = ', dfs.shape)
dfs.head()
###Output
DataFrame Shape = (103, 3)
###Markdown
Getting Location data from CSV It was not possible to scrape the location coordinates from the web using the geopy geocoder because it did not recognize the Postal Code as input. The geocoder library never worked in my favorite notebook environment.
###Code
url2='http://cocl.us/Geospatial_data'
dfl=pd.read_csv(url2)
###Output
_____no_output_____
###Markdown
Including Lat and Long in the DataFrame
###Code
neighborhoods=pd.merge(dfs,dfl, on=['Postal Code'], how='inner')
neighborhoods.head()
###Output
_____no_output_____
###Markdown
Check locations on a map Let's use Folium to do that:
###Code
latitude=neighborhoods['Latitude'].mean()
longitude=neighborhoods['Longitude'].mean()
# create map of Toronto using latitude and longitude values
map_ontario = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, borough, neighborhood in zip(neighborhoods['Latitude'], neighborhoods['Longitude'], neighborhoods['Borough'], neighborhoods['Neighborhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_ontario)
map_ontario
###Output
_____no_output_____
###Markdown
Explore and cluster the neighborhoods in Toronto Let's isolate the locations in Toronto and then explore them. There are at least five ways to do this; my personal challenge was to perform the job in the most "pandastic" way.
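For comparison, one of those other routes is a single boolean filter on the borough name; this is only a sketch, while the split/merge approach in the next cell is what the notebook actually uses:

```python
# Keep boroughs whose name contains "Toronto"
toronto_data = neighborhoods[neighborhoods['Borough'].str.contains('Toronto')].reset_index(drop=True)
```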
###Code
# Split Borough into two parts: Prefix ("Downtown") and Postfix ("Toronto")
neighborhoods['Prefix'], neighborhoods['Postfix'] = neighborhoods['Borough'].str.split(' ', 1).str
# Then we keep only data with 'Postfix' == 'Toronto'
toronto_data = neighborhoods[neighborhoods['Postfix'] == 'Toronto'].reset_index(drop=True)
# and finally drop the helper columns (note: without inplace=True, drop returns a copy and leaves the frames unchanged)
neighborhoods.drop(['Prefix','Postfix'], axis=1)
toronto_data.drop(['Prefix','Postfix'], axis=1)
print(toronto_data.shape)
toronto_data.head()
###Output
(39, 7)
###Markdown
Check the Toronto locations on a map Let's use Folium again to do that:
###Code
latitude=toronto_data['Latitude'].mean()
longitude=toronto_data['Longitude'].mean()
# create map of Toronto using latitude and longitude values
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=12)
# add markers to map
for lat, lng, borough, neighborhood in zip(toronto_data['Latitude'], toronto_data['Longitude'], toronto_data['Borough'], toronto_data['Neighborhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_toronto)
map_toronto
###Output
_____no_output_____
###Markdown
Nice! We have all the locations in Toronto. Explore the venues around the Neighborhoods Define Foursquare Credentials and Version
###Code
CLIENT_ID = 'S5BZI1SDTG41WSXO01S1F2GDM4WX2UFQGDH2GXYOB2U13G0C' # your Foursquare ID
CLIENT_SECRET = '5WAP03PWNIASSDXVWLGCZPCWZ2OCPN3T4R4405UQ3KWQ4NHB' # your Foursquare Secret
VERSION = '20180605' # Foursquare API version
radius=250
LIMIT=500
print('Your credentails:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
###Output
Your credentails:
CLIENT_ID: S5BZI1SDTG41WSXO01S1F2GDM4WX2UFQGDH2GXYOB2U13G0C
CLIENT_SECRET:5WAP03PWNIASSDXVWLGCZPCWZ2OCPN3T4R4405UQ3KWQ4NHB
###Markdown
We borrow the function from the New York notebook:
###Code
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
###Output
_____no_output_____
###Markdown
Now we call the above function on each neighborhood and create a new dataframe named toronto_venues.
###Code
toronto_venues = getNearbyVenues(names=toronto_data['Neighborhood'],
latitudes=toronto_data['Latitude'],
longitudes=toronto_data['Longitude']
)
###Output
Regent Park, Harbourfront
Queen's Park, Ontario Provincial Government
Garden District, Ryerson
St. James Town
The Beaches
Berczy Park
Central Bay Street
Christie
Richmond, Adelaide, King
Dufferin, Dovercourt Village
Harbourfront East, Union Station, Toronto Islands
Little Portugal, Trinity
The Danforth West, Riverdale
Toronto Dominion Centre, Design Exchange
Brockton, Parkdale Village, Exhibition Place
India Bazaar, The Beaches West
Commerce Court, Victoria Hotel
Studio District
Lawrence Park
Roselawn
Davisville North
Forest Hill North & West, Forest Hill Road Park
High Park, The Junction South
North Toronto West, Lawrence Park
The Annex, North Midtown, Yorkville
Parkdale, Roncesvalles
Davisville
University of Toronto, Harbord
Runnymede, Swansea
Moore Park, Summerhill East
Kensington Market, Chinatown, Grange Park
Summerhill West, Rathnelly, South Hill, Forest Hill SE, Deer Park
CN Tower, King and Spadina, Railway Lands, Harbourfront West, Bathurst Quay, South Niagara, Island airport
Rosedale
Stn A PO Boxes
St. James Town, Cabbagetown
First Canadian Place, Underground city
Church and Wellesley
Business reply mail Processing Centre, South Central Letter Processing Plant Toronto
###Markdown
Let's check the size of the resulting dataframe
###Code
print(toronto_venues.shape)
print('We then found {} venues'.format(toronto_venues['Venue'].count()))
toronto_venues.head()
###Output
(1622, 7)
We then found 1622 venues
###Markdown
Categorize the venues We have to convert the string categories into discrete (one-hot) indicator columns in order to perform k-means
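A toy illustration of what that encoding does, on hypothetical data rather than the Foursquare results:

```python
import pandas as pd

# get_dummies turns one categorical column into one indicator column per category
toy = pd.DataFrame({'Venue Category': ['Café', 'Park', 'Café']})
print(pd.get_dummies(toy, prefix='', prefix_sep=''))
```

The next cell applies the same idea to the real venue data.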
###Code
# discrete encoding
toronto_categories = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_categories['Neighborhood'] = toronto_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [toronto_categories.columns[-1]] + list(toronto_categories.columns[:-1])
toronto_categories = toronto_categories[fixed_columns]
print('Toronto categories shape={}'.format(toronto_categories.shape))
###Output
Toronto categories shape=(1622, 233)
###Markdown
Group the categories by Neighborhood. Before applying the clustering, let's group the category frequencies by neighborhood:
###Code
toronto_grouped = toronto_categories.groupby('Neighborhood').mean().reset_index()
toronto_grouped.head()
###Output
_____no_output_____
###Markdown
Cluster Neighborhoods We will run *k*-means to cluster the neighborhoods into 4 clusters.
###Code
# set number of clusters
kclusters = 4
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:100]
###Output
_____no_output_____
###Markdown
Then we have to merge the original data with the cluster labels, joining on Neighborhood.
###Code
# add clustering labels. If error related to "Cluster Labels" already exists,
# please run from : "Group the categories by Neighborhood. Before apply the
# clustering, let's group the categories by Neighborhood"
toronto_grouped.insert(0, 'Cluster Labels', kmeans.labels_)
toronto_merged = toronto_data
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
toronto_merged = toronto_merged.join(toronto_grouped.set_index('Neighborhood'), on='Neighborhood')
toronto_merged.head() # check the last columns!
###Output
_____no_output_____
###Markdown
Finally, let's visualize the resulting clusters
###Code
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighborhood'], toronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Segmenting and Clustering Neighborhoods in Toronto Installing packages
###Code
!conda install -c conda-forge geocoder geopy folium=0.5.0 --yes
from urllib.request import urlopen
from bs4 import BeautifulSoup
import geocoder
import numpy as np # library to handle data in a vectorized manner
import pandas as pd # library for data analsysis
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
import json # library to handle JSON files
from geopy.geocoders import Nominatim # convert an address into latitude and longitude values
import requests # library to handle requests
from pandas.io.json import json_normalize # tranform JSON file into a pandas dataframe
# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors
# import k-means from clustering stage
from sklearn.cluster import KMeans
import folium # map rendering library
print('Libraries imported.')
###Output
Libraries imported.
###Markdown
Getting page
###Code
wiki_url = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
page = urlopen(wiki_url).read().decode('utf-8')
soup = BeautifulSoup(page, 'html.parser')
###Output
_____no_output_____
###Markdown
Parsing page data
###Code
postcode_table = soup.body.table.tbody
def grab_data(element):
cells = element.find_all('td')
if len(cells) == 0 or cells[1].string == 'Not assigned' or cells[1].a == None or cells[2].a == None:
return []
return [cells[0].string, cells[1].a.text, cells[2].a.text]
codes = []
for element in postcode_table.find_all('tr'):
row = grab_data(element)
if len(row) == 3 and row[1] != 'Not assigned' and row[2] != 'Not assigned':
codes.append(row)
print('Found {0} codes'.format(len(codes)))
###Output
Found 140 codes
###Markdown
Adding geo coordinates
###Code
def get_geo(row):
postal_code = row[0]
lat_lng_coords = None # initialize your variable to None
# loop until you get the coordinates
while(lat_lng_coords is None):
g = geocoder.google('{}, Toronto, Ontario'.format(postal_code))
lat_lng_coords = g.latlng
latitude = lat_lng_coords[0]
longitude = lat_lng_coords[1]
return [latitude, longitude]
for i in range(len(codes)):
codes[i].extend(get_geo(codes[i]))
###Output
_____no_output_____
###Markdown
Making dataframe
###Code
header = ['PostalCode', 'Borough', 'Neighbourhood', 'Latitude', 'Longitude']
postal_df = pd.DataFrame.from_records(codes, columns=header)
postal_df.head()
###Output
_____no_output_____
###Markdown
Create map of Toronto using latitude and longitude values
###Code
map_toronto = folium.Map(location=[43.75, -79.32], zoom_start=10)
# add markers to map
for index, row in postal_df.iterrows():
label = '{}, {}'.format(row['Neighbourhood'], row['Borough'])
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[row['Latitude'], row['Longitude']],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_toronto)
map_toronto
###Output
_____no_output_____
###Markdown
Define Foursquare Credentials and Version
###Code
CLIENT_ID = 'B2ISZO0KSOYNSZBXF5WUWWDQYTPWDA3RLYWWOJ3YU22JLBNE' # your Foursquare ID
CLIENT_SECRET = 'BWBE5XM1JH2WLLD5CKKO230JT2KVSW00X1K0CDDZSUKJWAOE' # your Foursquare Secret
VERSION = '20180605'
LIMIT = 30
print('Your credentails:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
###Output
Your credentails:
CLIENT_ID: B2ISZO0KSOYNSZBXF5WUWWDQYTPWDA3RLYWWOJ3YU22JLBNE
CLIENT_SECRET:BWBE5XM1JH2WLLD5CKKO230JT2KVSW00X1K0CDDZSUKJWAOE
###Markdown
Create a function to get the nearby venues for all the neighborhoods in Toronto
###Code
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
###Output
_____no_output_____
###Markdown
Just Toronto data
###Code
toronto_boroughs = ['Downtown Toronto', 'East Toronto', 'West Toronto', 'Central Toronto']
toronto_data = postal_df[postal_df['Borough'].isin(toronto_boroughs)].reset_index(drop=True)
toronto_data.head()
toronto_data.shape
toronto_venues = getNearbyVenues(names=toronto_data['Neighbourhood'],
latitudes=toronto_data['Latitude'],
longitudes=toronto_data['Longitude']
)
# one hot encoding
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
toronto_onehot.shape
###Output
_____no_output_____
###Markdown
Next, let's group the rows by neighborhood, taking the mean of the frequency of occurrence of each category
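A toy example of that frequency computation (hypothetical data, not the Foursquare results):

```python
import pandas as pd

toy = pd.DataFrame({'Neighborhood': ['A', 'A', 'B'],
                    'Café': [1, 1, 0],
                    'Park': [0, 1, 1]})
# Mean of the one-hot columns per neighborhood = frequency of each category
print(toy.groupby('Neighborhood').mean())
```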
###Code
toronto_grouped = toronto_onehot.groupby('Neighborhood').mean().reset_index()
toronto_grouped
toronto_grouped.shape
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
toronto_data.drop(toronto_data.index[len(toronto_data)-1], inplace=True)
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted
###Output
_____no_output_____
###Markdown
Run k-means to cluster the neighborhoods into 5 clusters.
###Code
# set number of clusters
kclusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
toronto_merged = toronto_data
# add clustering labels
toronto_merged['Cluster Labels'] = kmeans.labels_
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighbourhood')
toronto_merged.head() # check the last columns!
# create map
map_clusters = folium.Map(location=[43.75, -79.32], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i+x+(i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighbourhood'], toronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Cluster 1
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 0, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
# Cluster 2
toronto_merged.loc[toronto_merged['Cluster Labels'] == 1, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Importing Libraries
###Code
# To access system-specific parameters and functions
import sys
# A general-purpose array-processing package
import numpy as np
# A library to manage the file-related input and output operations
import io
#from IPython.display import Image
!pip install geocoder
import geocoder
# library for Data Analsysis
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
# Matplotlib and Associated Plotting Modules
import matplotlib.pyplot as plt
import matplotlib.colors as colors
# Library to Handle JSON Files
import json
# Library to Handle Requests
import requests
# uncomment this line if you haven't completed the Foursquare API lab
!conda install -c conda-forge geopy --yes
# convert an address into latitude and longitude values
from geopy.geocoders import Nominatim
# tranform JSON file into a pandas dataframe
from pandas.io.json import json_normalize
!conda install -c conda-forge scikit-learn
# import k-means from clustering stage
from sklearn.cluster import KMeans
# uncomment this line if you haven't completed the Foursquare API lab
!conda install -c conda-forge folium=0.5.0 --yes
import folium # map rendering library
!conda install -c conda-forge beautifulsoup4 --yes
from bs4 import BeautifulSoup
print('Libraries imported.')
%matplotlib inline
###Output
Requirement already satisfied: geocoder in /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages (1.38.1)
Requirement already satisfied: click in /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages (from geocoder) (7.1.2)
Requirement already satisfied: ratelim in /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages (from geocoder) (0.1.6)
Requirement already satisfied: future in /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages (from geocoder) (0.18.2)
Requirement already satisfied: requests in /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages (from geocoder) (2.25.1)
Requirement already satisfied: six in /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages (from geocoder) (1.15.0)
Requirement already satisfied: decorator in /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages (from ratelim->geocoder) (4.4.2)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages (from requests->geocoder) (2020.12.5)
Requirement already satisfied: chardet<5,>=3.0.2 in /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages (from requests->geocoder) (4.0.0)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages (from requests->geocoder) (2.10)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages (from requests->geocoder) (1.26.3)
Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
Collecting package metadata (current_repodata.json): done
Solving environment: done
# All requested packages already installed.
Collecting package metadata (current_repodata.json): done
Solving environment: \
###Markdown
Part 1) Create DataFrame from Wikipedia page Fetching the Data from Wikipedia and Creating a Table with it
###Code
# Reading Wikipedia's page
read_url = pd.read_html("https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M")
# To verify the reading of Wikipedia's page
#print(type(read_url))
#print(len(read_url))
# The desired DataFrame is the first item in the list read_url. We don't need the other two DataFrames
#print(read_url[0])
#print(read_url[1])
#print(read_url[2])
df = read_url[0]
#df.head(5)
# Checking if there is a duplicate in Postal Code. Every Postal Code must be present only once.
a = df["Postal Code"].value_counts()
for item in a:
if item != 1:
print("Attention: There is a duplicate in Postal Code!")
# Getting rid of the "Not assigned"-values in the Borough Column:
df["Borough"].replace("Not assigned", np.nan, inplace=True)
#df.head(5)
df_new = df.dropna(subset=["Borough"])
df_new.reset_index(drop=True, inplace=True)
df_new.head(5)
###Output
_____no_output_____
###Markdown
Counting the number of "Not assigned" values that are left in the Neighbourhood column:
###Code
#There should be no "Not assigned"-values in Neighbourhood-column!
df_new["Neighbourhood"].isin(['Not assigned']).sum()
df_new.shape
###Output
_____no_output_____
###Markdown
Part 2) Modify the created Dataframe Load the coordinates data and sort the dataframe by its postal code:
###Code
url="https://cocl.us/Geospatial_data"
s=requests.get(url).content
df_coords=pd.read_csv(io.StringIO(s.decode('utf-8')))
df_coords.sort_values(by=["Postal Code"], inplace=True, ignore_index=True)
df_coords.head()
###Output
_____no_output_____
###Markdown
Sort the dataframe obtained from Wikipedia by its postal code too:
###Code
df_new.sort_values(by=["Postal Code"], inplace=True, ignore_index=True)
df_new.head(10)
###Output
_____no_output_____
###Markdown
Checking if the two DataFrames are sorted the same way and if they have the same length:
###Code
# compare the Postal Code columns element-wise; calling .all() on each side separately would not check the order
if (df_coords["Postal Code"].values == df_new["Postal Code"].values).all():
print("The two dataframes are sorted in the same order and have the same length!")
else:
print("The two dataframes are NOT sorted in the same order!!! Don't concate them of the coordinates will be mixed!!!")
###Output
_____no_output_____
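###Markdown
As an aside, a more robust alternative (a minimal sketch, assuming df_new and df_coords as loaded above; df_alternative is just an illustrative name) is to merge the two frames on their shared Postal Code column instead of relying on identical row order:
###Code
# merging on the shared key makes the row order irrelevant
df_alternative = df_new.merge(df_coords, on="Postal Code", how="left")
df_alternative.head()
###Output
_____no_output_____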
###Markdown
Drop the postal code column in df_coords and concatenate the two DataFrames:
###Code
df_coords.drop("Postal Code", axis=1, inplace=True)
df_coords.head()
pd.options.display.max_rows = 200
df_final = pd.concat([df_new, df_coords], axis=1)
df_final.head(103)
###Output
_____no_output_____
###Markdown
Part 3) Exploring and clustering the neighborhoods in Toronto. Creating a map of Toronto with all the places in our created DataFrame:
###Code
address = 'Toronto'
geolocator = Nominatim(user_agent="ny_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geographical coordinates of Toronto are {}, {}.'.format(latitude, longitude))
# create map of Toronto using latitude and longitude values
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, borough, neighbourhood in zip(df_final['Latitude'], df_final['Longitude'], df_final['Borough'], df_final['Neighbourhood']):
label = '{}, {}'.format(neighbourhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_toronto)
map_toronto
###Output
_____no_output_____
###Markdown
Focusing on Downtown Toronto:
###Code
downtown_data = df_final[df_final['Borough'] == 'Downtown Toronto'].reset_index(drop=True)
downtown_data.head(20)
address = 'Downtown, Toronto'
geolocator = Nominatim(user_agent="ny_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geographical coordinates of Downtown, Toronto are {}, {}.'.format(latitude, longitude))
# create map of Toronto using latitude and longitude values
map_downtown = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, borough, neighbourhood in zip(downtown_data['Latitude'], downtown_data['Longitude'], downtown_data['Borough'], downtown_data['Neighbourhood']):
label = '{}, {}'.format(neighbourhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_downtown )
map_downtown
###Output
_____no_output_____
###Markdown
Exploring the neighbourhood "Central Bay Street" in Downtown Toronto:
###Code
CLIENT_ID = 'MJVQQV5B0FX2FCNI24B0JUYBWFBQAU1RVSWPVKQO20A1HR3S' # your Foursquare ID
CLIENT_SECRET = 'DQM1EE5GLE3MHXAF23ZXNHQ0I1RXURU051T2IJRFMAFUO0GE' # your Foursquare Secret
#ACCESS_TOKEN = 'deleted ;)' # your FourSquare Access Token
VERSION = '20210228' # Foursquare API version
print('Your credentails:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
selected_address = "Central Bay Street"
index = downtown_data[downtown_data["Neighbourhood"]==selected_address].index.values[0]
neighborhood_latitude = downtown_data["Latitude"].iloc[index]
neighborhood_longitude = downtown_data["Longitude"].iloc[index]
print('Latitude and longitude values of {} are {}, {}.'.format(selected_address,
neighborhood_latitude,
neighborhood_longitude))
# radius and LIMIT are not defined earlier in this notebook; assumed values so the request below works
radius = 500 # search radius in meters (assumed)
LIMIT = 100 # maximum number of venues to return (assumed)
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
neighborhood_latitude,
neighborhood_longitude,
radius,
LIMIT)
url # display URL
results = requests.get(url).json()
results
# function that extracts the category of the venue
def get_category_type(row):
try:
categories_list = row['categories']
except:
categories_list = row['venue.categories']
if len(categories_list) == 0:
return None
else:
return categories_list[0]['name']
venues = results['response']['groups'][0]['items']
# flatten JSON
nearby_venues = json_normalize(venues)
# filter columns
filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues =nearby_venues.loc[:, filtered_columns]
# filter the category for each row
nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)
# clean columns
nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]
print(len(nearby_venues))
nearby_venues.head(10)
print(f"{len(nearby_venues)} venues in the area of the '{selected_address}' neighbourhood have been reported from foursquare")
###Output
_____no_output_____
###Markdown
Mark all the returned venues in the neighbourhood "Central Bay Street":
###Code
# create map of Central Bay Street neighbourhood using latitude and longitude values
map_nearby_venues = folium.Map(location=[neighborhood_latitude, neighborhood_longitude], zoom_start=16)
# add markers to map
for lat, lng, name, categories in zip(nearby_venues['lat'], nearby_venues['lng'], nearby_venues['name'], nearby_venues['categories']):
label = '{}, {}'.format(name, categories)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_nearby_venues )
map_nearby_venues
###Output
_____no_output_____
###Markdown
NOTICE: The venues are located along either Yonge Street or College Street. Let's try to cluster these venues!
###Code
feature_matrix = np.column_stack((nearby_venues["lat"], nearby_venues["lng"]))
print(len(feature_matrix))
feature_matrix[0:10]
k_means = KMeans(init="k-means++", n_clusters=2, n_init=20)
k_means.fit(feature_matrix)
k_means_labels = k_means.labels_
k_means_cluster_centers = k_means.cluster_centers_
# initialize the plot with the specified dimensions.
fig = plt.figure(figsize=(15, 10))
# colors uses a color map, which will produce an array of colors based on
# the number of labels. We use set(k_means_labels) to get the
# unique labels.
colors = plt.cm.Spectral(np.linspace(0, 1, len(set(k_means_labels))))
# create a plot
ax = fig.add_subplot(1, 1, 1)
# loop through the data and plot the datapoints and centroids.
# k will range over the fitted clusters (2 here), matching the number of clusters in the k-means model.
for k, col in zip(range(len(k_means_cluster_centers)), colors):
# create a list of all datapoints, where the datapoitns that are
# in the cluster (ex. cluster 0) are labeled as true, else they are
# labeled as false.
my_members = (k_means_labels == k)
# define the centroid, or cluster center.
cluster_center = k_means_cluster_centers[k]
# plot the datapoints with color col.
ax.plot(feature_matrix[my_members, 0], feature_matrix[my_members, 1], 'w', markerfacecolor=col, marker='.', markersize=10)
# plot the centroids with specified color, but with a darker outline
ax.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col, markeredgecolor='k', markersize=10)
# title of the plot
ax.set_title('KMeans')
# show the plot
plt.show()
###Output
_____no_output_____
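###Markdown
As a small follow-up (a sketch, assuming the nearby_venues dataframe and the fitted k_means model from above; the new 'cluster' column name is just illustrative), the cluster labels can be attached back to the venue table to see how the venues split between the two streets:
###Code
# attach the k-means labels to the venues and count the venues per cluster
nearby_venues['cluster'] = k_means_labels
nearby_venues.groupby('cluster')['name'].count()
###Output
_____no_output_____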
###Markdown
Segmenting and Clustering Neighborhoods in Toronto. Importing the libraries that we use in this work.
###Code
from bs4 import BeautifulSoup
import requests
import pandas as pd
import numpy as np # used later when building the cluster colour map
from pandas.io.json import json_normalize # used later to flatten the Foursquare JSON
from sklearn.cluster import KMeans # used later for clustering
import matplotlib.cm as cm # used later for the cluster colour map
import matplotlib.colors as colors # used later for the cluster colour map
!conda install -c conda-forge folium=0.5.0 --yes
import folium # plotting library
###Output
Collecting package metadata (current_repodata.json): ...working... done
Solving environment: ...working... done
# All requested packages already installed.
###Markdown
Starting the scraping, we use the BeautifulSoup library to scrape this Wikipedia page and obtain the table. After investigation, there is just one table tag in this HTML data. First we request the HTML of the page, then parse it using BeautifulSoup's HTML parser. We now have a BeautifulSoup object containing the Wikipedia HTML.
###Code
url = "https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
wpp = requests.get(url)
soup = BeautifulSoup(wpp.content,'html.parser')
soup_table = soup.find('table')
###Output
_____no_output_____
###Markdown
Then we loop over every row, find all cells in each row, extract the text inside them and append the results to a new list called table.
###Code
table = []
for row in soup_table.find_all('tr'):
subtable = []
cell = row.find_all('td')
for i in cell:
subtable.append(i.find(text=True).rstrip("\n"))
table.append(subtable)
###Output
_____no_output_____
###Markdown
Delete the first empty row of the list, then count the rows of raw data.
###Code
table.remove([])
len(table)
###Output
_____no_output_____
###Markdown
We are processing list according to rules shown in assignment page. * The dataframe will consist of three columns: PostalCode, Borough, and Neighborhood* Only process the cells that have an assigned borough. Ignore cells with a borough that is Not assigned.* More than one neighborhood can exist in one postal code area. For example, in the table on the Wikipedia page, you will notice that M5A is listed twice and has two neighborhoods: Harbourfront and Regent Park. These two rows will be combined into one row with the neighborhoods separated with a comma as shown in row 11 in the above table.* If a cell has a borough but a Not assigned neighborhood, then the neighborhood will be the same as the borough.* Clean your Notebook and add Markdown cells to explain your work and any assumptions you are making.* In the last cell of your notebook, use the .shape method to print the number of rows of your dataframe.
###Code
table_processed = []
for i in table:
if (i[1]=='Not assigned' and i[2]=='Not assigned'):
print('pass', end=" ")
elif i[2] == 'Not assigned':
i[2] = i[1]
table_processed.append(i)
print("change and appended", end=" ")
else:
table_processed.append(i)
print("appended", end=" ")
df = pd.DataFrame(table_processed,columns = ['PostalCode','Borough','Neighborhood'])
df.shape
df.head()
###Output
_____no_output_____
###Markdown
Getting Locations from CSV
###Code
df_loc = pd.read_csv("Geospatial_Coordinates.csv")
df_loc.head()
lat = []
lon = []
for pcode in df["PostalCode"]:
lat.append(df_loc['Latitude'].loc[df_loc['Postal Code'] == pcode].values[0])
lon.append(df_loc['Longitude'].loc[df_loc['Postal Code'] == pcode].values[0])
df['Latitude'] = lat
df['Longitude'] = lon
df.head()
map = folium.Map(location=[43.651070, -79.347015], zoom_start=9)
for index, row in df.iterrows():
folium.CircleMarker(
location=[row["Latitude"], row["Longitude"]],
radius=50,
popup=row["Neighborhood"],
color='#3186cc',
fill=True,
fill_color='#3186cc'
).add_to(map)
map
map.save('index.html')
###Output
_____no_output_____
###Markdown
Get Web Content. Use the requests package to get the web content, then use BeautifulSoup to parse the data.
###Code
# get the page
url = "https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
page = requests.get(url)
# load html
soup = BeautifulSoup(page.content, "lxml")
tablehead = soup.find("table").find_all("th")
# get the data
columns = []
for i in tablehead:
columns.append(i.text.strip())
tabledata = soup.find("table").find_all("tr")[1:]
###Output
_____no_output_____
###Markdown
Create Data. While parsing the table content, collect the data into the dataframe. Finally, group by Postcode and Borough to join the Neighbourhood values together.
###Code
# create the dataframe
data = pd.DataFrame(columns=columns)
index = 0
for i in tabledata:
temp = []
for item in i.find_all("td"):
temp.append(item.text.strip())
if "Not assigned" not in temp:
data = pd.concat([data, pd.DataFrame(dict(zip(columns, temp)), index=[index])])
index += 1
data = data.groupby(["Postcode", "Borough"])["Neighbourhood"].apply(lambda x: ", ".join(x.tolist())).reset_index()
data.loc[data.Postcode == "M5A"]
###Output
_____no_output_____
###Markdown
Merge Geo Info. Merge the geo information into the data.
###Code
# load geo data
geodata = pd.read_csv("./Geospatial_Coordinates.csv")
geodata.rename({"Postal Code":"Postcode", "Neighbourhood":"Neighborhood"}, axis=1, inplace=True)
# merge the geo data
data = data.merge(geodata, how="left", on="Postcode")
data.head()
###Output
_____no_output_____
###Markdown
Check the dataset information, such as its dimensions and the unique Borough values.
###Code
print('The dataframe has {} boroughs and {} neighborhoods.'.format(
len(data['Borough'].unique()),
data.shape[0]
)
)
data.Borough.unique()
###Output
_____no_output_____
###Markdown
After checking the background information on [Canada - Wikipedia](https://en.wikipedia.org/wiki/Canada) and [Toronto](https://en.wikipedia.org/wiki/Toronto), let's visualize the Toronto neighborhoods on a map. The location is 43°44′30″N 79°22′24″W.
###Code
toronto = data.loc[data.Borough.str.contains("Toronto")].copy().reset_index(drop=True)
toronto.Borough.unique()
toronto.head(2)
# create map by toronto location
latitude, longitude = 43.633, -79.367
torontomap = folium.Map(location=[43.633, -79.367], zoom_start=12)
# add markers to map
for iterm in toronto[["Latitude", "Longitude", "Neighbourhood"]].iterrows():
lat, lng, label = iterm[1]
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(torontomap)
torontomap
###Output
_____no_output_____
###Markdown
Next, we are going to start utilizing the Foursquare API to explore venue information. Define credentials.
###Code
with open("./credentials.txt", "r") as file:
CLIENT_ID = file.readline().strip()
CLIENT_SECRET = file.readline().strip()
VERSION = '20180605'
if False:
print('Your credentails:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
###Output
_____no_output_____
###Markdown
Explore the first neighborhood in our dataframe; we can get its name and location.
###Code
toronto.columns
neighborhood_latitude = toronto.loc[0, 'Latitude'] # neighborhood latitude value
neighborhood_longitude = toronto.loc[0, 'Longitude'] # neighborhood longitude value
neighborhood_name = toronto.loc[0, 'Neighbourhood'] # neighborhood name
print('Latitude and longitude values of {} are {}, {}.'.format(neighborhood_name,
neighborhood_latitude,
neighborhood_longitude))
# create URL
LIMIT = 100
radius = 500
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
neighborhood_latitude,
neighborhood_longitude,
radius,
LIMIT)
###Output
_____no_output_____
###Markdown
Now, let's get the top 100 venues that are within a radius of 500 meters of the current location. First, create the GET request URL and name it url. Then send the GET request and examine the results.
###Code
results = requests.get(url).json()
results
###Output
_____no_output_____
###Markdown
From the Foursquare lab in the previous module, we know that all the information is in the items key. Before we proceed, let's borrow the get_category_type function from the Foursquare lab.
###Code
# function that extracts the category of the venue
def get_category_type(row):
try:
categories_list = row['categories']
except:
categories_list = row['venue.categories']
if len(categories_list) == 0:
return None
else:
return categories_list[0]['name']
venues = results['response']['groups'][0]['items']
nearby_venues = json_normalize(venues) # flatten JSON
# filter columns
filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues =nearby_venues.loc[:, filtered_columns]
# filter the category for each row
nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)
# clean columns
nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]
nearby_venues.head()
print('{} venues were returned by Foursquare At the current location.'.format(nearby_venues.shape[0]))
###Output
4 venues were returned by Foursquare At the current location.
###Markdown
Explore neighbourhoods in Toronto
###Code
def getNearbyVenues(names, latitudes, longitudes, radius=500, verbose=False):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
if verbose:
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
toronto_venues = getNearbyVenues(names=toronto['Neighbourhood'],
latitudes=toronto['Latitude'],
longitudes=toronto['Longitude'],
verbose=True
)
###Output
The Beaches
The Danforth West, Riverdale
The Beaches West, India Bazaar
Studio District
Lawrence Park
Davisville North
North Toronto West
Davisville
Moore Park, Summerhill East
Deer Park, Forest Hill SE, Rathnelly, South Hill, Summerhill West
Rosedale
Cabbagetown, St. James Town
Church and Wellesley
Harbourfront, Regent Park
Ryerson, Garden District
St. James Town
Berczy Park
Central Bay Street
Adelaide, King, Richmond
Harbourfront East, Toronto Islands, Union Station
Design Exchange, Toronto Dominion Centre
Commerce Court, Victoria Hotel
Roselawn
Forest Hill North, Forest Hill West
The Annex, North Midtown, Yorkville
Harbord, University of Toronto
Chinatown, Grange Park, Kensington Market
CN Tower, Bathurst Quay, Island airport, Harbourfront West, King and Spadina, Railway Lands, South Niagara
Stn A PO Boxes 25 The Esplanade
First Canadian Place, Underground city
Christie
Dovercourt Village, Dufferin
Little Portugal, Trinity
Brockton, Exhibition Place, Parkdale Village
High Park, The Junction South
Parkdale, Roncesvalles
Runnymede, Swansea
Business Reply Mail Processing Centre 969 Eastern
###Markdown
Analyze Neighbourhoods. Next, we check the venue information in the dataframe.
###Code
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
toronto_onehot.shape
###Output
_____no_output_____
###Markdown
There are 236 venue categories. Next we group the rows by neighbourhood and take the mean frequency of each category.
###Code
toronto_grouped = toronto_onehot.groupby('Neighborhood').mean().reset_index()
toronto_grouped.head()
toronto_grouped.shape
###Output
_____no_output_____
###Markdown
Explore the top 5 most common categories in each neighbourhood. Then sort the venues in descending order and store the information.
###Code
top_venue_num = 5
for hood in toronto_grouped['Neighborhood']:
print("----"+hood+"----")
temp = toronto_grouped[toronto_grouped['Neighborhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(top_venue_num))
print('\n')
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
###Output
_____no_output_____
###Markdown
Cluster Neighbourhoods. Run k-means to cluster the neighbourhoods (6 clusters are used below). Then create a dataframe that includes the cluster label as well as the top 10 venues for each neighbourhood.
###Code
# set number of clusters
kclusters = 6
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
neighborhoods_venues_sorted.columns
# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
neighborhoods_venues_sorted.head(2)#set_index('Neighborhood')
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
toronto_merged = toronto.merge(neighborhoods_venues_sorted, left_on='Neighbourhood', right_on="Neighborhood")
toronto_merged.head() # check the last columns!
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighborhood'], toronto_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Examine Clusters. Examine each cluster and determine the discriminating venue categories that distinguish each cluster. Cluster 1
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 0, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 2
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 1, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 3
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 2, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 4
###Code
toronto_merged.loc[toronto_merged['Cluster Labels'] == 3, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Part 1 of the Assignment - Creating the dataframe. Importing libraries and extracting the table.
###Code
import pandas as pd
# Webpage url
url = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
# Extract tables
dfs = pd.read_html(url)
# print number of tables
print(len(dfs))
# Get first table which is the table of interest
df = dfs[0]
###Output
3
###Markdown
Extract Required columns into a df
###Code
# Extract required columns
df2 = df[['Postal Code','Borough','Neighbourhood']]
###Output
_____no_output_____
###Markdown
Ignore cells with a borough that is Not assigned
###Code
# get rid of rows with Borough value 'Not assigned'
df2 = df2[df2.Borough != 'Not assigned'].reset_index(drop=True)
###Output
_____no_output_____
###Markdown
If a cell has a borough but a Not assigned neighborhood, then the neighborhood will be the same as the borough
###Code
mask = df2['Neighbourhood'] == "Not assigned"
df2.loc[mask,'Neighbourhood'] = df2.loc[mask, 'Borough']
###Output
_____no_output_____
###Markdown
print number of rows of the df
###Code
print(df2.shape[0])
###Output
103
###Markdown
Display dataframe
###Code
df2.head(12)
###Output
_____no_output_____
###Markdown
Part 2 of the assignment - obtaining latitudes and longitudes. Read the csv file with longitude and latitude details.
###Code
df_lng_lat = pd.read_csv('Geospatial_Coordinates.csv')
df_lng_lat.head()
###Output
_____no_output_____
###Markdown
Merge the two dataframes on the common Postal Code column to add latitude and longitude.
###Code
df_merged = df2.merge(df_lng_lat, on="Postal Code", how = 'left')
df_merged.head()
print(df_merged.shape[0])
###Output
103
###Markdown
Part 3 of the assignment - Explore and cluster the neighborhoods in Toronto. Extracting boroughs that contain the word Toronto
###Code
df_merged = df_merged[df_merged['Borough'].str.contains("Toronto")]
df_merged.head()
###Output
_____no_output_____
###Markdown
Create a map of Toronto with neighborhoods superimposed on top.
###Code
import numpy as np # library to handle data in a vectorized manner
import pandas as pd # library for data analysis
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
import json # library to handle JSON files
# !conda install -c conda-forge geopy --yes # uncomment this line if you haven't completed the Foursquare API lab
# from geopy.geocoders import Nominatim # convert an address into latitude and longitude values
import requests # library to handle requests
from pandas.io.json import json_normalize # transform JSON file into a pandas dataframe
# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors
# import k-means from clustering stage
from sklearn.cluster import KMeans
#!conda install -c conda-forge folium=0.5.0 --yes # uncomment this line if you haven't completed the Foursquare API lab
import folium # map rendering library
print('Libraries imported.')
latitude = 43.651070
longitude = -79.347015
# create map of Toronto using latitude and longitude values
map_Toronto = folium.Map(location=[latitude, longitude], zoom_start=11)
# add markers to map
for lat, lng, borough, neighborhood in zip(df_merged['Latitude'], df_merged['Longitude'], df_merged['Borough'], df_merged['Neighbourhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_Toronto)
map_Toronto
###Output
_____no_output_____
###Markdown
Define Foursquare Credentials and Version
###Code
CLIENT_ID = 'GURYN0HXLCV2RLRBQZSKURSEVN5ZVZTB14HYM5DKEON3KGSW' # your Foursquare ID
CLIENT_SECRET = 'W54MVLZU1PPZFODSDSKH3LDDMIZEIRZMCNXXDBNQ5OQPEFB3' # your Foursquare Secret
VERSION = '20180605' # Foursquare API version
LIMIT = 100 # A default Foursquare API limit value
print('Your credentails:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
###Output
Your credentails:
CLIENT_ID: GURYN0HXLCV2RLRBQZSKURSEVN5ZVZTB14HYM5DKEON3KGSW
CLIENT_SECRET:W54MVLZU1PPZFODSDSKH3LDDMIZEIRZMCNXXDBNQ5OQPEFB3
###Markdown
Explore Neighborhoods in Toronto
###Code
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}&query=coffee'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
Toronto_venues = getNearbyVenues(names=df_merged['Neighbourhood'],
latitudes=df_merged['Latitude'],
longitudes=df_merged['Longitude']
)
###Output
Regent Park, Harbourfront
Queen's Park, Ontario Provincial Government
Garden District, Ryerson
St. James Town
The Beaches
Berczy Park
Central Bay Street
Christie
Richmond, Adelaide, King
Dufferin, Dovercourt Village
Harbourfront East, Union Station, Toronto Islands
Little Portugal, Trinity
The Danforth West, Riverdale
Toronto Dominion Centre, Design Exchange
Brockton, Parkdale Village, Exhibition Place
India Bazaar, The Beaches West
Commerce Court, Victoria Hotel
Studio District
Lawrence Park
Roselawn
Davisville North
Forest Hill North & West, Forest Hill Road Park
High Park, The Junction South
North Toronto West, Lawrence Park
The Annex, North Midtown, Yorkville
Parkdale, Roncesvalles
Davisville
University of Toronto, Harbord
Runnymede, Swansea
Moore Park, Summerhill East
Kensington Market, Chinatown, Grange Park
Summerhill West, Rathnelly, South Hill, Forest Hill SE, Deer Park
CN Tower, King and Spadina, Railway Lands, Harbourfront West, Bathurst Quay, South Niagara, Island airport
Rosedale
Stn A PO Boxes
St. James Town, Cabbagetown
First Canadian Place, Underground city
Church and Wellesley
Business reply mail Processing Centre, South Central Letter Processing Plant Toronto
###Markdown
Cluster the neighborhoods
###Code
# set number of clusters
kclusters = 5
# toronto_grouped_clustering = df_merged.drop('Neighbourhood', 1)
toronto_grouped_clustering = df_merged.drop(['Neighbourhood', 'Borough', 'Postal Code'], axis=1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
print(len(kmeans.labels_))
print(toronto_grouped_clustering.shape[0])
# add clustering labels
df_merged.insert(0, 'Cluster Labels', kmeans.labels_)
df_merged.head()
###Output
_____no_output_____
###Markdown
Display clusters on map
###Code
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(df_merged['Latitude'], df_merged['Longitude'], df_merged['Neighbourhood'], df_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Segmenting and Clustering Neighborhoods in Toronto
###Code
#import necessary libraries
import pandas as pd
import numpy as np
from bs4 import BeautifulSoup
import requests
###Output
_____no_output_____
###Markdown
Scrape the data from Wikipedia and convert it into a dataframe.
###Code
source = requests.get('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M').text
soup = BeautifulSoup(source, 'lxml')
table = soup.find('table', {'class':'wikitable sortable'})
df = pd.read_html(str(table))[0]
df.head()
###Output
_____no_output_____
###Markdown
Drop "Not assigned" value in Borough column.
###Code
df = df[df['Borough'] != 'Not assigned']
df.reset_index(drop=True, inplace=True)
df[df['Borough'] == 'Not assigned'].count()
###Output
_____no_output_____
###Markdown
Replace / with ,
###Code
df['Neighborhood'] = df['Neighborhood'].str.replace(' /', ',')
df.head(12)
###Output
_____no_output_____
###Markdown
Fill NaN values in the Neighborhood column with the Borough value and confirm none remain.
###Code
df['Neighborhood'].fillna(df['Borough'], inplace=True)
df.isna().sum()
###Output
_____no_output_____
###Markdown
Show the data
###Code
df.head(12)
###Output
_____no_output_____
###Markdown
Show shape of dataframe
###Code
df.shape
###Output
_____no_output_____
###Markdown
Peer-Graded Assignment: Segmenting and Clustering Neighborhoods in Toronto Import Necessary Libraries
###Code
!pip install beautifulsoup4
!pip install lxml
!pip install html5lib
from bs4 import BeautifulSoup
import lxml
import html5lib
import numpy as np
import pandas as pd
import requests
print('imported')
###Output
Requirement already satisfied: beautifulsoup4 in /opt/conda/envs/Python36/lib/python3.6/site-packages (4.7.1)
Requirement already satisfied: soupsieve>=1.2 in /opt/conda/envs/Python36/lib/python3.6/site-packages (from beautifulsoup4) (1.7.1)
Requirement already satisfied: lxml in /opt/conda/envs/Python36/lib/python3.6/site-packages (4.3.1)
Requirement already satisfied: html5lib in /opt/conda/envs/Python36/lib/python3.6/site-packages (1.0.1)
Requirement already satisfied: six>=1.9 in /opt/conda/envs/Python36/lib/python3.6/site-packages (from html5lib) (1.12.0)
Requirement already satisfied: webencodings in /opt/conda/envs/Python36/lib/python3.6/site-packages (from html5lib) (0.5.1)
imported
###Markdown
Download and Explore the Dataset
###Code
url = "https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
res = requests.get(url)
soup = BeautifulSoup(res.content,"html.parser")
table = soup.find_all('table')[0]
df = pd.read_html(str(table))[0]
###Output
_____no_output_____
###Markdown
Data Cleaning. Processing unassigned cells and setting up the dataframe.
###Code
df.columns = ['PostalCode','Borough','Neighborhood']
toronto_data = df[df['Borough']!= 'Not assigned']
toronto_data = toronto_data.reset_index(drop=True)
toronto_data = toronto_data.groupby("PostalCode").agg(lambda x:','.join(set(x)))
cond = toronto_data['Neighborhood'] == "Not assigned"
toronto_data.loc[cond, 'Neighborhood'] = toronto_data.loc[cond, 'Borough']
toronto_data.reset_index(inplace=True)
toronto_data.set_index(keys='PostalCode')
toronto_data
url = 'http://cocl.us/Geospatial_data'
df_GeoData = pd.read_csv(url)
df_GeoData.rename(columns={'Postal Code':'PostalCode'},inplace=True)
df_GeoData.set_index(keys='PostalCode')
toronto_GeoData = pd.merge(toronto_data, df_GeoData, on='PostalCode' )
toronto_GeoData.head(15)
###Output
_____no_output_____
###Markdown
Part 3 - Explore and cluster the neighborhoods in Toronto
###Code
import numpy as np
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
import json
!pip install geopy
from geopy.geocoders import Nominatim
import requests
from pandas.io.json import json_normalize
import matplotlib.cm as cm
import matplotlib.colors as colors
from sklearn.cluster import KMeans
!pip install folium
import folium
print('imported!')
#work with only boroughs that contain the word Toronto
toronto_boroughs= toronto_GeoData[toronto_GeoData['Borough'].str.contains('Toronto', na = False)].reset_index(drop=True)
toronto_boroughs.head()
toronto_boroughs.shape
###Output
_____no_output_____
###Markdown
The geographical coordinates of Toronto are 43.6532° N, 79.3832° W.
###Code
latitude = 43.6532
longitude = -79.3832
# create map of Toronto using latitude and longitude values
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=11)
# add markers to map
for lat, lng, label in zip(toronto_boroughs['Latitude'], toronto_boroughs['Longitude'],
toronto_boroughs['Neighborhood']):
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_toronto)
map_toronto
###Output
_____no_output_____
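###Markdown
As an aside, these hard-coded coordinates could also be looked up programmatically with geopy's Nominatim geocoder, which was installed above (a minimal sketch; the user_agent string is arbitrary and the returned values may differ slightly from the ones used above):
###Code
# geocode the city name to obtain its latitude and longitude
geolocator = Nominatim(user_agent="toronto_explorer")
location = geolocator.geocode('Toronto, Ontario')
print(location.latitude, location.longitude)
###Output
_____no_output_____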
###Markdown
Define Foursquare Credentials and Version
###Code
CLIENT_ID = 'BEPFM1143I5AMGLPZB3VK0QRYPX1NYB1A3M424XL04RVKLRP' # your Foursquare ID
CLIENT_SECRET = 'IGH3HJBG5XWJF4D1NMQVRLIATICVUZUCBVGYMNHOIMIFDABB' # your Foursquare Secret
VERSION = '20200523' # Foursquare API version
LIMIT = 100
# A function to explore Toronto neighborhoods
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
###Output
_____no_output_____
###Markdown
The code to run the above function on each neighborhood and create a new dataframe called toronto_venues
###Code
toronto_venues = getNearbyVenues(names=toronto_boroughs['Neighborhood'],
latitudes=toronto_boroughs['Latitude'],
longitudes=toronto_boroughs['Longitude']
)
###Output
The Beaches
The Danforth West, Riverdale
India Bazaar, The Beaches West
Studio District
Lawrence Park
Davisville North
North Toronto West
Davisville
Moore Park, Summerhill East
Summerhill West, Rathnelly, South Hill, Forest Hill SE, Deer Park
Rosedale
St. James Town, Cabbagetown
Church and Wellesley
Regent Park, Harbourfront
Garden District, Ryerson
St. James Town
Berczy Park
Central Bay Street
Richmond, Adelaide, King
Harbourfront East, Union Station, Toronto Islands
Toronto Dominion Centre, Design Exchange
Commerce Court, Victoria Hotel
Roselawn
Forest Hill North & West
The Annex, North Midtown, Yorkville
University of Toronto, Harbord
Kensington Market, Chinatown, Grange Park
CN Tower, King and Spadina, Railway Lands, Harbourfront West, Bathurst Quay, South Niagara, Island airport
Stn A PO Boxes
First Canadian Place, Underground city
Christie
Dufferin, Dovercourt Village
Little Portugal, Trinity
Brockton, Parkdale Village, Exhibition Place
High Park, The Junction South
Parkdale, Roncesvalles
Runnymede, Swansea
Queen's Park, Ontario Provincial Government
Business reply mail Processing Centre
###Markdown
Checking the dataframe
###Code
print(toronto_venues.shape)
toronto_venues.head()
###Output
(1613, 7)
###Markdown
Let's check how many venues were returned for each neighborhood
###Code
toronto_venues.groupby('Neighborhood').count()
###Output
_____no_output_____
###Markdown
Let's find out how many unique categories can be curated from all the returned venues
###Code
print('There are {} uniques categories.'.format(len(toronto_venues['Venue Category'].unique())))
###Output
There are 239 uniques categories.
###Markdown
Analyze Each Neighborhood
###Code
# one hot encoding
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
###Output
_____no_output_____
###Markdown
let's examine the new dataframe size.
###Code
toronto_onehot.shape
toronto_grouped = toronto_onehot.groupby('Neighborhood').mean().reset_index()
toronto_grouped.head()
###Output
_____no_output_____
###Markdown
let's confirm the dataframe size
###Code
toronto_grouped.shape
###Output
_____no_output_____
###Markdown
Let's print each neighborhood along with the top 5 most common venues
###Code
num_top_venues = 5
for hood in toronto_grouped['Neighborhood']:
print("----"+hood+"----")
temp = toronto_grouped[toronto_grouped['Neighborhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
# set number of clusters
kclusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', 1)
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
kmeans.labels_[0:10]
toronto_boroughs_merged = toronto_boroughs
toronto_boroughs_merged['Cluster Labels'] = kmeans.labels_
toronto_boroughs_merged = toronto_boroughs_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'),on='Neighborhood')
toronto_boroughs_merged.head()
###Output
_____no_output_____
###Markdown
Finally, let's visualize the resulting clusters
###Code
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_boroughs_merged['Latitude'],
toronto_boroughs_merged['Longitude'],
toronto_boroughs_merged['Neighborhood'],
toronto_boroughs_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Cluster 1
###Code
toronto_boroughs_merged.loc[toronto_boroughs_merged['Cluster Labels'] == 0,
toronto_boroughs_merged.columns[[1] +
list(range(5, toronto_boroughs_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 2
###Code
toronto_boroughs_merged.loc[toronto_boroughs_merged['Cluster Labels'] == 1,
toronto_boroughs_merged.columns[[1] +
list(range(5, toronto_boroughs_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 3
###Code
toronto_boroughs_merged.loc[toronto_boroughs_merged['Cluster Labels'] == 2,
toronto_boroughs_merged.columns[[1] +
list(range(5, toronto_boroughs_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 4
###Code
toronto_boroughs_merged.loc[toronto_boroughs_merged['Cluster Labels'] == 3,
toronto_boroughs_merged.columns[[1] +
list(range(5, toronto_boroughs_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 5
###Code
toronto_boroughs_merged.loc[toronto_boroughs_merged['Cluster Labels'] == 4,
toronto_boroughs_merged.columns[[1] +
list(range(5, toronto_boroughs_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Segmenting and Clustering Neighborhoods in Toronto | Part-1 1. Start by creating a new Notebook for this assignment.2. Use the Notebook to build the code to scrape the following Wikipedia page, https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M, in order to obtain the data that is in the table of postal codes and to transform the data into a pandas dataframe.For this assignment, you will be required to explore and cluster the neighborhoods in Toronto.3. To create the above dataframe:. - The dataframe will consist of three columns: PostalCode, Borough, and Neighborhood.- Only process the cells that have an assigned borough. Ignore cells with a borough that is Not assigned.- More than one neighborhood can exist in one postal code area. For example, in the table on the Wikipedia page, you will notice that M5A is listed twice and has two neighborhoods: Harbourfront and Regent Park. These two rows will be combined into one row with the neighborhoods separated with a comma as shown in row 11 in the above table.- If a cell has a borough but a Not assigned neighborhood, then the neighborhood will be the same as the borough. So for the 9th cell in the table on the Wikipedia page, the value of the Borough and the Neighborhood columns will be Queen's Park.- Clean your Notebook and add Markdown cells to explain your work and any assumptions you are making.- In the last cell of your notebook, use the .shape method to print the number of rows of your dataframe..4. Submit a link to your Notebook on your Github repository. (10 marks)Note: There are different website scraping libraries and packages in Python. For scraping the above table, you can simply use pandas to read the table into a pandas dataframe.Another way, which would help to learn for more complicated cases of web scraping is using the BeautifulSoup package. Here is the package's main documentation page: http://beautiful-soup-4.readthedocs.io/en/latest/The package is so popular that there is a plethora of tutorials and examples on how to use it. Here is a very good Youtube video on how to use the BeautifulSoup package: https://www.youtube.com/watch?v=ng2o98k983kUse pandas, or the BeautifulSoup package, or any other way you are comfortable with to transform the data in the table on the Wikipedia page into the above pandas dataframe. Scraping Wikipedia page and creating a Dataframe and Transforming the data on Wiki page into pandas dataframe. Importing Libraries
###Code
import pandas as pd
import requests
from bs4 import BeautifulSoup
print("Imported!")
###Output
Imported!
###Markdown
Using BeautifulSoup to scrape the list of postal codes from the given Wikipedia page
###Code
url = "https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
extracting_data = requests.get(url).text
wiki_data = BeautifulSoup(extracting_data, 'lxml')
wiki_data
###Output
_____no_output_____
###Markdown
Converting the content of the postal code HTML table into a dataframe
###Code
column_names = ['Postalcode','Borough','Neighborhood']
toronto = pd.DataFrame(columns = column_names)
content = wiki_data.find('div', class_='mw-parser-output')
table = content.table.tbody
postcode = 0
borough = 0
neighborhood = 0
for tr in table.find_all('tr'):
i = 0
for td in tr.find_all('td'):
if i == 0:
postcode = td.text
i = i + 1
elif i == 1:
borough = td.text
i = i + 1
elif i == 2:
neighborhood = td.text.strip('\n').replace(']','')
toronto = toronto.append({'Postalcode': postcode,'Borough': borough,'Neighborhood': neighborhood},ignore_index=True)
# clean dataframe
toronto = toronto[toronto.Borough!='Not assigned']
toronto = toronto[toronto.Borough!= 0]
toronto.reset_index(drop = True, inplace = True)
# replace 'Not assigned' neighborhoods with the borough value (vectorized, so the assignment is applied to the dataframe itself)
mask = toronto['Neighborhood'] == 'Not assigned'
toronto.loc[mask, 'Neighborhood'] = toronto.loc[mask, 'Borough']
df = toronto.groupby(['Postalcode','Borough'])['Neighborhood'].apply(', '.join).reset_index()
df
df.describe()
###Output
_____no_output_____
###Markdown
Data Cleaning | Drop None rows of df and rows which contain the 'Not assigned' value | All "Not assigned" values will be replaced with 'NaN'
###Code
df = df.dropna()
empty = 'Not assigned'
df = df[(df.Postalcode != empty ) & (df.Borough != empty) & (df.Neighborhood != empty)]
df.head()
def neighborhood_list(grouped):
return ', '.join(sorted(grouped['Neighborhood'].tolist()))
grp = df.groupby(['Postalcode', 'Borough'])
df_2 = grp.apply(neighborhood_list).reset_index(name='Neighborhood')
df_2.describe()
print(df_2.shape)
df_2.head()
df_2.to_csv('toronto.csv', index=False)
###Output
_____no_output_____
###Markdown
Finding the required table from which the data is to be retrieved.
###Code
right_table=soup.find('table', class_='wikitable sortable')
right_table
###Output
_____no_output_____
###Markdown
Storing the table column values in different lists
###Code
#Generate lists
A=[]
B=[]
C=[]
for row in right_table.findAll("tr"):
states = row.findAll('th') #To store second column data
cells = row.findAll('td')
if len(cells)==3: #Only extract table body not heading
A.append(cells[0].find(text=True))
B.append(cells[1].find(text=True))
C.append(cells[2].find(text=True))
###Output
_____no_output_____
###Markdown
Make a Pandas Dataframe from the above lists
###Code
#import pandas to convert list to data frame
import pandas as pd
df=pd.DataFrame(A,columns=['Postcode'])
df['Borough']=B
df['Neighbourhood']=C
df
###Output
_____no_output_____
###Markdown
Removing those rows whose Borough value is 'Not assigned'
###Code
df = df.drop(df[(df.Borough == 'Not assigned')].index)
# reset index, because we droped two rows
df.reset_index(drop = True, inplace = True)
df
###Output
_____no_output_____
###Markdown
Combining the rows with more than one neighborhood in one postal code area with the neighborhoods separated with a comma.
###Code
# aggregate the Neighbourhood values of each group into a single comma-separated string
aggregations = {
#'Neighbourhood': lambda x: x.str.cat(sep=", ")
'Neighbourhood': lambda x: ",".join(tuple(x.str.rstrip()))
}
df_final = df.groupby(['Postcode', 'Borough'], as_index=False).agg(aggregations)
df_final
###Output
_____no_output_____
###Markdown
Setting proper column names
###Code
df_final.columns = ['Postcode', 'Borough', 'Neighbourhood']
df_final
###Output
_____no_output_____
###Markdown
Replacing Neighbourhood value with Borough value if Neighbourhood value is Not assigned!
###Code
df_final.loc[df_final['Neighbourhood'] == 'Not assigned', 'Neighbourhood'] = df_final['Borough']
df_final
###Output
_____no_output_____
###Markdown
Showing Dimension of the Dataframe
###Code
df_final.shape
new_df = pd.read_csv("http://cocl.us/Geospatial_data")
new_df
merged_df = pd.merge(df_final, new_df, on=df_final.index, how='outer')
merged_df
merged_df.drop(['key_0', 'Postal Code'], axis=1, inplace=True)
merged_df
!conda install -c conda-forge folium=0.5.0 --yes # uncomment this line if you haven't completed the Foursquare API lab
import folium # map rendering library
!conda install -c conda-forge geopy --yes # uncomment this line if you haven't completed the Foursquare API lab
from geopy.geocoders import Nominatim # convert an address into latitude and longitude values
address = 'Toronto, CA'
geolocator = Nominatim(user_agent="toronto_explorer") # recent geopy versions require a user_agent; the name here is arbitrary
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geographical coordinates of Toronto, CA are {}, {}.'.format(latitude, longitude))
# create map of Toronto using latitude and longitude values
map_totonto = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, borough, neighborhood in zip(merged_df['Latitude'], merged_df['Longitude'], merged_df['Borough'], merged_df['Neighbourhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_totonto)
map_totonto
###Output
_____no_output_____
###Markdown
Define Foursquare Credentials and Version
###Code
CLIENT_ID = 'FUXKTDIP0GJ5PJD43PQZBMVKCRQQ240MZDC0IAQBWNRSIZHY' # your Foursquare ID
CLIENT_SECRET = 'W43NUZK3RP5LRWLPOVFM5I5P5WFKZNSXBT1FK1VKCGWPHEM0' # your Foursquare Secret
VERSION = '20180605' # Foursquare API version
merged_df.loc[75, 'Neighbourhood']
###Output
_____no_output_____
###Markdown
Now, we are going to explore the 'Christie' neighbourhood of 'Downtown Toronto'. Get the neighborhood's latitude and longitude values.
###Code
neighborhood_latitude = merged_df.loc[75, 'Latitude'] # neighborhood latitude value
neighborhood_longitude = merged_df.loc[75, 'Longitude'] # neighborhood longitude value
neighborhood_name = merged_df.loc[75, 'Neighbourhood'] # neighborhood name
print('Latitude and longitude values of {} are {}, {}.'.format(neighborhood_name,
neighborhood_latitude,
neighborhood_longitude))
###Output
Latitude and longitude values of Christie are 43.669542, -79.4225637.
###Markdown
Now, let's get the top 100 venues that are in Christie within a radius of 500 meters.
###Code
# type your answer here
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
neighborhood_latitude,
neighborhood_longitude,
500,
100)
import requests # library to handle requests
from pandas.io.json import json_normalize # transform JSON file into a pandas dataframe
results = requests.get(url).json()
results
# function that extracts the category of the venue
def get_category_type(row):
try:
categories_list = row['categories']
except:
categories_list = row['venue.categories']
if len(categories_list) == 0:
return None
else:
return categories_list[0]['name']
venues = results['response']['groups'][0]['items']
nearby_venues = json_normalize(venues) # flatten JSON
# filter columns
filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues =nearby_venues.loc[:, filtered_columns]
# filter the category for each row
nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)
# clean columns
nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]
nearby_venues
print('{} venues were returned by Foursquare.'.format(nearby_venues.shape[0]))
###Output
16 venues were returned by Foursquare.
###Markdown
Explore Neighborhoods in Toronto. Let's create a function to repeat the same process for all the neighborhoods in Toronto.
###Code
def getNearbyVenues(names, latitudes, longitudes, radius=500, LIMIT=100):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
toronto_venues = getNearbyVenues(names=merged_df['Neighbourhood'],
latitudes=merged_df['Latitude'],
longitudes=merged_df['Longitude']
)
###Output
Rouge,Malvern
Highland Creek,Rouge Hill,Port Union
Guildwood,Morningside,West Hill
Woburn
Cedarbrae
Scarborough Village
East Birchmount Park,Ionview,Kennedy Park
Clairlea,Golden Mile,Oakridge
Cliffcrest,Cliffside,Scarborough Village West
Birch Cliff,Cliffside West
Dorset Park,Scarborough Town Centre,Wexford Heights
Maryvale,Wexford
Agincourt
Clarks Corners,Sullivan,Tam O'Shanter
Agincourt North,L'Amoreaux East,Milliken,Steeles East
L'Amoreaux West,Steeles West
Upper Rouge
Hillcrest Village
Fairview,Henry Farm,Oriole
Bayview Village
Silver Hills,York Mills
Newtonbrook,Willowdale
Willowdale South
York Mills West
Willowdale West
Parkwoods
Don Mills North
Flemingdon Park,Don Mills South
Bathurst Manor,Downsview North,Wilson Heights
Northwood Park,York University
CFB Toronto,Downsview East
Downsview West
Downsview Central
Downsview Northwest
Victoria Village
Woodbine Gardens,Parkview Hill
Woodbine Heights
The Beaches
Leaside
Thorncliffe Park
East Toronto
The Danforth West,Riverdale
The Beaches West,India Bazaar
Studio District
Lawrence Park
Davisville North
North Toronto West
Davisville
Moore Park,Summerhill East
Deer Park,Forest Hill SE,Rathnelly,South Hill,Summerhill West
Rosedale
Cabbagetown,St. James Town
Church and Wellesley
Harbourfront,Regent Park
Ryerson,Garden District
St. James Town
Berczy Park
Central Bay Street
Adelaide,King,Richmond
Harbourfront East,Toronto Islands,Union Station
Design Exchange,Toronto Dominion Centre
Commerce Court,Victoria Hotel
Bedford Park,Lawrence Manor East
Roselawn
Forest Hill North,Forest Hill West
The Annex,North Midtown,Yorkville
Harbord,University of Toronto
Chinatown,Grange Park,Kensington Market
CN Tower,Bathurst Quay,Island airport,Harbourfront West,King and Spadina,Railway Lands,South Niagara
Stn A PO Boxes 25 The Esplanade
First Canadian Place,Underground city
Lawrence Heights,Lawrence Manor
Glencairn
Humewood-Cedarvale
Caledonia-Fairbanks
Christie
Dovercourt Village,Dufferin
Little Portugal,Trinity
Brockton,Exhibition Place,Parkdale Village
Maple Leaf Park,North Park,Upwood Park
Del Ray,Keelsdale,Mount Dennis,Silverthorn
The Junction North,Runnymede
High Park,The Junction South
Parkdale,Roncesvalles
Runnymede,Swansea
Queen's Park
Canada Post Gateway Processing Centre
Business reply mail Processing Centre969 Eastern
Humber Bay Shores,Mimico South,New Toronto
Alderwood,Long Branch
The Kingsway,Montgomery Road,Old Mill North
Humber Bay,King's Mill Park,Kingsway Park South East,Mimico NE,Old Mill South,The Queensway East,Royal York South East,Sunnylea
Kingsway Park South West,Mimico NW,The Queensway West,Royal York South West,South of Bloor
Islington Avenue
Cloverdale,Islington,Martin Grove,Princess Gardens,West Deane Park
Bloordale Gardens,Eringate,Markland Wood,Old Burnhamthorpe
Humber Summit
Emery,Humberlea
Weston
Westmount
Kingsview Village,Martin Grove Gardens,Richview Gardens,St. Phillips
Albion Gardens,Beaumond Heights,Humbergate,Jamestown,Mount Olive,Silverstone,South Steeles,Thistletown
Northwest
###Markdown
Let's check the size of the resulting dataframe
###Code
print(toronto_venues.shape)
toronto_venues.head()
toronto_venues.groupby('Neighborhood').count()
###Output
_____no_output_____
###Markdown
Let's find out how many unique categories can be curated from all the returned venues
###Code
print('There are {} unique categories.'.format(len(toronto_venues['Venue Category'].unique())))
###Output
There are 271 unique categories.
###Markdown
Let's Analyze Each Neighborhood
###Code
# one hot encoding
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_onehot['Neighborhood'] = toronto_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
toronto_onehot.shape
###Output
_____no_output_____
###Markdown
Next, let's group rows by neighborhood and take the mean of the frequency of occurrence of each category
###Code
toronto_grouped = toronto_onehot.groupby('Neighborhood').mean().reset_index()
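# (added sanity check) because the columns are one-hot indicators, each neighbourhood's
# category frequencies should sum to roughly 1.0
print('Row sums across category columns:', toronto_grouped.iloc[:, 1:].sum(axis=1).round(2).unique())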
toronto_grouped
###Output
_____no_output_____
###Markdown
Let's check the new size
###Code
toronto_grouped.shape
###Output
_____no_output_____
###Markdown
Let's print each neighborhood along with the top 5 most common venues
###Code
num_top_venues = 5
for hood in toronto_grouped['Neighborhood']:
print("----"+hood+"----")
temp = toronto_grouped[toronto_grouped['Neighborhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
###Output
----Adelaide,King,Richmond----
venue freq
0 Coffee Shop 0.07
1 Café 0.06
2 American Restaurant 0.04
3 Steakhouse 0.04
4 Thai Restaurant 0.04
----Agincourt----
venue freq
0 Lounge 0.25
1 Breakfast Spot 0.25
2 Clothing Store 0.25
3 Skating Rink 0.25
4 Yoga Studio 0.00
----Agincourt North,L'Amoreaux East,Milliken,Steeles East----
venue freq
0 Playground 0.5
1 Park 0.5
2 Mexican Restaurant 0.0
3 Monument / Landmark 0.0
4 Molecular Gastronomy Restaurant 0.0
----Albion Gardens,Beaumond Heights,Humbergate,Jamestown,Mount Olive,Silverstone,South Steeles,Thistletown----
venue freq
0 Grocery Store 0.17
1 Pizza Place 0.17
2 Discount Store 0.08
3 Beer Store 0.08
4 Fried Chicken Joint 0.08
----Alderwood,Long Branch----
venue freq
0 Pizza Place 0.18
1 Pool 0.09
2 Bank 0.09
3 Dance Studio 0.09
4 Pub 0.09
----Bathurst Manor,Downsview North,Wilson Heights----
venue freq
0 Coffee Shop 0.11
1 Ice Cream Shop 0.06
2 Sandwich Place 0.06
3 Supermarket 0.06
4 Frozen Yogurt Shop 0.06
----Bayview Village----
venue freq
0 Japanese Restaurant 0.25
1 Café 0.25
2 Bank 0.25
3 Chinese Restaurant 0.25
4 Yoga Studio 0.00
----Bedford Park,Lawrence Manor East----
venue freq
0 Fast Food Restaurant 0.08
1 Sushi Restaurant 0.08
2 Italian Restaurant 0.08
3 Coffee Shop 0.08
4 Indian Restaurant 0.04
----Berczy Park----
venue freq
0 Coffee Shop 0.09
1 Cocktail Bar 0.06
2 Cheese Shop 0.04
3 Beer Bar 0.04
4 Seafood Restaurant 0.04
----Birch Cliff,Cliffside West----
venue freq
0 Café 0.25
1 General Entertainment 0.25
2 Skating Rink 0.25
3 College Stadium 0.25
4 Pharmacy 0.00
----Bloordale Gardens,Eringate,Markland Wood,Old Burnhamthorpe----
venue freq
0 Pizza Place 0.17
1 Café 0.17
2 Pharmacy 0.17
3 Beer Store 0.17
4 Liquor Store 0.17
----Brockton,Exhibition Place,Parkdale Village----
venue freq
0 Coffee Shop 0.12
1 Breakfast Spot 0.08
2 Café 0.08
3 Nightclub 0.08
4 Performing Arts Venue 0.08
----Business reply mail Processing Centre969 Eastern----
venue freq
0 Light Rail Station 0.11
1 Yoga Studio 0.06
2 Auto Workshop 0.06
3 Comic Shop 0.06
4 Recording Studio 0.06
----CFB Toronto,Downsview East----
venue freq
0 Park 0.33
1 Bus Stop 0.33
2 Airport 0.33
3 Middle Eastern Restaurant 0.00
4 Monument / Landmark 0.00
----CN Tower,Bathurst Quay,Island airport,Harbourfront West,King and Spadina,Railway Lands,South Niagara----
venue freq
0 Airport Lounge 0.14
1 Airport Terminal 0.14
2 Airport Service 0.14
3 Harbor / Marina 0.07
4 Sculpture Garden 0.07
----Cabbagetown,St. James Town----
venue freq
0 Coffee Shop 0.11
1 Restaurant 0.09
2 Pub 0.04
3 Pizza Place 0.04
4 Italian Restaurant 0.04
----Caledonia-Fairbanks----
venue freq
0 Park 0.33
1 Women's Store 0.17
2 Market 0.17
3 Fast Food Restaurant 0.17
4 Pharmacy 0.17
----Canada Post Gateway Processing Centre----
venue freq
0 Coffee Shop 0.2
1 Hotel 0.2
2 American Restaurant 0.1
3 Burrito Place 0.1
4 Mediterranean Restaurant 0.1
----Cedarbrae----
venue freq
0 Fried Chicken Joint 0.14
1 Bakery 0.14
2 Athletics & Sports 0.14
3 Hakka Restaurant 0.14
4 Caribbean Restaurant 0.14
----Central Bay Street----
venue freq
0 Coffee Shop 0.15
1 Café 0.06
2 Italian Restaurant 0.05
3 Bar 0.04
4 Japanese Restaurant 0.04
----Chinatown,Grange Park,Kensington Market----
venue freq
0 Café 0.08
1 Vegetarian / Vegan Restaurant 0.06
2 Chinese Restaurant 0.05
3 Bar 0.05
4 Mexican Restaurant 0.04
----Christie----
venue freq
0 Café 0.19
1 Grocery Store 0.19
2 Park 0.12
3 Baby Store 0.06
4 Nightclub 0.06
----Church and Wellesley----
venue freq
0 Japanese Restaurant 0.08
1 Coffee Shop 0.06
2 Sushi Restaurant 0.05
3 Burger Joint 0.05
4 Gay Bar 0.05
----Clairlea,Golden Mile,Oakridge----
venue freq
0 Bakery 0.2
1 Bus Line 0.2
2 Ice Cream Shop 0.1
3 Soccer Field 0.1
4 Metro Station 0.1
----Clarks Corners,Sullivan,Tam O'Shanter----
venue freq
0 Pizza Place 0.22
1 Chinese Restaurant 0.11
2 Fried Chicken Joint 0.11
3 Italian Restaurant 0.11
4 Thai Restaurant 0.11
----Cliffcrest,Cliffside,Scarborough Village West----
venue freq
0 Intersection 0.33
1 Motel 0.33
2 American Restaurant 0.33
3 Yoga Studio 0.00
4 Movie Theater 0.00
----Cloverdale,Islington,Martin Grove,Princess Gardens,West Deane Park----
venue freq
0 Bank 1.0
1 Yoga Studio 0.0
2 Miscellaneous Shop 0.0
3 Motel 0.0
4 Monument / Landmark 0.0
----Commerce Court,Victoria Hotel----
venue freq
0 Coffee Shop 0.14
1 Hotel 0.06
2 Café 0.06
3 Restaurant 0.05
4 American Restaurant 0.04
----Davisville----
venue freq
0 Sandwich Place 0.08
1 Dessert Shop 0.08
2 Coffee Shop 0.05
3 Pizza Place 0.05
4 Seafood Restaurant 0.05
----Davisville North----
venue freq
0 Food & Drink Shop 0.12
1 Park 0.12
2 Burger Joint 0.12
3 Dance Studio 0.12
4 Clothing Store 0.12
----Deer Park,Forest Hill SE,Rathnelly,South Hill,Summerhill West----
venue freq
0 Coffee Shop 0.14
1 Pub 0.14
2 Sports Bar 0.07
3 Sushi Restaurant 0.07
4 Supermarket 0.07
----Del Ray,Keelsdale,Mount Dennis,Silverthorn----
venue freq
0 Turkish Restaurant 0.25
1 Skating Rink 0.25
2 Sandwich Place 0.25
3 Discount Store 0.25
4 Mediterranean Restaurant 0.00
----Design Exchange,Toronto Dominion Centre----
venue freq
0 Coffee Shop 0.15
1 Hotel 0.10
2 Café 0.07
3 American Restaurant 0.04
4 Gastropub 0.03
----Don Mills North----
venue freq
0 Café 0.17
1 Pool 0.17
2 Gym / Fitness Center 0.17
3 Caribbean Restaurant 0.17
4 Japanese Restaurant 0.17
----Dorset Park,Scarborough Town Centre,Wexford Heights----
venue freq
0 Indian Restaurant 0.33
1 Latin American Restaurant 0.17
2 Chinese Restaurant 0.17
3 Pet Store 0.17
4 Vietnamese Restaurant 0.17
----Dovercourt Village,Dufferin----
venue freq
0 Supermarket 0.12
1 Bakery 0.12
2 Gas Station 0.06
3 Discount Store 0.06
4 Fast Food Restaurant 0.06
----Downsview Central----
venue freq
0 Business Service 0.25
1 Food Truck 0.25
2 Home Service 0.25
3 Baseball Field 0.25
4 Motel 0.00
----Downsview Northwest----
venue freq
0 Discount Store 0.25
1 Liquor Store 0.25
2 Grocery Store 0.25
3 Athletics & Sports 0.25
4 Yoga Studio 0.00
----Downsview West----
venue freq
0 Grocery Store 0.50
1 Bank 0.25
2 Shopping Mall 0.25
3 Yoga Studio 0.00
4 Miscellaneous Shop 0.00
----East Birchmount Park,Ionview,Kennedy Park----
venue freq
0 Discount Store 0.25
1 Convenience Store 0.12
2 Bus Station 0.12
3 Department Store 0.12
4 Coffee Shop 0.12
----East Toronto----
venue freq
0 Park 0.50
1 Coffee Shop 0.25
2 Convenience Store 0.25
3 Mexican Restaurant 0.00
4 Monument / Landmark 0.00
----Emery,Humberlea----
venue freq
0 Baseball Field 0.5
1 Furniture / Home Store 0.5
2 Yoga Studio 0.0
3 Molecular Gastronomy Restaurant 0.0
4 Modern European Restaurant 0.0
----Fairview,Henry Farm,Oriole----
venue freq
0 Clothing Store 0.16
1 Fast Food Restaurant 0.07
2 Coffee Shop 0.06
3 Cosmetics Shop 0.04
4 Metro Station 0.03
----First Canadian Place,Underground city----
venue freq
0 Coffee Shop 0.12
1 Café 0.08
2 Hotel 0.06
3 Restaurant 0.05
4 Steakhouse 0.04
----Flemingdon Park,Don Mills South----
venue freq
0 Coffee Shop 0.10
1 Asian Restaurant 0.10
2 Gym 0.10
3 Beer Store 0.10
4 Dim Sum Restaurant 0.05
----Forest Hill North,Forest Hill West----
venue freq
0 Jewelry Store 0.25
1 Sushi Restaurant 0.25
2 Trail 0.25
3 Mexican Restaurant 0.25
4 Yoga Studio 0.00
----Glencairn----
venue freq
0 Italian Restaurant 0.25
1 Japanese Restaurant 0.25
2 Bakery 0.25
3 Pub 0.25
4 Mobile Phone Shop 0.00
----Guildwood,Morningside,West Hill----
venue freq
0 Pizza Place 0.17
1 Breakfast Spot 0.17
2 Rental Car Location 0.17
3 Mexican Restaurant 0.17
4 Electronics Store 0.17
----Harbord,University of Toronto----
venue freq
0 Café 0.11
1 Yoga Studio 0.06
2 Bakery 0.06
3 Bookstore 0.06
4 Restaurant 0.06
----Harbourfront East,Toronto Islands,Union Station----
venue freq
0 Coffee Shop 0.14
1 Hotel 0.05
2 Pizza Place 0.04
3 Café 0.04
4 Aquarium 0.04
----Harbourfront,Regent Park----
venue freq
0 Coffee Shop 0.17
1 Park 0.08
2 Bakery 0.08
3 Café 0.06
4 Mexican Restaurant 0.04
----High Park,The Junction South----
venue freq
0 Mexican Restaurant 0.08
1 Bar 0.08
2 Café 0.08
3 Fried Chicken Joint 0.04
4 Italian Restaurant 0.04
----Highland Creek,Rouge Hill,Port Union----
venue freq
0 Bar 1.0
1 Yoga Studio 0.0
2 Motel 0.0
3 Monument / Landmark 0.0
4 Molecular Gastronomy Restaurant 0.0
----Hillcrest Village----
venue freq
0 Dog Run 0.25
1 Pool 0.25
2 Mediterranean Restaurant 0.25
3 Golf Course 0.25
4 Yoga Studio 0.00
----Humber Bay Shores,Mimico South,New Toronto----
venue freq
0 Fast Food Restaurant 0.07
1 Flower Shop 0.07
2 Sandwich Place 0.07
3 Café 0.07
4 Fried Chicken Joint 0.07
----Humber Bay,King's Mill Park,Kingsway Park South East,Mimico NE,Old Mill South,The Queensway East,Royal York South East,Sunnylea----
venue freq
0 Baseball Field 0.5
1 Breakfast Spot 0.5
2 Yoga Studio 0.0
3 Monument / Landmark 0.0
4 Molecular Gastronomy Restaurant 0.0
----Humber Summit----
venue freq
0 Pizza Place 0.5
1 Empanada Restaurant 0.5
2 Movie Theater 0.0
3 Massage Studio 0.0
4 Medical Center 0.0
----Humewood-Cedarvale----
venue freq
0 Playground 0.25
1 Field 0.25
2 Hockey Arena 0.25
3 Trail 0.25
4 Middle Eastern Restaurant 0.00
----Kingsview Village,Martin Grove Gardens,Richview Gardens,St. Phillips----
venue freq
0 Pizza Place 0.33
1 Mobile Phone Shop 0.33
2 Park 0.33
3 Mexican Restaurant 0.00
4 Monument / Landmark 0.00
----Kingsway Park South West,Mimico NW,The Queensway West,Royal York South West,South of Bloor----
venue freq
0 Convenience Store 0.08
1 Thrift / Vintage Store 0.08
2 Fast Food Restaurant 0.08
3 Burger Joint 0.08
4 Sandwich Place 0.08
----L'Amoreaux West,Steeles West----
venue freq
0 Fast Food Restaurant 0.15
1 Chinese Restaurant 0.15
2 Coffee Shop 0.08
3 Breakfast Spot 0.08
4 Sandwich Place 0.08
----Lawrence Heights,Lawrence Manor----
venue freq
0 Clothing Store 0.18
1 Furniture / Home Store 0.09
2 Shoe Store 0.09
3 Accessories Store 0.09
4 Boutique 0.09
----Lawrence Park----
venue freq
0 Dim Sum Restaurant 0.25
1 Bus Line 0.25
2 Swim School 0.25
3 Park 0.25
4 Yoga Studio 0.00
----Leaside----
venue freq
0 Coffee Shop 0.09
1 Sporting Goods Shop 0.09
2 Burger Joint 0.06
3 Breakfast Spot 0.03
4 Smoothie Shop 0.03
----Little Portugal,Trinity----
venue freq
0 Bar 0.12
1 Café 0.06
2 Coffee Shop 0.05
3 Restaurant 0.05
4 Asian Restaurant 0.03
----Maple Leaf Park,North Park,Upwood Park----
venue freq
0 Park 0.25
1 Construction & Landscaping 0.25
2 Bakery 0.25
3 Basketball Court 0.25
4 Middle Eastern Restaurant 0.00
----Maryvale,Wexford----
venue freq
0 Smoke Shop 0.25
1 Breakfast Spot 0.25
2 Bakery 0.25
3 Middle Eastern Restaurant 0.25
4 Mexican Restaurant 0.00
----Moore Park,Summerhill East----
venue freq
0 Playground 0.2
1 Tennis Court 0.2
2 Park 0.2
3 Restaurant 0.2
4 Intersection 0.2
----North Toronto West----
venue freq
0 Coffee Shop 0.11
1 Sporting Goods Shop 0.11
2 Clothing Store 0.11
3 Yoga Studio 0.05
4 Shoe Store 0.05
----Northwest----
venue freq
0 Rental Car Location 0.5
1 Drugstore 0.5
2 Yoga Studio 0.0
3 Miscellaneous Shop 0.0
4 Monument / Landmark 0.0
----Northwood Park,York University----
venue freq
0 Massage Studio 0.17
1 Furniture / Home Store 0.17
2 Metro Station 0.17
3 Coffee Shop 0.17
4 Caribbean Restaurant 0.17
----Parkdale,Roncesvalles----
venue freq
0 Breakfast Spot 0.13
1 Gift Shop 0.13
2 Movie Theater 0.07
3 Restaurant 0.07
4 Piano Bar 0.07
----Parkwoods----
venue freq
0 Fast Food Restaurant 0.33
1 Food & Drink Shop 0.33
2 Park 0.33
3 Mexican Restaurant 0.00
4 Molecular Gastronomy Restaurant 0.00
----Queen's Park----
venue freq
0 Coffee Shop 0.24
1 Japanese Restaurant 0.05
2 Sushi Restaurant 0.05
3 Diner 0.05
4 Gym 0.05
----Rosedale----
venue freq
0 Park 0.50
1 Playground 0.25
2 Trail 0.25
3 Middle Eastern Restaurant 0.00
4 Monument / Landmark 0.00
----Roselawn----
venue freq
0 Garden 0.5
1 Pool 0.5
2 Middle Eastern Restaurant 0.0
3 Motel 0.0
4 Monument / Landmark 0.0
----Rouge,Malvern----
venue freq
0 Fast Food Restaurant 1.0
1 Movie Theater 0.0
2 Martial Arts Dojo 0.0
3 Massage Studio 0.0
4 Medical Center 0.0
----Runnymede,Swansea----
venue freq
0 Coffee Shop 0.11
1 Sushi Restaurant 0.08
2 Café 0.08
3 Pizza Place 0.05
4 Italian Restaurant 0.05
----Ryerson,Garden District----
venue freq
0 Coffee Shop 0.09
1 Clothing Store 0.07
2 Cosmetics Shop 0.04
3 Café 0.04
4 Bar 0.03
----Scarborough Village----
venue freq
0 Women's Store 0.33
1 Construction & Landscaping 0.33
2 Playground 0.33
3 Wine Bar 0.00
4 Movie Theater 0.00
----Silver Hills,York Mills----
venue freq
0 Cafeteria 1.0
1 Miscellaneous Shop 0.0
2 Movie Theater 0.0
3 Motel 0.0
4 Monument / Landmark 0.0
----St. James Town----
venue freq
0 Coffee Shop 0.08
1 Café 0.06
2 Restaurant 0.05
3 Clothing Store 0.04
4 Hotel 0.04
----Stn A PO Boxes 25 The Esplanade----
venue freq
0 Coffee Shop 0.11
1 Café 0.04
2 Restaurant 0.03
3 Cocktail Bar 0.03
4 Hotel 0.03
----Studio District----
venue freq
0 Café 0.10
1 Coffee Shop 0.08
2 Bakery 0.05
3 Gastropub 0.05
4 American Restaurant 0.05
----The Annex,North Midtown,Yorkville----
venue freq
0 Coffee Shop 0.13
1 Sandwich Place 0.13
2 Café 0.13
3 Pizza Place 0.09
4 BBQ Joint 0.04
----The Beaches----
venue freq
0 Music Venue 0.25
1 Coffee Shop 0.25
2 Pub 0.25
3 Middle Eastern Restaurant 0.00
4 Motel 0.00
----The Beaches West,India Bazaar----
venue freq
0 Pizza Place 0.05
1 Intersection 0.05
2 Fast Food Restaurant 0.05
3 Fish & Chips Shop 0.05
4 Burger Joint 0.05
----The Danforth West,Riverdale----
venue freq
0 Greek Restaurant 0.21
1 Coffee Shop 0.10
2 Ice Cream Shop 0.07
3 Bookstore 0.05
4 Italian Restaurant 0.05
----The Junction North,Runnymede----
venue freq
0 Pizza Place 0.25
1 Bus Line 0.25
2 Convenience Store 0.25
3 Bakery 0.25
4 Yoga Studio 0.00
----The Kingsway,Montgomery Road,Old Mill North----
venue freq
0 River 0.5
1 Park 0.5
2 Yoga Studio 0.0
3 Middle Eastern Restaurant 0.0
4 Monument / Landmark 0.0
----Thorncliffe Park----
venue freq
0 Indian Restaurant 0.13
1 Yoga Studio 0.07
2 Pharmacy 0.07
3 Park 0.07
4 Coffee Shop 0.07
----Victoria Village----
venue freq
0 Pizza Place 0.2
1 Coffee Shop 0.2
2 Hockey Arena 0.2
3 Portuguese Restaurant 0.2
4 Intersection 0.2
----Westmount----
venue freq
0 Pizza Place 0.29
1 Coffee Shop 0.14
2 Middle Eastern Restaurant 0.14
3 Sandwich Place 0.14
4 Chinese Restaurant 0.14
----Weston----
venue freq
0 Park 0.5
1 Convenience Store 0.5
2 Mexican Restaurant 0.0
3 Monument / Landmark 0.0
4 Molecular Gastronomy Restaurant 0.0
----Willowdale South----
venue freq
0 Ramen Restaurant 0.09
1 Restaurant 0.09
2 Pizza Place 0.06
3 Sandwich Place 0.06
4 Café 0.06
----Willowdale West----
venue freq
0 Pizza Place 0.25
1 Wine Bar 0.25
2 Pharmacy 0.25
3 Coffee Shop 0.25
4 Middle Eastern Restaurant 0.00
----Woburn----
venue freq
0 Coffee Shop 0.50
1 Insurance Office 0.25
2 Korean Restaurant 0.25
3 Yoga Studio 0.00
4 Mobile Phone Shop 0.00
----Woodbine Gardens,Parkview Hill----
venue freq
0 Pizza Place 0.15
1 Fast Food Restaurant 0.15
2 Rock Climbing Spot 0.08
3 Bank 0.08
4 Athletics & Sports 0.08
----Woodbine Heights----
venue freq
0 Curling Ice 0.14
1 Skating Rink 0.14
2 Asian Restaurant 0.14
3 Cosmetics Shop 0.14
4 Beer Store 0.14
----York Mills West----
venue freq
0 Park 0.5
1 Bank 0.5
2 Middle Eastern Restaurant 0.0
3 Monument / Landmark 0.0
4 Molecular Gastronomy Restaurant 0.0
###Markdown
Let's put that into a *pandas* dataframe. First, let's write a function to sort the venues in descending order.
###Code
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
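# (added usage example) peek at the top 3 categories for the first neighbourhood row
print(return_most_common_venues(toronto_grouped.iloc[0, :], 3))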
###Output
_____no_output_____
###Markdown
Now let's create the new dataframe and display the top 10 venues for each neighborhood.
###Code
import numpy as np # library to handle data in a vectorized manner
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = toronto_grouped['Neighborhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted
###Output
_____no_output_____
###Markdown
Cluster Neighborhoods Now run *k*-means to cluster the neighborhoods into 5 clusters.
###Code
# import k-means from clustering stage
from sklearn.cluster import KMeans
# set number of clusters
kclusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighborhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
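# (added check) how many neighbourhoods ended up in each cluster
print(pd.Series(kmeans.labels_).value_counts().sort_index())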
###Output
_____no_output_____
###Markdown
Let's create a new dataframe that includes the cluster as well as the top 10 venues for each neighborhood.
###Code
toronto_merged = merged_df
# add clustering labels (one label per clustered neighbourhood, aligned with toronto_grouped)
neighborhoods_venues_sorted['Cluster Labels'] = kmeans.labels_
# join neighborhoods_venues_sorted onto toronto_merged to add the cluster label and top 10 venues for each neighbourhood
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighbourhood')
toronto_merged # check the last columns!
###Output
_____no_output_____
###Markdown
Identifying and removing rows with a NaN value in the column Cluster Labels
###Code
toronto_merged['Cluster Labels'] = pd.to_numeric(toronto_merged['Cluster Labels'], errors='coerce')
toronto_merged_filtered = toronto_merged.dropna(subset=['Cluster Labels'])
toronto_merged_filtered
###Output
_____no_output_____
###Markdown
Finally, let's visualize the resulting clusters
###Code
# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i+x+(i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged_filtered['Latitude'], toronto_merged_filtered['Longitude'], toronto_merged_filtered['Neighbourhood'], toronto_merged_filtered['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[int(cluster)-1],
fill=True,
fill_color=rainbow[int(cluster)-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Now Let's Examine the Clusters Cluster 1
###Code
toronto_merged_filtered.loc[toronto_merged_filtered['Cluster Labels'] == 0, toronto_merged_filtered.columns[[1] + list(range(5, toronto_merged_filtered.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 2
###Code
toronto_merged_filtered.loc[toronto_merged_filtered['Cluster Labels'] == 1, toronto_merged_filtered.columns[[1] + list(range(5, toronto_merged_filtered.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 3
###Code
toronto_merged_filtered.loc[toronto_merged_filtered['Cluster Labels'] == 2, toronto_merged_filtered.columns[[1] + list(range(5, toronto_merged_filtered.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 4
###Code
toronto_merged_filtered.loc[toronto_merged_filtered['Cluster Labels'] == 3, toronto_merged_filtered.columns[[1] + list(range(5, toronto_merged_filtered.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 5
###Code
toronto_merged_filtered.loc[toronto_merged_filtered['Cluster Labels'] == 4, toronto_merged_filtered.columns[[1] + list(range(5, toronto_merged_filtered.shape[1]))]]
###Output
_____no_output_____
###Markdown
Segmenting and Clustering Neighbourhoods in Toronto A Coursera Data Science Capstone Assignment
###Code
#import necessary modules
import pandas as pd
import numpy as np
from pandas.io.json import json_normalize # transform JSON file into a pandas dataframe
import matplotlib.cm as cm
import matplotlib.colors as colors
###Output
_____no_output_____
###Markdown
1. Web Scraping for the Toronto Neighbourhood Data Set The data will be scraped from Wikipedia at https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M
###Code
#install needed packages
!pip install lxml html5lib beautifulsoup4 folium
#!pip install folium
#import folium
#!conda install -c conda-forge folium=0.5.0 --yes # uncomment this line if you haven't completed the Foursquare API lab
import folium # map rendering library
import json # library to handle JSON files
from sklearn.cluster import KMeans
URL = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
dfs = pd.read_html(URL)
print('There are {} tables on the page'.format(len(dfs))) # show how many tables pandas found on the page
df = dfs[0] #Inspection shows that our table of interest is the first
df.head()
###Output
_____no_output_____
###Markdown
Renaming column Postal Code to PostalCode
###Code
df.rename(columns={'Postal Code':'PostalCode'},inplace=True)
df.head()
###Output
_____no_output_____
###Markdown
Only process the cells that have an assigned borough. Ignore cells with a borough that is Not assigned.
###Code
df.drop(df[df.Borough=='Not assigned'].index,inplace=True)
df.index = range(len(df))
df.head()
###Output
_____no_output_____
###Markdown
More than one neighborhood can exist in one postal code area. For example, in the table on the Wikipedia page, you will notice that M5A is listed twice and has two neighborhoods: Harbourfront and Regent Park. These two rows will be combined into one row with the neighborhoods separated with a comma as shown in row 11 in the above table.
###Code
df.head()
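# (reference sketch, not applied to df) if the scraped table listed several rows per postal
# code, as older revisions of the Wikipedia page did, they could be combined into one
# comma-separated Neighbourhood entry like this; with the current table it is a no-op.
combined = (df.groupby(['PostalCode', 'Borough'], as_index=False)['Neighbourhood']
              .agg(lambda names: ', '.join(names)))
combined.head()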
###Output
_____no_output_____
###Markdown
If a cell has a borough but a Not assigned neighborhood, then the neighborhood will be the same as the borough.
###Code
df['Neighbourhood'] = np.where(df['Neighbourhood'] == 'Not assigned', df['Borough'], df['Neighbourhood'])
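# (added sanity check) no 'Not assigned' neighbourhoods should remain after the replacement
print('Remaining "Not assigned" neighbourhoods:', (df['Neighbourhood'] == 'Not assigned').sum())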
df.head()
###Output
_____no_output_____
###Markdown
Assumptions:
- The website is available
- The table of interest is available as the first on the page (index 0)
- The schema of the table is assumed consistent as-is
###Code
df.shape
###Output
_____no_output_____
###Markdown
2. Integrating Latitude and Longitude for each Neighbourhood
###Code
CLIENT_ID = 'xxx' # your Foursquare ID
CLIENT_SECRET = 'xxx' # your Foursquare Secret
ACCESS_TOKEN = 'xxx' # your FourSquare Access Token
VERSION = '20180604'
LIMIT = 30
print('Your credentials:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
#!pip install geopy geocoder
from geopy.geocoders import Nominatim # module to convert an address into latitude and longitude values
###Output
_____no_output_____
###Markdown
For reference: an earlier version of this step looped over each postal code and looked up its latitude and longitude with the `geocoder` / `geopy` (Nominatim) packages, retrying the lookup a couple of times per code and writing the results into new `Latitude` and `Longitude` columns. That code is not run here; the coordinates are read from the Geospatial_data CSV below instead.
###Code
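# (reference sketch, assuming geopy is installed) the geocoder-based lookup described above;
# it is defined but not called, since the Geospatial_data CSV below is used instead.
from geopy.geocoders import Nominatim
def lookup_postal_code(postal_code):
    geolocator = Nominatim(user_agent="foursquare_agent")
    location = geolocator.geocode('{}, Toronto, Ontario'.format(postal_code))
    if location is not None:
        return location.latitude, location.longitude
    return None, None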
cvURI = 'https://cocl.us/Geospatial_data'
dfloc = pd.read_csv(cvURI)
dfloc.head()
for index, row in df.iterrows():
    # look up the coordinates for this postal code in the geospatial data
    match = dfloc[dfloc['Postal Code'] == row['PostalCode']]
    if not match.empty:
        df.loc[index, 'Latitude'] = match['Latitude'].values[0]
        df.loc[index, 'Longitude'] = match['Longitude'].values[0]
df.head()
#Create new dataframe merging 2 previous dataframes when matching Postal Codes
#locationDF = pd.merge(dataDF, geoDataDF, on='Postal Code')
#locationDF.head()
###Output
_____no_output_____
###Markdown
3. Neighborhood Exploration Analysis Inspect our data and give basic count summaries.
###Code
neighborhoods = df
print('The dataframe has {} boroughs, {} Postal Codes, and {} neighborhoods.'.format(
len(neighborhoods['Borough'].unique()),len(neighborhoods['PostalCode'].unique()),
len(neighborhoods['Neighbourhood'].unique()),
)
)
#### How many unique neighbourhoods are there?
df.groupby(['Neighbourhood']).count().shape
address = 'Toronto, Ontario, Canada'
geolocator = Nominatim(user_agent="ontario_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of Toronto are {}, {}.'.format(latitude, longitude))
###Output
The geograpical coordinate of Toronto are 43.6534817, -79.3839347.
###Markdown
Create a map of Toronto, Canada with neighborhoods superimposed on top.
###Code
# create map of Toronto using latitude and longitude values
map_toronto = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, borough, neighborhood in zip(neighborhoods['Latitude'], neighborhoods['Longitude'], neighborhoods['Borough'], neighborhoods['Neighbourhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_toronto)
map_toronto
###Output
_____no_output_____
###Markdown
Now let's isolate the neighbourhoods of the North York borough for analysis
###Code
north_york = df[df['Borough']=='North York']
north_york
north_york.loc[0,'Latitude']
address = 'North York, Toronto, Canada'
geolocator = Nominatim(user_agent="ontario_explorer")
location = geolocator.geocode(address)
neighborhood_latitude = location.latitude
neighborhood_longitude = location.longitude
neighborhood_name = 'North York'
print('Latitude and longitude values of {} are {}, {}.'.format(neighborhood_name,
neighborhood_latitude,
neighborhood_longitude))
###Output
Latitude and longitude values of North York are 43.7543263, -79.44911696639593.
###Markdown
Now, let's get the top 50 venues that are around the North York centroid within a radius of 500 meters. First, let's create the GET request URL and name it url.
###Code
# build the Foursquare explore API request URL
LIMIT = 50 # limit of number of venues returned by Foursquare API
radius = 500 # define radius
# create URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
neighborhood_latitude,
neighborhood_longitude,
radius,
LIMIT)
url
###Output
_____no_output_____
###Markdown
Send the GET request and examine the results
###Code
import requests
results = requests.get(url).json()
#results
###Output
_____no_output_____
###Markdown
We reuse the get_category_type function from the class lab
###Code
# function that extracts the category of the venue
def get_category_type(row):
try:
categories_list = row['categories']
except:
categories_list = row['venue.categories']
if len(categories_list) == 0:
return None
else:
return categories_list[0]['name']
venues = results['response']['groups'][0]['items']
nearby_venues = json_normalize(venues) # flatten JSON
# filter columns
filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
nearby_venues =nearby_venues.loc[:, filtered_columns]
# filter the category for each row
nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)
# clean columns
nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]
nearby_venues.head()
print('{} venues were returned by Foursquare.'.format(nearby_venues.shape[0]))
###Output
2 venues were returned by Foursquare.
###Markdown
4 Explore Neighbourhoods in North York
###Code
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
#print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
# gather nearby venues for every North York neighbourhood
north_york_venues = getNearbyVenues(names=north_york['Neighbourhood'],
latitudes=north_york['Latitude'],
longitudes=north_york['Longitude']
)
print(north_york_venues.shape)
north_york_venues.head()
###Output
(222, 7)
###Markdown
Let's check how many venues were returned for each neighborhood
###Code
north_york_venues.groupby('Neighborhood').count()
###Output
_____no_output_____
###Markdown
Let's find out how many unique categories can be curated from all the returned venues
###Code
print('There are {} unique categories.'.format(len(north_york_venues['Venue Category'].unique())))
###Output
There are 96 unique categories.
###Markdown
5. Analyze Each Neighborhood
###Code
# one hot encoding
ny_onehot = pd.get_dummies(north_york_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
ny_onehot['Neighborhood'] = north_york_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [ny_onehot.columns[-1]] + list(ny_onehot.columns[:-1])
ny_onehot = ny_onehot[fixed_columns]
ny_onehot.head()
###Output
_____no_output_____
###Markdown
And let's examine the new dataframe size.
###Code
ny_onehot.shape
###Output
_____no_output_____
###Markdown
Next, let's group rows by neighborhood and take the mean of the frequency of occurrence of each category
###Code
ny_grouped = ny_onehot.groupby('Neighborhood').mean().reset_index()
ny_grouped
###Output
_____no_output_____
###Markdown
Let's confirm the new size
###Code
ny_grouped.shape
###Output
_____no_output_____
###Markdown
Let's print each neighborhood along with the top 5 most common venues
###Code
num_top_venues = 5
for hood in ny_grouped['Neighborhood']:
print("----"+hood+"----")
temp = ny_grouped[ny_grouped['Neighborhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
###Output
----Bathurst Manor, Wilson Heights, Downsview North----
venue freq
0 Coffee Shop 0.09
1 Bank 0.09
2 Park 0.04
3 Fried Chicken Joint 0.04
4 Middle Eastern Restaurant 0.04
----Bayview Village----
venue freq
0 Chinese Restaurant 0.25
1 Café 0.25
2 Bank 0.25
3 Japanese Restaurant 0.25
4 Accessories Store 0.00
----Bedford Park, Lawrence Manor East----
venue freq
0 Sandwich Place 0.08
1 Italian Restaurant 0.08
2 Coffee Shop 0.08
3 Cupcake Shop 0.04
4 Butcher 0.04
----Don Mills----
venue freq
0 Gym 0.12
1 Beer Store 0.08
2 Coffee Shop 0.08
3 Restaurant 0.08
4 Japanese Restaurant 0.08
----Downsview----
venue freq
0 Grocery Store 0.21
1 Park 0.14
2 Home Service 0.07
3 Athletics & Sports 0.07
4 Gym / Fitness Center 0.07
----Fairview, Henry Farm, Oriole----
venue freq
0 Clothing Store 0.14
1 Coffee Shop 0.10
2 Fast Food Restaurant 0.06
3 Women's Store 0.04
4 Bank 0.04
----Glencairn----
venue freq
0 Pizza Place 0.4
1 Japanese Restaurant 0.2
2 Pub 0.2
3 Bakery 0.2
4 Park 0.0
----Hillcrest Village----
venue freq
0 Golf Course 0.2
1 Dog Run 0.2
2 Pool 0.2
3 Mediterranean Restaurant 0.2
4 Fast Food Restaurant 0.2
----Humber Summit----
venue freq
0 Pizza Place 0.33
1 Furniture / Home Store 0.33
2 Intersection 0.33
3 Japanese Restaurant 0.00
4 Park 0.00
----Humberlea, Emery----
venue freq
0 Construction & Landscaping 0.5
1 Baseball Field 0.5
2 Accessories Store 0.0
3 Japanese Restaurant 0.0
4 Park 0.0
----Lawrence Manor, Lawrence Heights----
venue freq
0 Clothing Store 0.50
1 Accessories Store 0.08
2 Boutique 0.08
3 Furniture / Home Store 0.08
4 Event Space 0.08
----North Park, Maple Leaf Park, Upwood Park----
venue freq
0 Construction & Landscaping 0.25
1 Park 0.25
2 Bakery 0.25
3 Basketball Court 0.25
4 Juice Bar 0.00
----Northwood Park, York University----
venue freq
0 Coffee Shop 0.2
1 Furniture / Home Store 0.2
2 Caribbean Restaurant 0.2
3 Bar 0.2
4 Massage Studio 0.2
----Parkwoods----
venue freq
0 Park 0.5
1 Food & Drink Shop 0.5
2 Accessories Store 0.0
3 Japanese Restaurant 0.0
4 Movie Theater 0.0
----Victoria Village----
venue freq
0 Hockey Arena 0.25
1 French Restaurant 0.25
2 Portuguese Restaurant 0.25
3 Coffee Shop 0.25
4 Accessories Store 0.00
----Willowdale, Willowdale East----
venue freq
0 Ramen Restaurant 0.09
1 Pizza Place 0.06
2 Coffee Shop 0.06
3 Restaurant 0.06
4 Café 0.06
----Willowdale, Willowdale West----
venue freq
0 Pizza Place 0.25
1 Coffee Shop 0.25
2 Butcher 0.25
3 Pharmacy 0.25
4 Hobby Shop 0.00
----York Mills West----
venue freq
0 Park 0.5
1 Convenience Store 0.5
2 Accessories Store 0.0
3 Greek Restaurant 0.0
4 Movie Theater 0.0
###Markdown
Let's put that into a pandas dataframe. First, let's write a function to sort the venues in descending order.
###Code
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
###Output
_____no_output_____
###Markdown
Now let's create the new dataframe and display the top 10 venues for each neighborhood.
###Code
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = ny_grouped['Neighborhood']
for ind in np.arange(ny_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(ny_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
###Output
_____no_output_____
###Markdown
6. Cluster Neighborhoods Run k-means to cluster the neighborhoods into 5 clusters.
###Code
# set number of clusters
kclusters = 5
ny_grouped_clustering = ny_grouped.drop('Neighborhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(ny_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
###Output
_____no_output_____
###Markdown
Let's create a new dataframe that includes the cluster as well as the top 10 venues for each neighborhood.
###Code
# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
ny_merged = north_york
# join neighborhoods_venues_sorted onto the North York data to add the cluster label and top 10 venues for each neighbourhood
ny_merged = ny_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighbourhood')
ny_merged.head() # check the last columns!
###Output
_____no_output_____
###Markdown
Finally, let's visualize the resulting clusters
###Code
# create map
map_clusters = folium.Map(location=[neighborhood_latitude, neighborhood_longitude], zoom_start=11)
import math
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(ny_merged['Latitude'], ny_merged['Longitude'], ny_merged['Neighbourhood'], ny_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
if(math.isnan(cluster)):
cluster = 1
else:
cluster = int(cluster)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
7. Examine Clusters Now, we can examine each cluster and determine the discriminating venue categories that distinguish each cluster. Based on the defining categories, you can then assign a name to each cluster.
###Code
ny_merged.loc[ny_merged['Cluster Labels'] == 0, ny_merged.columns[[1] + list(range(5, ny_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Question 1 Use the BeautifulSoup package (or any other way you are comfortable with) to transform the data in the table on the Wikipedia page into the above pandas dataframe. Importing libraries to get the data in the required format.
###Code
import requests
website_url = requests.get('https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M').text
from bs4 import BeautifulSoup
soup = BeautifulSoup(website_url,'lxml')
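# (added sketch) one way to pull the wikitable rows into a dataframe with BeautifulSoup;
# the column names are assumptions based on the table header on the page.
import pandas as pd
table = soup.find('table', class_='wikitable')
rows = []
for tr in table.find_all('tr')[1:]:  # skip the header row
    cells = [td.get_text(strip=True) for td in tr.find_all('td')]
    if len(cells) == 3:
        rows.append(cells)
bs_df = pd.DataFrame(rows, columns=['PostalCode', 'Borough', 'Neighbourhood'])
bs_df.head()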
print(soup.prettify())
###Output
<!DOCTYPE html>
<html class="client-nojs" dir="ltr" lang="en">
<head>
<meta charset="utf-8"/>
<title>
List of postal codes of Canada: M - Wikipedia
</title>
<script>
document.documentElement.className = document.documentElement.className.replace( /(^|\s)client-nojs(\s|$)/, "$1client-js$2" );
</script>
<script>
(window.RLQ=window.RLQ||[]).push(function(){mw.config.set({"wgCanonicalNamespace":"","wgCanonicalSpecialPageName":false,"wgNamespaceNumber":0,"wgPageName":"List_of_postal_codes_of_Canada:_M","wgTitle":"List of postal codes of Canada: M","wgCurRevisionId":876823784,"wgRevisionId":876823784,"wgArticleId":539066,"wgIsArticle":true,"wgIsRedirect":false,"wgAction":"view","wgUserName":null,"wgUserGroups":["*"],"wgCategories":["Communications in Ontario","Postal codes in Canada","Toronto","Ontario-related lists"],"wgBreakFrames":false,"wgPageContentLanguage":"en","wgPageContentModel":"wikitext","wgSeparatorTransformTable":["",""],"wgDigitTransformTable":["",""],"wgDefaultDateFormat":"dmy","wgMonthNames":["","January","February","March","April","May","June","July","August","September","October","November","December"],"wgMonthNamesShort":["","Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"],"wgRelevantPageName":"List_of_postal_codes_of_Canada:_M","wgRelevantArticleId":539066,"wgRequestId":"XEMEaApAICAAAJhnpFAAAABP","wgCSPNonce":false,"wgIsProbablyEditable":true,"wgRelevantPageIsProbablyEditable":true,"wgRestrictionEdit":[],"wgRestrictionMove":[],"wgFlaggedRevsParams":{"tags":{}},"wgStableRevisionId":null,"wgCategoryTreePageCategoryOptions":"{\"mode\":0,\"hideprefix\":20,\"showcount\":true,\"namespaces\":false}","wgWikiEditorEnabledModules":[],"wgBetaFeaturesFeatures":[],"wgMediaViewerOnClick":true,"wgMediaViewerEnabledByDefault":true,"wgPopupsShouldSendModuleToUser":true,"wgPopupsConflictsWithNavPopupGadget":false,"wgVisualEditor":{"pageLanguageCode":"en","pageLanguageDir":"ltr","pageVariantFallbacks":"en","usePageImages":true,"usePageDescriptions":true},"wgMFExpandAllSectionsUserOption":true,"wgMFEnableFontChanger":true,"wgMFDisplayWikibaseDescriptions":{"search":true,"nearby":true,"watchlist":true,"tagline":false},"wgRelatedArticles":null,"wgRelatedArticlesUseCirrusSearch":true,"wgRelatedArticlesOnlyUseCirrusSearch":false,"wgWMESchemaEditAttemptStepOversample":false,"wgULSCurrentAutonym":"English","wgNoticeProject":"wikipedia","wgCentralNoticeCookiesToDelete":[],"wgCentralNoticeCategoriesUsingLegacy":["Fundraising","fundraising"],"wgWikibaseItemId":"Q3248240","wgScoreNoteLanguages":{"arabic":"العربية","catalan":"català","deutsch":"Deutsch","english":"English","espanol":"español","italiano":"italiano","nederlands":"Nederlands","norsk":"norsk","portugues":"português","suomi":"suomi","svenska":"svenska","vlaams":"West-Vlams"},"wgScoreDefaultNoteLanguage":"nederlands","wgCentralAuthMobileDomain":false,"wgCodeMirrorEnabled":true,"wgVisualEditorToolbarScrollOffset":0,"wgVisualEditorUnsupportedEditParams":["undo","undoafter","veswitched"],"wgEditSubmitButtonLabelPublish":true});mw.loader.state({"ext.gadget.charinsert-styles":"ready","ext.globalCssJs.user.styles":"ready","ext.globalCssJs.site.styles":"ready","site.styles":"ready","noscript":"ready","user.styles":"ready","ext.globalCssJs.user":"ready","ext.globalCssJs.site":"ready","user":"ready","user.options":"ready","user.tokens":"loading","ext.cite.styles":"ready","mediawiki.legacy.shared":"ready","mediawiki.legacy.commonPrint":"ready","wikibase.client.init":"ready","ext.visualEditor.desktopArticleTarget.noscript":"ready","ext.uls.interlanguage":"ready","ext.wikimediaBadges":"ready","ext.3d.styles":"ready","mediawiki.skinning.interface":"ready","skins.vector.styles":"ready"});mw.loader.implement("user.tokens@0tffind",function($,jQuery,require,module){/*@nomin*/mw.user.tokens.set({"editToken":"+\\","patrolToken":"+\\","watchToken"
:"+\\","csrfToken":"+\\"});
});RLPAGEMODULES=["ext.cite.ux-enhancements","site","mediawiki.page.startup","mediawiki.page.ready","jquery.tablesorter","mediawiki.searchSuggest","ext.gadget.teahouse","ext.gadget.ReferenceTooltips","ext.gadget.watchlist-notice","ext.gadget.DRN-wizard","ext.gadget.charinsert","ext.gadget.refToolbar","ext.gadget.extra-toolbar-buttons","ext.gadget.switcher","ext.centralauth.centralautologin","mmv.head","mmv.bootstrap.autostart","ext.popups","ext.visualEditor.desktopArticleTarget.init","ext.visualEditor.targetLoader","ext.eventLogging","ext.wikimediaEvents","ext.navigationTiming","ext.uls.eventlogger","ext.uls.init","ext.uls.compactlinks","ext.uls.interface","ext.quicksurveys.init","ext.centralNotice.geoIP","ext.centralNotice.startUp","skins.vector.js"];mw.loader.load(RLPAGEMODULES);});
</script>
<link href="/w/load.php?debug=false&lang=en&modules=ext.3d.styles%7Cext.cite.styles%7Cext.uls.interlanguage%7Cext.visualEditor.desktopArticleTarget.noscript%7Cext.wikimediaBadges%7Cmediawiki.legacy.commonPrint%2Cshared%7Cmediawiki.skinning.interface%7Cskins.vector.styles%7Cwikibase.client.init&only=styles&skin=vector" rel="stylesheet"/>
<script async="" src="/w/load.php?debug=false&lang=en&modules=startup&only=scripts&skin=vector">
</script>
<meta content="" name="ResourceLoaderDynamicStyles"/>
<link href="/w/load.php?debug=false&lang=en&modules=ext.gadget.charinsert-styles&only=styles&skin=vector" rel="stylesheet"/>
<link href="/w/load.php?debug=false&lang=en&modules=site.styles&only=styles&skin=vector" rel="stylesheet"/>
<meta content="MediaWiki 1.33.0-wmf.13" name="generator"/>
<meta content="origin" name="referrer"/>
<meta content="origin-when-crossorigin" name="referrer"/>
<meta content="origin-when-cross-origin" name="referrer"/>
<link href="android-app://org.wikipedia/http/en.m.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M" rel="alternate"/>
<link href="/w/index.php?title=List_of_postal_codes_of_Canada:_M&action=edit" rel="alternate" title="Edit this page" type="application/x-wiki"/>
<link href="/w/index.php?title=List_of_postal_codes_of_Canada:_M&action=edit" rel="edit" title="Edit this page"/>
<link href="/static/apple-touch/wikipedia.png" rel="apple-touch-icon"/>
<link href="/static/favicon/wikipedia.ico" rel="shortcut icon"/>
<link href="/w/opensearch_desc.php" rel="search" title="Wikipedia (en)" type="application/opensearchdescription+xml"/>
<link href="//en.wikipedia.org/w/api.php?action=rsd" rel="EditURI" type="application/rsd+xml"/>
<link href="//creativecommons.org/licenses/by-sa/3.0/" rel="license"/>
<link href="https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M" rel="canonical"/>
<link href="//login.wikimedia.org" rel="dns-prefetch"/>
<link href="//meta.wikimedia.org" rel="dns-prefetch"/>
<!--[if lt IE 9]><script src="/w/load.php?debug=false&lang=en&modules=html5shiv&only=scripts&skin=vector&sync=1"></script><![endif]-->
</head>
<body class="mediawiki ltr sitedir-ltr mw-hide-empty-elt ns-0 ns-subject mw-editable page-List_of_postal_codes_of_Canada_M rootpage-List_of_postal_codes_of_Canada_M skin-vector action-view">
<div class="noprint" id="mw-page-base">
</div>
<div class="noprint" id="mw-head-base">
</div>
<div class="mw-body" id="content" role="main">
<a id="top">
</a>
<div class="mw-body-content" id="siteNotice">
<!-- CentralNotice -->
</div>
<div class="mw-indicators mw-body-content">
</div>
<h1 class="firstHeading" id="firstHeading" lang="en">
List of postal codes of Canada: M
</h1>
<div class="mw-body-content" id="bodyContent">
<div class="noprint" id="siteSub">
From Wikipedia, the free encyclopedia
</div>
<div id="contentSub">
</div>
<div id="jump-to-nav">
</div>
<a class="mw-jump-link" href="#mw-head">
Jump to navigation
</a>
<a class="mw-jump-link" href="#p-search">
Jump to search
</a>
<div class="mw-content-ltr" dir="ltr" id="mw-content-text" lang="en">
<div class="mw-parser-output">
<p>
This is a list of
<a href="/wiki/Postal_codes_in_Canada" title="Postal codes in Canada">
postal codes in Canada
</a>
where the first letter is M. Postal codes beginning with M are located within the city of
<a href="/wiki/Toronto" title="Toronto">
Toronto
</a>
in the province of
<a href="/wiki/Ontario" title="Ontario">
Ontario
</a>
. Only the first three characters are listed, corresponding to the Forward Sortation Area.
</p>
<p>
<a href="/wiki/Canada_Post" title="Canada Post">
Canada Post
</a>
provides a free postal code look-up tool on its website,
<sup class="reference" id="cite_ref-1">
<a href="#cite_note-1">
[1]
</a>
</sup>
via its
<a href="/wiki/Mobile_app" title="Mobile app">
applications
</a>
for such
<a class="mw-redirect" href="/wiki/Smartphones" title="Smartphones">
smartphones
</a>
as the
<a href="/wiki/IPhone" title="IPhone">
iPhone
</a>
and
<a href="/wiki/BlackBerry" title="BlackBerry">
BlackBerry
</a>
,
<sup class="reference" id="cite_ref-2">
<a href="#cite_note-2">
[2]
</a>
</sup>
and sells hard-copy directories and
<a href="/wiki/CD-ROM" title="CD-ROM">
CD-ROMs
</a>
. Many vendors also sell validation tools, which allow customers to properly match addresses and postal codes. Hard-copy directories can also be consulted in all post offices, and some libraries.
</p>
<h2>
<span class="mw-headline" id="Toronto_-_FSAs">
<a href="/wiki/Toronto" title="Toronto">
Toronto
</a>
-
<a href="/wiki/Postal_codes_in_Canada#Forward_sortation_areas" title="Postal codes in Canada">
FSAs
</a>
</span>
<span class="mw-editsection">
<span class="mw-editsection-bracket">
[
</span>
<a href="/w/index.php?title=List_of_postal_codes_of_Canada:_M&action=edit&section=1" title="Edit section: Toronto - FSAs">
edit
</a>
<span class="mw-editsection-bracket">
]
</span>
</span>
</h2>
<p>
Note: There are no rural FSAs in Toronto, hence no postal codes start with M0.
</p>
<table class="wikitable sortable">
<tbody>
<tr>
<th>
Postcode
</th>
<th>
Borough
</th>
<th>
Neighbourhood
</th>
</tr>
<tr>
<td>
M1A
</td>
<td>
Not assigned
</td>
<td>
Not assigned
</td>
</tr>
<tr>
<td>
M2A
</td>
<td>
Not assigned
</td>
<td>
Not assigned
</td>
</tr>
<tr>
<td>
M3A
</td>
<td>
<a href="/wiki/North_York" title="North York">
North York
</a>
</td>
<td>
<a href="/wiki/Parkwoods" title="Parkwoods">
Parkwoods
</a>
</td>
</tr>
<tr>
<td>
M4A
</td>
<td>
<a href="/wiki/North_York" title="North York">
North York
</a>
</td>
<td>
<a href="/wiki/Victoria_Village" title="Victoria Village">
Victoria Village
</a>
</td>
</tr>
<tr>
<td>
M5A
</td>
<td>
<a href="/wiki/Downtown_Toronto" title="Downtown Toronto">
Downtown Toronto
</a>
</td>
<td>
<a href="/wiki/Harbourfront_(Toronto)" title="Harbourfront (Toronto)">
Harbourfront
</a>
</td>
</tr>
<tr>
<td>
M5A
</td>
<td>
<a href="/wiki/Downtown_Toronto" title="Downtown Toronto">
Downtown Toronto
</a>
</td>
<td>
[Scraped Wikipedia table output, continued: the remaining rows keep the same three-column Postcode / Borough / Neighbourhood layout and run through M9Z. Boroughs include Scarborough, North York, East York, York, East Toronto, West Toronto, Central Toronto, Downtown Toronto, Etobicoke, and Mississauga (M7R, the Canada Post Gateway Processing Centre); postal codes not in use are listed with "Not assigned" in both the Borough and Neighbourhood columns.]
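Before the scraped table can be used for anything downstream it has to be turned into a tidy tabular structure. The sketch below is illustrative only, not the notebook's own code: the variable `html_source`, the helper name `parse_postal_table`, and the `soup.find("table")` selector are all assumptions made for the example.
###Code
# Illustrative sketch: parse the scraped Postcode/Borough/Neighbourhood table
# into a pandas DataFrame. `html_source` (assumed) holds the raw page HTML.
import pandas as pd
from bs4 import BeautifulSoup
def parse_postal_table(html_source):
soup = BeautifulSoup(html_source, "html.parser")
rows = []
# Note: soup.find("table") picks the first table on the page; a more specific
# selector may be needed if the full page (with its navigation tables) is parsed.
for tr in soup.find("table").find_all("tr"):
# Each data row of the postal-code table has exactly three <td> cells.
cells = [td.get_text(strip=True) for td in tr.find_all("td")]
if len(cells) == 3:
rows.append(cells)
df = pd.DataFrame(rows, columns=["Postcode", "Borough", "Neighbourhood"])
# Drop FSAs that are not in use at all.
df = df[df["Borough"] != "Not assigned"].reset_index(drop=True)
# An assigned borough with an unassigned neighbourhood keeps the borough
# name (e.g. M7A / Queen's Park in the table above).
df.loc[df["Neighbourhood"] == "Not assigned", "Neighbourhood"] = df["Borough"]
# One row per FSA: join neighbourhoods that share a postal code.
return (df.groupby(["Postcode", "Borough"])["Neighbourhood"]
.apply(", ".join)
.reset_index())
# Usage (assuming `html_source` holds the fetched page text):
# toronto_df = parse_postal_table(html_source)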
Most populated FSAs [3]
1. M1B, 65,129
2. M2N, 60,124
3. M1V, 55,250
4. M9V, 55,159
5. M2J, 54,391
Least populated FSAs [3]
1. M5K, 5
2. M5L, 5
3. M5W, 5
4. M5X, 5
5. M7A, 5
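The two rankings above quote 2006 census populations for individual FSAs [3]. As a further illustrative sketch (assuming `toronto_df` is the DataFrame produced by the previous snippet), those figures can be kept in a small dict keyed by FSA and mapped onto the parsed table:
###Code
# Illustrative sketch: attach the census figures quoted above to the parsed FSAs.
# `toronto_df` (assumed) is the DataFrame returned by parse_postal_table().
fsa_population = {
# Most populated FSAs [3]
"M1B": 65129, "M2N": 60124, "M1V": 55250, "M9V": 55159, "M2J": 54391,
# Least populated FSAs [3]
"M5K": 5, "M5L": 5, "M5W": 5, "M5X": 5, "M7A": 5,
}
toronto_df["Population"] = toronto_df["Postcode"].map(fsa_population)
# FSAs without a quoted figure are left as NaN.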
References
[1] Canada Post. "Canada Post - Find a Postal Code". https://www.canadapost.ca/cpotools/apps/fpc/personal/findByCity?execution=e2s1. Retrieved 9 November 2008.
[2] "Mobile Apps". Canada Post. Archived from the original on 2011-05-19. https://web.archive.org/web/20110519093024/http://www.canadapost.ca/cpo/mc/personal/tools/mobileapp/default.jsf
[3] "2006 Census of Population". 15 October 2008. http://www12.statcan.ca/english/census06/data/popdwell/Table.cfm?T=1201&SR=1&S=0&O=A&RPP=9999&PR=0&CMA=0
Retrieved from "https://en.wikipedia.org/w/index.php?title=List_of_postal_codes_of_Canada:_M&oldid=876823784"
</a>
</li>
<li id="n-contactpage">
<a href="//en.wikipedia.org/wiki/Wikipedia:Contact_us" title="How to contact Wikipedia">
Contact page
</a>
</li>
</ul>
</div>
</div>
<div aria-labelledby="p-tb-label" class="portal" id="p-tb" role="navigation">
<h3 id="p-tb-label">
Tools
</h3>
<div class="body">
<ul>
<li id="t-whatlinkshere">
<a accesskey="j" href="/wiki/Special:WhatLinksHere/List_of_postal_codes_of_Canada:_M" title="List of all English Wikipedia pages containing links to this page [j]">
What links here
</a>
</li>
<li id="t-recentchangeslinked">
<a accesskey="k" href="/wiki/Special:RecentChangesLinked/List_of_postal_codes_of_Canada:_M" rel="nofollow" title="Recent changes in pages linked from this page [k]">
Related changes
</a>
</li>
<li id="t-upload">
<a accesskey="u" href="/wiki/Wikipedia:File_Upload_Wizard" title="Upload files [u]">
Upload file
</a>
</li>
<li id="t-specialpages">
<a accesskey="q" href="/wiki/Special:SpecialPages" title="A list of all special pages [q]">
Special pages
</a>
</li>
<li id="t-permalink">
<a href="/w/index.php?title=List_of_postal_codes_of_Canada:_M&oldid=876823784" title="Permanent link to this revision of the page">
Permanent link
</a>
</li>
<li id="t-info">
<a href="/w/index.php?title=List_of_postal_codes_of_Canada:_M&action=info" title="More information about this page">
Page information
</a>
</li>
<li id="t-wikibase">
<a accesskey="g" href="https://www.wikidata.org/wiki/Special:EntityPage/Q3248240" title="Link to connected data repository item [g]">
Wikidata item
</a>
</li>
<li id="t-cite">
<a href="/w/index.php?title=Special:CiteThisPage&page=List_of_postal_codes_of_Canada%3A_M&id=876823784" title="Information on how to cite this page">
Cite this page
</a>
</li>
</ul>
</div>
</div>
<div aria-labelledby="p-coll-print_export-label" class="portal" id="p-coll-print_export" role="navigation">
<h3 id="p-coll-print_export-label">
Print/export
</h3>
<div class="body">
<ul>
<li id="coll-create_a_book">
<a href="/w/index.php?title=Special:Book&bookcmd=book_creator&referer=List+of+postal+codes+of+Canada%3A+M">
Create a book
</a>
</li>
<li id="coll-download-as-rdf2latex">
<a href="/w/index.php?title=Special:ElectronPdf&page=List+of+postal+codes+of+Canada%3A+M&action=show-download-screen">
Download as PDF
</a>
</li>
<li id="t-print">
<a accesskey="p" href="/w/index.php?title=List_of_postal_codes_of_Canada:_M&printable=yes" title="Printable version of this page [p]">
Printable version
</a>
</li>
</ul>
</div>
</div>
<div aria-labelledby="p-lang-label" class="portal" id="p-lang" role="navigation">
<h3 id="p-lang-label">
Languages
</h3>
<div class="body">
<ul>
<li class="interlanguage-link interwiki-fr">
<a class="interlanguage-link-target" href="https://fr.wikipedia.org/wiki/Liste_des_codes_postaux_canadiens_d%C3%A9butant_par_M" hreflang="fr" lang="fr" title="Liste des codes postaux canadiens débutant par M – French">
Français
</a>
</li>
</ul>
<div class="after-portlet after-portlet-lang">
<span class="wb-langlinks-edit wb-langlinks-link">
<a class="wbc-editpage" href="https://www.wikidata.org/wiki/Special:EntityPage/Q3248240#sitelinks-wikipedia" title="Edit interlanguage links">
Edit links
</a>
</span>
</div>
</div>
</div>
</div>
</div>
<div id="footer" role="contentinfo">
<ul id="footer-info">
<li id="footer-info-lastmod">
This page was last edited on 4 January 2019, at 18:32
<span class="anonymous-show">
(UTC)
</span>
.
</li>
<li id="footer-info-copyright">
Text is available under the
<a href="//en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License" rel="license">
Creative Commons Attribution-ShareAlike License
</a>
<a href="//creativecommons.org/licenses/by-sa/3.0/" rel="license" style="display:none;">
</a>
;
additional terms may apply. By using this site, you agree to the
<a href="//foundation.wikimedia.org/wiki/Terms_of_Use">
Terms of Use
</a>
and
<a href="//foundation.wikimedia.org/wiki/Privacy_policy">
Privacy Policy
</a>
. Wikipedia® is a registered trademark of the
<a href="//www.wikimediafoundation.org/">
Wikimedia Foundation, Inc.
</a>
, a non-profit organization.
</li>
</ul>
<ul id="footer-places">
<li id="footer-places-privacy">
<a class="extiw" href="https://foundation.wikimedia.org/wiki/Privacy_policy" title="wmf:Privacy policy">
Privacy policy
</a>
</li>
<li id="footer-places-about">
<a href="/wiki/Wikipedia:About" title="Wikipedia:About">
About Wikipedia
</a>
</li>
<li id="footer-places-disclaimer">
<a href="/wiki/Wikipedia:General_disclaimer" title="Wikipedia:General disclaimer">
Disclaimers
</a>
</li>
<li id="footer-places-contact">
<a href="//en.wikipedia.org/wiki/Wikipedia:Contact_us">
Contact Wikipedia
</a>
</li>
<li id="footer-places-developers">
<a href="https://www.mediawiki.org/wiki/Special:MyLanguage/How_to_contribute">
Developers
</a>
</li>
<li id="footer-places-cookiestatement">
<a href="https://foundation.wikimedia.org/wiki/Cookie_statement">
Cookie statement
</a>
</li>
<li id="footer-places-mobileview">
<a class="noprint stopMobileRedirectToggle" href="//en.m.wikipedia.org/w/index.php?title=List_of_postal_codes_of_Canada:_M&mobileaction=toggle_view_mobile">
Mobile view
</a>
</li>
</ul>
<ul class="noprint" id="footer-icons">
<li id="footer-copyrightico">
<a href="https://wikimediafoundation.org/">
<img alt="Wikimedia Foundation" height="31" src="/static/images/wikimedia-button.png" srcset="/static/images/wikimedia-button-1.5x.png 1.5x, /static/images/wikimedia-button-2x.png 2x" width="88"/>
</a>
</li>
<li id="footer-poweredbyico">
<a href="//www.mediawiki.org/">
<img alt="Powered by MediaWiki" height="31" src="/static/images/poweredby_mediawiki_88x31.png" srcset="/static/images/poweredby_mediawiki_132x47.png 1.5x, /static/images/poweredby_mediawiki_176x62.png 2x" width="88"/>
</a>
</li>
</ul>
<div style="clear: both;">
</div>
</div>
<script>
(window.RLQ=window.RLQ||[]).push(function(){mw.config.set({"wgPageParseReport":{"limitreport":{"cputime":"0.216","walltime":"0.264","ppvisitednodes":{"value":587,"limit":1000000},"ppgeneratednodes":{"value":0,"limit":1500000},"postexpandincludesize":{"value":10232,"limit":2097152},"templateargumentsize":{"value":13,"limit":2097152},"expansiondepth":{"value":4,"limit":40},"expensivefunctioncount":{"value":0,"limit":500},"unstrip-depth":{"value":1,"limit":20},"unstrip-size":{"value":9025,"limit":5000000},"entityaccesscount":{"value":0,"limit":400},"timingprofile":["100.00% 119.829 1 -total"," 79.52% 95.288 3 Template:Cite_web"," 4.73% 5.667 1 Template:Col-2"," 4.52% 5.419 1 Template:Canadian_postal_codes"," 3.05% 3.653 1 Template:Col-begin"," 2.88% 3.454 1 Template:Col-break"," 1.97% 2.359 2 Template:Col-end"]},"scribunto":{"limitreport-timeusage":{"value":"0.052","limit":"10.000"},"limitreport-memusage":{"value":1746922,"limit":52428800}},"cachereport":{"origin":"mw1271","timestamp":"20190119110414","ttl":1900800,"transientcontent":false}}});});
</script>
<script type="application/ld+json">
{"@context":"https:\/\/schema.org","@type":"Article","name":"List of postal codes of Canada: M","url":"https:\/\/en.wikipedia.org\/wiki\/List_of_postal_codes_of_Canada:_M","sameAs":"http:\/\/www.wikidata.org\/entity\/Q3248240","mainEntity":"http:\/\/www.wikidata.org\/entity\/Q3248240","author":{"@type":"Organization","name":"Contributors to Wikimedia projects"},"publisher":{"@type":"Organization","name":"Wikimedia Foundation, Inc.","logo":{"@type":"ImageObject","url":"https:\/\/www.wikimedia.org\/static\/images\/wmf-hor-googpub.png"}},"datePublished":"2004-03-20T10:02:13Z","dateModified":"2019-01-04T18:32:45Z","headline":"Wikimedia list article"}
</script>
<script>
(window.RLQ=window.RLQ||[]).push(function(){mw.config.set({"wgBackendResponseTime":95,"wgHostname":"mw1330"});});
</script>
</body>
</html>
###Markdown
By observation we can see that the tabular data is available in a table belonging to class="wikitable sortable", so let's extract only that table.
###Code
My_table = soup.find('table',{'class':'wikitable sortable'})
My_table
print(My_table.tr.text)
headers="Postcode,Borough,Neighbourhood"
###Output
_____no_output_____
###Markdown
Getting all values in each tr and separating each td within it by ","
###Code
# build one comma-separated line per table row
table1=""
for tr in My_table.find_all('tr'):
    row1=""
    for tds in tr.find_all('td'):
        row1=row1+","+tds.text
    table1=table1+row1[1:]   # drop the leading comma of each row
print(table1)
###Output
M1A,Not assigned,Not assigned
M2A,Not assigned,Not assigned
M3A,North York,Parkwoods
M4A,North York,Victoria Village
M5A,Downtown Toronto,Harbourfront
M5A,Downtown Toronto,Regent Park
M6A,North York,Lawrence Heights
M6A,North York,Lawrence Manor
M7A,Queen's Park,Not assigned
M8A,Not assigned,Not assigned
M9A,Etobicoke,Islington Avenue
M1B,Scarborough,Rouge
M1B,Scarborough,Malvern
M2B,Not assigned,Not assigned
M3B,North York,Don Mills North
M4B,East York,Woodbine Gardens
M4B,East York,Parkview Hill
M5B,Downtown Toronto,Ryerson
M5B,Downtown Toronto,Garden District
M6B,North York,Glencairn
M7B,Not assigned,Not assigned
M8B,Not assigned,Not assigned
M9B,Etobicoke,Cloverdale
M9B,Etobicoke,Islington
M9B,Etobicoke,Martin Grove
M9B,Etobicoke,Princess Gardens
M9B,Etobicoke,West Deane Park
M1C,Scarborough,Highland Creek
M1C,Scarborough,Rouge Hill
M1C,Scarborough,Port Union
M2C,Not assigned,Not assigned
M3C,North York,Flemingdon Park
M3C,North York,Don Mills South
M4C,East York,Woodbine Heights
M5C,Downtown Toronto,St. James Town
M6C,York,Humewood-Cedarvale
M7C,Not assigned,Not assigned
M8C,Not assigned,Not assigned
M9C,Etobicoke,Bloordale Gardens
M9C,Etobicoke,Eringate
M9C,Etobicoke,Markland Wood
M9C,Etobicoke,Old Burnhamthorpe
M1E,Scarborough,Guildwood
M1E,Scarborough,Morningside
M1E,Scarborough,West Hill
M2E,Not assigned,Not assigned
M3E,Not assigned,Not assigned
M4E,East Toronto,The Beaches
M5E,Downtown Toronto,Berczy Park
M6E,York,Caledonia-Fairbanks
M7E,Not assigned,Not assigned
M8E,Not assigned,Not assigned
M9E,Not assigned,Not assigned
M1G,Scarborough,Woburn
M2G,Not assigned,Not assigned
M3G,Not assigned,Not assigned
M4G,East York,Leaside
M5G,Downtown Toronto,Central Bay Street
M6G,Downtown Toronto,Christie
M7G,Not assigned,Not assigned
M8G,Not assigned,Not assigned
M9G,Not assigned,Not assigned
M1H,Scarborough,Cedarbrae
M2H,North York,Hillcrest Village
M3H,North York,Bathurst Manor
M3H,North York,Downsview North
M3H,North York,Wilson Heights
M4H,East York,Thorncliffe Park
M5H,Downtown Toronto,Adelaide
M5H,Downtown Toronto,King
M5H,Downtown Toronto,Richmond
M6H,West Toronto,Dovercourt Village
M6H,West Toronto,Dufferin
M7H,Not assigned,Not assigned
M8H,Not assigned,Not assigned
M9H,Not assigned,Not assigned
M1J,Scarborough,Scarborough Village
M2J,North York,Fairview
M2J,North York,Henry Farm
M2J,North York,Oriole
M3J,North York,Northwood Park
M3J,North York,York University
M4J,East York,East Toronto
M5J,Downtown Toronto,Harbourfront East
M5J,Downtown Toronto,Toronto Islands
M5J,Downtown Toronto,Union Station
M6J,West Toronto,Little Portugal
M6J,West Toronto,Trinity
M7J,Not assigned,Not assigned
M8J,Not assigned,Not assigned
M9J,Not assigned,Not assigned
M1K,Scarborough,East Birchmount Park
M1K,Scarborough,Ionview
M1K,Scarborough,Kennedy Park
M2K,North York,Bayview Village
M3K,North York,CFB Toronto
M3K,North York,Downsview East
M4K,East Toronto,The Danforth West
M4K,East Toronto,Riverdale
M5K,Downtown Toronto,Design Exchange
M5K,Downtown Toronto,Toronto Dominion Centre
M6K,West Toronto,Brockton
M6K,West Toronto,Exhibition Place
M6K,West Toronto,Parkdale Village
M7K,Not assigned,Not assigned
M8K,Not assigned,Not assigned
M9K,Not assigned,Not assigned
M1L,Scarborough,Clairlea
M1L,Scarborough,Golden Mile
M1L,Scarborough,Oakridge
M2L,North York,Silver Hills
M2L,North York,York Mills
M3L,North York,Downsview West
M4L,East Toronto,The Beaches West
M4L,East Toronto,India Bazaar
M5L,Downtown Toronto,Commerce Court
M5L,Downtown Toronto,Victoria Hotel
M6L,North York,Maple Leaf Park
M6L,North York,North Park
M6L,North York,Upwood Park
M7L,Not assigned,Not assigned
M8L,Not assigned,Not assigned
M9L,North York,Humber Summit
M1M,Scarborough,Cliffcrest
M1M,Scarborough,Cliffside
M1M,Scarborough,Scarborough Village West
M2M,North York,Newtonbrook
M2M,North York,Willowdale
M3M,North York,Downsview Central
M4M,East Toronto,Studio District
M5M,North York,Bedford Park
M5M,North York,Lawrence Manor East
M6M,York,Del Ray
M6M,York,Keelesdale
M6M,York,Mount Dennis
M6M,York,Silverthorn
M7M,Not assigned,Not assigned
M8M,Not assigned,Not assigned
M9M,North York,Emery
M9M,North York,Humberlea
M1N,Scarborough,Birch Cliff
M1N,Scarborough,Cliffside West
M2N,North York,Willowdale South
M3N,North York,Downsview Northwest
M4N,Central Toronto,Lawrence Park
M5N,Central Toronto,Roselawn
M6N,York,The Junction North
M6N,York,Runnymede
M7N,Not assigned,Not assigned
M8N,Not assigned,Not assigned
M9N,York,Weston
M1P,Scarborough,Dorset Park
M1P,Scarborough,Scarborough Town Centre
M1P,Scarborough,Wexford Heights
M2P,North York,York Mills West
M3P,Not assigned,Not assigned
M4P,Central Toronto,Davisville North
M5P,Central Toronto,Forest Hill North
M5P,Central Toronto,Forest Hill West
M6P,West Toronto,High Park
M6P,West Toronto,The Junction South
M7P,Not assigned,Not assigned
M8P,Not assigned,Not assigned
M9P,Etobicoke,Westmount
M1R,Scarborough,Maryvale
M1R,Scarborough,Wexford
M2R,North York,Willowdale West
M3R,Not assigned,Not assigned
M4R,Central Toronto,North Toronto West
M5R,Central Toronto,The Annex
M5R,Central Toronto,North Midtown
M5R,Central Toronto,Yorkville
M6R,West Toronto,Parkdale
M6R,West Toronto,Roncesvalles
M7R,Mississauga,Canada Post Gateway Processing Centre
M8R,Not assigned,Not assigned
M9R,Etobicoke,Kingsview Village
M9R,Etobicoke,Martin Grove Gardens
M9R,Etobicoke,Richview Gardens
M9R,Etobicoke,St. Phillips
M1S,Scarborough,Agincourt
M2S,Not assigned,Not assigned
M3S,Not assigned,Not assigned
M4S,Central Toronto,Davisville
M5S,Downtown Toronto,Harbord
M5S,Downtown Toronto,University of Toronto
M6S,West Toronto,Runnymede
M6S,West Toronto,Swansea
M7S,Not assigned,Not assigned
M8S,Not assigned,Not assigned
M9S,Not assigned,Not assigned
M1T,Scarborough,Clarks Corners
M1T,Scarborough,Sullivan
M1T,Scarborough,Tam O'Shanter
M2T,Not assigned,Not assigned
M3T,Not assigned,Not assigned
M4T,Central Toronto,Moore Park
M4T,Central Toronto,Summerhill East
M5T,Downtown Toronto,Chinatown
M5T,Downtown Toronto,Grange Park
M5T,Downtown Toronto,Kensington Market
M6T,Not assigned,Not assigned
M7T,Not assigned,Not assigned
M8T,Not assigned,Not assigned
M9T,Not assigned,Not assigned
M1V,Scarborough,Agincourt North
M1V,Scarborough,L'Amoreaux East
M1V,Scarborough,Milliken
M1V,Scarborough,Steeles East
M2V,Not assigned,Not assigned
M3V,Not assigned,Not assigned
M4V,Central Toronto,Deer Park
M4V,Central Toronto,Forest Hill SE
M4V,Central Toronto,Rathnelly
M4V,Central Toronto,South Hill
M4V,Central Toronto,Summerhill West
M5V,Downtown Toronto,CN Tower
M5V,Downtown Toronto,Bathurst Quay
M5V,Downtown Toronto,Island airport
M5V,Downtown Toronto,Harbourfront West
M5V,Downtown Toronto,King and Spadina
M5V,Downtown Toronto,Railway Lands
M5V,Downtown Toronto,South Niagara
M6V,Not assigned,Not assigned
M7V,Not assigned,Not assigned
M8V,Etobicoke,Humber Bay Shores
M8V,Etobicoke,Mimico South
M8V,Etobicoke,New Toronto
M9V,Etobicoke,Albion Gardens
M9V,Etobicoke,Beaumond Heights
M9V,Etobicoke,Humbergate
M9V,Etobicoke,Jamestown
M9V,Etobicoke,Mount Olive
M9V,Etobicoke,Silverstone
M9V,Etobicoke,South Steeles
M9V,Etobicoke,Thistletown
M1W,Scarborough,L'Amoreaux West
M1W,Scarborough,Steeles West
M2W,Not assigned,Not assigned
M3W,Not assigned,Not assigned
M4W,Downtown Toronto,Rosedale
M5W,Downtown Toronto,Stn A PO Boxes 25 The Esplanade
M6W,Not assigned,Not assigned
M7W,Not assigned,Not assigned
M8W,Etobicoke,Alderwood
M8W,Etobicoke,Long Branch
M9W,Etobicoke,Northwest
M1X,Scarborough,Upper Rouge
M2X,Not assigned,Not assigned
M3X,Not assigned,Not assigned
M4X,Downtown Toronto,Cabbagetown
M4X,Downtown Toronto,St. James Town
M5X,Downtown Toronto,First Canadian Place
M5X,Downtown Toronto,Underground city
M6X,Not assigned,Not assigned
M7X,Not assigned,Not assigned
M8X,Etobicoke,The Kingsway
M8X,Etobicoke,Montgomery Road
M8X,Etobicoke,Old Mill North
M9X,Not assigned,Not assigned
M1Y,Not assigned,Not assigned
M2Y,Not assigned,Not assigned
M3Y,Not assigned,Not assigned
M4Y,Downtown Toronto,Church and Wellesley
M5Y,Not assigned,Not assigned
M6Y,Not assigned,Not assigned
M7Y,East Toronto,Business Reply Mail Processing Centre 969 Eastern
M8Y,Etobicoke,Humber Bay
M8Y,Etobicoke,King's Mill Park
M8Y,Etobicoke,Kingsway Park South East
M8Y,Etobicoke,Mimico NE
M8Y,Etobicoke,Old Mill South
M8Y,Etobicoke,The Queensway East
M8Y,Etobicoke,Royal York South East
M8Y,Etobicoke,Sunnylea
M9Y,Not assigned,Not assigned
M1Z,Not assigned,Not assigned
M2Z,Not assigned,Not assigned
M3Z,Not assigned,Not assigned
M4Z,Not assigned,Not assigned
M5Z,Not assigned,Not assigned
M6Z,Not assigned,Not assigned
M7Z,Not assigned,Not assigned
M8Z,Etobicoke,Kingsway Park South West
M8Z,Etobicoke,Mimico NW
M8Z,Etobicoke,The Queensway West
M8Z,Etobicoke,Royal York South West
M8Z,Etobicoke,South of Bloor
M9Z,Not assigned,Not assigned
###Markdown
Writing our data to a .csv file for further use
###Code
# write the scraped rows to a CSV file (as bytes, since the file is opened in binary mode)
file=open("toronto.csv","wb")
#file.write(bytes(headers,encoding="ascii",errors="ignore"))
file.write(bytes(table1,encoding="ascii",errors="ignore"))
file.close()   # make sure the data is flushed before reading it back with pandas
###Output
_____no_output_____
###Markdown
Converting into a dataframe and assigning column names
###Code
import pandas as pd
df = pd.read_csv('toronto.csv',header=None)
df.columns=["Postalcode","Borough","Neighbourhood"]
df.head(10)
###Output
_____no_output_____
###Markdown
Only processing the cells that have an assigned borough. Ignoring the cells with a borough that is Not assigned. Dropping rows where the borough is "Not assigned"
###Code
# Get names of indexes for which column Borough has value "Not assigned"
indexNames = df[ df['Borough'] =='Not assigned'].index
# Delete these row indexes from dataFrame
df.drop(indexNames , inplace=True)
df.head(10)
###Output
_____no_output_____
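###Markdown
As a quick sanity check (a minimal sketch, assuming `df` is the dataframe built above), we can confirm that no "Not assigned" boroughs remain after the drop.
###Code
# hypothetical check: count of remaining rows whose Borough is still "Not assigned" (should be 0)
(df['Borough'] == 'Not assigned').sum()
###Output
_____no_output_____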
###Markdown
If a cell has a borough but a Not assigned neighborhood, then the neighborhood will be the same as the borough
###Code
df.loc[df['Neighbourhood'] =='Not assigned' , 'Neighbourhood'] = df['Borough']
df.head(10)
###Output
_____no_output_____
###Markdown
Rows with the same postal code will be combined into one row, with the neighbourhoods separated by a comma
###Code
result = df.groupby(['Postalcode','Borough'], sort=False).agg( ', '.join)
df_new=result.reset_index()
df_new.head(15)
###Output
_____no_output_____
###Markdown
Use the .shape attribute to print the number of rows of the dataframe
###Code
df_new.shape
###Output
_____no_output_____
###Markdown
Question 2 Use the Geocoder package or the csv file to create a dataframe with longitude and latitude values. We will be using a csv file that has the geographical coordinates of each postal code: http://cocl.us/Geospatial_data
###Code
!wget -q -O 'Toronto_long_lat_data.csv' http://cocl.us/Geospatial_data
df_lon_lat = pd.read_csv('Toronto_long_lat_data.csv')
df_lon_lat.head()
df_lon_lat.columns=['Postalcode','Latitude','Longitude']
df_lon_lat.head()
Toronto_df = pd.merge(df_new,
df_lon_lat[['Postalcode','Latitude', 'Longitude']],
on='Postalcode')
Toronto_df
###Output
_____no_output_____
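###Markdown
A quick check (a sketch, assuming the merge above succeeded) that every postal code received coordinates, i.e. that the merge introduced no missing values.
###Code
# hypothetical sanity check: number of missing values per column after the merge (all should be 0)
Toronto_df.isnull().sum()
###Output
_____no_output_____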
###Markdown
Question 3 Explore and cluster the neighborhoods in Toronto Use the geopy library to get the latitude and longitude values of Toronto.
###Code
from geopy.geocoders import Nominatim # convert an address into latitude and longitude values
# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors
# import k-means from clustering stage
from sklearn.cluster import KMeans
#!conda install -c conda-forge folium=0.5.0 --yes # uncomment this line if you haven't completed the Foursquare API lab
import folium # map rendering library
print('Libraries imported.')
address = 'Toronto, ON'
geolocator = Nominatim(user_agent="Toronto")
location = geolocator.geocode(address)
latitude_toronto = location.latitude
longitude_toronto = location.longitude
print('The geograpical coordinate of Toronto are {}, {}.'.format(latitude_toronto, longitude_toronto))
map_toronto = folium.Map(location=[latitude_toronto, longitude_toronto], zoom_start=10)
# add markers to map
for lat, lng, borough, Neighbourhood in zip(Toronto_df['Latitude'], Toronto_df['Longitude'], Toronto_df['Borough'], Toronto_df['Neighbourhood']):
label = '{}, {}'.format(Neighbourhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_toronto)
map_toronto
###Output
_____no_output_____
###Markdown
Define Foursquare Credentials and Version
###Code
# The code was removed by Watson Studio for sharing.
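# (that hidden cell defined the Foursquare credentials used below; a placeholder sketch would look like:)
# CLIENT_ID = 'YOUR_FOURSQUARE_CLIENT_ID'          # placeholder, not the original value
# CLIENT_SECRET = 'YOUR_FOURSQUARE_CLIENT_SECRET'  # placeholder, not the original value
# VERSION = '20180605'                             # Foursquare API version string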
# defining radius and limit of venues to get
radius=500
LIMIT=100
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighbourhood',
'Neighbourhood Latitude',
'Neighbourhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
toronto_venues = getNearbyVenues(names=Toronto_df['Neighbourhood'],
latitudes=Toronto_df['Latitude'],
longitudes=Toronto_df['Longitude']
)
###Output
Parkwoods
Victoria Village
Harbourfront, Regent Park
Lawrence Heights, Lawrence Manor
Queen's Park
Islington Avenue
Rouge, Malvern
Don Mills North
Woodbine Gardens, Parkview Hill
Ryerson, Garden District
Glencairn
Cloverdale, Islington, Martin Grove, Princess Gardens, West Deane Park
Highland Creek, Rouge Hill, Port Union
Flemingdon Park, Don Mills South
Woodbine Heights
St. James Town
Humewood-Cedarvale
Bloordale Gardens, Eringate, Markland Wood, Old Burnhamthorpe
Guildwood, Morningside, West Hill
The Beaches
Berczy Park
Caledonia-Fairbanks
Woburn
Leaside
Central Bay Street
Christie
Cedarbrae
Hillcrest Village
Bathurst Manor, Downsview North, Wilson Heights
Thorncliffe Park
Adelaide, King, Richmond
Dovercourt Village, Dufferin
Scarborough Village
Fairview, Henry Farm, Oriole
Northwood Park, York University
East Toronto
Harbourfront East, Toronto Islands, Union Station
Little Portugal, Trinity
East Birchmount Park, Ionview, Kennedy Park
Bayview Village
CFB Toronto, Downsview East
The Danforth West, Riverdale
Design Exchange, Toronto Dominion Centre
Brockton, Exhibition Place, Parkdale Village
Clairlea, Golden Mile, Oakridge
Silver Hills, York Mills
Downsview West
The Beaches West, India Bazaar
Commerce Court, Victoria Hotel
Maple Leaf Park, North Park, Upwood Park
Humber Summit
Cliffcrest, Cliffside, Scarborough Village West
Newtonbrook, Willowdale
Downsview Central
Studio District
Bedford Park, Lawrence Manor East
Del Ray, Keelesdale, Mount Dennis, Silverthorn
Emery, Humberlea
Birch Cliff, Cliffside West
Willowdale South
Downsview Northwest
Lawrence Park
Roselawn
The Junction North, Runnymede
Weston
Dorset Park, Scarborough Town Centre, Wexford Heights
York Mills West
Davisville North
Forest Hill North, Forest Hill West
High Park, The Junction South
Westmount
Maryvale, Wexford
Willowdale West
North Toronto West
The Annex, North Midtown, Yorkville
Parkdale, Roncesvalles
Canada Post Gateway Processing Centre
Kingsview Village, Martin Grove Gardens, Richview Gardens, St. Phillips
Agincourt
Davisville
Harbord, University of Toronto
Runnymede, Swansea
Clarks Corners, Sullivan, Tam O'Shanter
Moore Park, Summerhill East
Chinatown, Grange Park, Kensington Market
Agincourt North, L'Amoreaux East, Milliken, Steeles East
Deer Park, Forest Hill SE, Rathnelly, South Hill, Summerhill West
CN Tower, Bathurst Quay, Island airport, Harbourfront West, King and Spadina, Railway Lands, South Niagara
Humber Bay Shores, Mimico South, New Toronto
Albion Gardens, Beaumond Heights, Humbergate, Jamestown, Mount Olive, Silverstone, South Steeles, Thistletown
L'Amoreaux West, Steeles West
Rosedale
Stn A PO Boxes 25 The Esplanade
Alderwood, Long Branch
Northwest
Upper Rouge
Cabbagetown, St. James Town
First Canadian Place, Underground city
The Kingsway, Montgomery Road, Old Mill North
Church and Wellesley
Business Reply Mail Processing Centre 969 Eastern
Humber Bay, King's Mill Park, Kingsway Park South East, Mimico NE, Old Mill South, The Queensway East, Royal York South East, Sunnylea
Kingsway Park South West, Mimico NW, The Queensway West, Royal York South West, South of Bloor
###Markdown
Let's check the size of the resulting dataframe
###Code
toronto_venues.head(10)
toronto_venues.shape
###Output
_____no_output_____
###Markdown
Let's check how many venues were returned for each neighborhood
###Code
toronto_venues.groupby('Neighbourhood').count()
###Output
_____no_output_____
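###Markdown
We can also count how many unique venue categories were returned (a small sketch, assuming `toronto_venues` as built above).
###Code
# number of distinct venue categories across all neighbourhoods
print('There are {} unique categories.'.format(len(toronto_venues['Venue Category'].unique())))
###Output
_____no_output_____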
###Markdown
Analysing Each Neighborhood
###Code
# one hot encoding
toronto_onehot = pd.get_dummies(toronto_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
toronto_onehot['Neighbourhood'] = toronto_venues['Neighbourhood']
# move neighborhood column to the first column
fixed_columns = [toronto_onehot.columns[-1]] + list(toronto_onehot.columns[:-1])
toronto_onehot = toronto_onehot[fixed_columns]
toronto_onehot.head()
###Output
_____no_output_____
###Markdown
And let's examine the new dataframe size.
###Code
toronto_onehot.shape
###Output
_____no_output_____
###Markdown
Next, let's group rows by neighbourhood, taking the mean of the frequency of occurrence of each category
###Code
toronto_grouped = toronto_onehot.groupby('Neighbourhood').mean().reset_index()
toronto_grouped
###Output
_____no_output_____
###Markdown
Let's print each neighborhood along with the top 5 most common venues
###Code
num_top_venues = 5
for hood in toronto_grouped['Neighbourhood']:
print("----"+hood+"----")
temp = toronto_grouped[toronto_grouped['Neighbourhood'] == hood].T.reset_index()
temp.columns = ['venue','freq']
temp = temp.iloc[1:]
temp['freq'] = temp['freq'].astype(float)
temp = temp.round({'freq': 2})
print(temp.sort_values('freq', ascending=False).reset_index(drop=True).head(num_top_venues))
print('\n')
###Output
----Adelaide, King, Richmond----
venue freq
0 Coffee Shop 0.06
1 Café 0.05
2 Thai Restaurant 0.04
3 American Restaurant 0.04
4 Steakhouse 0.04
----Agincourt----
venue freq
0 Sandwich Place 0.25
1 Breakfast Spot 0.25
2 Lounge 0.25
3 Skating Rink 0.25
4 Modern European Restaurant 0.00
----Agincourt North, L'Amoreaux East, Milliken, Steeles East----
venue freq
0 Playground 0.5
1 Park 0.5
2 Mobile Phone Shop 0.0
3 Moving Target 0.0
4 Movie Theater 0.0
----Albion Gardens, Beaumond Heights, Humbergate, Jamestown, Mount Olive, Silverstone, South Steeles, Thistletown----
venue freq
0 Grocery Store 0.2
1 Pharmacy 0.1
2 Pizza Place 0.1
3 Fast Food Restaurant 0.1
4 Coffee Shop 0.1
----Alderwood, Long Branch----
venue freq
0 Pizza Place 0.2
1 Gym 0.1
2 Skating Rink 0.1
3 Sandwich Place 0.1
4 Dance Studio 0.1
----Bathurst Manor, Downsview North, Wilson Heights----
venue freq
0 Coffee Shop 0.12
1 Pharmacy 0.06
2 Grocery Store 0.06
3 Bridal Shop 0.06
4 Fast Food Restaurant 0.06
----Bayview Village----
venue freq
0 Café 0.25
1 Japanese Restaurant 0.25
2 Chinese Restaurant 0.25
3 Bank 0.25
4 Movie Theater 0.00
----Bedford Park, Lawrence Manor East----
venue freq
0 Juice Bar 0.08
1 Italian Restaurant 0.08
2 Sushi Restaurant 0.08
3 Coffee Shop 0.08
4 Fast Food Restaurant 0.08
----Berczy Park----
venue freq
0 Coffee Shop 0.07
1 Restaurant 0.06
2 Cocktail Bar 0.06
3 Seafood Restaurant 0.04
4 Beer Bar 0.04
----Birch Cliff, Cliffside West----
venue freq
0 College Stadium 0.25
1 Café 0.25
2 Skating Rink 0.25
3 General Entertainment 0.25
4 Pet Store 0.00
----Bloordale Gardens, Eringate, Markland Wood, Old Burnhamthorpe----
venue freq
0 Pizza Place 0.17
1 Pharmacy 0.17
2 Beer Store 0.17
3 Liquor Store 0.17
4 Café 0.17
----Brockton, Exhibition Place, Parkdale Village----
venue freq
0 Café 0.11
1 Breakfast Spot 0.11
2 Coffee Shop 0.11
3 Grocery Store 0.05
4 Gym 0.05
----Business Reply Mail Processing Centre 969 Eastern----
venue freq
0 Yoga Studio 0.06
1 Auto Workshop 0.06
2 Light Rail Station 0.06
3 Garden Center 0.06
4 Garden 0.06
----CFB Toronto, Downsview East----
venue freq
0 Playground 0.25
1 Airport 0.25
2 Bus Stop 0.25
3 Park 0.25
4 Metro Station 0.00
----CN Tower, Bathurst Quay, Island airport, Harbourfront West, King and Spadina, Railway Lands, South Niagara----
venue freq
0 Airport Service 0.14
1 Airport Terminal 0.14
2 Airport Lounge 0.14
3 Boat or Ferry 0.07
4 Sculpture Garden 0.07
----Cabbagetown, St. James Town----
venue freq
0 Coffee Shop 0.09
1 Restaurant 0.09
2 Bakery 0.04
3 Pizza Place 0.04
4 Market 0.04
----Caledonia-Fairbanks----
venue freq
0 Park 0.33
1 Women's Store 0.17
2 Pharmacy 0.17
3 Market 0.17
4 Fast Food Restaurant 0.17
----Canada Post Gateway Processing Centre----
venue freq
0 Hotel 0.18
1 Coffee Shop 0.18
2 Mediterranean Restaurant 0.09
3 Burrito Place 0.09
4 Sandwich Place 0.09
----Cedarbrae----
venue freq
0 Caribbean Restaurant 0.12
1 Bakery 0.12
2 Bank 0.12
3 Athletics & Sports 0.12
4 Thai Restaurant 0.12
----Central Bay Street----
venue freq
0 Coffee Shop 0.16
1 Café 0.07
2 Italian Restaurant 0.05
3 Burger Joint 0.04
4 Bar 0.04
----Chinatown, Grange Park, Kensington Market----
venue freq
0 Café 0.07
1 Bar 0.07
2 Vegetarian / Vegan Restaurant 0.05
3 Coffee Shop 0.04
4 Dumpling Restaurant 0.04
----Christie----
venue freq
0 Grocery Store 0.20
1 Café 0.20
2 Park 0.13
3 Coffee Shop 0.07
4 Nightclub 0.07
----Church and Wellesley----
venue freq
0 Japanese Restaurant 0.07
1 Coffee Shop 0.06
2 Sushi Restaurant 0.06
3 Gay Bar 0.05
4 Restaurant 0.03
----Clairlea, Golden Mile, Oakridge----
venue freq
0 Bakery 0.2
1 Bus Line 0.2
2 Metro Station 0.1
3 Soccer Field 0.1
4 Fast Food Restaurant 0.1
----Clarks Corners, Sullivan, Tam O'Shanter----
venue freq
0 Pizza Place 0.22
1 Noodle House 0.11
2 Pharmacy 0.11
3 Fast Food Restaurant 0.11
4 Thai Restaurant 0.11
----Cliffcrest, Cliffside, Scarborough Village West----
venue freq
0 American Restaurant 0.33
1 Intersection 0.33
2 Motel 0.33
3 Music Venue 0.00
4 Museum 0.00
----Cloverdale, Islington, Martin Grove, Princess Gardens, West Deane Park----
venue freq
0 Bank 0.5
1 Golf Course 0.5
2 Accessories Store 0.0
3 Music Store 0.0
4 Moving Target 0.0
----Commerce Court, Victoria Hotel----
venue freq
0 Coffee Shop 0.10
1 Café 0.07
2 Restaurant 0.06
3 Hotel 0.06
4 American Restaurant 0.04
----Davisville----
venue freq
0 Pizza Place 0.08
1 Dessert Shop 0.08
2 Sandwich Place 0.08
3 Coffee Shop 0.06
4 Seafood Restaurant 0.06
----Davisville North----
venue freq
0 Grocery Store 0.1
1 Park 0.1
2 Burger Joint 0.1
3 Clothing Store 0.1
4 Gym 0.1
----Deer Park, Forest Hill SE, Rathnelly, South Hill, Summerhill West----
venue freq
0 Pub 0.14
1 Coffee Shop 0.14
2 Convenience Store 0.07
3 American Restaurant 0.07
4 Sushi Restaurant 0.07
----Del Ray, Keelesdale, Mount Dennis, Silverthorn----
venue freq
0 Sandwich Place 0.2
1 Convenience Store 0.2
2 Check Cashing Service 0.2
3 Restaurant 0.2
4 Coffee Shop 0.2
----Design Exchange, Toronto Dominion Centre----
venue freq
0 Coffee Shop 0.14
1 Hotel 0.08
2 Café 0.08
3 Restaurant 0.04
4 American Restaurant 0.04
----Don Mills North----
venue freq
0 Caribbean Restaurant 0.2
1 Pool 0.2
2 Japanese Restaurant 0.2
3 Gym / Fitness Center 0.2
4 Café 0.2
----Dorset Park, Scarborough Town Centre, Wexford Heights----
venue freq
0 Indian Restaurant 0.29
1 Pet Store 0.14
2 Furniture / Home Store 0.14
3 Vietnamese Restaurant 0.14
4 Latin American Restaurant 0.14
----Dovercourt Village, Dufferin----
venue freq
0 Pharmacy 0.11
1 Supermarket 0.11
2 Discount Store 0.11
3 Bakery 0.11
4 Fast Food Restaurant 0.05
----Downsview Central----
venue freq
0 Baseball Field 0.33
1 Korean Restaurant 0.33
2 Food Truck 0.33
3 Accessories Store 0.00
4 Moving Target 0.00
----Downsview Northwest----
venue freq
0 Grocery Store 0.2
1 Gym / Fitness Center 0.2
2 Athletics & Sports 0.2
3 Liquor Store 0.2
4 Discount Store 0.2
----Downsview West----
venue freq
0 Moving Target 0.25
1 Bank 0.25
2 Hotel 0.25
3 Shopping Mall 0.25
4 Accessories Store 0.00
----East Birchmount Park, Ionview, Kennedy Park----
venue freq
0 Discount Store 0.33
1 Coffee Shop 0.17
2 Chinese Restaurant 0.17
3 Department Store 0.17
4 Train Station 0.17
----East Toronto----
venue freq
0 Convenience Store 0.5
1 Park 0.5
2 Accessories Store 0.0
3 Modern European Restaurant 0.0
4 Museum 0.0
----Emery, Humberlea----
venue freq
0 Baseball Field 1.0
1 Accessories Store 0.0
2 Moving Target 0.0
3 Movie Theater 0.0
4 Motel 0.0
----Fairview, Henry Farm, Oriole----
venue freq
0 Clothing Store 0.13
1 Fast Food Restaurant 0.08
2 Coffee Shop 0.06
3 Toy / Game Store 0.05
4 Restaurant 0.05
----First Canadian Place, Underground city----
venue freq
0 Coffee Shop 0.08
1 Café 0.08
2 Hotel 0.06
3 Restaurant 0.05
4 American Restaurant 0.04
----Flemingdon Park, Don Mills South----
venue freq
0 Coffee Shop 0.10
1 Asian Restaurant 0.10
2 Beer Store 0.10
3 Gym 0.10
4 Restaurant 0.05
----Forest Hill North, Forest Hill West----
venue freq
0 Jewelry Store 0.25
1 Sushi Restaurant 0.25
2 Bus Line 0.25
3 Trail 0.25
4 Accessories Store 0.00
----Glencairn----
venue freq
0 Japanese Restaurant 0.25
1 Asian Restaurant 0.25
2 Park 0.25
3 Pub 0.25
4 Accessories Store 0.00
----Guildwood, Morningside, West Hill----
venue freq
0 Medical Center 0.17
1 Breakfast Spot 0.17
2 Rental Car Location 0.17
3 Electronics Store 0.17
4 Pizza Place 0.17
----Harbord, University of Toronto----
venue freq
0 Café 0.12
1 Bar 0.06
2 Japanese Restaurant 0.06
3 Bookstore 0.06
4 Coffee Shop 0.06
----Harbourfront East, Toronto Islands, Union Station----
venue freq
0 Coffee Shop 0.14
1 Hotel 0.05
2 Aquarium 0.05
3 Pizza Place 0.04
4 Café 0.04
----Harbourfront, Regent Park----
venue freq
0 Coffee Shop 0.16
1 Café 0.06
2 Bakery 0.06
3 Park 0.06
4 Pub 0.06
----High Park, The Junction South----
venue freq
0 Mexican Restaurant 0.09
1 Café 0.09
2 Bookstore 0.04
3 Arts & Crafts Store 0.04
4 Bar 0.04
----Highland Creek, Rouge Hill, Port Union----
venue freq
0 Moving Target 0.5
1 Bar 0.5
2 Accessories Store 0.0
3 Modern European Restaurant 0.0
4 Movie Theater 0.0
----Hillcrest Village----
venue freq
0 Mediterranean Restaurant 0.25
1 Pool 0.25
2 Golf Course 0.25
3 Dog Run 0.25
4 Mexican Restaurant 0.00
----Humber Bay Shores, Mimico South, New Toronto----
venue freq
0 Café 0.13
1 Flower Shop 0.07
2 Bakery 0.07
3 Pharmacy 0.07
4 Restaurant 0.07
----Humber Bay, King's Mill Park, Kingsway Park South East, Mimico NE, Old Mill South, The Queensway East, Royal York South East, Sunnylea----
venue freq
0 Baseball Field 1.0
1 Accessories Store 0.0
2 Moving Target 0.0
3 Movie Theater 0.0
4 Motel 0.0
----Humber Summit----
venue freq
0 Pizza Place 0.5
1 Empanada Restaurant 0.5
2 Men's Store 0.0
3 Metro Station 0.0
4 Mexican Restaurant 0.0
----Humewood-Cedarvale----
venue freq
0 Trail 0.25
1 Hockey Arena 0.25
2 Field 0.25
3 Park 0.25
4 Accessories Store 0.00
----Kingsview Village, Martin Grove Gardens, Richview Gardens, St. Phillips----
venue freq
0 Pizza Place 0.25
1 Park 0.25
2 Bus Line 0.25
3 Mobile Phone Shop 0.25
4 Mexican Restaurant 0.00
----Kingsway Park South West, Mimico NW, The Queensway West, Royal York South West, South of Bloor----
venue freq
0 Social Club 0.09
1 Fast Food Restaurant 0.09
2 Sandwich Place 0.09
3 Supplement Shop 0.09
4 Discount Store 0.09
----L'Amoreaux West, Steeles West----
venue freq
0 Fast Food Restaurant 0.15
1 Chinese Restaurant 0.15
2 Grocery Store 0.08
3 Cosmetics Shop 0.08
4 Pharmacy 0.08
----Lawrence Heights, Lawrence Manor----
venue freq
0 Clothing Store 0.29
1 Furniture / Home Store 0.18
2 Accessories Store 0.06
3 Coffee Shop 0.06
4 Miscellaneous Shop 0.06
----Lawrence Park----
venue freq
0 Bus Line 0.25
1 Dim Sum Restaurant 0.25
2 Swim School 0.25
3 Park 0.25
4 Accessories Store 0.00
----Leaside----
venue freq
0 Coffee Shop 0.09
1 Sporting Goods Shop 0.09
2 Burger Joint 0.06
3 Grocery Store 0.06
4 Breakfast Spot 0.03
----Little Portugal, Trinity----
venue freq
0 Bar 0.12
1 Men's Store 0.06
2 Restaurant 0.05
3 Asian Restaurant 0.05
4 Coffee Shop 0.05
----Maple Leaf Park, North Park, Upwood Park----
venue freq
0 Construction & Landscaping 0.25
1 Bakery 0.25
2 Park 0.25
3 Basketball Court 0.25
4 Accessories Store 0.00
----Maryvale, Wexford----
venue freq
0 Auto Garage 0.17
1 Smoke Shop 0.17
2 Bakery 0.17
3 Shopping Mall 0.17
4 Sandwich Place 0.17
----Moore Park, Summerhill East----
venue freq
0 Playground 0.25
1 Tennis Court 0.25
2 Gym 0.25
3 Park 0.25
4 Accessories Store 0.00
----North Toronto West----
venue freq
0 Sporting Goods Shop 0.14
1 Coffee Shop 0.10
2 Clothing Store 0.10
3 Yoga Studio 0.05
4 Furniture / Home Store 0.05
----Northwest----
venue freq
0 Drugstore 0.5
1 Rental Car Location 0.5
2 Accessories Store 0.0
3 Molecular Gastronomy Restaurant 0.0
4 Moving Target 0.0
----Northwood Park, York University----
venue freq
0 Massage Studio 0.2
1 Furniture / Home Store 0.2
2 Coffee Shop 0.2
3 Miscellaneous Shop 0.2
4 Bar 0.2
----Parkdale, Roncesvalles----
venue freq
0 Breakfast Spot 0.12
1 Gift Shop 0.12
2 Dessert Shop 0.06
3 Eastern European Restaurant 0.06
4 Dog Run 0.06
----Parkwoods----
venue freq
0 Food & Drink Shop 0.33
1 Fast Food Restaurant 0.33
2 Park 0.33
3 Accessories Store 0.00
4 Mobile Phone Shop 0.00
----Queen's Park----
venue freq
0 Coffee Shop 0.23
1 Japanese Restaurant 0.05
2 Gym 0.05
3 Diner 0.05
4 Sushi Restaurant 0.05
----Rosedale----
venue freq
0 Park 0.50
1 Trail 0.25
2 Playground 0.25
3 Accessories Store 0.00
4 Moving Target 0.00
----Roselawn----
venue freq
0 Music Venue 0.33
1 Garden 0.33
2 Pool 0.33
3 Modern European Restaurant 0.00
4 Moving Target 0.00
----Rouge, Malvern----
venue freq
0 Fast Food Restaurant 1.0
1 Accessories Store 0.0
2 Modern European Restaurant 0.0
3 Museum 0.0
4 Moving Target 0.0
----Runnymede, Swansea----
venue freq
0 Coffee Shop 0.08
1 Café 0.08
2 Pizza Place 0.08
3 Sushi Restaurant 0.05
4 Italian Restaurant 0.05
----Ryerson, Garden District----
venue freq
0 Coffee Shop 0.09
1 Clothing Store 0.09
2 Café 0.04
3 Cosmetics Shop 0.03
4 Middle Eastern Restaurant 0.03
----Scarborough Village----
venue freq
0 Playground 0.5
1 Construction & Landscaping 0.5
2 Modern European Restaurant 0.0
3 Museum 0.0
4 Moving Target 0.0
----St. James Town----
venue freq
0 Coffee Shop 0.07
1 Restaurant 0.06
2 Hotel 0.05
3 Café 0.05
4 Clothing Store 0.04
----Stn A PO Boxes 25 The Esplanade----
venue freq
0 Coffee Shop 0.09
1 Restaurant 0.05
2 Café 0.04
3 Italian Restaurant 0.03
4 Hotel 0.03
----Studio District----
venue freq
0 Café 0.10
1 Coffee Shop 0.08
2 Italian Restaurant 0.05
3 Bakery 0.05
4 American Restaurant 0.05
----The Annex, North Midtown, Yorkville----
venue freq
0 Sandwich Place 0.12
1 Coffee Shop 0.12
2 Café 0.12
3 Pizza Place 0.08
4 Pharmacy 0.04
----The Beaches----
venue freq
0 Coffee Shop 0.50
1 Pub 0.25
2 Neighborhood 0.25
3 Metro Station 0.00
4 Mexican Restaurant 0.00
----The Beaches West, India Bazaar----
venue freq
0 Park 0.10
1 Pizza Place 0.05
2 Pub 0.05
3 Fast Food Restaurant 0.05
4 Burger Joint 0.05
----The Danforth West, Riverdale----
venue freq
0 Greek Restaurant 0.24
1 Coffee Shop 0.10
2 Ice Cream Shop 0.07
3 Bookstore 0.05
4 Italian Restaurant 0.05
----The Junction North, Runnymede----
venue freq
0 Pizza Place 0.25
1 Convenience Store 0.25
2 Bus Line 0.25
3 Grocery Store 0.25
4 Mexican Restaurant 0.00
----The Kingsway, Montgomery Road, Old Mill North----
venue freq
0 River 0.33
1 Pool 0.33
2 Park 0.33
3 Accessories Store 0.00
4 Modern European Restaurant 0.00
----Thorncliffe Park----
venue freq
0 Indian Restaurant 0.12
1 Yoga Studio 0.06
2 Grocery Store 0.06
3 Pharmacy 0.06
4 Park 0.06
----Victoria Village----
venue freq
0 Hockey Arena 0.25
1 Coffee Shop 0.25
2 Portuguese Restaurant 0.25
3 Intersection 0.25
4 Accessories Store 0.00
----Westmount----
venue freq
0 Pizza Place 0.29
1 Sandwich Place 0.14
2 Middle Eastern Restaurant 0.14
3 Chinese Restaurant 0.14
4 Coffee Shop 0.14
----Weston----
venue freq
0 Park 0.67
1 Convenience Store 0.33
2 Accessories Store 0.00
3 Modern European Restaurant 0.00
4 Museum 0.00
----Willowdale South----
venue freq
0 Ramen Restaurant 0.09
1 Pizza Place 0.06
2 Japanese Restaurant 0.06
3 Coffee Shop 0.06
4 Restaurant 0.06
----Willowdale West----
venue freq
0 Pizza Place 0.2
1 Pharmacy 0.2
2 Coffee Shop 0.2
3 Grocery Store 0.2
4 Butcher 0.2
----Woburn----
venue freq
0 Coffee Shop 0.50
1 Pharmacy 0.25
2 Korean Restaurant 0.25
3 Accessories Store 0.00
4 Modern European Restaurant 0.00
----Woodbine Gardens, Parkview Hill----
venue freq
0 Pizza Place 0.15
1 Fast Food Restaurant 0.15
2 Gastropub 0.08
3 Pharmacy 0.08
4 Pet Store 0.08
----Woodbine Heights----
venue freq
0 Skating Rink 0.2
1 Spa 0.1
2 Athletics & Sports 0.1
3 Cosmetics Shop 0.1
4 Park 0.1
----York Mills West----
venue freq
0 Bank 0.33
1 Park 0.33
2 Electronics Store 0.33
3 Accessories Store 0.00
4 Molecular Gastronomy Restaurant 0.00
###Markdown
Let's put that into a *pandas* dataframe. First, let's write a function to sort the venues in descending order.
###Code
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
###Output
_____no_output_____
###Markdown
Now let's create the new dataframe and display the top 10 venues for each neighborhood.
###Code
import numpy as np
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighbourhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighbourhoods_venues_sorted = pd.DataFrame(columns=columns)
neighbourhoods_venues_sorted['Neighbourhood'] = toronto_grouped['Neighbourhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighbourhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighbourhoods_venues_sorted.head()
###Output
_____no_output_____
###Markdown
Cluster Neighborhoods Run *k*-means to cluster the neighborhoods into 5 clusters.
###Code
# set number of clusters
kclusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighbourhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(toronto_grouped_clustering)
# check cluster labels generated for each row in the dataframe
kmeans.labels_
# to change use .astype()
###Output
_____no_output_____
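###Markdown
The choice of 5 clusters is somewhat arbitrary. Below is a minimal sketch (not part of the original analysis) of how one might sanity-check it with the elbow method, assuming `toronto_grouped_clustering` is the feature matrix built above: plot the k-means inertia for a range of k values and look for the "elbow".
###Code
# hypothetical elbow-method check of the number of clusters
import matplotlib.pyplot as plt

inertias = []
k_range = range(1, 11)
for k in k_range:
    km = KMeans(n_clusters=k, random_state=0).fit(toronto_grouped_clustering)
    inertias.append(km.inertia_)   # within-cluster sum of squares

plt.plot(list(k_range), inertias, marker='o')
plt.xlabel('number of clusters k')
plt.ylabel('inertia')
plt.show()
###Output
_____no_output_____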
###Markdown
Let's create a new dataframe that includes the cluster as well as the top 10 venues for each neighborhood.
###Code
# add clustering labels
neighbourhoods_venues_sorted.insert(0, 'Cluster_Labels', kmeans.labels_)
toronto_merged = Toronto_df
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
toronto_merged = toronto_merged.join(neighbourhoods_venues_sorted.set_index('Neighbourhood'), on='Neighbourhood')
toronto_merged.head() # check the last columns!
###Output
_____no_output_____
###Markdown
We find that there is no data available for some neighbourhoods, so we drop those rows
###Code
toronto_merged=toronto_merged.dropna()
toronto_merged['Cluster_Labels'] = toronto_merged.Cluster_Labels.astype(int)
# create map
map_clusters = folium.Map(location=[latitude_toronto, longitude_toronto], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged['Latitude'], toronto_merged['Longitude'], toronto_merged['Neighbourhood'], toronto_merged['Cluster_Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[cluster-1],
fill=True,
fill_color=rainbow[cluster-1],
fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Examine Clusters Cluster 1
###Code
toronto_merged.loc[toronto_merged['Cluster_Labels'] == 0, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 2
###Code
toronto_merged.loc[toronto_merged['Cluster_Labels'] == 1, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 3
###Code
toronto_merged.loc[toronto_merged['Cluster_Labels'] == 2, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 4
###Code
toronto_merged.loc[toronto_merged['Cluster_Labels'] == 3, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 5
###Code
toronto_merged.loc[toronto_merged['Cluster_Labels'] == 4, toronto_merged.columns[[1] + list(range(5, toronto_merged.shape[1]))]]
###Output
_____no_output_____
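###Markdown
A short summary sketch (assuming `toronto_merged` as built above): how many neighbourhood rows fall into each of the five clusters.
###Code
# number of rows assigned to each cluster label
toronto_merged['Cluster_Labels'].value_counts().sort_index()
###Output
_____no_output_____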
###Markdown
Segmenting and Clustering Neighborhoods in Toronto This notebook provides the code for web scraping the Wikipedia page
###Code
from bs4 import BeautifulSoup
import requests
import pandas as pd
import numpy as np
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
import json # library to handle JSON files
#!conda install -c conda-forge geopy --yes
from geopy.geocoders import Nominatim # convert an address into latitude and longitude values
import requests # library to handle requests
from pandas.io.json import json_normalize # transform JSON file into a pandas dataframe
# Matplotlib and associated plotting modules
import matplotlib.cm as cm
import matplotlib.colors as colors
# import k-means from clustering stage
from sklearn.cluster import KMeans
#!conda install -c conda-forge folium=0.5.0 --yes # uncomment this line if you haven't completed the Foursquare API lab
import folium # map rendering library
print('Libraries imported.')
#WebScraping
# Here, we're just importing both Beautiful Soup and the Requests library
page_link = 'https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M'
# this is the url that we've already determined is safe and legal to scrape from.
page_response = requests.get(page_link, timeout=5).text
# here, we fetch the content from the url, using the requests library
page_content = BeautifulSoup(page_response,'lxml' )
#Make DataFrame
data = []
table_body=page_content.find('tbody')
rows = table_body.find_all('tr')
for row in rows:
cols=row.find_all('td')
cols=[x.text.strip() for x in cols]
data.append(cols)
#Only process the cells that have an assigned borough. Ignore cells with a borough that is Not assigned.
col_name = ['PostalCode', 'Borough', 'Neighborhood']
df = pd.DataFrame(data, columns = col_name)
df = df[df['Borough'] != 'Not assigned']
#Check
df.head()
#Drop blank header
df = df.drop(0, axis=0)
###Output
_____no_output_____
###Markdown
More than one neighborhood can exist in one postal code area. For example, in the table on the Wikipedia page, you will notice that M5A is listed twice and has two neighborhoods: Harbourfront and Regent Park. These two rows will be combined into one row with the neighborhoods separated with a comma as shown in row 11 in the above table.
###Code
dfgroupby = df.groupby(['PostalCode', 'Borough'])['Neighborhood'].apply(', '.join).reset_index()
dfgroupby.head()
###Output
_____no_output_____
###Markdown
If a cell has a borough but a Not assigned neighborhood, then the neighborhood will be the same as the borough. So for the 9th cell in the table on the Wikipedia page, the value of the Borough and the Neighborhood columns will be Queen's Park.
###Code
dfgroupby[dfgroupby['Neighborhood'] == 'Not assigned']
dfgroupby.loc[85,'Neighborhood'] = "Queen's Park"
###Output
_____no_output_____
###Markdown
Only select boroughs whose name contains "Toronto"
###Code
dfgroupby[dfgroupby['Borough'].str.contains('Toronto')].head()
###Output
_____no_output_____
###Markdown
Print the number of rows of the dataframe.
###Code
dfgroupby.shape
###Output
_____no_output_____
###Markdown
Load GeoCoder csv
###Code
geo = pd.read_csv("Geospatial_Coordinates.csv")
geo.head()
df_joined = dfgroupby.merge(geo, left_on='PostalCode', right_on='Postal Code', how='inner')
###Output
_____no_output_____
###Markdown
Delete the duplicated postal code column
###Code
df_joined = df_joined.drop('Postal Code', axis = 1)
df_joined.head()
###Output
_____no_output_____
###Markdown
Clustering and Segmenting Use the geopy library to get the latitude and longitude values of Toronto.
###Code
address = 'Toronto, Ontario'
geolocator = Nominatim(user_agent="tor_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The geograpical coordinate of Toronto are {}, {}.'.format(latitude, longitude))
###Output
The geograpical coordinate of Toronto are 43.653963, -79.387207.
###Markdown
Create a map of Toronto with the neighborhoods superimposed on top.
###Code
# create map of Toronto using latitude and longitude values
map_tor = folium.Map(location=[latitude, longitude], zoom_start=10)
# add markers to map
for lat, lng, borough, neighborhood in zip(df_joined['Latitude'], df_joined['Longitude'], df_joined['Borough'], df_joined['Neighborhood']):
label = '{}, {}'.format(neighborhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[lat, lng],
radius=5,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_tor)
map_tor
###Output
_____no_output_____
###Markdown
Define Foursquare Credentials and Version
###Code
CLIENT_ID = 'WQ2140I1XRGB3NN4OGJY2SZWCXYCUM41KLT2CVVXZUJ5GPFN' # your Foursquare ID
CLIENT_SECRET = 'DPUGQRNSEA0IEVGCFMX12EVF51DJTQ1Z3NALFTF0MEEQIUIG' # your Foursquare Secret
VERSION = '20180605' # Foursquare API version
print('Your credentails:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
###Output
Your credentails:
CLIENT_ID: WQ2140I1XRGB3NN4OGJY2SZWCXYCUM41KLT2CVVXZUJ5GPFN
CLIENT_SECRET:DPUGQRNSEA0IEVGCFMX12EVF51DJTQ1Z3NALFTF0MEEQIUIG
###Markdown
Create a function to repeat the same process for all the neighborhoods in Toronto
###Code
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius,
LIMIT)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['location']['lat'],
v['venue']['location']['lng'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighborhood',
'Neighborhood Latitude',
'Neighborhood Longitude',
'Venue',
'Venue Latitude',
'Venue Longitude',
'Venue Category']
return(nearby_venues)
LIMIT = 100 # limit of number of venues returned by Foursquare API
radius = 500
tor_venues = getNearbyVenues(names=df_joined['Neighborhood'],
latitudes=df_joined['Latitude'],
longitudes=df_joined['Longitude']
)
print(tor_venues.shape)
tor_venues.head()
###Output
(2250, 7)
###Markdown
check how many venues were returned for each neighborhood
###Code
tor_venues.groupby('Neighborhood').count()
print('There are {} uniques categories.'.format(len(tor_venues['Venue Category'].unique())))
###Output
There are 277 uniques categories.
###Markdown
Analyze Neighborhood
###Code
tor_onehot = pd.get_dummies(tor_venues[['Venue Category']], prefix="", prefix_sep="")
# add neighborhood column back to dataframe
tor_onehot['Neighborhood'] = tor_venues['Neighborhood']
# move neighborhood column to the first column
fixed_columns = [tor_onehot.columns[-1]] + list(tor_onehot.columns[:-1])
tor_onehot = tor_onehot[fixed_columns]
tor_onehot.head()
tor_grouped = tor_onehot.groupby('Neighborhood').mean().reset_index()
tor_grouped.head()
###Output
_____no_output_____
###Markdown
Most Common Venues in Each Neighborhood
###Code
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighborhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighborhood'] = tor_grouped['Neighborhood']
for ind in np.arange(tor_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(tor_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
# set number of clusters
kclusters = 5
tor_grouped_cluster = tor_grouped.drop('Neighborhood', 1)
# run k-means clustering
kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(tor_grouped_cluster)
# check cluster labels generated for each row in the dataframe
kmeans.labels_[0:10]
# add clustering labels
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
tor_merged = df_joined
# merge toronto_grouped with toronto_data to add latitude/longitude for each neighborhood
tor_merged = tor_merged.join(neighborhoods_venues_sorted.set_index('Neighborhood'), on='Neighborhood')
tor_merged.head() # check the last columns!
# create map
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(kclusters)
ys = [i + x + (i*x)**2 for i in range(kclusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(tor_merged['Latitude'], tor_merged['Longitude'], tor_merged['Neighborhood'], tor_merged['Cluster Labels']):
label = folium.Popup(str(poi) + ' Cluster ' + str(cluster), parse_html=True)
    folium.CircleMarker(
        [lat, lon],
        radius=5,
        popup=label,
        color=rainbow[int(cluster)],
        fill=True,
        fill_color=rainbow[int(cluster)],
        fill_opacity=0.7).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Examine the Clusters
###Code
tor_merged.loc[tor_merged['Cluster Labels'] == 0, tor_merged.columns[[1] + list(range(5, tor_merged.shape[1]))]]
###Output
_____no_output_____ |
Janacare_User-Segmentation_dataset_Aug2014-Apr2016.ipynb | ###Markdown
Hello World! This notebook describes the decision-tree-based Machine Learning model I have created to segment the users of the Habits app. Looking around the data set
###Code
# This to clear all variable values
%reset
# Import the required modules
import pandas as pd
import numpy as np
#import scipy as sp
# simple function to read in the user data file.
# the argument parse_dates takes in a list of colums, which are to be parsed as date format
user_data_raw = pd.read_csv("janacare_user-engagement_Aug2014-Apr2016.csv", parse_dates = [-3,-2,-1])
# data metrics
user_data_raw.shape # Rows , colums
# data metrics
user_data_raw.dtypes # data type of colums
###Output
_____no_output_____
###Markdown
The column name *watching_videos (binary - 1 for yes, blank/0 for no)* is too long and has special characters, so let's change it to *watching_videos*
###Code
user_data_to_clean = user_data_raw.rename(columns = {'watching_videos (binary - 1 for yes, blank/0 for no)':'watching_videos'})
# Some basic statistical information on the data
user_data_to_clean.describe()
###Output
_____no_output_____
###Markdown
Data Clean up In the last section of looking around, I saw that a lot of rows do not have any values or have garbage values (see the first row of the table above). This can cause errors when computing anything using the values in these rows, hence a clean up is required. We will clean up only those columns that are being used as features: * **num_modules_consumed** * **num_glucose_tracked** * **num_of_days_food_tracked** * **watching_videos** The next two columns will not be cleaned, as they contain time data which in my opinion should not be imputed: * **first_login** * **last_activity**
###Code
# Lets check the health of the data set
user_data_to_clean.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 372 entries, 0 to 371
Data columns (total 19 columns):
user_id 371 non-null float64
num_modules_consumed 69 non-null float64
num_glucose_tracked 91 non-null float64
num_of_days_steps_tracked 120 non-null float64
num_of_days_food_tracked 78 non-null float64
num_of_days_weight_tracked 223 non-null float64
insulin_a1c_count 47 non-null float64
cholesterol_count 15 non-null float64
hemoglobin_count 0 non-null float64
watching_videos 97 non-null float64
weight 372 non-null float64
height 372 non-null int64
bmi 372 non-null int64
age 372 non-null int64
gender 372 non-null object
has_diabetes 39 non-null float64
first_login 372 non-null datetime64[ns]
last_activity 302 non-null datetime64[ns]
age_on_platform 372 non-null object
dtypes: datetime64[ns](2), float64(12), int64(3), object(2)
memory usage: 55.3+ KB
###Markdown
As is visible from the data type of the last column (*age_on_platform*), Pandas is not recognising it as a date type format. This will make things difficult, so I delete this particular column and add a new one, since the data in *age_on_platform* can be recreated by doing *age_on_platform* = *last_activity* - *first_login*
###Code
# Lets first delete the last column
user_data_to_clean_del_last_col = user_data_to_clean.drop("age_on_platform", 1)
# Check if colums has been deleted. Number of column changed from 19 to 18
user_data_to_clean_del_last_col.shape
# Copy data frame 'user_data_del_last_col' into a new one
user_data_to_clean = user_data_to_clean_del_last_col
###Output
_____no_output_____
###Markdown
But on eyeballing the data I noticed that some cells of the column *first_login* have a greater value than the corresponding cell of *last_activity*. These cells need to be swapped, since it's not possible to have *first_login* > *last_activity*
###Code
# Run a loop through the data frame and check each row for this anamoly, if found swap
for index, row in user_data_to_clean.iterrows():
if row.first_login > row.last_activity:
temp_date_var = row.first_login
user_data_to_clean.set_value(index, 'first_login', row.last_activity)
user_data_to_clean.set_value(index, 'last_activity', temp_date_var)
#print "\tSw\t" + "first\t" + row.first_login.isoformat() + "\tlast\t" + row.last_activity.isoformat()
# Create new column 'age_on_platform' which has the corresponding value in date type format
user_data_to_clean["age_on_platform"] = user_data_to_clean["last_activity"] - user_data_to_clean["first_login"]
# Check the result in first few rows
user_data_to_clean["age_on_platform"].head(5)
# Lets check the health of the data set
user_data_to_clean.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 372 entries, 0 to 371
Data columns (total 19 columns):
user_id 371 non-null float64
num_modules_consumed 69 non-null float64
num_glucose_tracked 91 non-null float64
num_of_days_steps_tracked 120 non-null float64
num_of_days_food_tracked 78 non-null float64
num_of_days_weight_tracked 223 non-null float64
insulin_a1c_count 47 non-null float64
cholesterol_count 15 non-null float64
hemoglobin_count 0 non-null float64
watching_videos 97 non-null float64
weight 372 non-null float64
height 372 non-null int64
bmi 372 non-null int64
age 372 non-null int64
gender 372 non-null object
has_diabetes 39 non-null float64
first_login 372 non-null datetime64[ns]
last_activity 302 non-null datetime64[ns]
age_on_platform 302 non-null timedelta64[ns]
dtypes: datetime64[ns](2), float64(12), int64(3), object(1), timedelta64[ns](1)
memory usage: 55.3+ KB
###Markdown
The second column of the above table gives the number of non-null values in the respective column. As is visible for the columns of interest to us, e.g. *num_modules_consumed* has ONLY 69 values out of a possible 371 total
###Code
# Lets remove all columns from the data set that do not have to be imputed -
user_data_to_impute = user_data_to_clean.drop(["user_id", "watching_videos", "num_of_days_steps_tracked", "num_of_days_weight_tracked", "insulin_a1c_count", "weight", "height", "bmi", "age", "gender", "has_diabetes", "first_login", "last_activity", "age_on_platform", "hemoglobin_count", "cholesterol_count"], 1 )
user_data_to_impute.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 372 entries, 0 to 371
Data columns (total 3 columns):
num_modules_consumed 69 non-null float64
num_glucose_tracked 91 non-null float64
num_of_days_food_tracked 78 non-null float64
dtypes: float64(3)
memory usage: 8.8 KB
###Markdown
The next 3 cells describe the steps to impute data using the KNN strategy; sadly this is not working well for our data set! One possible reason could be that the columns are too sparse to find a neighbour. In future this method could be combined with the mean imputation method, so the values not covered by KNN get replaced with mean values; a rough sketch of that combination is given below. [Github repo and Documentation for fancyimpute](https://github.com/hammerlab/fancyimpute)
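A minimal sketch of the combined strategy (hypothetical: it assumes the `KNN(k).complete(X)` API from the commented cell below, and that cells KNN cannot fill come back as 0, as observed further down):

```python
# Hypothetical sketch: KNN imputation with a column-mean fallback.
# `X_incomplete` is a float array with NaN marking the missing values.
import numpy as np
from fancyimpute import KNN  # newer fancyimpute versions use .fit_transform() instead of .complete()

def knn_then_mean(X_incomplete, k=5):
    originally_missing = np.isnan(X_incomplete)
    X_filled = KNN(k=k).complete(X_incomplete)         # KNN imputation, as in the cell below
    col_means = np.nanmean(X_incomplete, axis=0)       # per-column means of the observed values
    unfilled = originally_missing & (X_filled == 0)    # cells KNN reported as 0 (could not impute)
    X_filled[unfilled] = col_means[np.where(unfilled)[1]]
    return X_filled
```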
###Code
# Import Imputation method KNN
##from fancyimpute import KNN
# First lets convert the Pandas Dataframe into a Numpy array. We do this since the data frame needs to be transposed,
# which is only possible if the format is an Numpy array.
##user_data_to_impute_np_array = user_data_to_impute.as_matrix()
# Lets Transpose it
##user_data_to_impute_np_array_transposed = user_data_to_impute_np_array.T
# Run the KNN method on the data. function usage X_filled_knn = KNN(k=3).complete(X_incomplete)
##user_data_imputed_knn_np_array = KNN(k=5).complete(user_data_to_impute_np_array_transposed)
###Output
_____no_output_____
###Markdown
The above 3 steps are for KNN-based imputation, which did not work well: as visible, 804 items could not be imputed and got replaced with zero. Let's use a simpler method that is provided by Scikit-Learn itself
###Code
# Lets use simpler method that is provided by Scikit Learn itself
# import the function
from sklearn.preprocessing import Imputer
# Create an object of class Imputer, with the relvant parameters
imputer_object = Imputer(missing_values='NaN', strategy='mean', axis=0, copy=False)
# Impute the data and save the generated Numpy array
user_data_imputed_np_array = imputer_object.fit_transform(user_data_to_impute)
###Output
_____no_output_____
###Markdown
the *user_data_imputed_np_array* is a NumPy array, we need to convert it back to Pandas data frame
###Code
# create a list of tuples, with the column name and data type for all existing columns in the Numpy array.
# exact order of columns has to be maintained
column_names_of_imputed_np_array = ['num_modules_consumed', 'num_glucose_tracked', 'num_of_days_food_tracked']
# create the Pandas data frame from the Numpy array
user_data_imputed_data_frame = pd.DataFrame(user_data_imputed_np_array, columns=column_names_of_imputed_np_array)
# Check if the data frame created now is proper
user_data_imputed_data_frame.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 372 entries, 0 to 371
Data columns (total 3 columns):
num_modules_consumed 372 non-null float64
num_glucose_tracked 372 non-null float64
num_of_days_food_tracked 372 non-null float64
dtypes: float64(3)
memory usage: 8.8 KB
###Markdown
Now let's add back the useful columns that we had removed from the data set; these are: *last_activity*, *age_on_platform*, *watching_videos*
###Code
# using the Series contructor from Pandas
user_data_imputed_data_frame['last_activity'] = pd.Series(user_data_to_clean['last_activity'])
user_data_imputed_data_frame['age_on_platform'] = pd.Series(user_data_to_clean['age_on_platform'])
# Check if every thing is Ok
user_data_imputed_data_frame.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 372 entries, 0 to 371
Data columns (total 5 columns):
num_modules_consumed 372 non-null float64
num_glucose_tracked 372 non-null float64
num_of_days_food_tracked 372 non-null float64
last_activity 302 non-null datetime64[ns]
age_on_platform 302 non-null timedelta64[ns]
dtypes: datetime64[ns](1), float64(3), timedelta64[ns](1)
memory usage: 14.6 KB
###Markdown
As mentioned in the column description for *watching_videos*, a blank or no value means '0', also known as 'Not watching'. Since Scikit-Learn models can ONLY deal with numerical values, let's convert all blanks to '0'
###Code
# fillna(0) function will fill all blank cells with '0'
user_data_imputed_data_frame['watching_videos'] = pd.Series(user_data_to_clean['watching_videos'].fillna(0))
user_data_imputed_data_frame.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 372 entries, 0 to 371
Data columns (total 6 columns):
num_modules_consumed 372 non-null float64
num_glucose_tracked 372 non-null float64
num_of_days_food_tracked 372 non-null float64
last_activity 302 non-null datetime64[ns]
age_on_platform 302 non-null timedelta64[ns]
watching_videos 372 non-null float64
dtypes: datetime64[ns](1), float64(4), timedelta64[ns](1)
memory usage: 17.5 KB
###Markdown
Finally, the columns *last_activity* and *age_on_platform* have missing values, as evident from the above table. Since this is time data that in my opinion should not be imputed, we will drop/delete the rows where these values are missing.
###Code
# Since only these two columns are having null values, we can run the function *dropna()* on the whole data frame
# All rows with missing data get dropped
user_data_imputed_data_frame.dropna(axis=0, inplace=True)
user_data_imputed_data_frame.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 302 entries, 0 to 370
Data columns (total 6 columns):
num_modules_consumed 302 non-null float64
num_glucose_tracked 302 non-null float64
num_of_days_food_tracked 302 non-null float64
last_activity 302 non-null datetime64[ns]
age_on_platform 302 non-null timedelta64[ns]
watching_videos 302 non-null float64
dtypes: datetime64[ns](1), float64(4), timedelta64[ns](1)
memory usage: 16.5 KB
###Markdown
Labelling the Raw data Now comes the code that will, based on the rules mentioned below, label the provided data so it can be used as training data for the classifier. This table defines the set of rules used to assign labels for the training data:

| label | age_on_platform | last_activity | num_modules_consumed | num_of_days_food_tracked | num_glucose_tracked | watching_videos |
|---|---|---|---|---|---|---|
| Generic (ignore) | Converted to days | to be measured from 16 Apr | Good >= 3/week, Bad < 3/week | Good >= 30, Bad < 30 | Good >= 4/week, Bad < 4/week | Good = 1, Bad = 0 |
| good_new_user = **1** | >= 30 days && < 180 days | <= 2 days | >= 12 | >= 20 | >= 16 | Good = 1 |
| bad_new_user = **2** | >= 30 days && < 180 days | > 2 days | < 12 | < 20 | < 16 | Bad = 0 |
| good_mid_term_user = **3** | >= 180 days && < 360 days | <= 7 days | >= 48 | >= 30 | >= 96 | Good = 1 |
| bad_mid_term_user = **4** | >= 180 days && < 360 days | > 7 days | < 48 | < 30 | < 96 | Bad = 0 |
| good_long_term_user = **5** | >= 360 days | <= 14 days | >= 48 | >= 30 | >= 192 | Good = 1 |
| bad_long_term_user = **6** | >= 360 days | > 14 days | < 48 | < 30 | < 192 | Bad = 0 |
###Code
# This if else section will bin the rows based on the critiria for labels mentioned in the table above
user_data_imputed_data_frame_labeled = user_data_imputed_data_frame
for index, row in user_data_imputed_data_frame.iterrows():
if row["age_on_platform"] >= np.timedelta64(30, 'D') and row["age_on_platform"] < np.timedelta64(180, 'D'):
if row['last_activity'] <= np.datetime64(2, 'D') and\
row['num_modules_consumed'] >= 12 and\
row['num_of_days_food_tracked'] >= 20 and\
row['num_glucose_tracked'] >= 16 and\
row['watching_videos'] == 1:
user_data_imputed_data_frame_labeled.set_value(index, 'label', 1)
else:
user_data_imputed_data_frame_labeled.set_value(index, 'label', 2)
elif row["age_on_platform"] >= np.timedelta64(180, 'D') and row["age_on_platform"] < np.timedelta64(360, 'D'):
if row['last_activity'] <= np.datetime64(7, 'D') and\
row['num_modules_consumed'] >= 48 and\
row['num_of_days_food_tracked'] >= 30 and\
row['num_glucose_tracked'] >= 96 and\
row['watching_videos'] == 1:
user_data_imputed_data_frame_labeled.set_value(index, 'label', 3)
else:
user_data_imputed_data_frame_labeled.set_value(index, 'label', 4)
elif row["age_on_platform"] >= np.timedelta64(360, 'D'):
if row['last_activity'] <= np.datetime64(14, 'D') and\
row['num_modules_consumed'] >= 48 and\
row['num_of_days_food_tracked'] >= 30 and\
row['num_glucose_tracked'] >= 192 and\
row['watching_videos'] == 1:
user_data_imputed_data_frame_labeled.set_value(index, 'label', 5)
else:
user_data_imputed_data_frame_labeled.set_value(index, 'label', 6)
else:
user_data_imputed_data_frame_labeled.set_value(index, 'label', 0)
user_data_imputed_data_frame_labeled['label'].unique()
###Output
_____no_output_____
###Markdown
The output above shows that only **2, 4, 6, 0** were assigned as labels, which means there are no good users in any of the three **new, mid, long-term** categories. Consequently, either I change the label selection model or get better data (which has good users) :P
###Code
# Look at basic info for this Labeled data frame
user_data_imputed_data_frame_labeled.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 302 entries, 0 to 370
Data columns (total 7 columns):
num_modules_consumed 302 non-null float64
num_glucose_tracked 302 non-null float64
num_of_days_food_tracked 302 non-null float64
last_activity 302 non-null datetime64[ns]
age_on_platform 302 non-null timedelta64[ns]
watching_videos 302 non-null float64
label 302 non-null float64
dtypes: datetime64[ns](1), float64(5), timedelta64[ns](1)
memory usage: 18.9 KB
###Markdown
One major limitation of Scikit-Learn is the data types it can handle as features: the data type of *last_activity* is *datetime64* and that of *age_on_platform* is *timedelta64*. These we need to convert to a numerical type.
###Code
# Lets start with the column last_activity
# ts = (dt64 - np.datetime64('1970-01-01T00:00:00Z')) / np.timedelta64(1, 's')
# This function takes a datetime64 value and converts it into float value that represents time from epoch
def convert_datetime64_to_from_epoch(dt64):
ts = (dt64 - np.datetime64('1970-01-01T00:00:00Z')) / np.timedelta64(1, 's')
return ts
# Lets apply this function on last_activity column
user_data_imputed_data_frame_labeled_datetime64_converted = user_data_imputed_data_frame_labeled
user_data_imputed_data_frame_labeled_datetime64_converted['last_activity'] = user_data_imputed_data_frame_labeled['last_activity'].apply(convert_datetime64_to_from_epoch)
user_data_imputed_data_frame_labeled_datetime64_converted.info()
# Now its time to convert the timedelta64 column named age_on_platform
def convert_timedelta64_to_sec(td64):
ts = (td64 / np.timedelta64(1, 's'))
return ts
user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted = user_data_imputed_data_frame_labeled_datetime64_converted
user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted['age_on_platform'] = user_data_imputed_data_frame_labeled_datetime64_converted['age_on_platform'].apply(convert_timedelta64_to_sec)
user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted.info()
user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted.describe()
# Save the labeled data frame as excel file
from pandas import options
options.io.excel.xlsx.writer = 'xlsxwriter'
user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted.to_excel('user_data_imputed_data_frame_labeled.xlsx')
###Output
_____no_output_____
###Markdown
Training and Testing the ML algorithm Let's move on to the thing we have all been waiting for: model training and testing. For training the model we need two lists: one list with only the Labels column, and a second list which is actually a list of lists, with each sub-list containing the full row of feature columns. Before we do anything we need to separate out 30% of the data for testing purposes; the manual row-based split used below does this, and a quick scikit-learn alternative is sketched right after this paragraph.
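As a side note, the same 70/30 split (plus shuffling) could be obtained with scikit-learn directly. This is only a sketch of an alternative and is not what the cells below do:

```python
# Hypothetical alternative to the manual row slicing used below.
# On older scikit-learn versions this import lives in sklearn.cross_validation.
from sklearn.model_selection import train_test_split

df_all = user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted
X_all = df_all.drop(['label'], 1)       # feature columns
y_all = df_all['label'].astype(int)     # labels as integers
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, test_size=0.3, random_state=42)
```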
###Code
# Total number of rows is 302; 30% of that is ~90
user_data_imputed_data_frame_labeled_training = user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted.ix[90:]
user_data_imputed_data_frame_labeled_training.info()
# Lets first make our list of Labels column
#for index, row in user_data_imputed_data_frame.iterrows():
label_list = user_data_imputed_data_frame_labeled_training['label'].values.tolist()
# Check data type of elements of the list
type(label_list[0])
# Lets convert the data type of all elements of the list to int
label_list_training = map(int, label_list)
# Check data type of elements of the list
type(label_list_training[5])
###Output
_____no_output_____
###Markdown
Here we drop the Labels column to build the feature set. Note that the *datetime64* & *timedelta64* columns were already converted to plain numbers above; the issue is that Scikit-Learn methods can only deal with numerical and string features.
###Code
# Now to create the other list of lists with features as elements
# before that we will have to remove the Labels column
user_data_imputed_data_frame_UNlabeled_training = user_data_imputed_data_frame_labeled_training.drop(['label'] ,1)
user_data_imputed_data_frame_UNlabeled_training.info()
# As you may notice, the data type of watching_videos is float, while it should be int
user_data_imputed_data_frame_UNlabeled_training['watching_videos'] = user_data_imputed_data_frame_UNlabeled_training['watching_videos'].apply(lambda x: int(x))
user_data_imputed_data_frame_UNlabeled_training.info()
# Finally lets create the list of list from the row contents
features_list_training = map(list, user_data_imputed_data_frame_UNlabeled_training.values)
###Output
_____no_output_____
###Markdown
Its time to train the model
###Code
from sklearn import tree
classifier = tree.DecisionTreeClassifier() # We create an instance of the Decision tree object
classifier = classifier.fit(features_list_training, label_list_training) # Train the classifier
# Testing data is the first 90 rows
user_data_imputed_data_frame_labeled_testing = user_data_imputed_data_frame_labeled_datetime64_timedelta64_converted.ix[:90]
# take the labels in seprate list
label_list_test = user_data_imputed_data_frame_labeled_testing['label'].values.tolist()
label_list_test = map(int, label_list_test)
# Drop the Label column (the time columns were already converted to numeric values)
user_data_imputed_data_frame_UNlabeled_testing = user_data_imputed_data_frame_labeled_testing.drop(['label'] ,1)
# Check if every thing looks ok
user_data_imputed_data_frame_UNlabeled_testing.info()
# Finally lets create the list of list from the row contents for testing
features_list_test = map(list, user_data_imputed_data_frame_UNlabeled_testing.values)
len(features_list_test)
# the prediction results for the first twenty values of the test data set
print list(classifier.predict(features_list_test[:20]))
# The labels for test data set as labeled by code
print label_list_test[:20]
###Output
[2, 2, 4, 4, 0, 4, 2, 2, 4, 2, 4, 2, 2, 4, 4, 2, 2, 6, 4, 2]
|
notebooks/logging-examples/logging-training-metadata.ipynb | ###Markdown
Logging Training Metadata We can't train a model without a lot of data. Keeping track of where that data is and how to get it can be difficult. ``rubicon_ml`` isn't in the business of storing full training datasets, but it can store metadata about our training datasets on both **projects** (for high level datasource configuration) and **experiments** (for individual model runs). Below, we'll use ``rubicon_ml`` to reference a dataset stored in S3.
###Code
s3_config = {
"region_name": "us-west-2",
"signature_version": "v4",
"retries": {
"max_attempts": 10,
"mode": "standard",
}
}
bucket_name = "my-bucket"
key = "path/to/my/data.parquet"
###Output
_____no_output_____
###Markdown
We could use the following function to pull training data locally from S3. **Note:** We're reading the user's account credentials from an external source rather than exposing them in the ``s3_config`` we created. ``rubicon_ml`` **is not intended for storing secrets**.
###Code
def read_from_s3(config, bucket, key, local_output_path):
import boto3
from botocore.config import Config
config = Config(**config)
# assuming credentials are correct in `~/.aws` or set in environment variables
client = boto3.client("s3", config=config)
with open(local_output_path, "wb") as f:
        client.download_fileobj(bucket, key, f)
###Output
_____no_output_____
###Markdown
But we don't actually need to reach out to S3 for this example, so we'll use a no-op.
###Code
def read_from_s3(config, bucket, key, local_output_path):
return None
###Output
_____no_output_____
###Markdown
Let's create a **project** for the **experiments** we'll run in this example. We'll use in-memory persistence so we don't need to clean up after ourselves when we're done!
###Code
from rubicon_ml import Rubicon
rubicon = Rubicon(persistence="memory")
project = rubicon.get_or_create_project("Storing Training Metadata")
project
###Output
_____no_output_____
###Markdown
Experiment level training metadata Before we create an **experiment**, we'll construct some training metadata to pass along so future collaborators, reviewers, or even future us can reference the same training dataset later.
###Code
training_metadata = (s3_config, bucket_name, key)
experiment = project.log_experiment(
training_metadata=training_metadata,
tags=["S3", "training metadata"]
)
# then run the experiment and log everything to rubicon!
experiment.training_metadata
###Output
_____no_output_____
###Markdown
We can come back any time and use the **experiment's** training metadata to pull the same dataset.
###Code
experiment = project.experiments(tags=["S3", "training metadata"], qtype="and")[0]
training_metadata = experiment.training_metadata
read_from_s3(
training_metadata[0],
training_metadata[1],
training_metadata[2],
"./local_output.parquet",
)
###Output
_____no_output_____
###Markdown
If we're referencing multiple keys within the bucket, we can send a list of training metadata.
###Code
training_metadata = [
(s3_config, bucket_name, "path/to/my/data_0.parquet"),
(s3_config, bucket_name, "path/to/my/data_1.parquet"),
(s3_config, bucket_name, "path/to/my/data_2.parquet"),
]
experiment = project.log_experiment(training_metadata=training_metadata)
experiment.training_metadata
###Output
_____no_output_____
###Markdown
``training_metadata`` is simply a tuple or an array of tuples, so we can decide how to best store our metadata. The config and prefix are the same for each piece of metadata, so no need to duplicate!
###Code
training_metadata = (
s3_config,
bucket_name,
[
"path/to/my/data_0.parquet",
"path/to/my/data_1.parquet",
"path/to/my/data_2.parquet",
],
)
experiment = project.log_experiment(training_metadata=training_metadata)
experiment.training_metadata
###Output
_____no_output_____
###Markdown
Since it's just an array of tuples, we can even use a `namedtuple` to represent the structure we decide to go with.
###Code
from collections import namedtuple
S3TrainingMetadata = namedtuple("S3TrainingMetadata", "config bucket keys")
training_metadata = S3TrainingMetadata(
s3_config,
bucket_name,
[
"path/to/my/data_0.parquet",
"path/to/my/data_1.parquet",
"path/to/my/data_2.parquet",
],
)
experiment = project.log_experiment(training_metadata=training_metadata)
experiment.training_metadata
###Output
_____no_output_____
###Markdown
Projects for complex training metadata Each **experiment** on the *S3 Training Metadata* project below uses the same config to connect to S3, so no need to duplicate it. We'll only log it to the **project**. Then we'll run three experiments, with each one using a different key to load data from S3. We can represent that training metadata as a different ``namedtuple`` and log one to each experiment.
###Code
S3Config = namedtuple("S3Config", "region_name signature_version retries")
S3DatasetMetadata = namedtuple("S3DatasetMetadata", "bucket key")
project = rubicon.get_or_create_project(
"S3 Training Metadata",
training_metadata=S3Config(**s3_config),
)
for key in [
"path/to/my/data_0.parquet",
"path/to/my/data_1.parquet",
"path/to/my/data_2.parquet",
]:
experiment = project.log_experiment(
training_metadata=S3DatasetMetadata(bucket=bucket_name, key=key)
)
# then run the experiment and log everything to rubicon!
###Output
_____no_output_____
###Markdown
Later, we can use the **project** and **experiments** to reconnect to the same datasets!
###Code
project = rubicon.get_project("S3 Training Metadata")
s3_config = S3Config(*project.training_metadata)
print(s3_config)
for experiment in project.experiments():
s3_dataset_metadata = S3DatasetMetadata(*experiment.training_metadata)
print(s3_dataset_metadata)
training_data = read_from_s3(
s3_config._asdict(),
s3_dataset_metadata.bucket,
s3_dataset_metadata.key,
"./local_output.parquet"
)
###Output
S3Config(region_name='us-west-2', signature_version='v4', retries={'max_attempts': 10, 'mode': 'standard'})
S3DatasetMetadata(bucket='my-bucket', key='path/to/my/data_2.parquet')
S3DatasetMetadata(bucket='my-bucket', key='path/to/my/data_0.parquet')
S3DatasetMetadata(bucket='my-bucket', key='path/to/my/data_1.parquet')
|
18.06.21 - Project1/Archive (Delete)/Candidate - Copy.ipynb | ###Markdown
```python
neils_text = []
for x in range(1,5):
    api.user_timeline("neiltyson", page=x)
    for i in range(len(api.user_timeline("neiltyson"))):
        neils_text.append(api.user_timeline("neiltyson")[i]["text"])
neils_text
```
###Code
tesla = []
for x in range(1, 5):
    results = api.search("tesla", rpp=100, page=x)
    for i in range(len(results["statuses"])):
        tesla.append(results["statuses"][i]["text"])
tesla
# general loop
for i in range(len(api.search("travis allen")["statuses"])):
print(api.search("travis allen")["statuses"][i]["created_at"])
print(api.search("travis allen")["statuses"][i]["text"])
print("---------------------------------------------")
# page parameter
for x in range(1, 5):
    results = api.search("travis allen", page=x)["statuses"]
    for i in range(len(results)):
        print(results[i]["created_at"])
        print(results[i]["text"])
        print("---------------------------------------------")
# /api/open/v1/DisasterDeclarationsSummaries
import urllib
###Output
_____no_output_____ |
playbook/tactics/privilege-escalation/T1546.007.ipynb | ###Markdown
T1546.007 - Event Triggered Execution: Netsh Helper DLLAdversaries may establish persistence by executing malicious content triggered by Netsh Helper DLLs. Netsh.exe (also referred to as Netshell) is a command-line scripting utility used to interact with the network configuration of a system. It contains functionality to add helper DLLs for extending functionality of the utility. (Citation: TechNet Netsh) The paths to registered netsh.exe helper DLLs are entered into the Windows Registry at HKLM\SOFTWARE\Microsoft\Netsh.Adversaries can use netsh.exe helper DLLs to trigger execution of arbitrary code in a persistent manner. This execution would take place anytime netsh.exe is executed, which could happen automatically, with another persistence technique, or if other software (ex: VPN) is present on the system that executes netsh.exe as part of its normal functionality. (Citation: Github Netsh Helper CS Beacon)(Citation: Demaske Netsh Persistence) Atomic Tests
###Code
#Import the Module before running the tests.
# Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts.
Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 -Force
###Output
_____no_output_____
###Markdown
Atomic Test 1 - Netsh Helper DLL Registration Netsh interacts with other operating system components using dynamic-link library (DLL) files. **Supported Platforms:** windows Attack Commands: Run with `command_prompt`
```command_prompt
netsh.exe add helper C:\Path\file.dll
```
###Code
Invoke-AtomicTest T1546.007 -TestNumbers 1
###Output
_____no_output_____ |
Code/Ecommerce_Customers.ipynb | ###Markdown
Linear Regression Project You just got some contract work with an Ecommerce company based in New York City that sells clothing online, but they also have in-store style and clothing advice sessions. Customers come in to the store, have sessions/meetings with a personal stylist, then they can go home and order either on a mobile app or website for the clothes they want. The company is trying to decide whether to focus their efforts on their mobile app experience or their website. They've hired you on contract to help them figure it out! Let's get started! Just follow the steps below to analyze the customer data (it's fake, don't worry I didn't give you real credit card numbers or emails). Imports ** Import pandas, numpy, matplotlib, and seaborn. Then set %matplotlib inline **
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
Get the Data We'll work with the Ecommerce Customers csv file from the company. It has Customer info, such as Email, Address, and their color Avatar. Then it also has numerical value columns: * Avg. Session Length: Average session of in-store style advice sessions. * Time on App: Average time spent on App in minutes * Time on Website: Average time spent on Website in minutes * Length of Membership: How many years the customer has been a member. ** Read in the Ecommerce Customers csv file as a DataFrame called customers.**
###Code
df = pd.read_csv('Ecommerce Customers')
###Output
_____no_output_____
###Markdown
**Check the head of customers, and check out its info() and describe() methods.**
###Code
df.head()
df.describe()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 500 entries, 0 to 499
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Email 500 non-null object
1 Address 500 non-null object
2 Avatar 500 non-null object
3 Avg. Session Length 500 non-null float64
4 Time on App 500 non-null float64
5 Time on Website 500 non-null float64
6 Length of Membership 500 non-null float64
7 Yearly Amount Spent 500 non-null float64
dtypes: float64(5), object(3)
memory usage: 31.4+ KB
###Markdown
Exploratory Data Analysis **Let's explore the data!** For the rest of the exercise we'll only be using the numerical data of the csv file.
###Code
sns.set_palette('GnBu_d')
sns.set_style('whitegrid')
sns.jointplot(x='Time on Website',y='Yearly Amount Spent',data=df, palette='GnBu_d')
sns.set_palette('GnBu_d')
sns.set_style('whitegrid')
sns.jointplot(x='Time on App',y='Yearly Amount Spent',data=df, palette='GnBu_d')
###Output
_____no_output_____
###Markdown
** Use jointplot to create a 2D hex bin plot comparing Time on App and Length of Membership.**
###Code
sns.jointplot(x='Time on App',y='Length of Membership', kind= 'hex', data=df)
###Output
_____no_output_____
###Markdown
**Let's explore these types of relationships across the entire data set. Use [pairplot](https://stanford.edu/~mwaskom/software/seaborn/tutorial/axis_grids.html#plotting-pairwise-relationships-with-pairgrid-and-pairplot) to recreate the plot below. (Don't worry about the colors)**
###Code
sns.pairplot(df)
###Output
_____no_output_____
###Markdown
**Create a linear model plot (using seaborn's lmplot) of Yearly Amount Spent vs. Length of Membership. **
###Code
sns.lmplot(x='Length of Membership', y='Yearly Amount Spent',data=df)
###Output
_____no_output_____
###Markdown
Training and Testing Data Now that we've explored the data a bit, let's go ahead and split the data into training and testing sets. ** Set a variable X equal to the numerical features of the customers and a variable y equal to the "Yearly Amount Spent" column. **
###Code
X = df[['Avg. Session Length', 'Time on App','Time on Website', 'Length of Membership']]
y = df['Yearly Amount Spent']
###Output
_____no_output_____
###Markdown
** Use model_selection.train_test_split from sklearn to split the data into training and testing sets. Set test_size=0.3 and random_state=101**
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.3, random_state=101)
###Output
_____no_output_____
###Markdown
Training the Model Now it's time to train our model on our training data! ** Import LinearRegression from sklearn.linear_model **
###Code
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
**Create an instance of a LinearRegression() model named lm.**
###Code
lm = LinearRegression()
###Output
_____no_output_____
###Markdown
** Train/fit lm on the training data.**
###Code
lm.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
**Print out the coefficients of the model**
###Code
print('Coefficients: \n', lm.coef_)
###Output
Coefficients: 
[25.98154972 38.59015875 0.19040528 61.27909654]
###Markdown
Predicting Test Data Now that we have fit our model, let's evaluate its performance by predicting off the test values! ** Use lm.predict() to predict off the X_test set of the data.**
###Code
predictions = lm.predict(X_test)
###Output
_____no_output_____
###Markdown
** Create a scatterplot of the real test values versus the predicted values. **
###Code
plt.scatter(y_test,predictions)
plt.xlabel('Y test')
plt.ylabel('Predicted Y')
###Output
_____no_output_____
###Markdown
Evaluating the Model Let's evaluate our model performance by calculating the residual sum of squares and the explained variance score (R^2). ** Calculate the Mean Absolute Error, Mean Squared Error, and the Root Mean Squared Error. Refer to the lecture or to Wikipedia for the formulas**
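For reference, with $y_i$ the true values, $\hat{y}_i$ the predictions and $n$ the number of test samples, the three metrics are defined as:

$$\text{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|y_i - \hat{y}_i\right| \qquad \text{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 \qquad \text{RMSE} = \sqrt{\text{MSE}}$$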
###Code
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, predictions))
print('MSE:', metrics.mean_squared_error(y_test, predictions))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, predictions)))
###Output
MAE: 7.228148653430828
MSE: 79.81305165097427
RMSE: 8.933815066978624
###Markdown
Residuals You should have gotten a very good model with a good fit. Let's quickly explore the residuals to make sure everything was okay with our data. **Plot a histogram of the residuals and make sure it looks normally distributed. Use either seaborn distplot, or just plt.hist().**
###Code
sns.distplot(y_test-predictions, bins=50)
###Output
/home/argha/.local/lib/python3.8/site-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
warnings.warn(msg, FutureWarning)
###Markdown
Conclusion We still want to figure out the answer to the original question: do we focus our efforts on mobile app or website development? Or maybe that doesn't even really matter, and Membership Time is what is really important. Let's see if we can interpret the coefficients at all to get an idea. ** Recreate the dataframe below. **
###Code
coeffecients = pd.DataFrame(lm.coef_,X.columns)
coeffecients.columns = ['Coeffecient']
coeffecients
###Output
_____no_output_____ |
app/IS18.ipynb | ###Markdown
Automated 3D Reconstruction from Satellite Images _SIAM IS18 MINITUTORIAL - 08/06/2018_ Gabriele Facciolo, Carlo de Franchis, and Enric Meinhardt-Llopis -----------------------------------------------------This tutorial is a hands-on introduction to the manipulation of optical satellite images. The objective is to provide all the tools needed to process and exploit the images for 3D reconstruction. We will present the essential modeling elements needed for building a stereo pipeline for satellite images. This includes the specifics of satellite imaging such as pushbroom sensor modeling, coordinate systems, and localization functions. This notebook is divided in three sections.1. **Coordinate Systems and Geometric Modeling of Optical Satellites.** Introduces geographic coordinates, and sensor models needed to manipulate satellite images. 2. **Epipolar Rectification and Stereo Matching.** Introduces an approximated sensor model which is used to rectify pairs of satellite images and compute correspondences between them.3. **Triangulation and Digital Elevation Models.** Creates a point cloud by triangulating the correspondences then projects them on an UTM reference system.First we setup the tools needed for rest of the notebook. Jupyter notebook usage: press SHIFT+ENTER to run one cell and go to the next one
###Code
# Standard modules used through the notebook
import numpy as np
import matplotlib.pyplot as plt
# Tools specific for this tutorial
# They are in the .py files accompaining this notebook
import vistools # display tools
import utils # IO tools
import srtm4 # SRTM tools
import rectification # rectification tools
import stereo # stereo tools
import triangulation # triangulation tools
from vistools import printbf # boldface print
# Display and interface settings (just for the notebook interface)
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
np.set_printoptions(linewidth=150)
###Output
_____no_output_____
###Markdown
Section 1. Coordinate Systems and Geometric ModelingIn this first section we'll learn:* about geodetic (longitude, latitude) and projected (UTM) coordinates * to manipulate large satellite images * RPC camera model for localization and projection----------------------------------------------------- Coordinate systemsCoordinate reference systems (CRS) provide a standardized way of describing geographic locations.Determining the shape of the earth is the first step in developing a CRS.A natural choice for describing points in 3d relative to the **ellipsoid**, is using [latitude, longitude, and altitude](https://en.wikipedia.org/wiki/World_Geodetic_SystemA_new_World_Geodetic_System:_WGS_84). These are unprojected (or geographic) reference systems. Projected systems, on the other hand, are used for referencing locations on 2drepresentations of the Earth. Geodetic Longitude, Latitude, and WGS84The [World Geodetic System (WGS84)](https://en.wikipedia.org/wiki/World_Geodetic_SystemWGS84) is a standard for use in cartography, geodesy, navigation, GPS. It comprises a standard coordinate system for the Earth, a standard reference ellipsoid to express altitude data, and a gravitational equipotential surface (the geoid) that defines the nominal sea level. - [The geodetic latitude](https://en.wikipedia.org/wiki/Latitude)(usually denoted as φ) is the **angle between the equatorial plane** and a line that is **normal to the reference ellipsoid**.Note that the normal to the ellipsoid does not pass through the center, except at the equator and at the poles. - [The longitude](https://en.wikipedia.org/wiki/Longitude) of a point on Earth's surface is the angle east or west of a reference Greenwich meridian to another meridian that passes through that point. Projections: Mercator and UTMProjections transform the elliptical earth into a flat surface.It is impossible to flatten a round objectwithout distortion. This results in trade-offs between area,direction, shape, and distance. - [**The Mercator projection**](https://en.wikipedia.org/wiki/Mercator_projection) (used in Google maps) is a cylindrical map projection that is conformal so it preserves angles (which is usefull for navigation).The Mercator projection does not preserve areas, but **it is most accurate around the equator, where it is tangent to the globe**. - [**The Universal Transverse Mercator (UTM)**](https://en.wikipedia.org/wiki/Universal_Transverse_Mercator_coordinate_system) system is not a single map projection. The system instead divides the Earth into sixty **zones, each being a six-degree band of longitude**, and uses a secant transverse Mercator projection in each zone. Within an UTM zone the coordinates are expressed as easting and northing.The **easting** coordinate refers to the eastward-measured distance (in meters) from the central meridian of the UTM zone. While the **northing** coordinate refers to the distance to the equator. The northing of a point south of the equator is equal to 10000000m minus its distance from the equator (this way there are no negative coordinates). Data available for this tutorialSince high-resolution WorldView-3 images are not in general freely downloadable (you have to buy them), a [sample set of publicly available images](http://www.jhuapl.edu/pubgeo/satellite-benchmark.html) is provided in a remote folder. The content of that folder can be listed with the `listFD` function of the `utils` module.
###Code
# list the tiff images available in the remote folder
IARPAurl = 'http://menthe.ovh.hw.ipol.im:80/IARPA_data/cloud_optimized_geotif'
myimages = utils.listFD(IARPAurl, 'TIF')
# sort the images by acquisition date
myimages = sorted(myimages, key=utils.acquisition_date)
print('Found {} images'.format(len(myimages)))
# select the two images to start working
idx_a, idx_b = 0, 5
print("Images Used:")
print(myimages[idx_a])
print(myimages[idx_b])
###Output
_____no_output_____
###Markdown
Images geographic footprintsThe longitude, latitude bounding box of a GeoTIFF image is described in its metadata. The `get_image_longlat_polygon` of the `utils` module can read it. Let's use it to display on a map the footprints of the selected images.
###Code
# creates an interactive map and returns a map handle to interact with it.
mymap = vistools.clickablemap(zoom=12)
display(mymap)
# display the footprint polygons of the satellite images
for f in [idx_a, idx_b]:
footprint = utils.get_image_longlat_polygon(myimages[f])
mymap.add_GeoJSON(footprint)
# center the map on the center of the footprint
mymap.center = np.mean(footprint['coordinates'][0][:4], axis=0).tolist()[::-1]
###Output
_____no_output_____
###Markdown
Coordinates of the area of interest (AOI)
###Code
## set the coordinates of the area of interest as a GeoJSON polygon
# Buenos aires AOI
aoi_buenos_aires = {'coordinates': [[[-58.585185, -34.490883],
[-58.585185, -34.48922],
[-58.583104, -34.48922],
[-58.583104, -34.490883],
[-58.585185, -34.490883]]],
'type': 'Polygon'}
# add center field
aoi_buenos_aires['center'] = np.mean(aoi_buenos_aires['coordinates'][0][:4], axis=0).tolist()
# add a polygon and center the map
mymap.add_GeoJSON(aoi_buenos_aires) # this draws the polygon described by aoi
mymap.center = aoi_buenos_aires['center'][::-1] # aoi_buenos_aires['coordinates'][0][0][::-1]
mymap.zoom = 15
###Output
_____no_output_____
###Markdown
Geometric modeling of optical satellites The Rational Polynomial Camera ModelImage vendors usually provide the orientation parameters of the cameras along with the images.To save their customers the tedious task of understanding andimplementing each specific geometric camera model, they provide instead the *localization* and *projection* functions $L$ and $P$ associated to each image.These functions allow converting from image coordinates to coordinateson the globe and back. - The projection function $P:\mathbb{R}^3\to\mathbb{R}^2$,$(\lambda, \theta, h) \mapsto \textbf{x}$ returns the image coordinates, in pixels, of a given 3-spacepoint represented by its spheroidal coordinates in the World GeodeticSystem (WGS 84) identified by itslongitude, latitude andaltitude $h$ (in meters) above the reference ellipsoid. - The localization function $L:\mathbb{R}^3\to\mathbb{R}^2$, $(\textbf{x}, h) \mapsto (\lambda, \theta)$ is itsinverse with respect to the first two components. It takes a point $\textbf{x}= (x, y)^\top$ in the image domain together with an altitude $h$, andreturns the geographic coordinates of the unique 3-space point$\textbf{X} = (\lambda, \theta, h)$.-->***The *Rational Polynomial Coefficient* ($\scriptsize{\text{RPC}}$) camera model is ananalytic description of the projection and localization functions*** [(Baltsavias & Stallmann'92)](http://dx.doi.org/10.3929/ethz-a-004336038), [(Tao & Hu'01)](http://eserv.asprs.org/PERS/2001journal/dec/2001_dec_1347-1357.pdf). Projection andlocalization functions are expressed as ratio of multivariate cubicpolynomials. For example, the latitude component of the localizationfunction for the image point $(x, y)$ at altitude $h$ is\begin{equation}\theta = \frac{\sum_{i=1}^{20} C^{\theta, \tiny{\text{NUM}}}_i \rho_i(x, y, h)}{\sum_{i=1}^{20} C^{\theta, \tiny{\text{DEN}}}_i \rho_i(x, y, h)}\end{equation}where $C^{\theta, \tiny{\text{NUM}}}_i$ (resp.$C^{\theta, \tiny{\text{DEN}}}_i$) is the $i^{\text{th}}$ coefficient of thenumerator (resp. denominator) polynomial and $\rho_{i}$ produces the$i^{\text{th}}$ factor of the three variables cubic polynomial. A cubic polynomial in three variables has 20 coefficients, thus eachcomponent of the localization and projection functions requires 40coefficients. Ten additional parameters specify the scale andoffset for the five variables $x, y, \lambda, \theta$ and $h$. $\scriptsize{\text{RPC}}$ localization and projection functionsare not exact inverses of each other. The errors due toconcatenating the projection and inverse functions are negligible, beingof the order of $10^{-7}$ degrees in longitude and latitude, i.e. about 1 cmon the ground or $\frac{1}{100}$ of pixel in the image. Images RPC coefficientsThe 90 coefficients (20 \* 2 \* 2 + 10) of the RPC projection function associated to each image are stored in the image GeoTIFF header. They can be read with the `rpc_from_geotiff` function of the `utils` module. This function returns an instance of the class `rpc_model.RPCModel` which contains the RPC coefficients and a `projection` method.
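To make the formula above concrete, here is a purely schematic sketch of evaluating one component of an RPC function as a ratio of two cubic polynomials. It is not the `rpc_model` implementation: the coefficient arrays `C_num`, `C_den` are hypothetical, the monomial ordering is illustrative (vendors fix a specific convention), and `x`, `y`, `h` are assumed already normalized with the scale/offset parameters.

```python
import numpy as np

def cubic_monomials(x, y, h):
    # the 20 monomials of a cubic polynomial in three variables (illustrative ordering)
    return np.array([1, x, y, h, x*y, x*h, y*h, x*x, y*y, h*h,
                     x*y*h, x*x*x, x*y*y, x*h*h, x*x*y, y*y*y,
                     y*h*h, x*x*h, y*y*h, h*h*h])

def rpc_component(C_num, C_den, x, y, h):
    # one output component, e.g. the latitude of the localization function
    rho = cubic_monomials(x, y, h)
    return np.dot(C_num, rho) / np.dot(C_den, rho)
```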
###Code
myrpcs = [utils.rpc_from_geotiff(x) for x in myimages]
rpc = myrpcs[idx_a]
print(rpc)
# let's try the projection method
lon, lat = aoi_buenos_aires['center']
x, y = rpc.projection(lon, lat, 0)
print("\n\nThe pixel coordinates (in image idx_a) of our AOI center\n"
"(lon=%.4f, lat=%.4f) at altitude 0 are: (%f, %f)" % (lon, lat, x, y))
###Output
_____no_output_____
###Markdown
**Exercise 1** Complete the implementation of the `crop_aoi` function below. This function crops an area of interest (AOI) defined with geographic coordinates in a GeoTIFF image using its RPC functions.It takes as input arguments:* `geotiff`: path to the input GeoTIFF image file* `aoi`: GeoJSON polygon* `z`: ground altitude with respect to the WGS84 ellipsoidIt returns:* `crop`: a numpy array containing the image crop* `x, y`: integer pixel coordinates of the top left corner of the crop in the input imageTo complete this function you need to use:* `utils.rpc_from_geotiff` to read the RPC coefficients and get an `rpc_model.RPCModel` object* the `projection` method of the `rpc_model.RPCModel` object* `utils.bounding_box2D` to compute a horizontal/vertical rectangular bounding box* `utils.rio_open` to open the image with the `rasterio` package* the `read(window=())` method of a `rasterio` object to read a window of the imageThe `projection` function needs an altitude coordinate `z`, which **is not** contained in the `aoi` GeoJSON polygon. We may **assume that `z` is zero**, or alternatively **get `z` from an external Digital Elevation Model (DEM) such as SRTM**. The SRTM altitude at a given `longitude, latitude` obtained using the `srtm4` module.**The code below calls your `crop_aoi` function** to crop the area selected in the map from image idx_a and displays the crop. The altitude is evaluated using the the `srtm4` function. Vefify that the image corresponds to the area selected above.
###Code
def crop_aoi(geotiff, aoi, z=0):
"""
Crop a geographic AOI in a georeferenced image using its RPC functions.
Args:
geotiff (string): path or url to the input GeoTIFF image file
aoi (geojson.Polygon): GeoJSON polygon representing the AOI
z (float): base altitude with respect to WGS84 ellipsoid (0 by default)
Return:
bbox: x, y, w, h image coordinates of the crop. x, y are the
coordinates of the top-left corner, while w, h are the dimensions
of the crop.
"""
# extract the rpc from the geotiff file
rpc = utils.rpc_from_geotiff(geotiff)
# put the aoi corners in an array
Clonlat = np.array(aoi['coordinates'][0]) # 4 coordinates (lon,lat)
# project Clonlat into the image
# INSERT A LINE FOR COMPUTING THE FOUR x,y IMAGE COORDINATES
# STARTING FROM THE longitude and latitude in: Clonlat[:,0] Clonlat[:,1]
#x, y = rpc.projection(Clonlat[:,0], Clonlat[:,1], z)
# convert the list into array
pts = np.array([x, y]) # all coordinates (pixels)
# compute the bounding box in pixel coordinates
bbox = utils.bounding_box2D(pts.transpose())
x0, y0, w, h = np.round(bbox).astype(int)
# crop the computed bbox from the large GeoTIFF image
with utils.rio_open(geotiff, 'r') as src:
crop = src.read(window=((y0, y0 + h), (x0, x0 + w)))
return crop, x0, y0
# get the altitude of the center of the AOI
lon, lat = aoi_buenos_aires['center']
z = srtm4.srtm4(lon, lat)
# crop the selected AOI in image number 10
crop, x, y = crop_aoi(myimages[idx_a], aoi_buenos_aires, z)
# display the crop
vistools.display_imshow(utils.simple_equalization_8bit(crop))
###Output
_____no_output_____
###Markdown
Localization functionThe _localization_ function is the inverse of the _projection_ function with respect to the image coordinates. It takes as input a triplet `x, y, z`, where `x` and `y` are pixel coordinates and `z` is the altitude of the corresponding 3D point above the WGS84 ellipsoid. It returns the longitude `lon` and latitude `lat` of the 3D point.The code below projects a 3D point on the image, localizes this image point on the ground, and then **computes the distance to the original point**.
###Code
from numpy.linalg import norm as l2norm
# get the altitude of the center of the AOI
z = srtm4.srtm4(lon, lat)
# project a 3D point on the image
x, y = rpc.projection(lon, lat, z)
# localize this image point on the ground
new_lon, new_lat = rpc.localization(x, y, z)
# compute the distance to the original point
print( "Error of the inverse: {} pixels".format( l2norm([new_lon - lon, new_lat - lat]) ) )
###Output
_____no_output_____
###Markdown
Section 2. Epipolar Rectification and Stereo MatchingIn this section we will learn to compute correspondences between a pair of images.These correspondences will be used in the next section for computing 3D models.The basic scheme is the following:1. extract, rotate, rescale, and shear a portion of each image so that epipolar lines are horizontal and coincident 2. apply a standard stereo-matching algorithm such as SGM using a robust matching cost----------------------------------------------------- Epipolar curvesThe following illustration displays the epipolar curve corresponding to a point in the first image. The function samples the epipolar curve of a pair of images by composing the _localization_ function of the first image with the _projection_ function of the second image.**Note that the resulting line is practically a straight line!**
###Code
rectification.trace_epipolar_curve(myimages[37], myimages[38], aoi_buenos_aires, x0=220, y0=200)
###Output
_____no_output_____
###Markdown
Affine approximation of the camera modelLet $P: \mathbb{R}^3\longrightarrow \mathbb{R}^2$ be the _projection_ function. The first order Taylor approximation of $P$ around point $X_0$ is $P(X) = P(X_0) + \nabla P(X_0)(X - X_0)$, which can be rewritten as$$P(X) = \nabla P(X_0)X + T$$with $\nabla P(X_0)$ the jacobian matrix of size (2, 3) and $T = P(X_0) - \nabla P(X_0) X_0$ a vector of size 2. This can be rewritten as a linear operation by using homogeneous coordinates: with $X = (\lambda, \varphi, h, 1)$ the previous formula becomes $P(X) = AX$, where the (3, 4) matrix $A$ is the _affine approximation_ of the RPC _projection_ function $P$ at point $X_0$. The code below calls the `rpc_affine_approximation` function to compute the affine camera matrix approximating the RPC _projection_ function around the center $X_0$ of the area selected in the map. Then it evaluates the approximation error away from the center.
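For intuition, such a matrix could also be built by finite differences of the RPC projection around $X_0$; the sketch below is hypothetical and is not the code of `rectification.rpc_affine_approximation` (the step sizes are arbitrary choices):

```python
import numpy as np

def affine_approx_finite_differences(rpc, lon0, lat0, z0, eps_deg=1e-5, eps_m=1.0):
    # numerical Jacobian of the projection around X0 = (lon0, lat0, z0)
    p0 = np.array(rpc.projection(lon0, lat0, z0))
    J = np.empty((2, 3))
    J[:, 0] = (np.array(rpc.projection(lon0 + eps_deg, lat0, z0)) - p0) / eps_deg
    J[:, 1] = (np.array(rpc.projection(lon0, lat0 + eps_deg, z0)) - p0) / eps_deg
    J[:, 2] = (np.array(rpc.projection(lon0, lat0, z0 + eps_m)) - p0) / eps_m
    T = p0 - J @ np.array([lon0, lat0, z0])
    # 3x4 affine camera matrix acting on homogeneous coordinates (lon, lat, h, 1)
    return np.vstack([np.column_stack([J, T]), [0, 0, 0, 1]])
```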
###Code
# get the altitude of the center of the AOI
lon, lat = aoi_buenos_aires['center']
z = srtm4.srtm4(lon, lat)
# compute the affine projection matrix
A = rectification.rpc_affine_approximation(rpc, (lon, lat, z)) # affine projection matrix for first image
# approximation error at the center
err = l2norm( (A @ [lon, lat, z, 1])[:2] - np.array(rpc.projection(lon, lat, z)) )
print("Error at the center: {} pixels".format(err))
# compute the projection in the image
x, y = rpc.projection(lon, lat, z)
lon1, lat1 = rpc.localization(x + 500, y + 500, z)
# approximation error at center +500,+500
err = l2norm( (A @ [lon1, lat1, z, 1])[:2] - np.array(rpc.projection(lon1, lat1, z)) )
print("Error away from the center: {} pixels".format(err))
###Output
_____no_output_____
###Markdown
Affine rectificationThe operation of resampling a pair of images such that the epipolar lines become horizontal and aligned is called _stereo rectification_ or _epipolar resampling_. Using the affine camera approximation, this rectification reduces to computing two planar affine transformations that map the epipolar lines to a set of matching horizontal lines.The code below defines the function `rectify_aoi` that computes two rectifying affine transforms for the two images. The affine transforms are composed of a rotation and a zoom (to ensure aligned horizontal epipolar lines) plus an extra affine term to ensure that the ground (horizontal plane at altitude `z`) is registered. An extra translation ensures that the rectified images contain the whole area of interest and nothing more.The function `rectify_aoi` then resamples the two images according to the rectifying affine transforms, and computes sift keypoint matches to estimate the disparity range. This will be needed as an input for the stereo-matching algorithm in the next section.The rectified images are displayed in a gallery. Flip between the images to see how the buildings move!
###Code
def rectify_aoi(file1, file2, aoi, z=None):
"""
Args:
file1, file2 (strings): file paths or urls of two satellite images
aoi (geojson.Polygon): area of interest
z (float, optional): base altitude with respect to WGS84 ellipsoid. If
None, z is retrieved from srtm.
Returns:
rect1, rect2: numpy arrays with the images
S1, S2: transformation matrices from the coordinate system of the original images
disp_min, disp_max: horizontal disparity range
P1, P2: affine rpc approximations of the two images computed during the rectification
"""
# read the RPC coefficients
rpc1 = utils.rpc_from_geotiff(file1)
rpc2 = utils.rpc_from_geotiff(file2)
# get the altitude of the center of the AOI
if z is None:
lon, lat = np.mean(aoi['coordinates'][0][:4], axis=0)
z = srtm4.srtm4(lon, lat)
# compute rectifying affine transforms
S1, S2, w, h, P1, P2 = rectification.rectifying_affine_transforms(rpc1, rpc2, aoi, z=z)
# compute sift keypoint matches
q1, q2 = rectification.sift_roi(file1, file2, aoi, z)
# transform the matches to the domain of the rectified images
q1 = utils.points_apply_homography(S1, q1)
q2 = utils.points_apply_homography(S2, q2)
# CODE HERE: insert a few lines to correct the vertical shift
y_shift = 0
#y_shift = np.median(q2 - q1, axis=0)[1]
S2 = rectification.matrix_translation(-0, -y_shift) @ S2
# rectify the crops
rect1 = rectification.affine_crop(file1, S1, w, h)
rect2 = rectification.affine_crop(file2, S2, w, h)
# disparity range bounds
kpts_disps = (q2 - q1)[:, 0]
disp_min = np.percentile(kpts_disps, 2)
disp_max = np.percentile(kpts_disps, 100 - 2)
return rect1, rect2, S1, S2, disp_min, disp_max, P1, P2
rect1, rect2, S1, S2, disp_min, disp_max, P1, P2 = rectify_aoi(myimages[idx_a],
myimages[idx_b],
aoi_buenos_aires, z=14)
# display the rectified crops
vistools.display_gallery([utils.simple_equalization_8bit(rect1),
utils.simple_equalization_8bit(rect2)])
###Output
_____no_output_____
###Markdown
The rectification above has failed! The images are not "vertically aligned" **Exercise 2** Improve the implementation of the `rectify_aoi` function above so that it corrects the vertical misalignment observed in this rectified pair. Use the SIFT keypoint matches to estimate the required vertical correction.After correcting the rectification you should see only horizontal displacements!The relative pointing error is particularly visible in image pairs (0, 5) and (0, 11). In other image pairs, such as (27, 28), the error is very small and almost invisible.**The corrected stereo-rectified pairs of image crops will be the input for the stereo matching algorithm.** Stereo matching Stereo matching computes the correspondences between a pair of rectified images. We use the [Semi Global Matching (SGM) algorithm (Hirschmüller'06)](https://ieeexplore.ieee.org/document/1467526/). SGM is an approximate energy minimization algorithm based on Dynamic Programming. Two critical components of the matching algorithm are:* **Choice of matching cost.** The usual squared differences cost (sd) is not robust to illumination changes or artifacts often present in satellite images. For this reason the Hamming distance between [Census Transforms (Zabih & Woodfill'94)](https://link.springer.com/chapter/10.1007/BFb0028345) is preferred. * **Disparity post-processing.** To remove spurious matches the disparity map must be filtered: first by applying a left-right consistency test, then by removing speckles (small connected disparity components whose disparity is inconsistent with their neighborhood).The function ```compute_disparity_map(im1, im2, dmin, dmax, cost='census', lam=10)```computes disparity maps from two rectified images (`im1`, `im2`) using SGM, `cost` selects the matching cost (sd or census), and the result is filtered for mismatches using left-right and speckle filters. The code below calls the `stereo.compute_disparity_map` function and compares the results obtained with `sd` and `census` costs, with and without filtering. **From now on we use a different image pair (idx_a, idx_b) as it yields more striking results.**
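As a hedged illustration of the census idea only (not the implementation inside `stereo.compute_disparity_map`), the sketch below builds a census descriptor per pixel and compares two descriptors with the Hamming distance:

```python
import numpy as np

def census_transform(img, radius=2):
    # Each pixel is described by a bit vector recording whether each neighbour
    # in a (2*radius+1)^2 window is darker than the centre pixel.
    bits = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            bits.append((shifted < img).astype(np.uint8))
    return np.stack(bits, axis=-1)  # shape (h, w, n_bits)

def census_cost(c1, c2):
    # Hamming distance between census descriptors: only the ordering of
    # intensities matters, which makes the cost robust to illumination changes.
    return np.sum(c1 != c2, axis=-1)
```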
###Code
#### select a new pair of images (but the same aoi)
idx_a=37
idx_b=38
aoi = aoi_buenos_aires
# crop and rectify the images
rect1, rect2, S1, S2, dmin, dmax, PA, PB = rectification.rectify_aoi(myimages[idx_a],
myimages[idx_b],
aoi)
# add some margin to the estimated disparity range
dmin, dmax = dmin-20, dmax+20
# EXTRA: set True if you want to try with a standard stereo pair
if False:
dmin, dmax = -60,0
rect1=utils.readGTIFF('data/im2.png')
rect2=utils.readGTIFF('data/im6.png')
# compute left and right disparity maps comparing SD and CENSUS
print('Disparity range: [%d, %d]'%(dmin,dmax))
lambdaval=10
LRSsd, dLsd, _ = stereo.compute_disparity_map(rect1,rect2,dmin,dmax,cost='sd', lam=lambdaval*10)
LRS , dL , _ = stereo.compute_disparity_map(rect1,rect2,dmin,dmax,cost='census', lam=lambdaval)
# compare census and sd costs, with and without filtering
print('Comparison of census and sd costs, with and without filtering')
vistools.display_gallery([utils.simple_equalization_8bit(LRS),
utils.simple_equalization_8bit(LRSsd),
utils.simple_equalization_8bit(dL),
utils.simple_equalization_8bit(dLsd),
utils.simple_equalization_8bit(rect1),
utils.simple_equalization_8bit(rect2)
],
['census filtered', 'sd filtered','census',
'sd','ref','sec'])
# display the main result
vistools.display_imshow(LRS, cmap='jet')
###Output
_____no_output_____
###Markdown
Section 3. Triangulation and Digital Elevation ModelsThe extraction of 3D points from image correspondences is called *triangulation* (because the position of a point is found by trigonometry) or *intersection* (because it corresponds to the intersection of two light rays in space). The goal of this section is to produce a 3D point cloud from two satellite images, and then project it on a geographic grid to produce a 2.5D model.In the context of geographic imaging, these 2.5D models are called *digital elevation models* (DEM).The plan is the following1. triangulate a single 3D point from one correspondence between two images2. triangulate a dense set of 3D points from two images3. project a 3D point cloud into a DEM----------------------------------------------------- Triangulation of a single pointA pixel **p** in a satellite image *A* defines a line in space by means of the localization function $h\mapsto L_A(\mathbf{p},h)$. This line is parametrized by the height *h*, and it is the set of all points in space that are projected into the pixel **p**:$$P_A(L_A(\mathbf{p},h))=\mathbf{p} \qquad \forall h\in\mathbb{R}$$Now, when a point $\mathbf{x}=(x,y,h)$ in space is projected into pixels **p**, **q** on images *A*,*B*, we will have the relations$$\begin{cases}P_A(\mathbf{x})=\mathbf{p} \\P_B(\mathbf{x})=\mathbf{q} \\\end{cases}$$Since **p** and **q** are pixel coordinates in the image domains, this is a system of four equations. We can find the 3D point **x** from the correspondence $\mathbf{p}\sim\mathbf{q}$ by solving this system. Notice that the system is over-determined, so in practice it will not have an exact solution and we may have to find a "solution" that has minimal error in some sense (e.g., least-squares).Another way to express the same relationship is via the localization functions:$$L_A(\mathbf{p},h)=L_B(\mathbf{q},h)$$Now this is a system of two equations and a single unknown $h$. This system can be interpreted as the intersection of two lines in 3D space.In practice, the projection and localization functions are approximated using affine maps, thus all the systems above are linear and overdetermined and can be solved readily using the Moore-Penrose pseudo-inverse (or, equivalently, least squares). This algorithm is implemented in the function ``triangulation_affine`` on file ``triangulation.py``. As a sanity check, we start by triangulating an artificial point:
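A hedged sketch of the least-squares step (not the code of `triangulation.triangulation_affine` itself), assuming `A` and `B` are the 3x4 affine camera matrices and `p`, `q` the matched pixel coordinates:

```python
import numpy as np

def triangulate_affine_sketch(A, B, p, q):
    # Stack the four image equations P_A(X) = p and P_B(X) = q and solve the
    # overdetermined linear system for X = (lon, lat, h) in the least-squares sense.
    M = np.vstack([A[:2, :3], B[:2, :3]])          # coefficients of (lon, lat, h)
    b = np.hstack([np.asarray(p) - A[:2, 3],
                   np.asarray(q) - B[:2, 3]])      # right-hand side minus the translation terms
    X, residuals, rank, _ = np.linalg.lstsq(M, b, rcond=None)
    return X                                       # (lon, lat, h) minimizing the reprojection error
```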
###Code
# select a point in the center of the region of interest
Ra = myrpcs[idx_a]
Rb = myrpcs[idx_b]
x = [Ra.lon_offset, Ra.lat_offset, Ra.alt_offset]
print("x = %s"%(x))
# project the point x into each image
p = Ra.projection(*x)
q = Rb.projection(*x)
print("p = %s\nq = %s"%(p, q))
# extract the affine approximations of each projection function
Pa = rectification.rpc_affine_approximation(Ra, x)
Pb = rectification.rpc_affine_approximation(Rb, x)
# triangulate the correspondence (p,q)
lon, lat, alt, err = triangulation.triangulation_affine(Pa, Pb, p[0], p[1], q[0], q[1])
print("lon, lat, alt, err = %s, %s, %s, %s"%(lon, lat, alt, err))
###Output
_____no_output_____
###Markdown
Notice that the point **x** is recovered exactly and the error (given in meters) is essentially zero.Now, we select the same point, by hand, in two different images
###Code
# extract a crop of each image, and SAVE THE CROP OFFSETS
crop_a, offx_a, offy_a = crop_aoi(myimages[idx_a], aoi_buenos_aires, x[2])
crop_b, offx_b, offy_b = crop_aoi(myimages[idx_b], aoi_buenos_aires, x[2])
print("x0_a, y0_a = %s, %s"%(offx_a, offy_a))
print("x0_b, y0_b = %s, %s"%(offx_b, offy_b))
# coordinates at the top of the tower, chosen by visual inspection of the images below
p = [179, 274]
q = [188, 296]
# plot each image with the selected point as a red dot
_,f = plt.subplots(1, 2, figsize=(13,10))
f[0].imshow(np.log(crop_a.squeeze()), cmap="gray")
f[1].imshow(np.log(crop_b.squeeze()), cmap="gray")
f[0].plot(*p, "ro")
f[1].plot(*q, "ro")
# extract a base point for affine approximations
base_lon, base_lat = aoi_buenos_aires["center"]
base_z = srtm4.srtm4(base_lon,base_lat)
base_x = [base_lon, base_lat, base_z]
# extract the affine approximations of each projection function
Pa = rectification.rpc_affine_approximation(myrpcs[idx_a], base_x)
Pb = rectification.rpc_affine_approximation(myrpcs[idx_b], base_x)
# triangulate the top of the tower (notice that the OFFSETS of each point are corrected)
triangulation.triangulation_affine(Pa, Pb, p[0] + offx_a, p[1] + offy_a, q[0] + offx_b, q[1] + offy_b)
###Output
_____no_output_____
###Markdown
Thus, the height of the tower is 52 meters above the Earth ellipsoid. Notice that to obtain a meaningful result, the offset of the crop has to be corrected. Triangulation of many pointsIn practice, instead of finding the correspondences by hand we can use a stereo correlator on the rectified images. In that case, the disparities have to be converted back to coordinates in the original image domain, by applying the inverse of the rectification map. This is what the function ``triangulate_disparities`` does:
###Code
def triangulate_disparities(dmap, rpc1, rpc2, S1, S2, PA, PB,):
"""
Triangulate a disparity map
Arguments:
dmap : a disparity map between two rectified images
rpc1, rpc2 : calibration data of each image
S1, S2 : rectifying affine maps (from the domain of the original, full-size images)
PA, PB : the affine approximations of rpc1 and rpc2 (not always used)
Return:
xyz : a matrix of size Nx3 (where N is the number of finite disparites in dmap)
this matrix contains the coordinates of the 3d points
in "lon,lat,h" or "easting,northing,h"
"""
from utils import utm_from_lonlat
# 1. unroll all the valid (finite) disparities of dmap into a vector
m = np.isfinite(dmap.flatten())
x = np.argwhere(np.isfinite(dmap))[:,1] # attention to order of the indices
y = np.argwhere(np.isfinite(dmap))[:,0]
d = dmap.flatten()[m]
# 2. for all disparities
# 2.1. produce a pair of points in the original image domain by composing with S1 and S2
p = np.linalg.inv(S1) @ np.vstack( (x+0, y, np.ones(len(d))) )
q = np.linalg.inv(S2) @ np.vstack( (x+d, y, np.ones(len(d))) )
# 2.2. triangulate the pair of image points to find a 3D point (in UTM coordinates)
lon, lat, h, err = triangulation.triangulation_affine(PA, PB, p[0,:], p[1,:], q[0,:], q[1,:])
# 2.3. append points to the output vector
# "a meter is one tenth-million of the distance from the North Pole to the Equator"
# cf. Lagrange, Laplace, Monge, Condorcet
factor = 1 # 1e7 / 90.0
xyz = np.vstack((lon*factor, lat*factor, h)).T
#east, north = utm_from_lonlat(lon, lat)
#xyz = np.vstack((east, north, h)).T
return xyz
xyz = triangulate_disparities(LRS, myrpcs[idx_a], myrpcs[idx_b], S1, S2, PA, PB)
xyz
# display the point cloud
display(vistools.display_cloud(xyz))
###Output
_____no_output_____
###Markdown
This point cloud is all wrong! The point cloud must be represented using Cartesian coordinates (each coordinate using the same units) **Exercise 3** Modify the `triangulate_disparities` function to return points with coordinates in a Cartesian system such as UTM. Use the function `utils.utm_from_lonlat`, which can process vectors of longitudes (lon) and latitudes (lat): east, north = utils.utm_from_lonlat(lon, lat) Digital elevation model projectionThe following call projects the point cloud represented in UTM coordinates into a grid to produce a DEM. The algorithm averages all the points that fall into each square of the grid.
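A minimal sketch of this kind of gridding (assuming `xyz` already holds easting, northing and height; the library function `project_cloud_into_utm_grid` may handle the details differently):

```python
import numpy as np

def project_cloud_sketch(xyz, emin, emax, nmin, nmax, resolution=0.5):
    # Average the heights of all the points that fall into each square of the grid.
    ncols = int(np.ceil((emax - emin) / resolution))
    nrows = int(np.ceil((nmax - nmin) / resolution))
    acc = np.zeros((nrows, ncols))
    cnt = np.zeros((nrows, ncols))
    cols = ((xyz[:, 0] - emin) / resolution).astype(int)
    rows = ((nmax - xyz[:, 1]) / resolution).astype(int)   # north points up in the raster
    ok = (cols >= 0) & (cols < ncols) & (rows >= 0) & (rows < nrows)
    np.add.at(acc, (rows[ok], cols[ok]), xyz[ok, 2])
    np.add.at(cnt, (rows[ok], cols[ok]), 1)
    with np.errstate(invalid='ignore'):
        return acc / cnt    # cells that received no points come out as NaN
```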
###Code
emin, emax, nmin, nmax = utils.utm_bounding_box_from_lonlat_aoi(aoi_buenos_aires)
dem = triangulation.project_cloud_into_utm_grid(xyz, emin, emax, nmin, nmax, resolution=0.5)
vistools.display_imshow(dem, cmap='jet')
###Output
_____no_output_____
###Markdown
Bonus Section. Complete Satellite Stereo Pipeline-----------------------------------------------------
###Code
import vistools # display tools
import utils # IO tools
import rectification # rectification tools
import stereo # stereo tools
import triangulation # triangulation tools
%matplotlib inline
# list images and their rpcs
IARPAurl = 'http://menthe.ovh.hw.ipol.im:80/IARPA_data/cloud_optimized_geotif'
myimages = sorted(utils.listFD(IARPAurl, 'TIF'), key=utils.acquisition_date)
myrpcs = [ utils.rpc_from_geotiff(x) for x in myimages ]
print('Found {} images'.format(len(myimages)))
# select an AOI
aoi = {'coordinates': [[[-58.585185, -34.490883],
[-58.585185, -34.48922], [-58.583104, -34.48922],
[-58.583104, -34.490883],[-58.585185, -34.490883]]],
'type': 'Polygon'}
# select an image pair
idx_a, idx_b = 38, 39
# run the whole pipeline
rect1, rect2, S1, S2, dmin, dmax, PA, PB = rectification.rectify_aoi(myimages[idx_a], myimages[idx_b], aoi)
LRS, _, _ = stereo.compute_disparity_map(rect1, rect2, dmin-20, dmax+20 , cost='census')
xyz = triangulation.triangulate_disparities(LRS, myrpcs[idx_a], myrpcs[idx_b], S1, S2, PA, PB)
emin, emax, nmin, nmax = utils.utm_bounding_box_from_lonlat_aoi(aoi)
dem2 = triangulation.project_cloud_into_utm_grid(xyz, emin, emax, nmin, nmax, resolution=0.5)
# display the input, the intermediate results and the output
a, _, _ = utils.crop_aoi(myimages[idx_a], aoi)
b, _, _ = utils.crop_aoi(myimages[idx_b], aoi)
vistools.display_gallery([a/8,b/8]) # show the original images
vistools.display_gallery([rect1/8,rect2/8]) # show the rectified images
vistools.display_imshow(LRS, cmap='jet') # show the disparity map
display(vistools.display_cloud(xyz)) # show the point cloud
vistools.display_imshow(dem2, cmap='jet') # show the DEM
###Output
_____no_output_____ |
notes/MainNB.ipynb | ###Markdown
Conda Environment ManagementManaging `conda` environments in **VSCode** is a pain in the ass because it seems to do whatever it wants.
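A few commands that usually help (hedged sketch; "myenv" is a placeholder environment name) by registering the environment as an explicit Jupyter kernel that VSCode can pick:

```python
# list the known environments, then register one as a named Jupyter kernel
!conda env list
!conda install -n myenv ipykernel -y
!python -m ipykernel install --user --name myenv --display-name "Python (myenv)"
```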
###Code
from pprint import pprint
pprint(notes)
"""
Data Types
"""
my_set = {1, 2, 3}
my_list = [1, 2, 3]
###Output
_____no_output_____ |
notebooks/8-fine-tune-rock-paper-scissors.ipynb | ###Markdown
Demo: Transfer learning=======================*Fraida Fund* In practice, for most machine learning problems, you wouldn’t design or train a convolutional neural network from scratch - you would use an existing model that suits your needs (does well on ImageNet, size is right) and fine-tune it on your own data. Note: for faster training, use Runtime \> Change Runtime Type to run this notebook on a GPU. Import dependencies-------------------
###Code
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
import numpy as np
import platform
import datetime
import os
import math
import random
print('Python version:', platform.python_version())
print('Tensorflow version:', tf.__version__)
print('Keras version:', tf.keras.__version__)
###Output
_____no_output_____
###Markdown
Import data----------- The “rock paper scissors” dataset is available directly from the Tensorflow package. In the cells that follow, we’ll get the data, plot a few examples, and also do some preprocessing.
###Code
import tensorflow_datasets as tfds
(ds_train, ds_test), ds_info = tfds.load(
'rock_paper_scissors',
split=['train', 'test'],
shuffle_files=True,
with_info=True
)
fig = tfds.show_examples(ds_info, ds_train)
classes = np.array(['rock', 'paper', 'scissors'])
###Output
_____no_output_____
###Markdown
Pre-process dataset-------------------
###Code
INPUT_IMG_SIZE = 224
INPUT_IMG_SHAPE = (224, 224, 3)
def preprocess_image(sample):
sample['image'] = tf.cast(sample['image'], tf.float32)
sample['image'] = sample['image'] / 255.
sample['image'] = tf.image.resize(sample['image'], [INPUT_IMG_SIZE, INPUT_IMG_SIZE])
return sample
ds_train = ds_train.map(preprocess_image)
ds_test = ds_test.map(preprocess_image)
fig = tfds.show_examples(ds_train, ds_info, )
###Output
_____no_output_____
###Markdown
We’ll convert to `numpy` format again:
###Code
train_numpy = np.vstack(tfds.as_numpy(ds_train))
test_numpy = np.vstack(tfds.as_numpy(ds_test))
X_train = np.array(list(map(lambda x: x[0]['image'], train_numpy)))
y_train = np.array(list(map(lambda x: x[0]['label'], train_numpy)))
X_test = np.array(list(map(lambda x: x[0]['image'], test_numpy)))
y_test = np.array(list(map(lambda x: x[0]['label'], test_numpy)))
###Output
_____no_output_____
###Markdown
Upload custom test sample-------------------------This code expects a PNG image.
###Code
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
from PIL import Image
# Edit the filename here as needed
filename = 'scissors.png'
# pre-process image
image = Image.open(filename).convert('RGB')
image_resized = image.resize((INPUT_IMG_SIZE, INPUT_IMG_SIZE), Image.BICUBIC)
test_sample = np.array(image_resized)/255.0
test_sample = test_sample.reshape(1, INPUT_IMG_SIZE, INPUT_IMG_SIZE, 3)
import seaborn as sns
plt.figure(figsize=(4,4));
plt.imshow(test_sample.reshape(INPUT_IMG_SIZE, INPUT_IMG_SIZE, 3));
###Output
_____no_output_____
###Markdown
Classify with MobileNetV2------------------------- [Keras Applications](https://keras.io/api/applications/) are pre-trained models with saved weights, that you can download and use without any additional training. Here's a table of the models available as Keras Applications. In this table, the top-1 and top-5 accuracy refer to the model's performance on the ImageNet validation dataset, and depth is the depth of the network including activation layers, batch normalization layers, etc.

| Model | Size | Top-1 Accuracy | Top-5 Accuracy | Parameters | Depth |
|---|---|---|---|---|---|
| Xception | 88 MB | 0.790 | 0.945 | 22,910,480 | 126 |
| VGG16 | 528 MB | 0.713 | 0.901 | 138,357,544 | 23 |
| VGG19 | 549 MB | 0.713 | 0.900 | 143,667,240 | 26 |
| ResNet50 | 98 MB | 0.749 | 0.921 | 25,636,712 | - |
| ResNet101 | 171 MB | 0.764 | 0.928 | 44,707,176 | - |
| ResNet152 | 232 MB | 0.766 | 0.931 | 60,419,944 | - |
| ResNet50V2 | 98 MB | 0.760 | 0.930 | 25,613,800 | - |
| ResNet101V2 | 171 MB | 0.772 | 0.938 | 44,675,560 | - |
| ResNet152V2 | 232 MB | 0.780 | 0.942 | 60,380,648 | - |
| InceptionV3 | 92 MB | 0.779 | 0.937 | 23,851,784 | 159 |
| InceptionResNetV2 | 215 MB | 0.803 | 0.953 | 55,873,736 | 572 |
| MobileNet | 16 MB | 0.704 | 0.895 | 4,253,864 | 88 |
| MobileNetV2 | 14 MB | 0.713 | 0.901 | 3,538,984 | 88 |
| DenseNet121 | 33 MB | 0.750 | 0.923 | 8,062,504 | 121 |
| DenseNet169 | 57 MB | 0.762 | 0.932 | 14,307,880 | 169 |
| DenseNet201 | 80 MB | 0.773 | 0.936 | 20,242,984 | 201 |
| NASNetMobile | 23 MB | 0.744 | 0.919 | 5,326,716 | - |
| NASNetLarge | 343 MB | 0.825 | 0.960 | 88,949,818 | - |
| EfficientNetB0 | 29 MB | - | - | 5,330,571 | - |
| EfficientNetB1 | 31 MB | - | - | 7,856,239 | - |
| EfficientNetB2 | 36 MB | - | - | 9,177,569 | - |
| EfficientNetB3 | 48 MB | - | - | 12,320,535 | - |
| EfficientNetB4 | 75 MB | - | - | 19,466,823 | - |
| EfficientNetB5 | 118 MB | - | - | 30,562,527 | - |
| EfficientNetB6 | 166 MB | - | - | 43,265,143 | - |
| EfficientNetB7 | 256 MB | - | - | 66,658,687 | - |

(A variety of other models is available from other sources - for example, the [Tensorflow Hub](https://tfhub.dev/).) I'm going to use MobileNetV2, which is designed specifically to be small and fast (so it can run on mobile devices!) MobileNets come in various sizes controlled by a multiplier for the depth (number of features), and trained for various sizes of input images. We will use the 224x224 input image size.
###Code
base_model = tf.keras.applications.MobileNetV2(
input_shape=INPUT_IMG_SHAPE
)
base_model.summary()
base_probs = base_model.predict(test_sample)
base_probs.shape
url = tf.keras.utils.get_file(
'ImageNetLabels.txt',
'https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt')
imagenet_classes = np.array(open(url).read().splitlines())[1:]
imagenet_classes.shape
###Output
_____no_output_____
###Markdown
Let’s see what the top 5 predicted classes are for my test image:
###Code
most_likely_classes = np.argsort(base_probs.squeeze())[-5:]
plt.figure(figsize=(10,4));
plt.subplot(1,2,1)
plt.imshow(test_sample.reshape(INPUT_IMG_SIZE, INPUT_IMG_SIZE, 3));
plt.subplot(1,2,2)
p = sns.barplot(x=imagenet_classes[most_likely_classes],y=base_probs.squeeze()[most_likely_classes]);
plt.ylabel("Probability");
p.set_xticklabels(p.get_xticklabels(), rotation=45);
###Output
_____no_output_____
###Markdown
MobileNetV2 is trained on a specific task: classifying the images in the ImageNet dataset by selecting the most appropriate of 1000 class labels. It is not trained for our specific task: classifying an image of a hand as rock, paper, or scissors. Background: fine-tuning a model------------------------------- A typical convolutional neural network looks something like this:  We have a sequence of convolutional layers followed by pooling layers. These layers are *feature extractors* that “learn” key features of our input images. Then, we have one or more fully connected layers followed by a fully connected layer with a softmax activation function. This part of the network is for *classification*. The key idea behind transfer learning is that the *feature extractor* part of the network can be re-used across different tasks and different domains. This is especially useful when we don’t have a lot of task-specific data. We can get a pre-trained feature extractor trained on a lot of data from another task, then train the classifier on task-specific data. The general process is:- Get a pre-trained model, without the classification layer.- Freeze the base model.- Add a classification layer.- Train the model (only the weights in your classification layer will be updated).- (Optional) Un-freeze some of the last layers in your base model.- (Optional) Train the model again, with a smaller learning rate. Train our own classification head--------------------------------- This time, we will get the MobileNetV2 model *without* the fully connected layer at the top of the network.
###Code
import tensorflow.keras.backend as K
K.clear_session()
base_model = tf.keras.applications.MobileNetV2(
input_shape=INPUT_IMG_SHAPE,
include_top=False,
pooling='avg'
)
base_model.summary()
###Output
_____no_output_____
###Markdown
Then, we will *freeze* the model. We're not going to train the MobileNetV2 part of the model, we're just going to use it to extract features from the images.
###Code
base_model.trainable = False
###Output
_____no_output_____
###Markdown
We’ll make a *new* model out of the “headless” already-fitted MobileNetV2, with a brand-new, totally untrained classification head on top:
###Code
model = tf.keras.models.Sequential()
model.add(base_model)
model.add(tf.keras.layers.Dropout(0.5))
model.add(tf.keras.layers.Dense(
units=3,
activation=tf.keras.activations.softmax
))
model.summary()
###Output
_____no_output_____
###Markdown
We’ll compile the model:
###Code
opt = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(
optimizer=opt,
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy']
)
###Output
_____no_output_____
###Markdown
Also, we’ll use data augmentation:
###Code
BATCH_SIZE=256
from keras.preprocessing.image import ImageDataGenerator
train_gen = ImageDataGenerator(rotation_range=8, width_shift_range=0.08, shear_range=0.3,
height_shift_range=0.08, zoom_range=0.08)
train_generator = train_gen.flow(X_train, y_train, batch_size=BATCH_SIZE)
val_gen = ImageDataGenerator()
val_generator = val_gen.flow(X_test, y_test, batch_size=BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Now we can start training our model. Remember, we are *only* updating the weights in the classification head.
###Code
n_epochs = 20
hist = model.fit(
train_generator,
epochs=n_epochs,
steps_per_epoch=X_train.shape[0]//BATCH_SIZE,
validation_data=val_generator,
validation_steps=X_test.shape[0]//BATCH_SIZE
)
loss = hist.history['loss']
val_loss = hist.history['val_loss']
accuracy = hist.history['accuracy']
val_accuracy = hist.history['val_accuracy']
plt.figure(figsize=(14, 4))
plt.subplot(1, 2, 1)
plt.title('Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.plot(loss, label='Training set')
plt.plot(val_loss, label='Test set', linestyle='--')
plt.legend()
plt.subplot(1, 2, 2)
plt.title('Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.plot(accuracy, label='Training set')
plt.plot(val_accuracy, label='Test set', linestyle='--')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Fine-tune model--------------- We have fitted our own classification head, but there's one more step we can attempt to customize the model for our particular application. We are going to “un-freeze” the later parts of the model, and train it for a few more epochs on our data, so that the high-level features are better suited for our specific classification task.
###Code
base_model.trainable = True
len(base_model.layers)
###Output
_____no_output_____
###Markdown
Note that we are *not* creating a new model. We're just going to continue training the model we already started training.
###Code
fine_tune_at = 149
# freeze first layers
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
# use a smaller training rate for fine-tuning
opt = tf.keras.optimizers.Adam(learning_rate=0.00001)
model.compile(
optimizer = opt,
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy']
)
model.summary()
n_epochs_fine = 20
hist_fine = model.fit(
train_generator,
epochs=n_epochs + n_epochs_fine,
initial_epoch=n_epochs,
steps_per_epoch=X_train.shape[0]//BATCH_SIZE,
validation_data=val_generator,
validation_steps=X_test.shape[0]//BATCH_SIZE
)
loss = hist.history['loss'] + hist_fine.history['loss']
val_loss = hist.history['val_loss'] + hist_fine.history['val_loss']
accuracy = hist.history['accuracy'] + hist_fine.history['accuracy']
val_accuracy = hist.history['val_accuracy'] + hist_fine.history['val_accuracy']
plt.figure(figsize=(14, 4))
plt.subplot(1, 2, 1)
plt.title('Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.plot(loss, label='Training set')
plt.plot(val_loss, label='Test set', linestyle='--')
plt.plot([n_epochs, n_epochs], plt.ylim(),label='Fine Tuning',linestyle='dotted')
plt.legend()
plt.subplot(1, 2, 2)
plt.title('Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.plot(accuracy, label='Training set')
plt.plot(val_accuracy, label='Test set', linestyle='dotted')
plt.plot([n_epochs, n_epochs], plt.ylim(), label='Fine Tuning', linestyle='--')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Classify custom test sample---------------------------
###Code
test_probs = model.predict(test_sample)
plt.figure(figsize=(10,4));
plt.subplot(1,2,1)
plt.imshow(test_sample.reshape(INPUT_IMG_SIZE, INPUT_IMG_SIZE, 3));
plt.subplot(1,2,2)
p = sns.barplot(x=classes,y=test_probs.squeeze());
plt.ylabel("Probability");
###Output
_____no_output_____ |
4-Machine_Learning/Feature Engineering/Numericas/Practica/Notas_2_Ejercicios_Feature_Engineering_NumericData - clase.ipynb | ###Markdown
Import necessary dependencies and settings
###Code
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import scipy.stats as spstats
%matplotlib inline
mpl.style.reload_library()
mpl.style.use('classic')
mpl.rcParams['figure.facecolor'] = (1, 1, 1, 0)
mpl.rcParams['figure.figsize'] = [6.0, 4.0]
mpl.rcParams['figure.dpi'] = 100
###Output
_____no_output_____
###Markdown
Raw Measures Values
###Code
# Read Pokemon.csv into a DataFrame
poke_df = pd.read_csv('Ficheros/Pokemon.csv', encoding='latin-1')
# Show the HP, Attack and Defense columns
poke_df[['HP', 'Attack', 'Defense']].head()
# Show a description of those columns
poke_df[['HP', 'Attack', 'Defense']].describe()
###Output
_____no_output_____
###Markdown
CountsLoad the song_views.csv dataset and understand the features.
###Code
# Read song_views.csv into a DataFrame and display it
songs_df = pd.read_csv('Ficheros/song_views.csv')
###Output
_____no_output_____
###Markdown
BinarizationOften raw frequencies or counts may not be relevant for building a model based on the problem which is being solved. For instance if I’m building a recommendation system for song recommendations, I would just want to know if a person is interested or has listened to a particular song. This doesn’t require the number of times a song has been listened to since I am more concerned about the various songs he\she has listened to. In this case, a binary feature is preferred as opposed to a count based feature. Add a column that includes this information, with a new column watched, that takes the value 1, when the listen count is >0
###Code
# in the songs DataFrame, add a column that flags with the value 1 whether each song has ever been listened to
songs_df['listened'] = (songs_df['listen_count'] > 0).astype(int)
# Show a head to check your results
songs_df.head()
###Output
_____no_output_____
###Markdown
Binarization with sklearnLook at the documentation of sklearn preprecessing. Specifically to the Binarizer method. Try to use this method to obtainn a binarization of the song_views dataset.
###Code
# Look up the sklearn preprocessing documentation (specifically, Binarizer)
from sklearn.preprocessing import Binarizer
transformer = Binarizer(threshold=0)
transformer
songs_df['listen_count']
songs_df['listen_count'].values;
binario_sklearn = transformer.transform(songs_df['listen_count'].values.reshape(-1,1))
songs_df['binario_sklearn'] = binario_sklearn
songs_df.head()
###Output
_____no_output_____
###Markdown
RoundingLoad the item_popularity.csv dataset and understand the features.
###Code
item_df = pd.read_csv('Ficheros/item_popularity.csv', encoding='latin-1')
item_df.head()
###Output
_____no_output_____
###Markdown
Include new columns in the dataset showing a popularity scale of 100 and 1000, being those 2 columns integer numbers.
###Code
item_df['pop_100'] = (item_df['pop_percent']*100).round().astype(int)
item_df['pop_1000'] = (item_df['pop_percent']*1000).round().astype(int)
item_df
###Output
_____no_output_____
###Markdown
InteractionsLoad the pokemon dataset. Build a new data set including only 'Attack' and 'Defense'.
###Code
poke_df_ad = poke_df[['Attack', 'Defense']]
poke_df_ad.head()
# We want to know how good a pokemon is, by creating a new column that combines attack and defense
from sklearn.preprocessing import PolynomialFeatures
# poly is an object for building polynomial feature expansions
# we have told it to use degree two
# with fit_transform it learns from the data we pass to it
poly = PolynomialFeatures(2, interaction_only = True)
poly.fit_transform(poke_df_ad);
###Output
_____no_output_____
###Markdown
Build a new dataframe using the PolynomialFeatures method in sklearn.preprocesing. Use a degree 2 polynomic function. Try to understand what is happening.
###Code
# The first column is all ones, to make sure w0 takes part in the computation
# w0 is the intercept
# w0*1 + w1*x0 + w2*x1
# a * 1 + b*x0 + ... (formula of the second-degree polynomial)
poly.get_feature_names_out()
poke_df_ad_poly = pd.DataFrame(poly.fit_transform(poke_df_ad.values), columns = poly.get_feature_names_out())
poke_df_ad_poly.head()
# What we are computing is Attack x Defense, i.e. a measure of the pokemon's strength
###Output
_____no_output_____
###Markdown
Binning Import the dataset in fcc_2016_coder_survey_subset.csv
###Code
# We are only interested in 'ID.x', 'EmploymentField', 'Age', 'Income'
###Output
_____no_output_____
###Markdown
Fixed-width binningCreate an histogram with the Age of the developers
###Code
fcc_survey_df = pd.read_csv('Ficheros/fcc_2016_coder_survey_subset.csv', encoding='latin-1')
fcc_survey_df.head()
from matplotlib.pyplot import hist
hist(fcc_survey_df.Age)
fig, ax = plt.subplots()
fcc_survey_df['Age'].hist(color='#A9C5D3')
ax.set_title('Developer Age Histogram', fontsize=12)
ax.set_xlabel('Age', fontsize= 12)
ax.set_ylabel('Frequency', fontsize=12)
###Output
_____no_output_____
###Markdown
Developer age distribution Binning based on custom rangesCreate two new columns in the dataframe. The first one should include the custom age range. The second one should include the bin_label. You should use the cut() function.``` Age Range : Bin--------------- 0 - 15 : 116 - 30 : 231 - 45 : 346 - 60 : 461 - 75 : 575 - 100 : 6```
###Code
fcc_survey_df['Age_bin_round'] = np.floor(fcc_survey_df['Age']/10)
fcc_survey_df[['ID.x', 'Age', 'Age_bin_round']].iloc[1071:1076]
bin_ranges = [0, 15, 30, 45, 60, 75, 100]
bin_names = [1, 2, 3, 4, 5, 6]
fcc_survey_df['Age_bin_custom_range'] = pd.cut(np.array(fcc_survey_df['Age']),
bins=bin_ranges)
fcc_survey_df['Age_bin_custom_label'] = pd.cut(np.array(fcc_survey_df['Age']),
bins=bin_ranges, labels=bin_names)
fcc_survey_df[['ID.x', 'Age', 'Age_bin_round',
'Age_bin_custom_range', 'Age_bin_custom_label']].iloc[1071:1076]
###Output
_____no_output_____
###Markdown
Quantile based binning Now we will work with the salaries of the dataset Plot an histogram with the developers income, with 30 bins.
###Code
fig, ax = plt.subplots()
fcc_survey_df['Income'].hist(bins=30, color='#A9C5D3')
ax.set_title('Developer Income Histogram', fontsize=12)
ax.set_xlabel('Developer Income', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12)
###Output
_____no_output_____
###Markdown
Calculate the [0, .25, .5, .75, 1.] qunatiles, and plot them as lines in the histogram
###Code
quantile_list = [0, .25, .5, .75, 1.]
quantiles = fcc_survey_df['Income'].quantile(quantile_list)
quantiles
###Output
_____no_output_____
###Markdown
In the original dataframe create 2 columns. One that indicates the income range values, and a second one with the following labels: ['0-25Q', '25-50Q', '50-75Q', '75-100Q']
###Code
quantile_labels = ['0-25Q', '25-50Q', '50-75Q', '75-100Q']
fcc_survey_df['Income_quantile_range'] = pd.qcut(fcc_survey_df['Income'],
q=quantile_list)
fcc_survey_df['Income_quantile_label'] = pd.qcut(fcc_survey_df['Income'],
q=quantile_list, labels=quantile_labels)
fcc_survey_df[['ID.x', 'Age', 'Income',
'Income_quantile_range', 'Income_quantile_label']].iloc[4:9]
# log-transform the income before computing its statistics
fcc_survey_df['Income_log'] = np.log1p(fcc_survey_df['Income'])
income_log_mean = np.round(np.mean(fcc_survey_df['Income_log']), 2)
fig, ax = plt.subplots()
fcc_survey_df['Income_log'].hist(bins=30, color='#A9C5D3')
plt.axvline(income_log_mean, color='r')
ax.set_title('Developer Income Histogram after Log Transform', fontsize=12)
ax.set_xlabel('Developer Income (log scale)', fontsize=12)
ax.set_ylabel('Frequency', fontsize=12)
ax.text(11.5, 450, r'$\mu$='+str(income_log_mean), fontsize=10);
###Output
_____no_output_____ |
3-object-tracking-and-localization/activities/6-matrices-and-transformation-state/8. Guide to mathematical notation.ipynb | ###Markdown
Becoming "Wikipedia proficient"The goal of this course is **not** for you to memorize how to calculate a dot product or multiply matrices. The goal is for you to be able to do something useful with a wikipedia page like their [article on Kalman Filters](https://en.wikipedia.org/wiki/Kalman_filter), even if requires some additional research and review from you.But these pages are usually written in the notation of **linear algebra** and not the notation of computer programming. In this notebook you will learn something about how to navigate the notation of linear algebra and how to translate it into computer code. Analyzing The Dot Product EquationAt the time I'm writing this, the wikipedia article on the [dot product](https://en.wikipedia.org/wiki/Dot_product) begins with a section called **Algebraic Definition**, which starts like this:> The dot product of two vectors $\mathbf{a} = [a_1, a_2, \ldots, a_n]$ and $\mathbf{b} = [b_1, b_2, \ldots, b_n]$ is defined as: > > $$\mathbf{a} \cdot \mathbf{b} = \sum _{i=1}^{n}a_{i}b_{i}=a_{1}b_{1}+a_{2}b_{2}+\cdots +a_{n}b_{n}$$If you don't know what to look for, this can be pretty unhelfpul. Let's take a look at three features of this equation which can be helpful to understand... Feature 1 - Lowercase vs uppercase variablesThis equation only uses lowercase variables. In general, lowercase variables are used when discussing **vectors** or **scalars** (regular numbers like 3, -2.5, etc...) while UPPERCASE variables are reserved for matrices. Feature 2 - Bold vs regular typeface for variablesA variable in **bold** typeface indicates a vector or a matrix. A variable in regular typeface is a scalar. Feature 3 - "..." in equationsWhen you see three dots $\ldots$ in an equation it means "this pattern could continue any number of times" EXAMPLE 1 - APPLYING FEATURES 1, 2, and 3When you see something like $\mathbf{a} = [a_1, a_2, \ldots, a_n]$ you can infer the following:1. **$\mathbf{a}$ is a vector**: since a is bold it's either a vector OR a matrix, but since it's also lowercase, we know it can only be a vector.2. **$\mathbf{a}$ can have any length**: since there's a $\ldots$ in the definition for $\mathbf{a}$, we know that in addition to $a_1$ and $a_2$ there could also be $a_3$, $a_4$, and so on... 3. **The values in the $\mathbf{a}$ vector are scalars**: since $a_1$ is lowercase and non-bold we know that it must be a scalar (regular number) as opposed to being a vector or matrix. Feature 4 - $\Sigma$ NotationThe symbol $\Sigma$ is the uppercase version of the greek letter "sigma" and it is an instruction to perform a sum.**When you see a $\Sigma$ you should think "for loop!"**In the case of the dot product, the sigma instructs us to sum $a_ib_i$ for $i=1,2, \ldots, n$. And in this case $n$ is just the length of the $\mathbf{a}$ and $\mathbf{b}$ vectors.How this for loop works is best explained with an example. Take a look at the `dot_product` function defined below. Try to read through the comments and really understand how the code connects to math. **The MATH**The dot product of two vectors $\mathbf{a} = [a_1, a_2, \ldots, a_n]$ and $\mathbf{b} = [b_1, b_2, \ldots, b_n]$ is defined as: $$\mathbf{a} \cdot \mathbf{b} = \sum _{i=1}^{n}a_{i}b_{i}=a_{1}b_{1}+a_{2}b_{2}+\cdots +a_{n}b_{n}$$
###Code
# The CODE
def dot_product(a, b):
# start by checking that a and b have the same length.
# I know they SHOULD have the same length because they
# each are DEFINED (in the first line above) to have n
# elements. Even though n isn't specified, the fact that
# a goes from 0 to n AND b does the same (instead of going
# from 0 to m for example) implies that these vectors
# always should have the same length.
if len(a) != len(b):
print("Error! Vectors must have the same length!")
return None
# let's call the length of these vectors "n" so we can
# be consistent with the mathematical notation
n = len(a)
# Since we want to add up a bunch of terms, we should
# start by setting the total to zero and then add to
# this total n times.
total = 0
# now we are going to perform the multiplication!
# note that the algebraic version goes from 1 to n.
# The Python version of this indexing will go from
# 0 to n-1 (recall that range(3) returns [0,1,2] for example).
for i in range(n):
a_i = a[i]
b_i = b[i]
total = total + a_i * b_i
return total
# let's see if it works
a = [3,2,4]
b = [2,5,9]
# a*b should be 3*2 + 2*5 + 4*9
# or... 6 + 10 + 36
# 52
a_dot_b = dot_product(a,b)
print(a_dot_b)
###Output
52
|
[Kaggle] Jigsaw_Unintended_Bias_in_Toxicity_Classification/src/Main.ipynb | ###Markdown
Load Pretrained Embedding Model
###Code
# assumed imports for this notebook (Pipeline, Toxic_Models and Model_trainer are local helper modules)
import time
import numpy as np
import pandas as pd
from sklearn.model_selection import KFold
from keras import backend as K
import Pipeline, Toxic_Models, Model_trainer
# emb_model = Pipeline.load_emb_model('./emb_model/crawl-300d-2M.vec') # FastText Embeddings
emb_model = Pipeline.load_emb_model('./emb_model/glove.840B.300d.txt') # Glove Embeddings
###Output
_____no_output_____
###Markdown
Hyper parameter
###Code
### classes names
list_classes = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
### preprocessing parameter
maxlen = 180
max_features = 100000
embed_size = 300
### model parameter
cell_size = 64 ### Cell unit size
cell_type_GRU = True ### Cell Type: GRU/LSTM
filter_size = 64
kernel_size = 2
stride = 1
### K-fold cross-validation
k= 5
kf = KFold(n_splits=k, shuffle=False)
### training protocol
epochs= 13
batch_size = 128
lr_s = True ### Use of Learning Schedule
###Output
_____no_output_____
###Markdown
Load data
###Code
submission = pd.read_csv("./input/sample_submission.csv")
X_tr, Y_tr, X_te, emb_matrix = Pipeline.load_data_2path(emb_model, max_features = max_features, maxlen = maxlen)
###Output
_____no_output_____
###Markdown
Model
###Code
model_name = 'rnn'
### ================================================================== ###
oofs = []
res = np.zeros_like(submission[list_classes])
for train_index, val_index in kf.split(X_tr[0], Y_tr):
mdl = Toxic_Models.get_model_rnn(emb_matrix, cell_size=cell_size, maxlen=maxlen, cell_type_GRU=cell_type_GRU)
pred, oof = Model_trainer.model_train_cv(mdl, X_tra = [X_tr[0][train_index], X_tr[1][train_index]], X_val = [X_tr[0][val_index], X_tr[1][val_index]],
y_tra= Y_tr[train_index], y_val= Y_tr[val_index], x_test=X_te,
model_name=model_name, batch_size=batch_size, epochs=epochs, lr_schedule=lr_s)
res += pred
oofs.append(oof)
K.clear_session()
time.sleep(20)
res = res/k
### Collect result & Report
submission[list_classes] = res
submission.to_csv("submission_{}.csv".format(model_name), index = False)
np_oofs = np.array(oofs)
pd_oofs = pd.DataFrame(np.concatenate(np_oofs), columns=list_classes)
pd_oofs.to_csv("oofs_{}.csv".format(model_name), index=False)
model_name = 'rnncnn'
### ================================================================== ###
oofs = []
res = np.zeros_like(submission[list_classes])
for train_index, val_index in kf.split(X_tr[0], Y_tr):
mdl = Toxic_Models.get_model_rnn_cnn(emb_matrix, cell_size=cell_size, maxlen=maxlen, cell_type_GRU=cell_type_GRU,
filter_size=filter_size, kernel_size=kernel_size, stride=stride)
pred, oof = Model_trainer.model_train_cv(mdl, X_tra = [X_tr[0][train_index], X_tr[1][train_index]], X_val = [X_tr[0][val_index], X_tr[1][val_index]],
y_tra= Y_tr[train_index], y_val= Y_tr[val_index], x_test=X_te,
model_name=model_name, batch_size=batch_size, epochs=epochs, lr_schedule=lr_s)
res += pred
oofs.append(oof)
K.clear_session()
time.sleep(20)
res = res/k
### Collect result & Report
submission[list_classes] = res
submission.to_csv("submission_{}.csv".format(model_name), index = False)
np_oofs = np.array(oofs)
pd_oofs = pd.DataFrame(np.concatenate(np_oofs), columns=list_classes)
pd_oofs.to_csv("oofs_{}.csv".format(model_name), index=False)
model_name = 'rnn_caps'
### ================================================================== ###
oofs = []
res = np.zeros_like(submission[list_classes])
for train_index, val_index in kf.split(X_tr[0], Y_tr):
mdl = Toxic_Models.get_model_rnn_caps(emb_matrix, cell_size=cell_size, maxlen=maxlen, cell_type_GRU=cell_type_GRU)
pred, oof = Model_trainer.model_train_cv(mdl, X_tra = [X_tr[0][train_index], X_tr[1][train_index]], X_val = [X_tr[0][val_index], X_tr[1][val_index]],
y_tra= Y_tr[train_index], y_val= Y_tr[val_index], x_test=X_te,
model_name=model_name, batch_size=batch_size, epochs=epochs, lr_schedule=lr_s)
res += pred
oofs.append(oof)
K.clear_session()
time.sleep(20)
res = res/k
### Collect result & Report
submission[list_classes] = res
submission.to_csv("submission_{}.csv".format(model_name), index = False)
np_oofs = np.array(oofs)
pd_oofs = pd.DataFrame(np.concatenate(np_oofs), columns=list_classes)
pd_oofs.to_csv("oofs_{}.csv".format(model_name), index=False)
model_name = '2rnn'
### ================================================================== ###
oofs = []
res = np.zeros_like(submission[list_classes])
for train_index, val_index in kf.split(X_tr[0], Y_tr):
mdl = Toxic_Models.get_model_2rnn(emb_matrix, cell_size=cell_size, maxlen=maxlen, cell_type_GRU=cell_type_GRU)
pred, oof = Model_trainer.model_train_cv(mdl, X_tra = [X_tr[0][train_index], X_tr[1][train_index]], X_val = [X_tr[0][val_index], X_tr[1][val_index]],
y_tra= Y_tr[train_index], y_val= Y_tr[val_index], x_test=X_te,
model_name=model_name, batch_size=batch_size, epochs=epochs, lr_schedule=lr_s)
res += pred
oofs.append(oof)
K.clear_session()
time.sleep(20)
res = res/k
### Collect result & Report
submission[list_classes] = res
submission.to_csv("submission_{}.csv".format(model_name), index = False)
np_oofs = np.array(oofs)
pd_oofs = pd.DataFrame(np.concatenate(np_oofs), columns=list_classes)
pd_oofs.to_csv("oofs_{}.csv".format(model_name), index=False)
model_name = '2rnncnn'
### ================================================================== ###
oofs = []
res = np.zeros_like(submission[list_classes])
for train_index, val_index in kf.split(X_tr[0], Y_tr):
mdl = Toxic_Models.get_model_2rnn_cnn(emb_matrix, cell_size=cell_size, maxlen=maxlen, cell_type_GRU=cell_type_GRU,
filter_size=filter_size, kernel_size=kernel_size, stride=stride)
pred, oof = Model_trainer.model_train_cv(mdl, X_tra = [X_tr[0][train_index], X_tr[1][train_index]], X_val = [X_tr[0][val_index], X_tr[1][val_index]],
y_tra= Y_tr[train_index], y_val= Y_tr[val_index], x_test=X_te,
model_name=model_name, batch_size=batch_size, epochs=epochs, lr_schedule=lr_s)
res += pred
oofs.append(oof)
K.clear_session()
time.sleep(20)
res = res/k
### Collect result & Report
submission[list_classes] = res
submission.to_csv("submission_{}.csv".format(model_name), index = False)
np_oofs = np.array(oofs)
pd_oofs = pd.DataFrame(np.concatenate(np_oofs), columns=list_classes)
pd_oofs.to_csv("oofs_{}.csv".format(model_name), index=False)
###Output
_____no_output_____ |
Breast Cancer Detection/Breast_Cancer_Detection.ipynb | ###Markdown
Breast Cancer Detection*Author: Eda AYDIN* Import libraries
###Code
!pip install opendatasets --upgrade --quiet
!pip install pandas-profiling --upgrade --quiet
!conda update --all --quiet
# Import libraries
import opendatasets as od
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
import sys
# Data science tools
import pandas as pd
import numpy as np
import scipy as sp
import psutil, os
from pandas_profiling import ProfileReport
# Scikit-learn library
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn import model_selection
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
# Visualizations
import matplotlib.pyplot as plt
import matplotlib.image as mimg # images
%matplotlib inline
import seaborn as sns
from pandas.plotting import scatter_matrix
###Output
_____no_output_____
###Markdown
Getting Data Data Set InformationFeatures are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. A few of the images can be found at [Web Link](http://www.cs.wisc.edu/~street/images/)Separating plane described above was obtained using Multisurface Method-Tree (MSM-T) [K. P. Bennett, "Decision Tree Construction Via Linear Programming." Proceedings of the 4th Midwest Artificial Intelligence and Cognitive Science Society, pp. 97-101, 1992], a classification method which uses linear programming to construct a decision tree. Relevant features were selected using an exhaustive search in the space of 1-4 features and 1-3 separating planes.The actual linear program used to obtain the separating plane in the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].This database is also available through the UW CS ftp server:ftp ftp.cs.wisc.educd math-prog/cpo-dataset/machine-learn/WDBC/ Attribute Information1) ID Number2) Dianosis ( M = Malignant, B = Benign)Ten real-valued features are computed for each cell nucleus:1) radius (mean of distances from center to points on the perimeter)2) texture (standard deviation of gray-scale values)3) perimeter4) area5) smoothness (local variation in radius lengths)6) compactness (perimeter^2 / area - 1.0)7) concavity (severity of concave portions of the contour)8) concave points (number of concave portions of the contour)9) symmetry10) fractal dimension ("coastline approximation" - 1)
###Code
# Load dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data"
names =["id" ,"clump_thickness", "uniform_cell_size", "uniform_cell_shape", "marginal_adhesion", "single_epithelial_size", "bare_nuclei", "bland_chromaton", "normal_nucleoli", "nutises", "class"]
df = pd.read_csv(url, names=names)
###Output
_____no_output_____
###Markdown
There are some steps to be considered:* First, our dataset contains some missing data. To deal with this we will add a df.replace method* If the df.replace method gives us a question mark, it means that there is no data there. We are simply going to input the value -999999 and tell Python to ignore that data* We will then perform the print(df.axes) operation so that we can see the columns. We can see that we have 699 different data points and each of those cases has 11 different columns.* Next, we will print the shape of the dataset using the print(df.shape) operation.
###Code
df.head()
df.describe(include = "all")
###Output
_____no_output_____
###Markdown
Data Preprocessing
###Code
# A Code part from Notebook of Caglar Subası
def MissingUniqueStatistics(df):
import io
import pandas as pd
import psutil
import os
import gc
import time
import seaborn as sns
from IPython.display import display, HTML
# pd.set_option('display.max_colwidth', -1)
from io import BytesIO
import base64
print("MissingUniqueStatistics process has began:\n")
proc = psutil.Process(os.getpid())
gc.collect()
mem_0 = proc.memory_info().rss
start_time = time.time()
variable_name_list = []
total_entry_list = []
data_type_list = []
unique_values_list = []
number_of_unique_values_list = []
missing_value_number_list = []
missing_value_ratio_list = []
mean_list = []
std_list = []
min_list = []
Q1_list = []
Q2_list = []
Q3_list = []
max_list = []
df_statistics = df.describe().copy()
for col in df.columns:
variable_name_list.append(col)
total_entry_list.append(df.loc[:, col].shape[0])
data_type_list.append(df.loc[:, col].dtype)
unique_values_list.append(list(df.loc[:, col].unique()))
number_of_unique_values_list.append(len(list(df.loc[:, col].unique())))
missing_value_number_list.append(df.loc[:, col].isna().sum())
missing_value_ratio_list.append(
round((df.loc[:, col].isna().sum()/df.loc[:, col].shape[0]), 4))
try:
mean_list.append(df_statistics.loc[:, col][1])
std_list.append(df_statistics.loc[:, col][2])
min_list.append(df_statistics.loc[:, col][3])
Q1_list.append(df_statistics.loc[:, col][4])
Q2_list.append(df_statistics.loc[:, col][5])
Q3_list.append(df_statistics.loc[:, col][6])
max_list.append(df_statistics.loc[:, col][7])
except:
mean_list.append('NaN')
std_list.append('NaN')
min_list.append('NaN')
Q1_list.append('NaN')
Q2_list.append('NaN')
Q3_list.append('NaN')
max_list.append('NaN')
data_info_df = pd.DataFrame({'Variable': variable_name_list,
'#_Total_Entry': total_entry_list,
'#_Missing_Value': missing_value_number_list,
'%_Missing_Value': missing_value_ratio_list,
'Data_Type': data_type_list,
'Unique_Values': unique_values_list,
'#_Unique_Values': number_of_unique_values_list,
'Mean': mean_list,
'STD': std_list,
'Min': min_list,
'Q1': Q1_list,
'Q2': Q2_list,
'Q3': Q3_list,
'Max': max_list
})
data_info_df = data_info_df.set_index("Variable", inplace=False)
# data_info_df['pdf'] = np.nan
# for col in data_info_df.index:
# data_info_df.loc[col,'pdf'] = mapping(col)
print('MissingUniqueStatistics process has been completed!')
print("--- in %s minutes ---" % ((time.time() - start_time)/60))
# , HTML(df.to_html(escape=False, formatters=dict(col=mapping)))
return data_info_df.sort_values(by='%_Missing_Value', ascending=False)
data_info = MissingUniqueStatistics(df)
data_info["Variable Structure"] = ["Cardinal","Nominal","Nominal","Nominal","Nominal","Nominal","Nominal","Nominal","Nominal","Nominal","Nominal"]
data_info
###Output
MissingUniqueStatistics process has began:
MissingUniqueStatistics process has been completed!
--- in 0.0005329529444376628 minutes ---
###Markdown
Missing Data Handling There are some steps to be considered* First, our dataset contains some missing data. To deal with this we will add the **df.replace** method.* If the df.replace method gives us a question mark, it means that there is no data there. We are simply going to input the value -999999 and tell Python to ignore that data.* We will perform the **print(df.axes)** operation so that we can see the columns. We can see that we have 699 different data points and each of those cases has 11 different columns.* Next, we will print the shape of the dataset using the **print(df.shape)** operation
###Code
# preprocess the data
df.replace("?",-999999, inplace = True)
print(df.axes)
df.drop(["id"],1, inplace = True)
# print the shape of the dataset
print(df.shape)
###Output
[RangeIndex(start=0, stop=699, step=1), Index(['id', 'clump_thickness', 'uniform_cell_size', 'uniform_cell_shape',
'marginal_adhesion', 'single_epithelial_size', 'bare_nuclei',
'bland_chromaton', 'normal_nucleoli', 'nutises', 'class'],
dtype='object')]
(699, 10)
###Markdown
We can detect whether the tumor is benign (which means it is non-cancerous) or malignant (which means i is cancerous) Data Visualizations We will visualize the parameters of the dataset * We will print the first point so that we can see what it entails.* We have a value of between 0 and 10 in all the different columns. In the class column, the number of 2 represents a benign tumor and the number 4 represents a malignant tumor.* There are 699 cells in the datasets.* The next step will be to do a print.describe operation, which gives us the mean, standard deviation, and other aspects for each our different parameters or features.
###Code
# Do dataset visualization
df.loc[6]
df.describe()
# plot histograms for each variable
df.hist(figsize=(10,10))
plt.show()
scatter_matrix(df, figsize=(18,18))
plt.show()
###Output
_____no_output_____
###Markdown
There are some steps that will help you to better understand the machine learning algorithms:1. The first step we need to perform is to split our dataset into X and y datasets for training. We will not train on all of the available data as we need to save some for our validation step. This will help us to determine how well these algorithms can generalize to new data and not just how well they know the training data. Our X data will contain all of the variables except for the class column and our Y data is going to be the class column, which is the classification of whether a tumor is malignant or benign.2. Next, we will use the **train_test_split** function and we will then split our data into X_train, X_test, y_train and y_test.3. In the same line we will add **model_selection**, **train_test_split** and x, y, test_size. Holding out about 20% of our data is fairly standard, so we will set the test size to 0.2.
###Code
# Create X and Y datasets for training
X = np.array(df.drop(["class"],1))
y = np.array(df["class"])
X_train, X_test, y_train, y_test = model_selection.train_test_split(X,y, test_size=0.2)
###Output
_____no_output_____
###Markdown
There are several steps to actually defining the training models1. First, make an empty list, in which we will append the KNN model.2. Enter the KNeighborsClassifier function to explore the number of neighbors* Start with n_neighbors = 5 and play around with the variable a little to see how it changes our results* Next we will add our models: the SVM classifier (SVC). We will evaluate each model, in turn* The next step will be to get a results list and a names list, so that we can print out some of the information at the end* We will then perform a for loop for each of the models defined previously, as in for name, model in models* We will also do a k-fold comparison which will run each of these a couple of times and then take the best results. The number of splits, or n_splits, defines how many times it runs* Since we do not want a random state, we will go from the seed. Now we will get our results* We will use the model_selection function that we imported previously and cross_val_score* For each model we will provide the training data to X_train and then y_train* We will also add the specification scoring, which was the accuracy that we added previously.* We will also append results, name, and we will print out a msg. We will then substitute some variables* Finally we will look at the mean results and standard deviation* A k-fold training will take place which means that this will be run 10 times. We will receive the average result and accuracy for each of them. We will use a random seed of 8, so that it is consistent across different trials and runs
###Code
models = []
models.append(("KNN", KNeighborsClassifier(n_neighbors = 5)))
models.append(("SVC",SVC()))
# evaluate each model in turn
results = []
names = []
for name, model in models:
kfold = model_selection.KFold(n_splits=10, random_state = 8, shuffle = True)
cv_results = model_selection.cross_val_score(model, X_train, y_train, cv= kfold, scoring="accuracy")
results.append(cv_results)
names.append(name)
msg = "{}: mean :{:.3f} standard deviation:{:.3f}".format(str(name),cv_results.mean(), cv_results.std())
print(msg)
###Output
KNN: mean :0.975 standard deviation:0.014
SVC: mean :0.655 standard deviation:0.050
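###Markdown
The results and names lists collected above can also be compared visually; this box plot is a sketch I have added (it assumes matplotlib.pyplot is already imported as plt, as used earlier in this notebook).
###Code
# Box plot of the 10 cross-validation accuracy scores per model
fig = plt.figure()
fig.suptitle("Algorithm comparison")
ax = fig.add_subplot(111)
ax.boxplot(results)
ax.set_xticklabels(names)
plt.show()
###Output
_____no_output_____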
###Markdown
* First we will make predictions on the validation set, using the X_test and y_test data that we split out earlier.* We will run another for loop, for name, model in models.* Then we call model.fit, which trains the model once again on the X and y training data.* Since we want to make predictions, we use the trained model to predict labels for the X_test data.* Once the model has been trained and has made its predictions, we print the name, the accuracy score (a comparison of the y_test labels with the predictions we made), and the classification_report, which summarizes the false positives and false negatives that we found.
###Code
# Make predictions on validation dataset
for name,model in models:
model.fit(X_train,y_train)
predictions = model.predict(X_test)
print("{}: {:.3f}".format(name, accuracy_score(y_test,predictions)))
print(classification_report(y_test, predictions))
###Output
KNN: 0.943
precision recall f1-score support
2 0.95 0.97 0.96 92
4 0.93 0.90 0.91 48
accuracy 0.94 140
macro avg 0.94 0.93 0.94 140
weighted avg 0.94 0.94 0.94 140
SVC: 0.657
precision recall f1-score support
2 0.66 1.00 0.79 92
4 0.00 0.00 0.00 48
accuracy 0.66 140
macro avg 0.33 0.50 0.40 140
weighted avg 0.43 0.66 0.52 140
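###Markdown
To make the false positives and false negatives discussed below explicit, a confusion matrix can be printed for each fitted model (a sketch added here; it reuses the models fitted in the cell above).
###Code
# Rows are the true classes (2 = benign, 4 = malignant), columns the predicted classes
from sklearn.metrics import confusion_matrix
for name, model in models:
    print(name)
    print(confusion_matrix(y_test, model.predict(X_test)))
###Output
_____no_output_____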
###Markdown
* **Accuracy** is the ratio of correctly predicted observations to the total observations.* **Precision** (which penalizes false positives) is the ratio of correctly predicted positive observations to the total predicted positive observations.* **Recall (sensitivity)** (which penalizes false negatives) is the ratio of correctly predicted positive observations to all observations in the actual class. * **f1-score** is the weighted average of precision and recall, so it takes both false positives and false negatives into account. Another example of predicting:* First, we will create a KNeighborsClassifier and get an accuracy for it based on our testing data.* Next, we will add an example. Type in np.array and pick whichever feature values you want. * We will then reshape the example so that it has the two-dimensional (n_samples, n_features) shape that scikit-learn expects.* We will print our prediction
###Code
clf = KNeighborsClassifier()
clf.fit(X_train, y_train)
print("Accuracy: {:.3f}".format(clf.score(X_test, y_test)))
example_measures = np.array([[4,2,1,1,1,2,3,2,1]])
example_measures = example_measures.reshape(len(example_measures), -1)
prediction = clf.predict(example_measures)
print(prediction)
###Output
Accuracy: 0.943
[2]
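###Markdown
One likely reason the SVC lags behind KNN here is that the features were not scaled, and support vector machines are sensitive to feature scale. The sketch below (my addition, not executed as part of this run) retries the SVC inside a standard-scaling pipeline.
###Code
# Same SVC, but with features standardized first
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
scaled_svc = make_pipeline(StandardScaler(), SVC())
scaled_svc.fit(X_train, y_train)
print("Scaled SVC accuracy: {:.3f}".format(scaled_svc.score(X_test, y_test)))
###Output
_____no_output_____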
|
ds/practice/daily_practice/20-07/20-07-14-196-tue.ipynb | ###Markdown
20-07-14: Daily Practice ------ Daily practices* [ ] [Practice & learn](Practice-&-learn) * [ ] Coding, algorithms & data structures * [x] Data science: access, manipulation, analysis, visualization * [ ] Engineering: SQL, PySpark, APIs, TDD, OOP * [x] Machine learning: Scikit-learn, TensorFlow, PyTorch * [ ] Interview questions (out loud)* [ ] [Meta-data: reading & writing](Meta-data:-reading-&-writing) * [ ] Blog* [ ] [2-Hour Job Search](2-Hour-Job-Search) * [ ] LAMP List * [ ] Networking * [ ] Social media ------ Practice & learn --- EngineeringA quick little script to update a set of directory names.
###Code
# === Set up and open dir === #
import os
# Dir contains dataset for exercise recognition model
path = "/Users/Tobias/workshop/buildbox/self_labs/recount/exercises/exercises_clean"
os.chdir(path)
for d in os.listdir(path):
print(d) # Each one has a ".clean" appended to the end
# === Rename - remove ".clean" === #
for d in os.listdir(path):
os.rename(d, d.split(".")[0])
###Output
_____no_output_____
###Markdown
--- Machine learning ReCountI'm going to spend a little time working on my ReCount exercise and yoga pose recognition model. The goal is to write a blog post or two comparing the process and results of the two models: exercises vs yoga poses. In particular, I'm thinking about writing on how to define a computer vision problem, and the issues that arise when it isn't well defined, which is likely the case with the exercises, since several exercises look similar to one another.Today, I'll work on simply loading and setting up the dataset. --- Data science Statistical Thinking in Python, Part 2 Chapter 1: Parameter estimation by optimization* Linear regression* Importance of EDA - Anscombe's quartet
###Code
# === Imports === #
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
def pearson_r(x, y):
"""Compute Pearson correlation coefficient between two arrays."""
# Compute correlation matrix: corr_mat
corr_mat = np.corrcoef(x, y)
# Return entry [0,1]
return corr_mat[0,1]
# === Load and set up data === #
datapath = "assets/data/female_literacy_fertility.csv"
df = pd.read_csv(datapath)
df.head()
illiteracy = 100 - df["female literacy"]
fertility = df["fertility"]
# Plot the illiteracy rate versus fertility
_ = plt.plot(illiteracy, fertility, marker='.', linestyle='none')
# Set the margins and label axes
plt.margins(0.02)
_ = plt.xlabel('percent illiterate')
_ = plt.ylabel('fertility')
# Show the plot
plt.show()
# Show the Pearson correlation coefficient
print(pearson_r(illiteracy, fertility))
# Plot the illiteracy rate versus fertility
_ = plt.plot(illiteracy, fertility, marker='.', linestyle='none')
plt.margins(0.02)
_ = plt.xlabel('percent illiterate')
_ = plt.ylabel('fertility')
# Perform a linear regression using np.polyfit(): a, b
a, b = np.polyfit(illiteracy, fertility, 1)
# Print the results to the screen
print('slope =', a, 'children per woman / percent illiterate')
print('intercept =', b, 'children per woman')
# Make theoretical line to plot
x = np.array([0, 100])
y = a * x + b
# Add regression line to your plot
_ = plt.plot(x, y)
# Draw the plot
plt.show()
# Specify slopes to consider: a_vals
a_vals = np.linspace(0, 0.1, 200)
# Initialize sum of square of residuals: rss
rss = np.empty_like(a_vals)
# Compute sum of square of residuals for each value of a_vals
for i, a in enumerate(a_vals):
rss[i] = np.sum((fertility - a*illiteracy - b)**2)
# Plot the RSS
plt.plot(a_vals, rss, '-')
plt.xlabel('slope (children per woman / percent illiterate)')
plt.ylabel('sum of square of residuals')
plt.show()
# === Load and set up new data === #
anscombe = pd.read_csv("assets/data/anscombe.csv", skiprows=1)
x = anscombe["x"]
y = anscombe["y"]
anscombe.head()
# Perform linear regression: a, b
a, b = np.polyfit(x, y, 1)
# Print the slope and intercept
print(a, b)
# Generate theoretical x and y data: x_theor, y_theor
x_theor = np.array([3, 15])
y_theor = a * x_theor + b
# Plot the Anscombe data and theoretical line
_ = plt.plot(x, y, marker=".", linestyle="none")
_ = plt.plot(x_theor, y_theor)
# Label the axes
plt.xlabel('x')
plt.ylabel('y')
# Show the plot
plt.show()
anscombe_x = [
anscombe["x"],
anscombe["x.1"],
anscombe["x.2"],
anscombe["x.3"],
]
anscombe_y = [
anscombe["y"],
anscombe["y.1"],
anscombe["y.2"],
anscombe["y.3"],
]
# Iterate through x,y pairs
for x, y in zip(anscombe_x, anscombe_y):
# Compute the slope and intercept: a, b
a, b = np.polyfit(x, y, 1)
# Print the result
print('slope:', a, 'intercept:', b)
###Output
slope: 0.5000909090909095 intercept: 3.000090909090909
slope: 0.5000000000000004 intercept: 3.0009090909090896
slope: 0.4997272727272731 intercept: 3.0024545454545453
slope: 0.4999090909090908 intercept: 3.0017272727272735
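###Markdown
As a cross-check on the RSS grid computed above for the illiteracy/fertility data (a small sketch I have added), the slope that minimizes the sum of squared residuals should land close to the slope printed by np.polyfit above.
###Code
# Slope from the grid search that minimizes the residual sum of squares
best_a = a_vals[np.argmin(rss)]
print("RSS-minimizing slope:", best_a)
###Output
_____no_output_____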
###Markdown
Chapter 2: Bootstrap confidence intervals* Generating bootstrap replicates * Using resampled data to perform statistical inference* Bootstrap confidence intervals* Pairs bootstrap * Nonparametric inference * Make no assumptions about the model or probability distribution underlying the data Load and set up dataset
###Code
!head assets/data/sheffield_weather_station.csv
# === Load and set up dataset === #
sheffield = pd.read_csv("assets/data/sheffield_weather_station.csv", skiprows=8, delimiter="\t")
sheffield.head()
# === Didn't get read in correctly === #
with open("assets/data/sheffield_weather_station.csv") as f:
shef_lines = f.readlines()
shef_lines = [l.strip() for l in shef_lines[8:]]
shef_lines[:10]
shef_cols = shef_lines[0].split()
shef_cols
shef_data = [l.split() for l in shef_lines[1:]]
shef_data[:5]
shef_df = pd.DataFrame(data=shef_data, columns=shef_cols)
shef_df = shef_df.replace(to_replace="---", value=np.NaN)
for col in shef_df.columns:
shef_df[col] = pd.to_numeric(shef_df[col])
shef_df.head()
shef_df.dtypes
# === Get annual rainfall === #
rainfall = shef_df.groupby("yyyy")["rain"].sum()
def ecdf(data):
"""Compute ECDF for a one-dimensional array of measurements."""
# Number of data points: n
n = len(data)
# x-data for the ECDF: x
x = np.sort(data)
# y-data for the ECDF: y
y = np.arange(1, n + 1) / n
return x, y
###Output
_____no_output_____
###Markdown
Generating bootstrap replicates and visualizing bootstrap samples
###Code
for _ in range(50):
# Generate bootstrap sample: bs_sample
bs_sample = np.random.choice(rainfall, size=len(rainfall))
# Compute and plot ECDF from bootstrap sample
x, y = ecdf(bs_sample)
_ = plt.plot(x, y, marker='.', linestyle='none',
color='gray', alpha=0.1)
# Compute and plot ECDF from original data
x, y = ecdf(rainfall)
_ = plt.plot(x, y, marker='.', linestyle="none")
# Make margins and label axes
plt.margins(0.02)
_ = plt.xlabel('yearly rainfall (mm)')
_ = plt.ylabel('ECDF')
# Show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Bootstrap confidence intervals
###Code
def bootstrap_replicate_1d(data, func):
"""Generate bootstrap replicate of 1D data."""
bs_sample = np.random.choice(data, len(data))
return func(bs_sample)
def draw_bs_reps(data, func, size=1):
"""Draw bootstrap replicates."""
# Initialize array of replicates: bs_replicates
bs_replicates = np.empty(size)
# Generate replicates
for i in range(size):
bs_replicates[i] = bootstrap_replicate_1d(data, func)
return bs_replicates
# Take 10,000 bootstrap replicates of the mean: bs_replicates
bs_replicates = draw_bs_reps(rainfall, np.mean, 10000)
# Compute and print SEM
sem = np.std(rainfall) / np.sqrt(len(rainfall))
print(sem)
# Compute and print standard deviation of bootstrap replicates
bs_std = np.std(bs_replicates)
print(bs_std)
# Make a histogram of the results
_ = plt.hist(bs_replicates, bins=50, density=True)
_ = plt.xlabel('mean annual rainfall (mm)')
_ = plt.ylabel('PDF')
# Show the plot
plt.show()
# === 95% confidence interval === #
np.percentile(bs_replicates, [2.5, 97.5])
# === Bootstrap replicates of other statistics === #
# Generate 10,000 bootstrap replicates of the variance: bs_replicates
bs_replicates = draw_bs_reps(rainfall, np.var, 10000)
# Put the variance in units of square centimeters
bs_replicates /= 100
# Make a histogram of the results
_ = plt.hist(bs_replicates, bins=50, density=True)
_ = plt.xlabel('variance of annual rainfall (sq. cm)')
_ = plt.ylabel('PDF')
# Show the plot
plt.show()
###Output
_____no_output_____
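###Markdown
For completeness (a small addition), the same percentile trick used for the mean gives a 95% confidence interval for the variance replicates plotted above.
###Code
# 95% confidence interval for the variance of annual rainfall (sq. cm)
print(np.percentile(bs_replicates, [2.5, 97.5]))
###Output
_____no_output_____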
###Markdown
Confidence interval on rate of no-hittersFirst, load and set up the dataset...
###Code
# === No-hitters dataset === #
nohitters = pd.read_csv("assets/data/mlb_nohitters.csv")
nohitters.head()
# === Get difference in game_number between no hitter games === #
nohitter_times = nohitters["game_number"].diff().dropna()
nohitter_times[:5]
# Draw bootstrap replicates of the mean no-hitter time (equal to tau): bs_replicates
bs_replicates = draw_bs_reps(nohitter_times, np.mean, 10000)
# Compute the 95% confidence interval: conf_int
conf_int = np.percentile(bs_replicates, [2.5, 97.5])
# Print the confidence interval
print('95% confidence interval =', conf_int, 'games')
# Plot the histogram of the replicates
_ = plt.hist(bs_replicates, bins=50, density=True)
_ = plt.xlabel(r'$\tau$ (games)')
_ = plt.ylabel('PDF')
# Show the plot
plt.show()
def draw_bs_pairs_linreg(x, y, size=1):
"""Perform pairs bootstrap for linear regression."""
# Set up array of indices to sample from: inds
inds = np.arange(0, len(x))
# Initialize replicates: bs_slope_reps, bs_intercept_reps
bs_slope_reps = np.empty(size)
bs_intercept_reps = np.empty(size)
# Generate replicates
for i in range(size):
bs_inds = np.random.choice(inds, size=len(inds))
bs_x, bs_y = x[bs_inds], y[bs_inds]
bs_slope_reps[i], bs_intercept_reps[i] = np.polyfit(bs_x, bs_y, 1)
return bs_slope_reps, bs_intercept_reps
# Generate replicates of slope and intercept using pairs bootstrap
bs_slope_reps, bs_intercept_reps = draw_bs_pairs_linreg(illiteracy, fertility, 1000)
# Compute and print 95% CI for slope
print(np.percentile(bs_slope_reps, [2.5, 97.5]))
# Plot the histogram
_ = plt.hist(bs_slope_reps, bins=50, density=True)
_ = plt.xlabel('slope')
_ = plt.ylabel('PDF')
plt.show()
# Generate array of x-values for bootstrap lines: x
x = np.array([0, 100])
# Plot the bootstrap lines
for i in range(100):
_ = plt.plot(x,
bs_slope_reps[i]*x + bs_intercept_reps[i],
linewidth=0.5, alpha=0.2, color='red')
# Plot the data
_ = plt.plot(illiteracy, fertility, marker=".", linestyle="none")
# _ = plt.scatter(illiteracy, fertility)
# Label axes, set the margins, and show the plot
_ = plt.xlabel('illiteracy')
_ = plt.ylabel('fertility')
plt.margins(0.02)
plt.show()
###Output
_____no_output_____ |
hdp-1-STD.ipynb | ###Markdown
**Remember not to add or remove cells in this notebook, or to change their type. If you do, the system will automatically grade it with zero point zero (0.0).** Obtain the number of records per letter for the following file.
###Code
%%writefile input.txt
B 1999-08-28 14
E 1999-12-06 12
E 1993-07-21 17
C 1991-02-12 13
E 1995-04-25 16
A 1992-08-22 14
B 1999-06-11 12
E 1993-01-27 13
E 1999-09-10 11
E 1990-05-03 16
E 1994-02-14 10
A 1988-04-27 12
A 1990-10-06 10
E 1985-02-12 16
E 1998-09-14 16
B 1994-08-30 17
A 1997-12-15 13
B 1995-08-23 10
B 1998-11-22 13
B 1997-04-09 14
E 1993-12-27 18
E 1999-01-14 15
A 1992-09-19 18
B 1993-03-02 14
B 1999-10-21 13
A 1990-08-31 12
C 1994-01-25 10
E 1990-02-09 18
A 1990-09-26 14
A 1993-05-08 16
B 1995-09-06 14
E 1991-02-18 14
A 1993-01-11 14
A 1990-07-22 18
C 1994-09-09 15
C 1994-07-27 10
D 1990-10-10 15
A 1990-09-05 11
B 1991-10-01 15
A 1994-10-25 13
###Output
Writing input.txt
###Markdown
Mapper
###Code
%%writefile mapper.py
#! /usr/bin/env python
import sys
class Mapper:
def __init__(self, stream):
self.stream = stream
def emit(self, key, value):
sys.stdout.write("{},{}\n".format(key, value))
def map(self):
for word in self:
self.emit(key=word, value=1)
def __iter__(self):
for line in self.stream:
key=line.split(" ")[0]
yield key
if __name__ == "__main__":
##
    ## initialize the object with the input stream
##
mapper = Mapper(sys.stdin)
##
    ## run the mapper
##
mapper.map()
###Output
Overwriting mapper.py
###Markdown
Reducer
###Code
%%writefile reducer.py
#!/usr/bin/env python
import sys
import itertools
class Reducer():
def __init__(self,stream):
self.stream=stream
def emit(self,key,value):
sys.stdout.write("{}\t{}\n".format(key,value))
def reduce(self):
for key,group in itertools.groupby(self,lambda x: x[0]):
total=0
for key,val in group:
total+=val
self.emit(key=key,value=total)
def __iter__(self):
for line in self.stream:
key=line.split(",")[0]
val=line.split(",")[1]
val=int(val)
yield(key,val)
if __name__ == '__main__':
reducer=Reducer(sys.stdin)
reducer.reduce()
###Output
Overwriting reducer.py
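###Markdown
Before submitting the job to Hadoop, the same map, sort, and reduce logic can be sanity-checked locally in plain Python (a sketch I have added; it reads input.txt directly instead of going through Hadoop streaming).
###Code
# Local simulation of the streaming job: map each line to its first letter,
# sort the keys, then group and count them -- this should match part-00000.
import itertools
with open("input.txt") as f:
    keys = sorted(line.split(" ")[0] for line in f if line.strip())
for key, group in itertools.groupby(keys):
    print(key, sum(1 for _ in group))
###Output
_____no_output_____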
###Markdown
Execution
###Code
%%bash
rm -rf output
STREAM=$HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar
chmod +x mapper.py
chmod +x reducer.py
hadoop jar $STREAM -input input.txt -output output -mapper mapper.py -reducer reducer.py
cat output/part-00000
###Output
A 12
B 10
C 4
D 1
E 13
|
Code/.ipynb_checkpoints/Ex1 - Linear Regression-checkpoint.ipynb | ###Markdown
Linear Regression
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
df = pd.read_csv('../data/winequality-red.csv')
df.head()
X = np.asarray(df.drop(['quality'], axis = 1))
X
y = np.asarray(df['quality']).reshape(-1, 1)
y
###Output
_____no_output_____
###Markdown
Train Test Split
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2)
len(X_train)
len(X_test)
len(y_train)
len(y_test)
###Output
_____no_output_____
###Markdown
Train
###Code
regression = LinearRegression()
regression.fit(X_train, y_train)
###Output
_____no_output_____
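###Markdown
Inspecting the fitted coefficients shows how much each wine feature contributes to the predicted quality (a quick sketch I have added; df.drop preserves the original column order, and coef_ has shape (1, n_features) because y was reshaped to a column).
###Code
# Pair each feature name with its learned coefficient, then print the intercept
feature_names = df.drop(['quality'], axis=1).columns
for name, coef in zip(feature_names, regression.coef_[0]):
    print("{}: {:.4f}".format(name, coef))
print("intercept: {:.4f}".format(regression.intercept_[0]))
###Output
_____no_output_____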
###Markdown
Compute the predicted values for the test set
###Code
y_test_pred = np.round(regression.predict(X_test))
y_test_pred
###Output
_____no_output_____
###Markdown
Loss Function
###Code
from sklearn.metrics import mean_squared_error
mean_squared_error(y_test, y_test_pred, squared=False)
###Output
_____no_output_____
###Markdown
The R-squared score
###Code
from sklearn.metrics import r2_score
r2_score(y_test, y_test_pred)
###Output
_____no_output_____ |
colabs/drive_copy.ipynb | ###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Copy ParametersCopy a drive document. 1. Specify a source URL or document name. 1. Specify a destination name. 1. If destination does not exist, source will be copied.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'source': '', # Name or URL of document to copy from.
'destination': '', # Name document to copy to.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
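###Markdown
As a usage sketch (the document names below are hypothetical, not taken from the recipe), the parameters might be filled in like this before running the execute step:
###Code
FIELDS = {
  'source': 'Quarterly Report Template',  # hypothetical name of the document to copy
  'destination': 'Quarterly Report Copy',  # hypothetical name for the new copy
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____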
###Markdown
5. Execute CopyThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'drive': {
'auth': 'user',
'copy': {
'source': {'field': {'name': 'source','kind': 'string','order': 1,'default': '','description': 'Name or URL of document to copy from.'}},
'destination': {'field': {'name': 'destination','kind': 'string','order': 2,'default': '','description': 'Name document to copy to.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Drive Copy ParametersCopy a drive document. 1. Specify a source URL or document name. 1. Specify a destination name. 1. If destination does not exist, source will be copied.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'source': '', # Name or URL of document to copy from.
'destination': '', # Name document to copy to.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Drive CopyThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import commandline_parser
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'drive': {
'auth': 'user',
'copy': {
'source': {'field': {'name': 'source','kind': 'string','order': 1,'default': '','description': 'Name or URL of document to copy from.'}},
'destination': {'field': {'name': 'destination','kind': 'string','order': 2,'default': '','description': 'Name document to copy to.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Drive Copy ParametersCopy a drive document. 1. Specify a source URL or document name. 1. Specify a destination name. 1. If destination does not exist, source will be copied.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'source': '', # Name or URL of document to copy from.
'destination': '', # Name document to copy to.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Drive CopyThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields, json_expand_includes
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'drive': {
'auth': 'user',
'copy': {
'source': {'field': {'name': 'source','kind': 'string','order': 1,'default': '','description': 'Name or URL of document to copy from.'}},
'destination': {'field': {'name': 'destination','kind': 'string','order': 2,'default': '','description': 'Name document to copy to.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
json_expand_includes(TASKS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Drive Copy ParametersCopy a drive document. 1. Specify a source URL or document name. 1. Specify a destination name. 1. If destination does not exist, source will be copied.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'source': '', # Name or URL of document to copy from.
'destination': '', # Name document to copy to.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Drive CopyThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'drive': {
'auth': 'user',
'copy': {
'source': {'field': {'name': 'source','kind': 'string','order': 1,'default': '','description': 'Name or URL of document to copy from.'}},
'destination': {'field': {'name': 'destination','kind': 'string','order': 2,'default': '','description': 'Name document to copy to.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Drive Copy ParametersCopy a drive document. 1. Specify a source URL or document name. 1. Specify a destination name. 1. If destination does not exist, source will be copied.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'source': '', # Name or URL of document to copy from.
'auth_read': 'user', # Credentials used for reading data.
'destination': '', # Name document to copy to.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Drive CopyThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'drive': {
'auth': 'user',
'copy': {
'source': {'field': {'description': 'Name or URL of document to copy from.','name': 'source','order': 1,'default': '','kind': 'string'}},
'destination': {'field': {'description': 'Name document to copy to.','name': 'destination','order': 2,'default': '','kind': 'string'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Drive Copy ParametersCopy a drive document. 1. Specify a source URL or document name. 1. Specify a destination name. 1. If destination does not exist, source will be copied.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'source': '', # Name or URL of document to copy from.
'destination': '', # Name document to copy to.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Drive CopyThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'drive': {
'auth': 'user',
'copy': {
'source': {'field': {'name': 'source','kind': 'string','order': 1,'default': '','description': 'Name or URL of document to copy from.'}},
'destination': {'field': {'name': 'destination','kind': 'string','order': 2,'default': '','description': 'Name document to copy to.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True)
project.execute()
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CLIENT CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Drive Copy ParametersCopy a drive document. 1. Specify a source URL or document name. 1. Specify a destination name. 1. If destination does not exist, source will be copied.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'source': '', # Name or URL of document to copy from.
'destination': '', # Name document to copy to.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Drive CopyThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'drive': {
'auth': 'user',
'copy': {
'source': {'field': {'name': 'source','kind': 'string','order': 1,'default': '','description': 'Name or URL of document to copy from.'}},
'destination': {'field': {'name': 'destination','kind': 'string','order': 2,'default': '','description': 'Name document to copy to.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Drive Copy ParametersCopy a drive document. 1. Specify a source URL or document name. 1. Specify a destination name. 1. If destination does not exist, source will be copied.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'source': '', # Name or URL of document to copy from.
'destination': '', # Name document to copy to.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Drive CopyThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'drive': {
'auth': 'user',
'copy': {
'source': {'field': {'name': 'source','kind': 'string','order': 1,'default': '','description': 'Name or URL of document to copy from.'}},
'destination': {'field': {'name': 'destination','kind': 'string','order': 2,'default': '','description': 'Name document to copy to.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
Drive CopyCopy a drive document. LicenseCopyright 2020 Google LLC,Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License. DisclaimerThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.This code generated (see starthinker/scripts for possible source): - **Command**: "python starthinker_ui/manage.py colab" - **Command**: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Set ConfigurationThis code is required to initialize the project. Fill in required fields and press play.1. If the recipe uses a Google Cloud Project: - Set the configuration **project** value to the project identifier from [these instructions](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md).1. If the recipe has **auth** set to **user**: - If you have user credentials: - Set the configuration **user** value to your user credentials JSON. - If you DO NOT have user credentials: - Set the configuration **client** value to [downloaded client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md).1. If the recipe has **auth** set to **service**: - Set the configuration **service** value to [downloaded service credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_service.md).
###Code
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
###Output
_____no_output_____
###Markdown
3. Enter Drive Copy Recipe Parameters 1. Specify a source URL or document name. 1. Specify a destination name. 1. If destination does not exist, source will be copied.Modify the values below for your use case, can be done multiple times, then click play.
###Code
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'source': '', # Name or URL of document to copy from.
'destination': '', # Name document to copy to.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
4. Execute Drive CopyThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'drive': {
'auth': 'user',
'copy': {
'source': {'field': {'name': 'source', 'kind': 'string', 'order': 1, 'default': '', 'description': 'Name or URL of document to copy from.'}},
'destination': {'field': {'name': 'destination', 'kind': 'string', 'order': 2, 'default': '', 'description': 'Name document to copy to.'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
###Output
_____no_output_____