path | concatenated_notebook
---|---|
_archiving/contribution/seongshin/aws-ai-ml-immersionday-kr/scikit_bring_your_own/scikit_bring_your_own.ipynb | ###Markdown
Building your own algorithm container [(original)](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb)

With Amazon SageMaker, you can package your own algorithms so that they can be trained and deployed in the SageMaker environment. This notebook walks through an example that shows how to build a Docker container for SageMaker and use it for training and inference.

By packaging an algorithm in a container, you can bring almost any code to the Amazon SageMaker environment, regardless of programming language, environment, framework, or dependencies.

_**Note:**_ SageMaker now includes a [pre-built scikit container](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/scikit_learn_iris/Scikit-learn%20Estimator%20Example%20With%20Batch%20Transform.ipynb). We recommend using the pre-built container for almost every case that needs a scikit algorithm. This example is still useful, however, as an outline for bringing other libraries to SageMaker in your own container.

1. Building your own algorithm container
   1. When should I build my own algorithm container?
   1. Permissions
   1. The example
   1. The presentation
1. Part 1: Packaging and uploading your algorithm for use with Amazon SageMaker
   1. An overview of Docker
   1. How Amazon SageMaker runs your Docker container
      1. Running your container during training
         1. The input
         1. The output
      1. Running your container during hosting
   1. The parts of the sample container
   1. The Dockerfile
   1. Building and registering the container
   1. Testing your algorithm on your local machine or on an Amazon SageMaker notebook instance
1. Part 2: Using your algorithm in Amazon SageMaker
   1. Set up the environment
   1. Create the session
   1. Upload the data for training
   1. Create an estimator and fit the model
   1. Hosting your model
   1. Deploy the model
   2. Choose some data and use it for a prediction
   3. Optional cleanup
   1. Run a Batch Transform job
      1. Create a Transform job
      2. View output

_or_ I'm impatient, just [let me see the code](#The-Dockerfile)!

When should I build my own algorithm container?

You may not need to create a container to bring your own code to Amazon SageMaker. When you use a framework that SageMaker supports directly, such as Apache MXNet or TensorFlow, you can simply supply the Python code that implements your algorithm through that framework's SDK entry points. The set of supported frameworks keeps expanding, so if your algorithm is written in a common machine learning environment, it is worth checking the current list.

Even when there is direct SDK support for your environment or framework, building your own container can be more effective. If the code that implements your algorithm is quite complex on its own, or you need special additions to the framework, building your own container may be the better choice.

If there is no direct SDK support for your environment, don't worry. As this walk-through shows, building your own container is quite straightforward.

Permissions

Running this notebook requires permissions in addition to the usual "SageMakerFullAccess" policy, because it creates a new repository in Amazon ECR. The easiest way to add the permission is to attach the managed policy `AmazonEC2ContainerRegistryFullAccess` to the role you used to start your notebook instance. You do not need to restart the notebook instance; the new permission takes effect immediately.

The example

Here we show how to package a simple Python example that uses the [decision tree][] algorithm from the widely used [scikit-learn][] machine learning package. The example is intentionally trivial: its point is to show the surrounding structure that lets you train and host your own code in Amazon SageMaker.

The ideas shown here work in any language or environment. You will need to choose tools suited to your environment for serving the HTTP inference requests, but good HTTP stacks are available in every language these days.

In this example we use a single image to support both training and hosting. That keeps things simple, because we only need to manage one image and can set it up to do everything. Sometimes you will want separate images for training and hosting because their requirements differ; in that case, split the parts described below into separate Dockerfiles and build two images. Choosing one image or two is mostly a question of what is more convenient for you to develop and manage.

If you only use Amazon SageMaker for training or only for hosting, you don't need to build the unused functionality into your container.

[scikit-learn]: http://scikit-learn.org/stable/
[decision tree]: http://scikit-learn.org/stable/modules/tree.html

The presentation

This presentation is divided into two parts: _building_ the container and _using_ the container.

Part 1: Packaging and uploading your algorithm for use with Amazon SageMaker

An overview of Docker

If you are already familiar with Docker, you can skip to the next section.

For many data scientists, Docker containers are a new concept, but as you will see here, they are not difficult.

Docker provides a simple way to package arbitrary code into a completely self-contained _image_. Once you have an image, you can use Docker to run a _container_ based on that image. Running a container is just like running a program on the machine, except that the container creates a fully self-contained environment for the program. Containers are isolated from each other and from the host environment, so however you set up your program is how it runs, no matter where it runs.

Docker is more powerful than environment managers such as conda or virtualenv because (a) it is completely language independent and (b) it captures your whole operating environment, including startup commands, environment variables, and so on.

In some ways a Docker container is like a virtual machine, but much lighter weight. For example, a program running in a container can start in under a second, and many containers can run on the same physical machine or virtual machine instance.

Docker uses a simple file called a `Dockerfile` to specify how the image is assembled; an example appears below. You can build your Docker images on top of images built by you or by others, which simplifies things considerably.

Docker has become very popular in the programming and devops communities because of its flexibility and its well-defined specification of the code being run. It underpins many services built over the past few years, such as [Amazon ECS].

Amazon SageMaker uses Docker to let users train and deploy arbitrary algorithms. In Amazon SageMaker, Docker containers are invoked in a particular way for training and in a slightly different way for hosting. The following sections outline how to build containers for the SageMaker environment.

Some helpful links:
* [Docker home page](http://www.docker.com)
* [Getting started with Docker](https://docs.docker.com/get-started/)
* [Dockerfile reference](https://docs.docker.com/engine/reference/builder/)
* [`docker run` reference](https://docs.docker.com/engine/reference/run/)

[Amazon ECS]: https://aws.amazon.com/ecs/

How Amazon SageMaker runs your Docker container

Because the same image can be run for training or hosting, Amazon SageMaker runs your container with the argument `train` or `serve`. How your container handles this argument depends on the container:

* In this example we do not define an `ENTRYPOINT` in the Dockerfile, so Docker runs the command `train` at training time and `serve` at serving time. Here these are executable Python scripts, but they could be any program that can be started in that environment.
* If you specify a program as an `ENTRYPOINT` in the Dockerfile, that program is run at startup and its first argument will be `train` or `serve`. The program can inspect the argument and decide what to do.
* If you build separate containers for training and hosting (or build only one of them), you can define a program as the `ENTRYPOINT` in the Dockerfile and ignore (or verify) the first argument.

Running your container during training

When Amazon SageMaker runs training, your `train` script is run just like a regular Python program. A number of files are laid out for your use under the `/opt/ml` directory:

    /opt/ml
    ├── input
    │   ├── config
    │   │   ├── hyperparameters.json
    │   │   └── resourceConfig.json
    │   └── data
    │       └── <channel_name>
    │           └── <input data>
    ├── model
    │   └── <model files>
    └── output
        └── failure

The input

* `/opt/ml/input/config` contains information that controls how your program runs. `hyperparameters.json` is a JSON-formatted dictionary of hyperparameter names and values. These values are always strings, so you may need to convert them. `resourceConfig.json` is a JSON-formatted file that describes the network layout used for distributed training. Since scikit-learn doesn't support distributed training, we ignore it here.
* `/opt/ml/input/data/<channel_name>/` (for File mode) contains the input data for that channel. The channels are created by the call to `CreateTrainingJob`, but it is generally important that the channels match what the algorithm expects. The files for each channel are copied from S3 into this directory, preserving the tree structure indicated by the S3 key structure.
* `/opt/ml/input/data/<channel_name>_<epoch_number>` (for Pipe mode) is the pipe for a given epoch. Epochs start at zero and go up by one each time you read them. There is no limit to the number of epochs you can run, but you must close each pipe before reading the next epoch.

The output

* `/opt/ml/model/` is the directory where your algorithm writes the model it generates. The model can be in any format you want, a single file or a whole directory tree. SageMaker packages any files in this directory into a compressed tar archive, which is available at the S3 location returned in the `DescribeTrainingJob` result.
* `/opt/ml/output` is a directory where the algorithm can write a `failure` file describing why the job failed. The contents of this file are returned in the `FailureReason` field of the `DescribeTrainingJob` result. For jobs that succeed, there is no reason to write this file and it is ignored.

Running your container during hosting

Hosting is a very different model from training, because hosting responds to inference requests that arrive via HTTP. In this example we use a Python serving stack to provide robust and scalable serving of inference requests; this stack is implemented in the sample code and you can mostly leave it as is.

Amazon SageMaker uses two URLs in the container:

* `/ping` receives `GET` requests from the infrastructure. Your program returns 200 if the container is up and accepting requests.
* `/invocations` is the endpoint that receives client inference `POST` requests. The format of the request and the response is up to the algorithm. If the client supplied `ContentType` and `Accept` headers, these are passed through as well.

The container has the model files in the same place they were written during training:

    /opt/ml
    └── model
        └── <model files>

The parts of the sample container

The `container` directory contains all the components you need to package the sample algorithm for Amazon SageMaker:

    .
    ├── Dockerfile
    ├── build_and_push.sh
    └── decision_trees
        ├── nginx.conf
        ├── predictor.py
        ├── serve
        ├── train
        └── wsgi.py

Let's discuss each of these in turn:

* __`Dockerfile`__ describes how to build your Docker container image. More details below.
* __`build_and_push.sh`__ is a script that uses the Dockerfile to build your container image and then pushes it to ECR. We invoke the commands directly later in this notebook, but you can copy and run the script for your own algorithms.
* __`decision_trees`__ is the directory containing the files that will be installed in the container.
* __`local_test`__ is a directory that shows how to test your new container on any machine that can run Docker, including an Amazon SageMaker notebook instance. Using it, you can quickly iterate with small datasets to eliminate structural bugs before you use the container with Amazon SageMaker. We walk through local testing later in this notebook.

This simple application installs only five specific files in the container. That may be all you need, or, if you have many supporting routines, you may want to install more. These five show the standard structure of our Python containers, although you are free to choose a different toolset and therefore a different layout. If you write in a different programming language, your layout will depend on the frameworks and tools you choose.

The files we put in the container are:

* __`nginx.conf`__ is the configuration file for the nginx front end. Generally, you can take this file as is.
* __`predictor.py`__ is the program that actually implements the Flask web server and the decision tree predictions for this app. You will want to customize the prediction parts for your own application. Since this algorithm is simple, we do all the processing in this file, but you may choose to split your custom logic into separate files.
* __`serve`__ is the program started when the container is started for hosting. It simply launches the gunicorn server, which runs multiple instances of the Flask app defined in `predictor.py`. You should be able to take this file as is.
* __`train`__ is the program invoked when the container is run for training. You will modify this program to implement your training algorithm.
* __`wsgi.py`__ is a small wrapper used to invoke the Flask app. You should be able to take this file as is.

In summary, the two files you will want to change for your application are `train` and `predictor.py`.

The Dockerfile

The Dockerfile describes the image we want to build. You can think of it as describing the complete operating system installation of the system you want to run. A running Docker container is quite a bit lighter than a full operating system, however, because it uses the host's Linux kernel for basic operations.

For the Python science stack, we start from a standard Ubuntu installation and run the normal tools to install what scikit-learn needs. Finally, we add the code that implements our specific algorithm to the container and set up the right environment for it to run under. Along the way we clean up extra space, which makes the container smaller and faster to start.
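To make the training contract concrete, here is a minimal, hypothetical sketch of a `train` entry point that honors the `/opt/ml` layout described above. It is an illustration only — the actual `container/decision_trees/train` script in this example trains a scikit-learn decision tree — and the `training` channel name and `model.pkl` file name are assumptions made just for the sketch.

```python
#!/usr/bin/env python
# Hypothetical sketch of a SageMaker "train" entry point (not the real script in this repo).
import json
import os
import pickle
import sys
import traceback

prefix = "/opt/ml/"
param_path = os.path.join(prefix, "input/config/hyperparameters.json")
train_path = os.path.join(prefix, "input/data/training")   # "training" channel name is an assumption
model_path = os.path.join(prefix, "model")
failure_path = os.path.join(prefix, "output/failure")

if __name__ == "__main__":
    try:
        with open(param_path) as f:
            hyperparameters = json.load(f)       # all values arrive as strings
        train_files = os.listdir(train_path)     # SageMaker copied these files here from S3
        model = {"params": hyperparameters, "n_files": len(train_files)}  # stand-in for real training
        with open(os.path.join(model_path, "model.pkl"), "wb") as f:
            pickle.dump(model, f)                # anything written under /opt/ml/model is tarred and uploaded
        sys.exit(0)
    except Exception:
        with open(failure_path, "w") as f:       # surfaced in DescribeTrainingJob's FailureReason field
            f.write(traceback.format_exc())
        sys.exit(255)                            # a non-zero exit code marks the job as failed
```

With that contract in mind, the next cell prints the Dockerfile used to build the example image.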
###Code
!cat container/Dockerfile
###Output
# Build an image that can do training and inference in SageMaker
# This is a Python 2 image that uses the nginx, gunicorn, flask stack
# for serving inferences in a stable way.
FROM ubuntu:16.04
MAINTAINER Amazon AI <[email protected]>
RUN apt-get -y update && apt-get install -y --no-install-recommends \
wget \
python \
nginx \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Here we get all python packages.
# There's substantial overlap between scipy and numpy that we eliminate by
# linking them together. Likewise, pip leaves the install caches populated which uses
# a significant amount of space. These optimizations save a fair amount of space in the
# image, which reduces start up time.
RUN wget https://bootstrap.pypa.io/get-pip.py && python get-pip.py && \
pip install numpy==1.16.2 scipy==1.2.1 scikit-learn==0.20.2 pandas flask gevent gunicorn && \
(cd /usr/local/lib/python2.7/dist-packages/scipy/.libs; rm *; ln ../../numpy/.libs/* .) && \
rm -rf /root/.cache
# Set some environment variables. PYTHONUNBUFFERED keeps Python from buffering our standard
# output stream, which means that logs can be delivered to the user quickly. PYTHONDONTWRITEBYTECODE
# keeps Python from writing the .pyc files which are unnecessary in this case. We also update
# PATH so that the train and serve programs are found when the container is invoked.
ENV PYTHONUNBUFFERED=TRUE
ENV PYTHONDONTWRITEBYTECODE=TRUE
ENV PATH="/opt/program:${PATH}"
# Set up the program in the image
COPY decision_trees /opt/program
WORKDIR /opt/program
###Markdown
Building and registering the container

The next cell shows how to build the container image using `docker build` and push it to ECR using `docker push`. The same commands are available as the shell script `container/build-and-push.sh`, which you can run as `build-and-push.sh decision_trees_sample` to build the image `decision_trees_sample`.

The code looks for an ECR repository in the account you are using and in the current default region (for an Amazon SageMaker notebook instance, the region where the notebook instance was created). If the repository doesn't exist, the script creates it.
###Code
%%sh
# The name of our algorithm
algorithm_name=sagemaker-decision-trees
cd container
chmod +x decision_trees/train
chmod +x decision_trees/serve
account=$(aws sts get-caller-identity --query Account --output text)
# Get the region defined in the current configuration (default to us-west-2 if none defined)
region=$(aws configure get region)
region=${region:-us-west-2}
fullname="${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest"
# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names "${algorithm_name}" > /dev/null 2>&1
if [ $? -ne 0 ]
then
aws ecr create-repository --repository-name "${algorithm_name}" > /dev/null
fi
# Get the login command from ECR and execute it directly
$(aws ecr get-login --region ${region} --no-include-email)
# Build the docker image locally with the image name and then push it to ECR
# with the full name.
docker build -t ${algorithm_name} .
docker tag ${algorithm_name} ${fullname}
docker push ${fullname}
###Output
_____no_output_____
###Markdown
Testing your algorithm on your local machine or on an Amazon SageMaker notebook instance

While you are first packaging an algorithm for Amazon SageMaker, you will probably want to test it yourself to make sure it works correctly. The `container/local_test` directory contains a framework for doing this. It includes three shell scripts for running and using the container, plus a directory structure similar to the one described above.

The scripts are:

* `train_local.sh`: run it with an image name and it will run training against the local directory tree. For example, you can run `$ ./train_local.sh sagemaker-decision-trees`. It writes a model to the `test_dir/model` directory. You will want to modify `test_dir/input/data/...` so it is set up with the correct channels and data for your algorithm, and modify `input/config/hyperparameters.json` to hold the hyperparameter settings (as strings) you want to test.
* `serve_local.sh`: run it with an image name once you have trained a model, and it will serve the model. For example, you can run `$ ./serve_local.sh sagemaker-decision-trees`. It runs and waits for requests; use a keyboard interrupt to stop it.
* `predict.sh`: run it with the name of a payload file and, optionally, the HTTP content type you want. The content type defaults to `text/csv`. For example, you can run `$ ./predict.sh payload.csv text/csv`.

The directories are set up to test the decision tree sample algorithm presented here.

Part 2: Using your algorithm in Amazon SageMaker

Once you have a packaged container, you can use it to train models and use the resulting model for hosting or batch transforms. Let's do that with the algorithm we made above.

Set up the environment

Here we specify the bucket to use and the role that will be used for working with SageMaker.
###Code
# S3 prefix
bucket = '<your_S3_bucket_name_here>'
prefix = 'DEMO-scikit-byo-iris'
# Define IAM role
import boto3
import re
import os
import numpy as np
import pandas as pd
from sagemaker import get_execution_role
role = get_execution_role()
###Output
_____no_output_____
###Markdown
Create the session

The session remembers our connection parameters to SageMaker. We use it to perform all of our SageMaker operations.
###Code
import sagemaker as sage
from time import gmtime, strftime
sess = sage.Session()
###Output
_____no_output_____
###Markdown
Upload the data for training

When training large models with huge amounts of data, you typically use big data tools such as Amazon Athena, AWS Glue, or Amazon EMR to create your data in S3. For the purposes of this example, we use the classic [Iris dataset](https://en.wikipedia.org/wiki/Iris_flower_data_set).

We can use the tools provided by the SageMaker Python SDK to upload the data to a default bucket.
###Code
WORK_DIRECTORY = 'data'
data_location = sess.upload_data(WORK_DIRECTORY, key_prefix=prefix)
###Output
_____no_output_____
###Markdown
Create an estimator and fit the model

To use SageMaker with our algorithm, we create an `Estimator` that defines how to train with the container. It includes the configuration needed to invoke SageMaker training:

* The __container name__, constructed as in the shell commands above.
* The __role__, as defined above.
* The __instance count__, the number of machines to use for training.
* The __instance type__, the type of machine to use for training.
* The __output path__, which determines where the model artifact will be written.
* The __session__, the SageMaker session object defined above.

Then we call `fit()` on the estimator to train against the data we uploaded above.
###Code
account = sess.boto_session.client('sts').get_caller_identity()['Account']
region = sess.boto_session.region_name
image = '{}.dkr.ecr.{}.amazonaws.com/sagemaker-decision-trees:latest'.format(account, region)
tree = sage.estimator.Estimator(image,
role, 1, 'ml.c4.2xlarge',
output_path="s3://{}/output".format(sess.default_bucket()),
sagemaker_session=sess)
tree.fit(data_location)
###Output
2019-11-21 09:26:24 Starting - Starting the training job...
2019-11-21 09:26:40 Starting - Launching requested ML instances......
2019-11-21 09:27:47 Starting - Preparing the instances for training...
2019-11-21 09:28:25 Downloading - Downloading input data
2019-11-21 09:28:25 Training - Downloading the training image...
2019-11-21 09:28:47 Training - Training image download completed. Training in progress.[31mStarting the training.[0m
[31mTraining complete.[0m
2019-11-21 09:29:11 Uploading - Uploading generated training model
2019-11-21 09:29:11 Completed - Training job completed
Training seconds: 52
Billable seconds: 52
###Markdown
Hosting your model

You can use a trained model to get real-time predictions via an HTTP endpoint. Follow these steps to walk through the process.

Deploy the model

Deploying the model to SageMaker hosting just requires a `deploy` call on the fitted model. The call takes an instance count, an instance type, and optionally serializer and deserializer functions. These are used when the resulting predictor is created on the endpoint.
###Code
from sagemaker.predictor import csv_serializer
predictor = tree.deploy(1, 'ml.m4.xlarge', serializer=csv_serializer)
###Output
---------------------------------------------------------------------------------------------------!
###Markdown
Choose some data and use it for a prediction

To run a few predictions, we extract some of the data we used for training and predict against it. This is, of course, bad statistical practice, but it is a convenient way to see how the mechanism works.
###Code
shape=pd.read_csv("data/iris.csv", header=None)
shape.sample(3)
# drop the label column in the training set
shape.drop(shape.columns[[0]],axis=1,inplace=True)
shape.sample(3)
import itertools
a = [50*i for i in range(3)]
b = [40+i for i in range(10)]
indices = [i+j for i,j in itertools.product(a,b)]
test_data=shape.iloc[indices[:-1]]
###Output
_____no_output_____
###Markdown
Prediction is as easy as calling `predict` on the predictor we got back from `deploy`, passing the data we want predictions for. The serializers take care of the data conversion for us.
###Code
print(predictor.predict(test_data.values).decode('utf-8'))
###Output
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
###Markdown
Optional cleanup

When you're done with the endpoint, you should clean it up.
###Code
sess.delete_endpoint(predictor.endpoint)
###Output
_____no_output_____
###Markdown
Run a Batch Transform job

[Amazon SageMaker Batch Transform](https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-batch.html) lets you get inferences for an entire dataset. A batch transform job takes the S3 location of your input data and writes the predictions to a specified S3 output folder. As with hosting, we can test batch transform by extracting inferences for the training data.

Create a Transform job

We create a `Transformer` that defines how to use the container to get inferences for a dataset. It includes the configuration needed to invoke SageMaker batch transform:

* The __instance count__, the number of machines used to extract inferences.
* The __instance type__, the type of machine used to extract inferences.
* The __output path__, which determines where the inference results will be written.
###Code
transform_output_folder = "batch-transform-output"
output_path="s3://{}/{}".format(sess.default_bucket(), transform_output_folder)
transformer = tree.transformer(instance_count=1,
instance_type='ml.m4.xlarge',
output_path=output_path,
assemble_with='Line',
accept='text/csv')
###Output
_____no_output_____
###Markdown
We call `transform()` on the transformer to get inferences for the data we uploaded. The following options are used in the call:

* The __data_location__, the location of the input data.
* The __content_type__, the Content-Type set on HTTP requests made to the container.
* The __split_type__, the delimiter used to split the input data.
* The __input_filter__, which drops the first column (the ID) of the input before making HTTP requests to the container.
###Code
transformer.transform(data_location, content_type='text/csv', split_type='Line', input_filter='$[1:]')
transformer.wait()
###Output
..................[31mStarting the inference server with 4 workers.[0m
[31m[2019-11-21 09:57:45 +0000] [11] [INFO] Starting gunicorn 19.9.0[0m
[31m[2019-11-21 09:57:45 +0000] [11] [INFO] Listening at: unix:/tmp/gunicorn.sock (11)[0m
[31m[2019-11-21 09:57:45 +0000] [11] [INFO] Using worker: gevent[0m
[31m[2019-11-21 09:57:45 +0000] [16] [INFO] Booting worker with pid: 16[0m
[31m[2019-11-21 09:57:45 +0000] [17] [INFO] Booting worker with pid: 17[0m
[31m[2019-11-21 09:57:45 +0000] [18] [INFO] Booting worker with pid: 18[0m
[31m[2019-11-21 09:57:45 +0000] [19] [INFO] Booting worker with pid: 19[0m
[31m169.254.255.130 - - [21/Nov/2019:09:58:23 +0000] "GET /ping HTTP/1.1" 200 1 "-" "Go-http-client/1.1"[0m
[31m169.254.255.130 - - [21/Nov/2019:09:58:23 +0000] "GET /execution-parameters HTTP/1.1" 404 2 "-" "Go-http-client/1.1"[0m
[31mInvoked with 150 records[0m
[31m169.254.255.130 - - [21/Nov/2019:09:58:23 +0000] "POST /invocations HTTP/1.1" 200 1400 "-" "Go-http-client/1.1"[0m
[32m169.254.255.130 - - [21/Nov/2019:09:58:23 +0000] "GET /ping HTTP/1.1" 200 1 "-" "Go-http-client/1.1"[0m
[32m169.254.255.130 - - [21/Nov/2019:09:58:23 +0000] "GET /execution-parameters HTTP/1.1" 404 2 "-" "Go-http-client/1.1"[0m
[32mInvoked with 150 records[0m
[32m169.254.255.130 - - [21/Nov/2019:09:58:23 +0000] "POST /invocations HTTP/1.1" 200 1400 "-" "Go-http-client/1.1"[0m
[33m2019-11-21T09:58:23.347:[sagemaker logs]: MaxConcurrentTransforms=1, MaxPayloadInMB=6, BatchStrategy=MULTI_RECORD[0m
###Markdown
For additional configuration options, see the [CreateTransformJob API](https://docs.aws.amazon.com/sagemaker/latest/dg/API_CreateTransformJob.html).

View output

Read the results of the transform job from S3 and print them.
###Code
s3_client = sess.boto_session.client('s3')
s3_client.download_file(sess.default_bucket(), "{}/iris.csv.out".format(transform_output_folder), '/tmp/iris.csv.out')
with open('/tmp/iris.csv.out') as f:
results = f.readlines()
print("Transform results: \n{}".format(''.join(results)))
###Output
Transform results:
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
setosa
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
versicolor
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
virginica
|
PC-NBV/RGCNN_notebooks/RGCNN Training Attempt-Copy1.ipynb | ###Markdown
WORK IN PROGRESS!!! Trying to create batches from data...
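As an aside: PyTorch Geometric already provides a batching utility that does this collation automatically, so the manual approach below is mainly an exercise. A minimal sketch, assuming a reasonably recent torch_geometric (in older releases the same class is imported from `torch_geometric.data` instead of `torch_geometric.loader`):

```python
# Hypothetical sketch: let torch_geometric collate Data objects into batches.
from torch_geometric.loader import DataLoader  # older versions: from torch_geometric.data import DataLoader

loader = DataLoader(dataset_train, batch_size=32, shuffle=True)
for batch in loader:
    # batch.pos stacks the points of all 32 samples; batch.batch maps each point back to its sample index
    print(batch)
    break
```

The cells below instead build the batches by hand.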
###Code
import numpy as np
import torch                                   # needed for torch.empty / torch.cat below
from torch_geometric.data import Data

length = len(dataset_train)
batch_size = 32
num_points = 1024                              # points per sample (assumed from the 1024-point slices below)
iterations = int(np.ceil(length / batch_size))

print(dataset_train)
print(dataset_train[0])

# First pass: concatenate each batch's samples into a single Data object.
batch_data = []
for i in range(iterations):
    ob = dataset_train[i * batch_size:(i + 1) * batch_size]
    pos = torch.empty([0, 3])
    y = torch.empty([0])
    normal = torch.empty([0, 3])
    for data in ob:
        pos = torch.cat([pos, data.pos])
        y = torch.cat([y, data.y])
        normal = torch.cat([normal, data.normal])
    batch_data.append(Data(pos=pos, y=y, normal=normal))
    print(pos.shape)

print(len(batch_data))

# Second pass: reshape each concatenated batch into [batch_size, num_points, channels] tensors.
BATCHED_DATA = []
for b in range(len(batch_data)):
    pos = torch.empty([batch_size, num_points, 3])
    y = torch.empty([batch_size, 1])
    normal = torch.empty([batch_size, num_points, 3])
    for j in range(batch_size):
        pos[j] = batch_data[b].pos[num_points * j:num_points * (j + 1)]
        y[j] = batch_data[b].y[j]
        normal[j] = batch_data[b].normal[num_points * j:num_points * (j + 1)]
    BATCHED_DATA.append(Data(pos=pos, y=y, normal=normal))

print(pos.shape)
print(normal.shape)
print(y.shape)
print(len(BATCHED_DATA))
for data in BATCHED_DATA:
    print(data)
###Output
_____no_output_____ |
Taller_semana_Carolina_Garcia.ipynb | ###Markdown
Carolina Garcia Acosta
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import scipy
import seaborn as sns
import sklearn # Paquete base de ML
from scipy.stats import norm
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler, MaxAbsScaler, RobustScaler, StandardScaler
from google.colab import files
uploaded=files.upload()
%matplotlib inline
###Output
_____no_output_____
###Markdown
Introduction

**Business context.** You are an analyst at a bank and are given a dataset of the bank's customers. Your boss asks you to analyze the information to determine whether there are similarities between groups of customers, in order to launch a marketing campaign.

**Business problem.** Your task is to **build a clustering model to determine whether groups of similar customers exist**.

**Analytical context.** As a data scientist, you are asked to cluster the customers in order to identify groups of similar ones.
###Code
df = pd.read_csv("Lending_club_cleaned_2.csv")
df.head()
###Output
_____no_output_____
###Markdown
Exercise 1: Normalize the numeric data so that the values in the `annual_inc` and `loan_amnt` columns fall between 0 and 1. Tip: before normalizing, make sure the dtype of those columns is actually numeric.
###Code
# Write your code here
print(df['annual_inc'].dtype)
print(df['loan_amnt'].dtype)
def normalize(df):
result = df.copy()
for column in df.columns:
max_val = df[column].max()
min_val = df[column].min()
result[column] = (df[column] - min_val) / (max_val - min_val)
return result
df_norm = normalize(df[['annual_inc', 'loan_amnt']])
print(df_norm.describe())
###Output
annual_inc loan_amnt
count 38705.000000 38705.000000
mean 0.010944 0.313157
std 0.010711 0.216531
min 0.000000 0.000000
25% 0.006254 0.144928
50% 0.009340 0.275362
75% 0.013209 0.420290
max 1.000000 1.000000
###Markdown
Exercise 2: Use the k-means algorithm to cluster the customers with the number of clusters set to 4.
###Code
# Write your code here
k = 4
kmeans = KMeans(n_clusters=k, init='k-means++')
kmeans.fit(df_norm)
labels = kmeans.predict(df_norm)
centroids = kmeans.cluster_centers_
centroids
###Output
_____no_output_____
###Markdown
Exercise 3 (Optional): Make a scatter plot to visualize the clusters you found in Exercise 2, using a different color to identify each of the 4 clusters.
###Code
# Write your code here
# Plot the data
plt.figure(figsize=(6, 6))
color_map = {1:'r', 2:'g', 3:'b' , 4:'c', 5:'y', 6:'w'}
colors = [color_map[x+1] for x in labels]
plt.scatter(df_norm['annual_inc'], df_norm['loan_amnt'], color=colors, alpha=0.4, edgecolor='k')
for idx, centroid in enumerate(centroids):
plt.scatter(*centroid, marker='*', edgecolor='k')
plt.xlim(-0.25, 1.25)
plt.xlabel('Annual income (normalized)', fontsize=12)
plt.ylim(-0.25, 1.25)
plt.ylabel('loan_amnt', fontsize=12)
plt.yticks(fontsize=12)
plt.title('K-means Clustering after Convergence', fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
Exercise 4 (Optional): Use the elbow method to check what the optimal number of clusters is. Evaluate from 1 up to 11 clusters for this validation.
###Code
# Write your code here
sum_sq_d = []
K = range(1,11)
for k in K:
km = KMeans(n_clusters=k)
km = km.fit(df_norm[['annual_inc', 'loan_amnt']])
sum_sq_d.append(km.inertia_)
plt.figure(figsize=(8,6))
plt.plot(K, sum_sq_d, 'rx-.')
plt.xlabel('Number of clusters k', fontsize=12)
plt.xticks(range(1,11), fontsize=12)
plt.ylabel('Sum of squared distances', fontsize=12)
plt.yticks(fontsize=12)
plt.title('Elbow method for determining k', fontsize=16)
plt.show()
###Output
_____no_output_____ |
lecture_01_intro/numpy_basics.ipynb | ###Markdown
NumPy
* Makes working with N-dimensional arrays (e.g. vectors, matrices, tensors) efficient.
* NumPy functions are written in C, so they're fast. In fact, the standard Python interpreter (CPython) is itself written in C.
###Code
import numpy as np # import numpy module
###Output
_____no_output_____
###Markdown
N-dimensional arrays
###Code
a = np.array([1, 2, 3, 4, 5])
print(a)
print()
print("a.shape = ", a.shape)
print("type(a) = ", type(a))
print("a.dtype = ", a.dtype)
a = np.zeros((3, 3)) # 3x3 matrix of zeros
print(a)
print()
print("a.shape = ", a.shape)
print("type(a) = ", type(a))
print("a.dtype = ", a.dtype)
a = np.zeros((3, 3), dtype=int) # 3x3 integer matrix of zeros
print(a)
print()
print("a.shape = ", a.shape)
print("type(a) = ", type(a))
print("a.dtype = ", a.dtype)
a = np.random.rand(2, 3, 3) # 2x3x3 random array of floats in [0, 1)
print(a)
print()
print("a.shape = ", a.shape)
print("type(a) = ", type(a))
print("a.dtype = ", a.dtype)
def show_array_info(arr):
    print(arr)
    print()
    print("arr.shape = ", arr.shape)
    print("type(arr) = ", type(arr))
    print("arr.dtype = ", arr.dtype)
a = np.random.randint(1, 10, size=(5, 5)) # 5x5 random matrix of ints between 1 and 10
show_array_info(a)
###Output
[[8 6 7]
[4 5 5]
[7 9 4]]
arr.shape = (3, 3)
type(arr) = <class 'numpy.ndarray'>
arr.dtype = int64
###Markdown
Array indexing
###Code
m = np.random.randint(1, 10, size=(5, 5)) # 5x5 random matrix of ints between 1 and 10
print(m)
m[0,0], m[1,0], m[3,4] # [row,col] indexes
m[4,4] = 100
m
###Output
_____no_output_____
###Markdown
Subarrays
###Code
m[0,:] # 1st row
m[:,3] # 4th col
m[1:3,3] # 2nd-3rd elements in 4th col
###Output
_____no_output_____
###Markdown
 Reductions
###Code
m = np.random.randint(1, 10, size=(5, 5)) # 5x5 random matrix of ints between 1 and 10
print(m)
m.min(), m.max(), m.mean(), m.var(), m.std() # min, max, mean, variance, standard deviation
###Output
[[9 8 2 3 8]
[8 4 7 4 7]
[3 7 2 3 8]
[3 4 5 5 4]
[1 7 6 8 4]]
###Markdown
Partial reductions
###Code
m.max(axis=0), m.max(axis=1) # axis=0: max over rows (one value per column); axis=1: max over columns (one value per row)
###Output
_____no_output_____
###Markdown
Array multiplication
###Code
A = np.random.randint(1, 10, size=(2, 2)) # 2x2 random matrix of ints between 1 and 10
b = np.array([2, 3]) # length 2 row vector
c = np.reshape(b, (2, 1)) # b as a col vector instead of row vector
print(A)
print()
print(b)
print()
print(c)
###Output
[[9 7]
[7 8]]
[2 3]
[[2]
[3]]
###Markdown
Element-wise multiplication
###Code
print(A)
print()
print(A * A)
print(b)
print()
print(b / 2)
###Output
[2 3]
[1. 1.5]
###Markdown
Broadcasting
###Code
print(A)
print()
print(b)
print()
print(A * b) # element-wise multiplication of b with each row of A
print(b)
print()
print(c)
print(A)
print()
print(c)
print()
print(A * c) # element-wise multiplication of c with each col of A
###Output
[[5 2]
[8 8]]
[[2]
[3]]
[[10 4]
[24 24]]
###Markdown
 Matrix multiplication\begin{equation}\label{eq:matrixeqn} \begin{pmatrix} m_{00} & m_{01} & m_{02} \\ m_{10} & m_{11} & m_{12} \\ m_{20} & m_{21} & m_{22} \end{pmatrix} \cdot \begin{pmatrix} v_{0} \\ v_{1} \\ v_{2} \end{pmatrix} = \begin{pmatrix} m_{00} * v_{0} + m_{01} * v_{1} + m_{02} * v_{2} \\ m_{10} * v_{0} + m_{11} * v_{1} + m_{12} * v_{2} \\ m_{20} * v_{0} + m_{21} * v_{1} + m_{22} * v_{2} \end{pmatrix}\end{equation}(3 x 3) . (3 x 1) = (3 x 1)
###Code
print(b)
print()
print(c)
print()
print(b.dot(c))
c.dot(b) # note: this raises ValueError -- a (2,1) matrix cannot be matrix-multiplied by a length-2 vector
print(A)
print()
print(b)
print()
print(A.dot(b))
print(A)
print()
print(c)
print()
print(A.dot(c))
###Output
[[6 5]
[6 7]]
[[2]
[3]]
[[27]
[33]]
###Markdown
Speed and timing* Basically do everything in numpy that you possibly can because it's *much much* faster than native python code.
###Code
import timeit
import time
start = timeit.default_timer() # timestamp in sec
time.sleep(2) # sleep 2 sec
stop = timeit.default_timer() # timestamp in sec
stop - start # elapsed time
a = np.linspace(1, 1000000, num=1000000, endpoint=True) # 1-1000000
start = timeit.default_timer()
for i in range(len(a)):
a[i] = a[i]**2
stop = timeit.default_timer()
stop - start
a = np.linspace(1, 1000000, num=1000000, endpoint=True) # 1-1000000
start = timeit.default_timer()
a = a**2
stop = timeit.default_timer()
stop - start
###Output
_____no_output_____
###Markdown
Plot with matplotlib

Many more types of plots than those shown below can be made, such as histograms, contours, etc. Lots of info on these is available online.
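For instance, a histogram — one of the plot types mentioned above but not demonstrated below — takes only a couple of lines. A self-contained sketch:

```python
import numpy as np
import matplotlib.pyplot as plt

samples = np.random.randn(1000)   # 1000 draws from a standard normal distribution
plt.hist(samples, bins=30)
plt.xlabel('value')
plt.ylabel('count')
plt.title('histogram example');
```

The cells below walk through scatter plots, 3D axes, subplots, and interactive mode.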
###Code
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d # for 3d plots
x = np.random.rand(100)
y = np.random.rand(100)
z = np.random.rand(100)
fig = plt.figure() # make a figure
plt.plot(x, y, 'o') # plot x vs. y in current figure using circle markers
plt.plot(x, z, 'o') # plot x vs. y in current figure using circle markers
plt.xlabel('x')
plt.ylabel('y or z')
plt.title('2d plot')
plt.legend(['x', 'y']); # last semicolon suppresses title object output
fig = plt.figure() # make a figure
ax = plt.axes(projection='3d') # set axes of current figure to 3d axes (this requires having imported mplot3d from mpl_toolkits)
ax.scatter(x, y, z) # 3d scatter plot of x vs. y vs. z
ax.scatter(x, z, y) # 3d scatter plot of x vs. z vs. y
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('z')
plt.title('3d plot')
plt.legend(['x', 'y']);
###Output
_____no_output_____
###Markdown
Subplots
###Code
fig, ax = plt.subplots(nrows=2, ncols=3)
ax[0,0].scatter(x, y)
ax[0,0].set_xlabel('x')
ax[0,0].set_ylabel('y');
ax[1,1].scatter(x, z, marker='s', color='r')
ax[1,1].set_xlabel('x')
ax[1,1].set_ylabel('z')
ax[0,2].plot(range(len(y)), y, linestyle='-', color='c')
ax[0,2].set_ylabel('y')
fig.tight_layout(); # helps improve margins between plots
###Output
_____no_output_____
###Markdown
Interactive plotsYou may need to install the following for interactive plots in JupyterLab: > conda install -c conda-forge ipympl > conda install -c conda-forge widgetsnbextension > conda install nodejs > jupyter labextension install @jupyter-widgets/jupyterlab-manager > jupyter labextension install jupyter-matplotlib`%matplotlib widget` will start using interactive plots`%matplotlib inline` will go back to using non-interactive plots
###Code
# interactive plot mode
%matplotlib widget
fig1 = plt.figure() # make a figure
plt.plot(x, y, 'o') # plot x vs. y in current figure using circle markers
fig2 = plt.figure() # make a figure
ax = plt.axes(projection='3d') # set axes of current figure to 3d axes
ax.scatter(x, y, z); # 3d scatter plot of x vs. y vs. z
# back to non-interactive plot mode
%matplotlib inline
###Output
_____no_output_____ |
mlcourse/MultipleRegression.ipynb | ###Markdown
Multiple Regression Let's grab a small little data set of Blue Book car values:
###Code
import pandas as pd
df = pd.read_excel('http://cdn.sundog-soft.com/Udemy/DataScience/cars.xls')
df.head()
%matplotlib inline
import numpy as np
df1=df[['Mileage','Price']]
bins = np.arange(0,50000,10000)
#print(bins)
groups = df1.groupby(pd.cut(df1['Mileage'],bins)).mean()
print(groups.head())
groups['Price'].plot.line()
###Output
[ 0 10000 20000 30000 40000]
Mileage Price
Mileage
(0, 10000] 5588.629630 24096.714451
(10000, 20000] 15898.496183 21955.979607
(20000, 30000] 24114.407104 20278.606252
(30000, 40000] 33610.338710 19463.670267
###Markdown
We can use pandas to split up this matrix into the feature vectors we're interested in, and the value we're trying to predict.Note how we are avoiding the make and model; regressions don't work well with ordinal values, unless you can convert them into some numerical order that makes sense somehow.Let's scale our feature data into the same range so we can easily compare the coefficients we end up with.
###Code
import statsmodels.api as sm
from sklearn.preprocessing import StandardScaler
scale = StandardScaler()
X = df[['Mileage', 'Cylinder', 'Doors']]
y = df['Price']
X[['Mileage', 'Cylinder', 'Doors']] = scale.fit_transform(X[['Mileage', 'Cylinder', 'Doors']].values)
# Add a constant column to our model so we can have a Y-intercept
X = sm.add_constant(X)
print (X)
est = sm.OLS(y, X).fit()
print(est.summary())
###Output
const Mileage Cylinder Doors
0 1.0 -1.417485 0.52741 0.556279
1 1.0 -1.305902 0.52741 0.556279
2 1.0 -0.810128 0.52741 0.556279
3 1.0 -0.426058 0.52741 0.556279
4 1.0 0.000008 0.52741 0.556279
.. ... ... ... ...
799 1.0 -0.439853 0.52741 0.556279
800 1.0 -0.089966 0.52741 0.556279
801 1.0 0.079605 0.52741 0.556279
802 1.0 0.750446 0.52741 0.556279
803 1.0 1.932565 0.52741 0.556279
[804 rows x 4 columns]
OLS Regression Results
==============================================================================
Dep. Variable: Price R-squared: 0.360
Model: OLS Adj. R-squared: 0.358
Method: Least Squares F-statistic: 150.0
Date: Sun, 31 Oct 2021 Prob (F-statistic): 3.95e-77
Time: 00:54:10 Log-Likelihood: -8356.7
No. Observations: 804 AIC: 1.672e+04
Df Residuals: 800 BIC: 1.674e+04
Df Model: 3
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 2.134e+04 279.405 76.388 0.000 2.08e+04 2.19e+04
Mileage -1272.3412 279.567 -4.551 0.000 -1821.112 -723.571
Cylinder 5587.4472 279.527 19.989 0.000 5038.754 6136.140
Doors -1404.5513 279.446 -5.026 0.000 -1953.085 -856.018
==============================================================================
Omnibus: 157.913 Durbin-Watson: 0.069
Prob(Omnibus): 0.000 Jarque-Bera (JB): 257.529
Skew: 1.278 Prob(JB): 1.20e-56
Kurtosis: 4.074 Cond. No. 1.03
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
The table of coefficients above gives us the values to plug into an equation of the form: B0 + B1 * Mileage + B2 * Cylinder + B3 * Doors. In this example, it's pretty clear from the coefficients that the number of cylinders is more important than anything else. Could we have figured that out earlier?
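As a quick sanity check on that equation form (a sketch, using the fitted results object from above): `est.params` holds B0..B3 in the same order as the summary table, so evaluating the linear combination by hand should match `est.predict`.

```python
# Hypothetical check: evaluate B0 + B1*Mileage + B2*Cylinder + B3*Doors by hand for one row.
row = X.iloc[0]                        # X already contains the constant column
manual = (est.params * row).sum()      # coefficients dotted with [1, Mileage, Cylinder, Doors]
print(manual, est.predict(row.values.reshape(1, -1))[0])   # the two numbers should agree
```

The next cell looks at average price per door count to answer the question above.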
###Code
y.groupby(df.Doors).mean()
###Output
_____no_output_____
###Markdown
Surprisingly, more doors does not mean a higher price! (Maybe it implies a sport car in some cases?) So it's not surprising that it's pretty useless as a predictor here. This is a very small data set however, so we can't really read much meaning into it.How would you use this to make an actual prediction? Start by scaling your multiple feature variables into the same scale used to train the model, then just call est.predict() on the scaled features:
###Code
scaled = scale.transform([[45000, 8, 4]])
scaled = np.insert(scaled[0], 0, 1) #Need to add that constant column in again.
print(scaled)
predicted = est.predict(scaled)
print(predicted)
###Output
[1. 3.07256589 1.96971667 0.55627894]
[27658.15707316]
|
Pruebas.ipynb | ###Markdown
Plotting stacked bar chart for number of sales per office
###Code
dates = dff.month_year.sort_values().unique()
office_ids = dff.office_id.unique()
sells = dff.groupby('office_id').month_year.value_counts()
sells[0].sort_index().values
fig = go.Figure(data=[
go.Bar(name=office_names.loc[idx, 'name'], x=dates, y=sells[idx].sort_index().values) for idx in sorted(office_ids)
])
fig.update_layout(barmode='stack')
fig.show()
###Output
_____no_output_____
###Markdown
Plotting revenue per office
###Code
dff.groupby(['office_id', 'month_year'])['sale_amount'].sum()
dates = dff.month_year.sort_values().unique()
office_ids = dff.office_id.unique()
revenue = dff.groupby(['office_id', 'month_year'])['sale_amount'].sum()
revenue[0].sort_index().values
fig = go.Figure(data=[
go.Bar(name=office_names.loc[idx, 'name'], x=dates, y=revenue[idx].sort_index().values) for idx in sorted(office_ids)
])
fig.update_layout(barmode='stack')
fig.show()
###Output
_____no_output_____
###Markdown
Programa paso a paso
###Code
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn import preprocessing
import seaborn as sns
import matplotlib.pyplot as plt
df_train = pd.read_csv('train.csv',index_col='Id')
df_test = pd.read_csv('test.csv',index_col='Id')
df_train.columns
# Columns to drop, given the factors discussed in the README.md
no_relevancia = ['index', 'month', 'day', 'NO2', 'O3', 'DEWP', 'station']
df_train.drop(columns= no_relevancia, inplace= True)
df_test.drop(columns= no_relevancia, inplace= True)
df_train.head()
###Output
_____no_output_____
###Markdown
At first glance, we can see that it may be necessary to standardize the data in the following columns:
* year: categorize the values and avoid working in the thousands
* hour: as long as it is not already in 24-hour (military) format
* TEMP: avoid negative values (?)
* wd: encode it as dummy variables
###Code
df_train.isna().sum()
df_train.dtypes
df_train["year"].value_counts()
print(f"TEMP\nmin: {df_train['TEMP'].min()}\nmax: {df_train['TEMP'].max()}")
df_train["wd"].value_counts()
###Output
_____no_output_____
###Markdown
The wind direction has more values than I expected; I think I should condense it into binary values for N, E, S, W.
###Code
df_train["TEMP"] =(df_train["TEMP"]-df_train["TEMP"].min())/(df_train["TEMP"].max()-df_train["TEMP"].min())
df_test["TEMP"] =(df_test["TEMP"]-df_test["TEMP"].min())/(df_test["TEMP"].max()-df_test["TEMP"].min())
def Estandarizar_Direccion(df):
for idx in df.index:
valor_cargado = df.loc[idx, "wd"]
df.loc[idx, "N"] = 1 if "N" in valor_cargado else 0
df.loc[idx, "S"] = 1 if "S" in valor_cargado else 0
df.loc[idx, "E"] = 1 if "E" in valor_cargado else 0
df.loc[idx, "W"] = 1 if "W" in valor_cargado else 0
df.drop(columns=["wd"])
Estandarizar_Direccion(df_train)
Estandarizar_Direccion(df_test)
df_train.drop(columns= ["wd"], inplace= True)
df_test.drop(columns= ["wd"], inplace= True)
df_train["year"] = df_train["year"]-2013
df_test["year"] = df_test["year"]-2013
df_train.head()
df_test["PM2.5"] = 0
df_test.head()
X = df_train.drop(columns=["PM2.5"])
y = df_train["PM2.5"]
X_train,x_test,y_train, y_test = train_test_split(X,y)
corr = X.corr()
mask = np.triu(np.ones_like(corr, dtype=bool))
f, ax = plt.subplots(figsize=(11, 9))
cmap = sns.diverging_palette(230, 20, as_cmap=True)
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
###Output
_____no_output_____
###Markdown
Right...
1. I can define the direction using only N and E; the absence of each would imply the opposite direction.
2. I could try removing the atmospheric pressure and keeping the temperature.
I'll do this further down, so the original data is kept and the differences can be compared.
###Code
def modeling_testing(lista_modelos):
for i in lista_modelos:
modelo = i()
modelo.fit(X_train,y_train)
train_score = modelo.score(X_train,y_train)
test_score = modelo.score(x_test, y_test)
print('Modelo :', str(i).split(sep = '.')[-1])
print('Train_score :', train_score,'\nTest_Score:' ,test_score,'\n')
###Output
_____no_output_____
###Markdown

###Code
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.neural_network import MLPRegressor
lista_m= [
RandomForestRegressor,
GradientBoostingRegressor,
KNeighborsRegressor,
LinearRegression,
Ridge,
MLPRegressor
]
modeling_testing(lista_m)
###Output
Modelo : RandomForestRegressor'>
Train_score : 0.9902784167626913
Test_Score: 0.9340531626707985
Modelo : GradientBoostingRegressor'>
Train_score : 0.9061643343551061
Test_Score: 0.9079549029924755
Modelo : KNeighborsRegressor'>
Train_score : 0.9317770919229104
Test_Score: 0.8996389758308745
Modelo : LinearRegression'>
Train_score : 0.8434875329696636
Test_Score: 0.8477555079401506
Modelo : Ridge'>
Train_score : 0.8434875327802022
Test_Score: 0.8477556321537686
Modelo : MLPRegressor'>
Train_score : 0.9071584270801607
Test_Score: 0.9106167540300654
###Markdown
With default values, 'RandomForestRegressor' is the model with the highest accuracy.
###Code
rfr_0 = RandomForestRegressor(
n_estimators= 100,
criterion= "mse",
min_samples_split= 2,
min_samples_leaf= 1
)
rfr_1 = RandomForestRegressor(
n_estimators= 200,
criterion= "mse",
min_samples_split= 4,
min_samples_leaf= 2
)
rfr_2 = RandomForestRegressor(
n_estimators= 300,
criterion= "mse",
min_samples_split= 6,
min_samples_leaf= 3
)
configuraciones = [rfr_0, rfr_1, rfr_2]
for configuracion in configuraciones:
configuracion.fit(X_train,y_train)
train_score = configuracion.score(X_train,y_train)
test_score = configuracion.score(x_test, y_test)
print('Train_score :', train_score,'\nTest_Score:' ,test_score,'\n')
rfr_3 = RandomForestRegressor(
n_estimators= 50,
criterion= "mse",
min_samples_split= 2,
min_samples_leaf= 1
)
rfr_3.fit(X_train,y_train)
train_score = rfr_3.score(X_train,y_train)
test_score = rfr_3.score(x_test, y_test)
print('Train_score :', train_score,'\nTest_Score:' ,test_score,'\n')
X = df_train.drop(columns=["PM2.5","S","W", "PRES"])
X_train,x_test,y_train, y_test = train_test_split(X,y)
rfr_0.fit(X_train,y_train)
train_score = rfr_0.score(X_train,y_train)
test_score = rfr_0.score(x_test, y_test)
print('Train_score :', train_score,'\nTest_Score:' ,test_score,'\n')
X = df_train.drop(columns=["PM2.5","S","W"])
X_train,x_test,y_train, y_test = train_test_split(X,y)
rfr_0.fit(X_train,y_train)
train_score = rfr_0.score(X_train,y_train)
test_score = rfr_0.score(x_test, y_test)
print('Train_score :', train_score,'\nTest_Score:' ,test_score,'\n')
###Output
Train_score : 0.9903314356573149
Test_Score: 0.9313785140220596
|
examples/SimpleTracker-yolo-model.ipynb | ###Markdown
Loading the Object Detector Model: YOLO Object Detection and Tracking

Here, the YOLO object detection model is used. The pre-trained model comes from the following sources:
- Object detection is taken from the following work: **Redmon, J., & Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv preprint arXiv:1804.02767.**
- The research paper for YOLO object detection and its improvements can be found here: https://arxiv.org/abs/1804.02767
- Refer to the following link for more details on the network: https://pjreddie.com/darknet/yolo/
- The weights and configuration files can be downloaded and stored in a folder.
- Weights: https://pjreddie.com/media/files/yolov3.weights
###Code
yolomodel = {"config_path":yolo_config_path.selected,
"model_weights_path":yolo_weights_path.selected,
"coco_names":coco_names_path.selected,
"confidence_threshold": 0.5,
"threshold":0.3
}
net = cv.dnn.readNetFromDarknet(yolomodel["config_path"], yolomodel["model_weights_path"])
labels = open(yolomodel["coco_names"]).read().strip().split("\n")
np.random.seed(12345)
layer_names = net.getLayerNames()
layer_names = [layer_names[i[0]-1] for i in net.getUnconnectedOutLayers()]
bbox_colors = np.random.randint(0, 255, size=(len(labels), 3))
###Output
['yolo_82', 'yolo_94', 'yolo_106']
###Markdown
Instantiate the Tracker Class
###Code
maxLost = 5 # maximum number of object losts counted when the object is being tracked
tracker = SimpleTracker(max_lost = maxLost)
###Output
_____no_output_____
###Markdown
Initiate opencv video capture objectThe `video_src` can take two values:1. If `video_src=0`: OpenCV accesses the camera connected through USB2. If `video_src='video_file_path'`: OpenCV will access the video file at the given path (can be MP4, AVI, etc format)
###Code
video_src = video_file_path.selected #0
cap = cv.VideoCapture(video_src)
###Output
_____no_output_____
###Markdown
Start object detection and tracking
###Code
(H, W) = (None, None) # input image height and width for the network
writer = None
while(True):
ok, image = cap.read()
if not ok:
print("Cannot read the video feed.")
break
if W is None or H is None: (H, W) = image.shape[:2]
blob = cv.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
detections_layer = net.forward(layer_names) # detect objects using object detection model
detections_bbox = [] # bounding box for detections
boxes, confidences, classIDs = [], [], []
for out in detections_layer:
for detection in out:
scores = detection[5:]
classID = np.argmax(scores)
confidence = scores[classID]
if confidence > yolomodel['confidence_threshold']:
box = detection[0:4] * np.array([W, H, W, H])
(centerX, centerY, width, height) = box.astype("int")
x = int(centerX - (width / 2))
y = int(centerY - (height / 2))
boxes.append([x, y, int(width), int(height)])
confidences.append(float(confidence))
classIDs.append(classID)
idxs = cv.dnn.NMSBoxes(boxes, confidences, yolomodel["confidence_threshold"], yolomodel["threshold"])
if len(idxs)>0:
for i in idxs.flatten():
(x, y) = (boxes[i][0], boxes[i][1])
(w, h) = (boxes[i][2], boxes[i][3])
detections_bbox.append((x, y, x+w, y+h))
clr = [int(c) for c in bbox_colors[classIDs[i]]]
cv.rectangle(image, (x, y), (x+w, y+h), clr, 2)
cv.putText(image, "{}: {:.4f}".format(labels[classIDs[i]], confidences[i]),
(x, y-5), cv.FONT_HERSHEY_SIMPLEX, 0.5, clr, 2)
objects = tracker.update(detections_bbox) # update tracker based on the newly detected objects
for (objectID, centroid) in objects.items():
text = "ID {}".format(objectID)
cv.putText(image, text, (centroid[0] - 10, centroid[1] - 10), cv.FONT_HERSHEY_SIMPLEX,
0.5, (0, 255, 0), 2)
cv.circle(image, (centroid[0], centroid[1]), 4, (0, 255, 0), -1)
cv.imshow("image", image)
if cv.waitKey(1) & 0xFF == ord('q'):
break
if writer is None:
fourcc = cv.VideoWriter_fourcc(*"MJPG")
writer = cv.VideoWriter("output.avi", fourcc, 30, (W, H), True)
writer.write(image)
writer.release()
cap.release()
cv.destroyWindow("image")
###Output
Cannot read the video feed.
|
pymc3/examples/stochastic_volatility.ipynb | ###Markdown
Stochastic Volatility model
###Code
import numpy as np
import pymc3 as pm
from pymc3.distributions.timeseries import GaussianRandomWalk
from scipy.sparse import csc_matrix
from scipy import optimize
%pylab inline
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Asset prices have time-varying volatility (variance of day over day `returns`). In some periods, returns are highly variable, while in others very stable. Stochastic volatility models model this with a latent volatility variable, modeled as a stochastic process. The following model is similar to the one described in the No-U-Turn Sampler paper, Hoffman (2011) p21.$$ \sigma \sim Exponential(50) $$$$ \nu \sim Exponential(.1) $$$$ s_i \sim Normal(s_{i-1}, \sigma^{-2}) $$$$ log(\frac{y_i}{y_{i-1}}) \sim t(\nu, 0, exp(-2 s_i)) $$Here, $y$ is the daily return series and $s$ is the latent log volatility process. Build Model First we load some daily returns of the S&P 500.
###Code
n = 400
returns = np.genfromtxt("data/SP500.csv")[-n:]
returns[:5]
plt.plot(returns)
###Output
_____no_output_____
###Markdown
Specifying the model in pymc3 mirrors its statistical specification.
###Code
model = pm.Model()
with model:
sigma = pm.Exponential('sigma', 1./.02, testval=.1)
nu = pm.Exponential('nu', 1./10)
s = GaussianRandomWalk('s', sigma**-2, shape=n)
r = pm.StudentT('r', nu, lam=pm.exp(-2*s), observed=returns)
###Output
_____no_output_____
###Markdown
Fit Model For this model, the full maximum a posteriori (MAP) point is degenerate and has infinite density. However, if we fix `sigma` and `nu` it is no longer degenerate, so we find the MAP with respect to the volatility process `s`, keeping `sigma` and `nu` constant at their default values. We use L-BFGS because it is more efficient for high-dimensional functions (`s` has n elements).
###Code
with model:
start = pm.find_MAP(vars=[s], fmin=optimize.fmin_l_bfgs_b)
###Output
_____no_output_____
###Markdown
We do a short initial run to get near the right area, then start again using a new Hessian at the new starting point to get faster sampling due to better scaling. We do a short run since this is an interactive example.
###Code
with model:
step = pm.NUTS(vars=[s, nu,sigma],scaling=start, gamma=.25)
start2 = pm.sample(100, step, start=start)[-1]
# Start next run at the last sampled position.
step = pm.NUTS(vars=[s, nu,sigma],scaling=start2, gamma=.55)
trace = pm.sample(2000, step, start=start2)
figsize(12,6)
pm.traceplot(trace, model.vars[:-1]);
figsize(12,6)
title(str(s))
plot(trace[s][::10].T,'b', alpha=.03);
xlabel('time')
ylabel('log volatility')
###Output
_____no_output_____
###Markdown
Looking at the returns over time and overlaying the estimated standard deviation we can see how the model tracks the volatility over time.
###Code
plot(returns)
plot(np.exp(trace[s][::10].T), 'r', alpha=.03);
sd = np.exp(trace[s].T)
plot(-np.exp(trace[s][::10].T), 'r', alpha=.03);
xlabel('time')
ylabel('returns')
###Output
_____no_output_____ |
multi-output-multi-label-regression.ipynb | ###Markdown
x_w3_L9(last lec)-mlt-dip-iitm Multi-output/Multi-label Regression. In case of multi-output regression, there is more than one output label, and all of them are real numbers. Training data: let's generate synthetic data for demonstrating the training set in multi-output regression using the make_regression dataset-generation function from the sklearn library.
###Code
from sklearn.datasets import make_regression
X, y, coef = make_regression(n_samples=100, n_features=10, n_informative=10, bias=1, n_targets=5, shuffle=True, coef=True, random_state=42)
print(X.shape)
print(y.shape)
###Output
(100, 10)
(100, 5)
###Markdown
Let's examine first five examples in terms of their features and labels:
###Code
print("Sample training examples:\n ", X[:5])
print("\n")
print("Corresponding labels:\n ", y[:5])
###Output
Sample training examples:
[[-2.07339023 -0.37144087 1.27155509 1.75227044 0.93567839 -1.40751169
-0.77781669 -0.34268759 -1.11057585 1.24608519]
[-0.90938745 -1.40185106 -0.50347565 -0.56629773 0.09965137 0.58685709
2.19045563 1.40279431 -0.99053633 0.79103195]
[-0.18565898 -1.19620662 -0.64511975 1.0035329 0.36163603 0.81252582
1.35624003 -1.10633497 -0.07201012 -0.47917424]
[ 0.03526355 0.21397991 -0.57581824 0.75750771 -0.53050115 -0.11232805
-0.2209696 -0.69972551 0.6141667 1.96472513]
[-0.51604473 -0.46227529 -0.8946073 -0.47874862 1.25575613 -0.43449623
-0.30917212 0.09612078 0.22213377 0.93828381]]
Corresponding labels:
[[-133.15919852 -88.95797818 98.19127175 25.68295511 -132.79294654]
[-110.38909784 146.04459736 -169.58916067 118.96066861 -177.08414159]
[ -97.80350267 4.32654061 -87.56082281 -5.58466452 6.36897388]
[ 25.39024616 -70.41180117 186.15213706 132.77153362 53.42301307]
[-140.61925153 -53.87007831 -101.11514549 -113.36926374 -115.61959345]]
###Markdown
and the coefficients or weight vector used for generating this dataset is
###Code
coef
###Output
_____no_output_____
###Markdown
[Preprocessing: Dummy feature and train-test split]
###Code
from sklearn.model_selection import train_test_split
def add_dummy_feature(X):
# np.column_stack((np.ones(x.shape[0]) x))
X_dummyFeature = np.column_stack((np.ones(X.shape[0]), X))
return X_dummyFeature
def trainTestSplit(X, y):
return train_test_split(X,y, test_size=.2,random_state=42 )
def preprocess(X, y):
X_withdummyfeature = add_dummy_feature(X)
X_train, X_test, y_train, y_test = trainTestSplit(X_withdummyfeature, y)
return (X_train, X_test, y_train, y_test)
X_train, X_test, y_train, y_test = preprocess(X, y)
###Output
_____no_output_____
###Markdown
Model There are two options for modeling this problem: 1. Solve k independent linear regression problems, which gives some flexibility in using a different representation for each problem. 2. Solve a joint learning problem, as outlined in the equation above; we pursue this approach. Loss The loss function is $$J(w) = \frac{1}{2}(Xw-y)^T (Xw - y)$$ Optimization 1. Normal equation 2. Gradient descent and its variations Evaluation RMSE or loss Linear regression implementation
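For reference (this note is an addition, not part of the original lecture material), stacking the $k$ per-output weight vectors as the columns of a matrix $W$ lets the joint problem be solved in one shot; the normal-equation solution that the `fit` method below computes via the pseudo-inverse is $$\hat{W} = (X^\top X)^{-1} X^\top Y = X^{+} Y,$$ so each column of $\hat{W}$ is exactly the ordinary least-squares solution for the corresponding output.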
###Code
class LinReg(object):
'''
Linear regression model
-----------------------
y = X@w
X: A feature matrix
w: weight vector
y: label vector
'''
def __init__(self):
self.t0 = 200
self.t1 = 100000
def predict(self, X:np.ndarray) -> np.ndarray:
'''Prediction of output label for a given input.
Args:
X: Feature matrix for given inputs.
Returns:
y: Output label vector as predicted by the given model.
'''
y = X @ self.w
return y
def loss(self, X:np.ndarray, y:np.ndarray) -> float:
'''Calculate loss for a model based on known labels
Args:
X: Feature matrix for given inputs.
y: Output label vector as predicted by the given model.
Returns:
Loss
'''
e = y - self.predict(X)
return (1/2) * (np.transpose(e) @ e)
def rmse(self, X:np.ndarray, y:np.ndarray) -> float:
'''Calculates root mean squared error of prediction w.r.t. actual label.
Args:
X: Feature matrix for given inputs.
y: Output label vector as predicted by the given model.
Returns:
Loss
'''
return np.sqrt((2/X.shape[0]) * self.loss(X, y))
def fit(self, X:np.ndarray, y:np.ndarray) -> np.ndarray:
'''Estimates parameters of the linear regression model with normal equation.
Args:
X: Feature matrix for given inputs.
y: Output label vector as predicted by the given model.
Returns:
Weight vector
'''
self.w = np.linalg.pinv(X) @ y
return self.w
def calculate_gradient(self, X:np.ndarray, y:np.ndarray)->np.ndarray:
'''Calculates gradients of loss function w.r.t. weight vector on training set.
Args:
X: Feature matrix for given inputs.
y: Output label vector as predicted by the given model.
Returns:
A vector of gradients
'''
return np.transpose(X)@(self.predict(X) - y)
def update_weights(self, grad:np.ndarray, lr:float) -> np.ndarray:
'''Updates the weights based on the gradient of loss function.
Weight updates are carried out with the following formula:
w_new := w_old - lr*grad
Args:
2. grad: gradient of loss w.r.t. w
3. lr: learning rate
Returns:
Updated weight vector
'''
return (self.w - lr*grad)
def learning_schedule(self, t):
return self.t0 / (t + self.t1)
def gd(self, X:np.ndarray, y:np.ndarray, num_epochs:int, lr:float) -> np.ndarray:
'''Estimates parameters of linear regression model through gradient descent.
Args:
X: Feature matrix for given inputs.
y: Output label vector as predicted by the given model.
num_epochs: Number of training steps
lr: learning rate
Returns:
Weight vector: Final weight vector
'''
self.w = np.zeros((X.shape[1], y.shape[1]))
self.w_all = []
self.err_all = []
for i in np.arange(0, num_epochs):
dJdW = self.calculate_gradient(X, y)
self.w_all.append(self.w)
self.err_all.append(self.loss(X, y))
self.w = self.update_weights(dJdW, lr)
return self.w
def mbgd(self, X:np.ndarray, y:np.ndarray,
num_epochs:int, batch_size:int) -> np.ndarray:
'''Estimates parameters of linear regression model through gradient descent.
Args:
X: Feature matrix of training data.
y: Label vector for training data
num_epochs: Number of training steps
batch_size: Number of examples in a batch
Returns:
Weight vector: Final weight vector
'''
        self.w = np.zeros((X.shape[1], y.shape[1]))  # one weight column per output, matching gd()
self.w_all = [] # all params across iterations.
self.err_all = [] # error across iterations
mini_batch_id = 0
for epoch in range(num_epochs):
shuffled_indices = np.random.permutation(X.shape[0])
X_shuffled = X[shuffled_indices]
y_shuffled = y[shuffled_indices]
for i in range(0, X.shape[0], batch_size):
mini_batch_id += 1
                xi = X_shuffled[i:i+batch_size]  # the parameter is named batch_size, not minibatch_size
                yi = y_shuffled[i:i+batch_size]
self.w_all.append(self.w)
self.err_all.append(self.loss(xi, yi))
dJdW = 2/batch_size * self.calculate_gradient(xi, yi)
self.w = self.update_weights(dJdW, self.learning_schedule(mini_batch_id))
return self.w
def sgd(self, X:np.ndarray, y:np.ndarray,
num_epochs:int, batch_size:int) -> np.ndarray:
'''Estimates parameters of linear regression model through gradient descent.
Args:
X: Feature matrix of training data.
y: Label vector for training data
num_epochs: Number of training steps
batch_size: Number of examples in a batch
Returns:
Weight vector: Final weight vector
'''
        self.w = np.zeros((X.shape[1], y.shape[1]))  # one weight column per output, matching gd()
self.w_all = [] # all params across iterations.
self.err_all = [] # error across iterations
mini_batch_id = 0
for epoch in range(num_epochs):
for i in range(X.shape[0]):
random_index = np.random.randint(X.shape[0])
                xi = X[random_index:random_index+1]  # sample directly from X; X_shuffled is not defined in sgd
                yi = y[random_index:random_index+1]
self.w_all.append(self.w)
self.err_all.append(self.loss(xi, yi))
gradients = 2 * self.calculate_gradient(xi, yi)
lr = self.learning_schedule(epoch * X.shape[0] + i)
self.w = self.update_weights(gradients, lr)
return self.w
lin_reg = LinReg()
w = lin_reg.fit(X_train, y_train)
# Check if the weight vector is the same as the coefficient vector used for making the data:
np.testing.assert_almost_equal(w[1:, :], coef, decimal=2)
###Output
_____no_output_____
###Markdown
Let's check the estimated weight vector
###Code
w
###Output
_____no_output_____
###Markdown
The weight vectors are along the column.
###Code
w = lin_reg.gd(X_train, y_train, num_epochs=100, lr=0.01)
np.testing.assert_almost_equal(w[1:, :], coef, decimal=2)
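# (Added note) The markdown above lists RMSE as the evaluation metric; one way to check it on the
# held-out split with the class's own helper would be:
# print(lin_reg.rmse(X_test, y_test))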
###Output
_____no_output_____ |
notebook/bert_baseline.ipynb | ###Markdown
Prepare
###Code
device = 'cuda:1' if cuda.is_available() else 'cpu'
MAX_LEN = 150
BATCH_SIZE = 64
EPOCHS = 1
LEARNING_RATE = 1e-05
DISTIL_BERT_CHECKPOINT = 'distilbert-base-uncased'
RUN_NAME = 'ROS'
TEST_PATH = '../data/processed/quick_test.csv'
TRAIN_PATH = '../data/ros/train.csv'
MODEL_SAVE = '../models/'
tokenizer = DistilBertTokenizer.from_pretrained(DISTIL_BERT_CHECKPOINT)
###Output
_____no_output_____
###Markdown
Dataset and dataloader
###Code
class QuoraDataset(Dataset):
def __init__(self, file_path, tokenizer, max_len):
self._dataset = pd.read_csv(file_path, low_memory=False)
self._tokenizer = tokenizer
self._max_len = max_len
def __getitem__(self, index):
text = self._dataset.iloc[index]["question_text"]
inputs = self._tokenizer(
[text],
truncation=True,
return_tensors="pt",
max_length=self._max_len,
padding='max_length'
)
return {
"ids": inputs["input_ids"],
"mask": inputs["attention_mask"],
"target": torch.tensor(self._dataset.iloc[index]["target"], dtype=torch.long)
}
def __len__(self):
return len(self._dataset)
train_dataset = QuoraDataset(TRAIN_PATH, tokenizer, MAX_LEN)
test_dataset = QuoraDataset(TEST_PATH, tokenizer, MAX_LEN)
train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=True)
###Output
_____no_output_____
###Markdown
DistilBert Model
###Code
class DistilBertModelClass(nn.Module):
def __init__(self):
super(DistilBertModelClass, self).__init__()
self.distil_bert = DistilBertModel.from_pretrained("distilbert-base-uncased")
self.linear1 = nn.Linear(768, 2)
self.sigmoid = nn.Sigmoid()
def forward(self, ids, mask):
bert_out = self.distil_bert(ids, mask)
        x = bert_out.last_hidden_state[:, -1, :] # hidden state of the last token position from DistilBERT's final layer
x = self.linear1(x)
x = self.sigmoid(x)
return x
model = DistilBertModelClass()
model.to(device);
###Output
Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertModel: ['vocab_layer_norm.bias', 'vocab_projector.bias', 'vocab_transform.weight', 'vocab_projector.weight', 'vocab_layer_norm.weight', 'vocab_transform.bias']
- This IS expected if you are initializing DistilBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing DistilBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
###Markdown
Training
###Code
# Creating the loss function and optimizer
loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(params=model.parameters(), lr=LEARNING_RATE)
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from collections import defaultdict
def accuracy(model, loader):
model.eval()
with torch.no_grad():
y_pred = []
y_true = []
classname = {0: 'Sincere', 1: 'Insincere'}
correct_pred = defaultdict(lambda: 0)
total_pred = defaultdict(lambda: 0)
for inputs in loader:
ids = inputs['ids'].squeeze(1).to(device)
mask = inputs['mask'].squeeze(1).to(device)
targets = inputs['target'].to(device)
output = model(ids, mask).squeeze()
_, predictions = torch.max(output, 1)
y_pred += list(predictions.to('cpu'))
y_true += list(targets.to('cpu'))
for target, prediction in zip(targets, predictions):
if target.item() == prediction.item():
correct_pred[classname[target.item()]] += 1
total_pred[classname[prediction.item()]] += 1
results = {
'accuracy': accuracy_score(y_true, y_pred),
'f1': f1_score(y_true, y_pred),
'roc_auc': roc_auc_score(y_true, y_pred)
}
for classname, correct_count in correct_pred.items():
results['precision_' + classname] = 100 * float(correct_count) / total_pred[classname]
return results
results = accuracy(model, test_loader)
results
def train(epoch=1):
model.train()
for idx, inputs in enumerate(train_loader):
ids = inputs['ids'].squeeze(1).to(device)
mask = inputs['mask'].squeeze(1).to(device)
target = inputs['target'].to(device)
output = model(ids, mask).squeeze()
optimizer.zero_grad()
l = loss(output, target)
l.backward()
optimizer.step()
# Log Loss
run["train/loss"].log(l.item())
if idx % 10 == 0:
print(f'Epoch: {epoch}, {idx}/{len(train_loader)}, Loss: {l.item()}')
if idx % 20 == 0:
results = accuracy(model, test_loader)
run["train/accuracy"] = results['accuracy']
run["train/f1"] = results['f1']
run["train/roc_auc"] = results['roc_auc']
run["train/precision_Sincere"] = results['precision_Sincere']
run["train/precision_Insincere"] = results['precision_Insincere']
print(results)
print("Saving model...")
torch.save(model.state_dict(), Path(MODEL_SAVE) / f'ftbert_{idx}_{datetime.now()}' )
###Output
_____no_output_____
###Markdown
Training
###Code
# track training and results...
import neptune.new as neptune
run = neptune.init(
project=settings.project,
api_token=settings.api_token,
name='RandomOversampling'
)
train(epoch=EPOCHS)
run.stop()
###Output
https://app.neptune.ai/demenezes/Mestrado-RI/e/MES-6
Remember to stop your run once youโve finished logging your metadata (https://docs.neptune.ai/api-reference/run#.stop). It will be stopped automatically only when the notebook kernel/interactive console is terminated.
Epoch: 1, 0/13497, Loss: 0.6846345067024231
{'accuracy': 0.1761968085106383, 'f1': 0.267297457125961, 'roc_auc': 0.5034451153534436, 'precision_Insincere': 15.484755053100377, 'precision_Sincere': 87.64044943820225}
Saving model...
Epoch: 1, 10/13497, Loss: 0.6750589609146118
Epoch: 1, 20/13497, Loss: 0.6509659886360168
Epoch: 1, 30/13497, Loss: 0.6095486879348755
Epoch: 1, 40/13497, Loss: 0.5514026880264282
Epoch: 1, 50/13497, Loss: 0.49052292108535767
Epoch: 1, 60/13497, Loss: 0.476421594619751
Epoch: 1, 70/13497, Loss: 0.4465118944644928
Epoch: 1, 80/13497, Loss: 0.4685976207256317
Epoch: 1, 90/13497, Loss: 0.42306268215179443
Epoch: 1, 100/13497, Loss: 0.456206351518631
Epoch: 1, 110/13497, Loss: 0.4823126196861267
Epoch: 1, 120/13497, Loss: 0.4374268352985382
Epoch: 1, 130/13497, Loss: 0.43227869272232056
Epoch: 1, 140/13497, Loss: 0.40552234649658203
Epoch: 1, 150/13497, Loss: 0.4238656163215637
###Markdown
###Code
for fold, (train_index, valid_index) in enumerate(skf.split(all_label, all_label)):
# remove this line if you want to train for all 7 folds
if fold == 2:
break # due to kernel time limit
logger.info('================ fold {} ==============='.format(fold))
train_input_ids = torch.tensor(all_input_ids[train_index], dtype=torch.long)
train_input_mask = torch.tensor(all_input_mask[train_index], dtype=torch.long)
train_segment_ids = torch.tensor(all_segment_ids[train_index], dtype=torch.long)
train_label = torch.tensor(all_label[train_index], dtype=torch.long)
valid_input_ids = torch.tensor(all_input_ids[valid_index], dtype=torch.long)
valid_input_mask = torch.tensor(all_input_mask[valid_index], dtype=torch.long)
valid_segment_ids = torch.tensor(all_segment_ids[valid_index], dtype=torch.long)
valid_label = torch.tensor(all_label[valid_index], dtype=torch.long)
train = torch.utils.data.TensorDataset(train_input_ids, train_input_mask, train_segment_ids, train_label)
valid = torch.utils.data.TensorDataset(valid_input_ids, valid_input_mask, valid_segment_ids, valid_label)
test = torch.utils.data.TensorDataset(test_input_ids, test_input_mask, test_segment_ids)
train_loader = torch.utils.data.DataLoader(train, batch_size=batch_size, shuffle=True)
valid_loader = torch.utils.data.DataLoader(valid, batch_size=batch_size, shuffle=False)
test_loader = torch.utils.data.DataLoader(test, batch_size=batch_size, shuffle=False)
model = NeuralNet()
model.cuda()
loss_fn = torch.nn.CrossEntropyLoss()
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}]
optimizer = AdamW(optimizer_grouped_parameters, lr=learning_rate, eps=1e-6)
model.train()
best_f1 = 0.
valid_best = np.zeros((valid_label.size(0), 2))
early_stop = 0
for epoch in range(num_epochs):
train_loss = 0.
for batch in tqdm(train_loader):
batch = tuple(t.cuda() for t in batch)
x_ids, x_mask, x_sids, y_truth = batch
y_pred = model(x_ids, x_mask, x_sids)
loss = loss_fn(y_pred, y_truth)
optimizer.zero_grad()
loss.backward()
optimizer.step()
train_loss += loss.item() / len(train_loader)
model.eval()
val_loss = 0.
valid_preds_fold = np.zeros((valid_label.size(0), 2))
with torch.no_grad():
for i, batch in tqdm(enumerate(valid_loader)):
batch = tuple(t.cuda() for t in batch)
x_ids, x_mask, x_sids, y_truth = batch
y_pred = model(x_ids, x_mask, x_sids).detach()
val_loss += loss_fn(y_pred, y_truth).item() / len(valid_loader)
valid_preds_fold[i * batch_size:(i + 1) * batch_size] = F.softmax(y_pred, dim=1).cpu().numpy()
acc, f1 = metric(all_label[valid_index], np.argmax(valid_preds_fold, axis=1))
if best_f1 < f1:
early_stop = 0
best_f1 = f1
valid_best = valid_preds_fold
torch.save(model.state_dict(), 'model_fold_{}.bin'.format(fold))
else:
early_stop += 1
logger.info(
'epoch: %d, train loss: %.8f, valid loss: %.8f, acc: %.8f, f1: %.8f, best_f1: %.8f\n' %
(epoch, train_loss, val_loss, acc, f1, best_f1))
torch.cuda.empty_cache()
if early_stop >= patience:
break
test_preds_fold = np.zeros((len(test_df), 2))
valid_preds_fold = np.zeros((valid_label.size(0), 2))
model.load_state_dict(torch.load('model_fold_{}.bin'.format(fold)))
model.eval()
with torch.no_grad():
for i, batch in tqdm(enumerate(valid_loader)):
batch = tuple(t.cuda() for t in batch)
x_ids, x_mask, x_sids, y_truth = batch
y_pred = model(x_ids, x_mask, x_sids).detach()
valid_preds_fold[i * batch_size:(i + 1) * batch_size] = F.softmax(y_pred, dim=1).cpu().numpy()
with torch.no_grad():
for i, batch in tqdm(enumerate(test_loader)):
batch = tuple(t.cuda() for t in batch)
x_ids, x_mask, x_sids = batch
y_pred = model(x_ids, x_mask, x_sids).detach()
test_preds_fold[i * batch_size:(i + 1) * batch_size] = F.softmax(y_pred, dim=1).cpu().numpy()
valid_best = valid_preds_fold
oof_train[valid_index] = valid_best
acc, f1 = metric(all_label[valid_index], np.argmax(valid_best, axis=1))
logger.info('epoch: best, acc: %.8f, f1: %.8f, best_f1: %.8f\n' %
(acc, f1, best_f1))
#oof_test += test_preds_fold / 7 # uncomment this for 7 folds
oof_test += test_preds_fold / 2 # comment this line when training for 7 folds
logger.info(f1_score(labels, np.argmax(oof_train, axis=1)))
train_df['pred_target'] = np.argmax(oof_train, axis=1)
train_df.head()
test_df['target'] = np.argmax(oof_test, axis=1)
logger.info(test_df['target'].value_counts())
submit['target'] = np.argmax(oof_test, axis=1)
submit.to_csv('submission_3fold.csv', index=False)
###Output
_____no_output_____ |
PDA Assignment 1.ipynb | ###Markdown
Programming for Data Analysis Practical Assignment***1. Explain the overall purpose of the NumPy package.2. Explain the use of the "simple random data" and "Permutations" functions.3. Explain the use and purpose of at least five "Distributions" functions.4. Explain the use of seeds in generating pseudorandom numbers. 1. Explain the overall purpose of the NumPy package.*** NumPyNumPy is a linear algebra library in Python. It is used to perform mathematical and logical operations in arrays.A NumPy array is a grid that contains values of the same type.(1) There are 2 types of arrays :1. Vectors - are one dimensional2. Matrices - are multidimensionalWhy use NumPy when Python can perform the same function(s)?There are 2 reasons to use NumPy rather than Python, they are :1. NumPy is more efficient, meaning it uses less memory to store data.2. It handles the data from mathematical operations better.It's because of these 2 functions that NumPy is so popular and explains it's purpose. It allows for real life complex data to be used to solve solutions to everyday problems. NumPy is used across many industries such as the computer gaming industry, which uses it for computer generated images, electrical engineers use it to determine the properties of a circuit, medical companies use it for CAT scans and MRIs, the robotic industry uses it to operate robot movements and IT companies use it for tracking user information, to perform search queries and manage databases. These are just a small amount of examples. 2. Explain the use of the "Simple random data" and "Permutations" functions.*** Simple Random DataBefore I get into the randon fucntion(s) in numpy, I want to explore why anyone would need to generate random numbers. It turns out the use of random numbers is utilized across many industries.. It is used in science, art, statistics, gaming, gambling and other industries.(2). It is used to for encryption, modeling complex phenomena and for selecting random samples from larger data sets. (3).A specific example of the use of random generated numbers comes from the online betting exchange Betfair. In their online help centre, they offer an explanation of "What are Random Number Generators, and how do they work?". (4)(. It is very interesting, especially their explanation on the use of 'seeds'. More on seeds at the end of this assignment. But basically, they say it is used to generate numbers that do not have patterns and thus appear to be random. In NumPy, there are several ways to generate simple random data such as rand, randn, and random. They all return random numbers but go about it slightly different.*_Rand_* - creates an array of a specified shape and fills it with random numbers from a uniform distribution over \[0.0,1.0). (5) *_Randn_* - creates the same array as _Rand_ but fills it with random values based on the 'standard normal' distribution.(6).*_Random_* - returns an array filled with random numbers from a continuous uniform distribution in the half open interval \[0.0,1.0) (7) Below are examples of three simple random functions, (rand, randn and random) and histographs to illustrate their outcomes.
###Code
### import numpy library to assis in running the random fuctions.
import numpy as np
###Output
_____no_output_____
###Markdown
random.rand Random.rand (d0,d1,...dn)is the random function where d is a parameter that gives the array dimension. Example, random.rand(2,3) will return 6 random numbers \(2*3) with dimensions 2 rows by 3 columns.If you just wanted to generate a specific number or a specific amount of numbers you would just enter how many numbers you want to generate instead of giving it dimensions. ie random.rand(1000). Here are two examples of the random.rand function : The first example shows how to generate 10 random numbers in an array with two rows and five columns.
###Code
np.random.rand(2,5)
###Output
_____no_output_____
###Markdown
The second example shows how to generate a defined amount of random numbers without specifying dimensions. In this examplewe choose 1000 random numbers.
###Code
### chose 1000 as random numbers to get a better representation in the random.rand histogram.
np.random.rand(1000)
###Output
_____no_output_____
###Markdown
Below shows how to create a histogram for the np.random.rand function.
###Code
x = np.random.rand(1000)
### import matplotlib librart to assist with the creation of the histograms.
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
plt.hist(x)
plt.xlabel('continuous variables')
plt.ylabel('number of outcomes')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the histogram above, the random.rand is uniform with 10 different columns of continuous variables all around the 100 level of outcomes. random.randn The random.randn function is the same as the random.rand as far as being able to give the outcome dimensions and/or the ability to generate many random numbers without specifying the dimension. The only difference is that the random.randn is based on a normal distribution. Please see the below histogram based on 5000 outcomes. Here are two examples of the random.randn function : The first example shows how to generate random numbers in an array with four rows and four columns.
###Code
np.random.randn(4,4)
###Output
_____no_output_____
###Markdown
The second example shows how to generate a defined amount of random numbers without specifying dimensions. In this example we choose 5000 random numbers.
###Code
np.random.randn(10)
###Output
_____no_output_____
###Markdown
Below shows how to create a histogram for the np.random.randn function.
###Code
y = np.random.randn(5000)
plt.style.use('seaborn-whitegrid')
plt.hist(y)
plt.xlabel('continuous variables')
plt.ylabel('number of outcomes')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the above histogram, the random.randn returns a normal distribution with the classic bell shape. random.random Random.random is the same as the random.rand except for how the arguments are handled. In random.rand, the shapes are separate arguments while in the random.random function, the shape argument is a single tuple.(8) Here are two examples of the random.random function : The first example shows how to generate a random number from the uniform distribution without specifying a number of outcomes.
###Code
np.random.random()
###Output
_____no_output_____
###Markdown
The second example shows how to generate a defined amount of random numbers from the uniform distribution.
###Code
np.random.random(10)
###Output
_____no_output_____
###Markdown
Below shows how to create a histogram for the np.random.random function with 5,000 outcomes.
###Code
z = np.random.random(5000)
plt.style.use('seaborn-whitegrid')
plt.hist(z)
plt.xlabel('continuous variable')
plt.ylabel('number of outcomes')
plt.show()
###Output
_____no_output_____
###Markdown
You can tell from the above histogram that the outcomes show a uniform distribution based on 5000 random numbers. Permutations In mathematics, permutation is defined as the act of arranging all the members of a set into some sequence or order, or if the set is already ordered, rearranging its elements, a process called permuting. (9) The use of random permutations is often fundamental to fields that use randomized algorithms such as coding theory, cryptography and simulation. (10) Below is an example of generating a permutation using the random.permutation function in numpy.
###Code
np.random.permutation(12)
###Output
_____no_output_____
###Markdown
The second example of the random.permutation function shows how you can take the above output and reshapeit into an array with dimensions by using the arange and reshape functions.
###Code
ar = np.arange(12).reshape((3,4))
np.random.permutation(ar)
###Output
_____no_output_____
###Markdown
3. Explain the use and purpose of at least five "Distributions" functions.*** Distributions Distribution, as defined in statistics, is a listing or function showing all the possible values (or intervals) of the data and how often they occur. (11) The main purpose of distributions is that they can be used as a shorthand for describinng and calculating related quantities, such as likelihhods of observations, and plotting the relationship between observations in the domain. (12) There are many types of distribtions and below we will look at 5 of the most common. They are the normal distribution, the uniform distribution, the exponential distribution, the poisson distribution and the binomial distribution. Normal Distribution The normal distribution, also known as the Gaussian distribution, is a probability distribution that is symmetric about the mean, showing that data near the mean are more frequent in occurrence than data far from the mean. (13) Here are two examples of the normal distribution function using the random.normal function in numpy :
###Code
np.random.normal(0,0.1,10)
nd = np.random.normal(0,0.1,1000)
### used some plt.'functions' to assist in getting a histogram with labels for the x & y axis' and also to create grid lines.
plt.style.use('seaborn-whitegrid')
plt.hist(nd)
plt.xlabel('continuous variable')
plt.ylabel('number of outcomes')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the above histogram, most of the data are near the mean (0). Uniform Distribution The uniform distribution is a continuous distribution. It is a probability distribution that has a constant probability.It is also known as the rectangular distribution because when plotted, the outcomes take the form of a rectangle. Here are two examples of the uniform distribution function using the random.uniform function in numpy :
###Code
np.random.uniform(0,1,10)
ud = np.random.uniform(0,1,1000)
plt.style.use('seaborn-whitegrid')
plt.hist(ud)
plt.xlabel('continuous variable')
plt.ylabel('number of outcomes')
plt.show()
###Output
_____no_output_____
###Markdown
As you can see from the above histogram, the uniform distribution takes the shape of a rectangle. Exponetial Distribution The exponential distribution (also known as the negative distribution) describes the time until some specific event(s) occur. A popular example is the time before an earthquake takes place. Another example might be how many days before a car battery runs out. The exponential distribution is widely used in the field of reliability. Reliability deals with the amount of time a product lasts. (14) Here are two examples of the exponential distribution function using the random.exponential function in numpy :
###Code
np.random.exponential(1.0, 10)
ed = np.random.exponential(1.0, 1000)
plt.style.use('seaborn-whitegrid')
plt.hist(ed)
plt.xlabel('continuous variable')
plt.ylabel('number of outcomes')
plt.show()
###Output
_____no_output_____
###Markdown
Poisson Distribution The poisson distribution is the discrete probability distribution of the number of events occurring in a given time period, given the average number of times the event occurs over that time period.(15)The poisson distribution is applied in many ways. Examples are, predicting how many rare diseases will be diagnosed in any given time period, how many car accidents will there be on New Year's eve and to predict the number of failures of a machine in a month.(16) Here are two examples of the poisson distribution function using the random.poisson function in numpy :
###Code
np.random.poisson(1.0,10)
pd = np.random.poisson(1.0,1000)
plt.style.use('seaborn-whitegrid')
plt.hist(pd)
plt.xlabel('discrete variable')
plt.ylabel('number of outcomes')
plt.show()
###Output
_____no_output_____
###Markdown
As expected in the above poisson histogram, the events start to decline right around the mean, (1.0). Binomial Distribution The binomial distribution can be thought of as simply the probability of a success or failure outcome in an experiment or survey that is repeated multiple times. The binomial is a type of distribution that has two possible outcomes. An example of this is predicting a baby's gender.(17) Here are two examples of the binomial distribution function using the random.binomial function in numpy :
###Code
np.random.binomial(1,0.5,100)
bd = np.random.binomial(1.0,0.5,1000)
plt.style.use('seaborn-whitegrid')
plt.hist(bd)
plt.xlabel('discrete variable')
plt.ylabel('number of outcomes')
plt.show()
###Output
_____no_output_____ |
Jupyter/3.*.ipynb | ###Markdown
collections.defaultdict()
###Code
import collections
dd = collections.defaultdict()
###Output
_____no_output_____
###Markdown
A read-only view of a mapping
###Code
from types import MappingProxyType
d = {1: 'a'}
d_proxy = MappingProxyType(d)
d_proxy
d_proxy[2] = 'b'
d[2] = 'b'
d_proxy
ls = set([1, 2, 3])
ls_proxy = MappingProxyType(ls)
ls[0] = 99
from unicodedata import name
{chr(i) for i in range(32, 256) if 'SIGN' in name(chr(i), '')}
name('&', '')
{chr(50)}
for i in range(32, 256):
# if 'SIGN' in name(chr(i), ''):
print(chr(i))
print(chr(50))
###Output
2
###Markdown
set
###Code
a = list(range(10))
b = {1, 2, 3}
b.union(a)
b.update(a)
b
b = {1,1,2,2,2,3,5,5}
set(a) & b
id(1)
id(1.0)
a = 1
b = 1.0
a == b
hash(a)
hash(b)
id(a)
id(b)
dict_1 = {'a': 1, 'b': 2}
dict_2 = {'b': 2, 'a': 1}
dict_1 == dict_2
dict_1.keys()
dict_2.keys()
DIAL_CODES = [(86, 'China'),(91, 'India'),(1, 'United States'),(62, 'Indonesia'),(55, 'Brazil'),(92, 'Pakistan'),(880, 'Bangladesh'),(234, 'Nigeria'),(7, 'Russia'),(81, 'Japan'),]
d1 = dict(DIAL_CODES)
d2 = dict(sorted(DIAL_CODES))
d3 = dict(sorted(DIAL_CODES, key=lambda x: x[1]))
print(d1.keys())
print(d2.keys())
print(d3.keys())
###Output
dict_keys([86, 91, 1, 62, 55, 92, 880, 234, 7, 81])
dict_keys([1, 7, 55, 62, 81, 86, 91, 92, 234, 880])
dict_keys([880, 55, 86, 91, 62, 81, 234, 92, 7, 1])
|
EXPLORATION/Node_02/[E-02] Only_LMS_Code_Blocks.ipynb | ###Markdown
2. The three species of Iris: can you classify them?**Using the iris dataset, we will walk through a basic machine learning classification task and look at commonly used models and training techniques.** 2-1. Introduction 2-2. The three species of Iris: shall we classify them? (1) The iris classification problem ```bash$ pip install scikit-learn $ pip install matplotlib``` 2-3. The three species of Iris: shall we classify them? (2) Preparing the data, and looking at it closely, comes first!
###Code
from sklearn.datasets import load_iris
iris = load_iris()
print(dir(iris))
# dir() lists which variables and methods an object has
iris.keys()
iris_data = iris.data
print(iris_data.shape)
# shape prints the shape information of the array
iris_data[0]
iris_label = iris.target
print(iris_label.shape)
iris_label
iris.target_names
print(iris.DESCR)
iris.feature_names
iris.filename
###Output
_____no_output_____
###Markdown
2-4. Our first machine learning practice, quick and simple! (1) Preparing the question sheet (features) and the answer sheet (labels) for training a machine learning model
###Code
import pandas as pd
print(pd.__version__)
iris_df = pd.DataFrame(data=iris_data, columns=iris.feature_names)
iris_df
iris_df["label"] = iris.target
iris_df
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(iris_data,
iris_label,
test_size=0.2,
random_state=7)
print('X_train count: ', len(X_train),', X_test count: ', len(X_test))
X_train.shape, y_train.shape
X_test.shape, y_test.shape
y_train, y_test
###Output
_____no_output_____
###Markdown
2-5. Our first machine learning practice, quick and simple! (2) Training the first machine learning model
###Code
from sklearn.tree import DecisionTreeClassifier
decision_tree = DecisionTreeClassifier(random_state=32)
print(decision_tree._estimator_type)
decision_tree.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
2-6. Our first machine learning practice, quick and simple! (3) Evaluating the first machine learning model
###Code
y_pred = decision_tree.predict(X_test)
y_pred
y_test
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_test, y_pred)
accuracy
###Output
_____no_output_____
###Markdown
2-7. Our first machine learning practice, quick and simple! (4) Want to try other models? Just change one line of code!
###Code
# (1) Import the required modules
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report
# (2) Prepare the data
iris = load_iris()
iris_data = iris.data
iris_label = iris.target
# (3) Split into train and test data
X_train, X_test, y_train, y_test = train_test_split(iris_data,
iris_label,
test_size=0.2,
random_state=7)
# (4) Train the model and predict
decision_tree = DecisionTreeClassifier(random_state=32)
decision_tree.fit(X_train, y_train)
y_pred = decision_tree.predict(X_test)
print(classification_report(y_test, y_pred))
from sklearn.ensemble import RandomForestClassifier
X_train, X_test, y_train, y_test = train_test_split(iris_data,
iris_label,
test_size=0.2,
random_state=21)
random_forest = RandomForestClassifier(random_state=32)
random_forest.fit(X_train, y_train)
y_pred = random_forest.predict(X_test)
print(classification_report(y_test, y_pred))
from sklearn import svm
svm_model = svm.SVC()
print(svm_model._estimator_type)
# Enter your code here
from sklearn.linear_model import SGDClassifier
sgd_model = SGDClassifier()
sgd_model.fit(X_train, y_train)
y_pred = sgd_model.predict(X_test)
print(classification_report(y_test, y_pred))
from sklearn.linear_model import LogisticRegression
logistic_model = LogisticRegression()
print(logistic_model._estimator_type)
# Enter your code here
from sklearn.linear_model import LogisticRegression
logistic_model = LogisticRegression()
logistic_model.fit(X_train, y_train)
y_pred = logistic_model.predict(X_test)
print(classification_report(y_test, y_pred))
###Output
_____no_output_____
###Markdown
2-8. How smart is my model? Evaluating it in various ways (1) Accuracy has a pitfall
###Code
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
digits_data = digits.data
digits_data.shape
digits_data[0]
import matplotlib.pyplot as plt
%matplotlib inline
plt.imshow(digits.data[0].reshape(8, 8), cmap='gray')
plt.axis('off')
plt.show()
for i in range(10):
plt.subplot(2, 5, i+1)
plt.imshow(digits.data[i].reshape(8, 8), cmap='gray')
plt.axis('off')
plt.show()
digits_label = digits.target
print(digits_label.shape)
digits_label[:20]
new_label = [3 if i == 3 else 0 for i in digits_label]
new_label[:20]
# Enter your code here
# Import the required modules
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report
from sklearn.metrics import accuracy_score
# Prepare the data
digits = load_digits()
digits_data = digits.data
digits_label = digits.target
new_label = [3 if i == 3 else 0 for i in digits_label]
# Split into train and test data
X_train, X_test, y_train, y_test = train_test_split(digits_data,
new_label,
test_size=0.2,
random_state=15)
# Train the model and predict
decision_tree = DecisionTreeClassifier(random_state=15)
decision_tree.fit(X_train, y_train)
y_pred = decision_tree.predict(X_test)
print(classification_report(y_test, y_pred))
# Measure accuracy
accuracy = accuracy_score(y_test, y_pred)
fake_pred = [0] * len(y_pred)
accuracy = accuracy_score(y_test, fake_pred)
accuracy
###Output
_____no_output_____
###Markdown
2-9. How smart is my model? Evaluating it in various ways (2) There are different kinds of correct and incorrect answers!
###Code
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
confusion_matrix(y_test, fake_pred)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
print(classification_report(y_test, fake_pred, zero_division=0))
accuracy_score(y_test, y_pred), accuracy_score(y_test, fake_pred)
###Output
_____no_output_____
###Markdown
2-10. No problem even if the data is different! 2-11. Project (1) load_digits : let's classify handwritten digits
###Code
import sklearn
print(sklearn.__version__)
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
###Output
_____no_output_____
###Markdown
2-12. Project (2) load_wine : let's classify wines
###Code
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
###Output
_____no_output_____
###Markdown
2-13. Project (3) load_breast_cancer : let's diagnose breast cancer
###Code
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
###Output
_____no_output_____ |
notebooks/nb7.ipynb | ###Markdown
GYM results
###Code
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from sys_simulator.general import load_with_pickle, sns_confidence_interval_plot
filepath = "D:\Dev/sys-simulator-2/data\dql\gym\script4/20210312-190806/log.pickle"
file = open(filepath, 'rb')
data = pickle.load(file)
file.close()
data.keys()
data['test_rewards']
y_label = 'Average rewards'
sns_confidence_interval_plot(
np.array(data['train_rewards']),
y_label,
'algo',
f'Episode/{data["eval_every"]}'
)
###Output
_____no_output_____ |
2020-01-09-PyData-Heidelberg/examples/conways_game_of_life.ipynb | ###Markdown
John Conway's Game Of Life: Threaded Edition Some of the following code is adapted from https://jakevdp.github.io/blog/2013/08/07/conways-game-of-life/
###Code
from time import sleep
from threading import Thread
import numpy as np
from ipycanvas import MultiCanvas, hold_canvas
def life_step(x):
"""Game of life step"""
nbrs_count = sum(np.roll(np.roll(x, i, 0), j, 1)
for i in (-1, 0, 1) for j in (-1, 0, 1)
if (i != 0 or j != 0))
return (nbrs_count == 3) | (x & (nbrs_count == 2))
def draw(x, canvas, color='black'):
with hold_canvas(canvas):
canvas.clear()
canvas.fill_style = color
r = 0
for row in x:
c = 0
for value in row:
if value:
canvas.fill_rect(r * n_pixels, c * n_pixels, n_pixels, n_pixels)
c += 1
r += 1
glider_gun =\
[[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1],
[0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,1,1],
[1,1,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[1,1,0,0,0,0,0,0,0,0,1,0,0,0,1,0,1,1,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]]
x = np.zeros((50, 70), dtype=bool)
x[1:10,1:37] = glider_gun
n_pixels = 10
multi = MultiCanvas(2, size=(x.shape[1] * n_pixels, x.shape[0] * n_pixels))
multi[0].fill_style = '#FFF0C9'
multi[0].fill_rect(0, 0, multi.size[0], multi.size[1])
multi
draw(x, multi[1], '#5770B3')
class GameOfLife(Thread):
def __init__(self, x, canvas):
self.x = x
self.canvas = canvas
super(GameOfLife, self).__init__()
def run(self):
for _ in range(1_000):
self.x = life_step(self.x)
draw(self.x, self.canvas, '#5770B3')
sleep(0.1)
GameOfLife(x, multi[1]).start()
###Output
_____no_output_____
###Markdown
The game is now running in a separate thread, nothing stops you from changing the background color:
###Code
multi[0].fill_style = '#D0FFB3'
multi[0].fill_rect(0, 0, multi.size[0], multi.size[1])
###Output
_____no_output_____ |
workshops/kfp-caip-sklearn/lab-02-kfp-pipeline/lab-02.ipynb | ###Markdown
Continuous training pipeline with Kubeflow Pipeline and AI Platform **Learning Objectives:**1. Learn how to use Kubeflow Pipeline(KFP) pre-build components (BiqQuery, AI Platform training and predictions)1. Learn how to use KFP lightweight python components1. Learn how to build a KFP with these components1. Learn how to compile, upload, and run a KFP with the command lineIn this lab, you will build, deploy, and run a KFP pipeline that orchestrates **BigQuery** and **AI Platform** services to train, tune, and deploy a **scikit-learn** model. Understanding the pipeline design The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the `covertype_training_pipeline.py` file that we will generate below.The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
###Code
!grep 'BASE_IMAGE =' -A 5 pipeline/covertype_training_pipeline.py
###Output
_____no_output_____
###Markdown
The pipeline uses a mix of custom and pre-build components.- Pre-build components. The pipeline uses the following pre-build components that are included with the KFP distribution: - [BigQuery query component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/bigquery/query) - [AI Platform Training component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/train) - [AI Platform Deploy component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/deploy)- Custom components. The pipeline uses two custom helper components that encapsulate functionality not available in any of the pre-build components. The components are implemented using the KFP SDK's [Lightweight Python Components](https://www.kubeflow.org/docs/pipelines/sdk/lightweight-python-components/) mechanism. The code for the components is in the `helper_components.py` file: - **Retrieve Best Run**. This component retrieves a tuning metric and hyperparameter values for the best run of a AI Platform Training hyperparameter tuning job. - **Evaluate Model**. This component evaluates a *sklearn* trained model using a provided metric and a testing dataset.
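For orientation, a minimal sketch of what one of these lightweight helper components could look like is shown below; it is an illustration only, and the exact signatures, fields, and NamedTuple outputs in the lab's `helper_components.py` may differ.
###Code
# Hypothetical sketch of a lightweight KFP component (not the lab's actual helper_components.py).
from typing import NamedTuple


def retrieve_best_run(
    project_id: str, job_id: str
) -> NamedTuple('Outputs', [('metric_value', float), ('alpha', float), ('max_iter', int)]):
    """Retrieves the tuning metric and hyperparameters of the best AI Platform Training trial."""
    # Imports are kept inside the function so func_to_container_op can serialize it cleanly.
    from googleapiclient import discovery

    ml = discovery.build('ml', 'v1')
    job_name = 'projects/{}/jobs/{}'.format(project_id, job_id)
    response = ml.projects().jobs().get(name=job_name).execute()

    # Completed hyperparameter tuning jobs report their trials sorted by the objective metric.
    best_trial = response['trainingOutput']['trials'][0]
    metric_value = best_trial['finalMetric']['objectiveValue']
    alpha = float(best_trial['hyperparameters']['alpha'])
    max_iter = int(best_trial['hyperparameters']['max_iter'])

    return (metric_value, alpha, max_iter)
###Output
_____no_output_____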
###Code
%%writefile ./pipeline/covertype_training_pipeline.py
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""KFP orchestrating BigQuery and Cloud AI Platform services."""
import os
from helper_components import evaluate_model
from helper_components import retrieve_best_run
from jinja2 import Template
import kfp
from kfp.components import func_to_container_op
from kfp.dsl.types import Dict
from kfp.dsl.types import GCPProjectID
from kfp.dsl.types import GCPRegion
from kfp.dsl.types import GCSPath
from kfp.dsl.types import String
from kfp.gcp import use_gcp_secret
# Defaults and environment settings
BASE_IMAGE = os.getenv('BASE_IMAGE')
TRAINER_IMAGE = os.getenv('TRAINER_IMAGE')
RUNTIME_VERSION = os.getenv('RUNTIME_VERSION')
PYTHON_VERSION = os.getenv('PYTHON_VERSION')
COMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')
USE_KFP_SA = os.getenv('USE_KFP_SA')
TRAINING_FILE_PATH = 'datasets/training/data.csv'
VALIDATION_FILE_PATH = 'datasets/validation/data.csv'
TESTING_FILE_PATH = 'datasets/testing/data.csv'
# Parameter defaults
SPLITS_DATASET_ID = 'splits'
HYPERTUNE_SETTINGS = """
{
"hyperparameters": {
"goal": "MAXIMIZE",
"maxTrials": 6,
"maxParallelTrials": 3,
"hyperparameterMetricTag": "accuracy",
"enableTrialEarlyStopping": True,
"params": [
{
"parameterName": "max_iter",
"type": "DISCRETE",
"discreteValues": [500, 1000]
},
{
"parameterName": "alpha",
"type": "DOUBLE",
"minValue": 0.0001,
"maxValue": 0.001,
"scaleType": "UNIT_LINEAR_SCALE"
}
]
}
}
"""
# Helper functions
def generate_sampling_query(source_table_name, num_lots, lots):
"""Prepares the data sampling query."""
sampling_query_template = """
SELECT *
FROM
`{{ source_table }}` AS cover
WHERE
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), {{ num_lots }}) IN ({{ lots }})
"""
query = Template(sampling_query_template).render(
source_table=source_table_name, num_lots=num_lots, lots=str(lots)[1:-1])
return query
# Create component factories
component_store = kfp.components.ComponentStore(
local_search_paths=None, url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX])
bigquery_query_op = component_store.load_component('bigquery/query')
mlengine_train_op = component_store.load_component('ml_engine/train')
mlengine_deploy_op = component_store.load_component('ml_engine/deploy')
retrieve_best_run_op = func_to_container_op(
retrieve_best_run, base_image=BASE_IMAGE)
evaluate_model_op = func_to_container_op(evaluate_model, base_image=BASE_IMAGE)
@kfp.dsl.pipeline(
name='Covertype Classifier Training',
    description='The pipeline training and deploying the Covertype classifier'
)
def covertype_train(project_id,
region,
source_table_name,
gcs_root,
dataset_id,
evaluation_metric_name,
evaluation_metric_threshold,
model_id,
version_id,
replace_existing_version,
hypertune_settings=HYPERTUNE_SETTINGS,
dataset_location='US'):
"""Orchestrates training and deployment of an sklearn model."""
# Create the training split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[1, 2, 3, 4])
training_file_path = '{}/{}'.format(gcs_root, TRAINING_FILE_PATH)
create_training_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=training_file_path,
dataset_location=dataset_location)
# Create the validation split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[8])
validation_file_path = '{}/{}'.format(gcs_root, VALIDATION_FILE_PATH)
create_validation_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=validation_file_path,
dataset_location=dataset_location)
# Create the testing split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[9])
testing_file_path = '{}/{}'.format(gcs_root, TESTING_FILE_PATH)
create_testing_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=testing_file_path,
dataset_location=dataset_location)
# Tune hyperparameters
tune_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--hptune', 'True'
]
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir/hypertune',
kfp.dsl.RUN_ID_PLACEHOLDER)
hypertune = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=tune_args,
training_input=hypertune_settings)
# Retrieve the best trial
get_best_trial = retrieve_best_run_op(
project_id, hypertune.outputs['job_id'])
# Train the model on a combined training and validation datasets
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir', kfp.dsl.RUN_ID_PLACEHOLDER)
train_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--alpha',
get_best_trial.outputs['alpha'], '--max_iter',
get_best_trial.outputs['max_iter'], '--hptune', 'False'
]
train_model = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=train_args)
# Evaluate the model on the testing split
eval_model = evaluate_model_op(
dataset_path=str(create_testing_split.outputs['output_gcs_path']),
model_path=str(train_model.outputs['job_dir']),
metric_name=evaluation_metric_name)
# Deploy the model if the primary metric is better than threshold
with kfp.dsl.Condition(eval_model.outputs['metric_value'] > evaluation_metric_threshold):
deploy_model = mlengine_deploy_op(
model_uri=train_model.outputs['job_dir'],
project_id=project_id,
model_id=model_id,
version_id=version_id,
runtime_version=RUNTIME_VERSION,
python_version=PYTHON_VERSION,
replace_existing_version=replace_existing_version)
# Configure the pipeline to run using the service account defined
# in the user-gcp-sa k8s secret
if USE_KFP_SA == 'True':
kfp.dsl.get_pipeline_conf().add_op_transformer(
use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
The custom components execute in a container image defined in `base_image/Dockerfile`.
###Code
!cat base_image/Dockerfile
###Output
_____no_output_____
###Markdown
The training step in the pipeline employs the AI Platform Training component to schedule an AI Platform Training job in a custom training container. The custom training image is defined in `trainer_image/Dockerfile`.
###Code
!cat trainer_image/Dockerfile
###Output
_____no_output_____
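###Markdown
The Dockerfile wraps a training script that is not shown in this notebook. As a rough sketch only (the flag names are inferred from the `tune_args`/`train_args` lists in the pipeline above; everything else is an assumption), its command-line entry point might look like:
###Code
# Hypothetical sketch of the trainer's argument handling (not the lab's actual training code).
import argparse


def parse_args():
    parser = argparse.ArgumentParser()
    # These flags mirror the arguments the pipeline passes via tune_args/train_args.
    parser.add_argument('--training_dataset_path', type=str, required=True)
    parser.add_argument('--validation_dataset_path', type=str, required=True)
    parser.add_argument('--alpha', type=float, default=0.0001)
    parser.add_argument('--max_iter', type=int, default=500)
    parser.add_argument('--hptune', type=str, default='False')
    parser.add_argument('--job-dir', type=str, default='', help='Supplied by AI Platform Training')
    return parser.parse_args()


if __name__ == '__main__':
    args = parse_args()
    # A real implementation would train the scikit-learn model here and, when hptune is True,
    # report the validation accuracy to the hyperparameter tuning service.
    print(args)
###Output
_____no_output_____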
###Markdown
Building and deploying the pipelineBefore deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also refered to as a pipeline package. The runtime format is based on [Argo Workflow](https://github.com/argoproj/argo), which is expressed in YAML. Configure environment settingsUpdate the below constants with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will be similar to `qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default`.- `ENDPOINT` - set the `ENDPOINT` constant to the endpoint to your AI Platform Pipelines instance. Then endpoint to the AI Platform Pipelines instance can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.1. Open the **SETTINGS** for your instance2. Use the value of the `host` variable in the **Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SKD** section of the **SETTINGS** window.Run gsutil ls without URLs to list all of the Cloud Storage buckets under your default project ID.
###Code
!gsutil ls
###Output
_____no_output_____
###Markdown
**HINT:** For **ENDPOINT**, use the value of the `host` variable in the **Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK** section of the **SETTINGS** window.For **ARTIFACT_STORE_URI**, copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output. Your copied value should look like **'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default'**
###Code
REGION = 'us-central1'
ENDPOINT = '337dd39580cbcbd2-dot-us-central2.pipelines.googleusercontent.com' # TO DO: REPLACE WITH YOUR ENDPOINT
ARTIFACT_STORE_URI = 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
###Output
_____no_output_____
###Markdown
Build the trainer image
###Code
IMAGE_NAME='trainer_image'
TAG='latest'
TRAINER_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
###Output
_____no_output_____
###Markdown
**Note**: Please ignore any **incompatibility ERROR** that may appear for packages such as `visions`, as it will not affect the lab's functionality.
###Code
!gcloud builds submit --timeout 15m --tag $TRAINER_IMAGE trainer_image
###Output
_____no_output_____
###Markdown
Build the base image for custom components
###Code
IMAGE_NAME='base_image'
TAG='latest'
BASE_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!gcloud builds submit --timeout 15m --tag $BASE_IMAGE base_image
###Output
_____no_output_____
###Markdown
Compile the pipelineYou can compile the DSL using an API from the **KFP SDK** or using the **KFP** compiler. In this lab you compile the pipeline DSL with the **KFP** CLI compiler; a sketch of the SDK approach appears after the CLI compile step below. Set the pipeline's compile time settingsThe pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the `user-gcp-sa` secret of the Kubernetes namespace hosting KFP. If you want to use the `user-gcp-sa` service account, change the value of `USE_KFP_SA` to `True`.Note that the default AI Platform Pipelines configuration does not define the `user-gcp-sa` secret.
###Code
USE_KFP_SA = False
COMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'
RUNTIME_VERSION = '1.15'
PYTHON_VERSION = '3.7'
%env USE_KFP_SA={USE_KFP_SA}
%env BASE_IMAGE={BASE_IMAGE}
%env TRAINER_IMAGE={TRAINER_IMAGE}
%env COMPONENT_URL_SEARCH_PREFIX={COMPONENT_URL_SEARCH_PREFIX}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
###Output
_____no_output_____
###Markdown
Use the CLI compiler to compile the pipeline
###Code
!dsl-compile --py pipeline/covertype_training_pipeline.py --output covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
The result is the `covertype_training_pipeline.yaml` file.
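Equivalently, the same package can be produced from Python with the KFP SDK compiler. The following is a minimal sketch, not a required lab step: it assumes the environment variables read by the DSL (`BASE_IMAGE`, `TRAINER_IMAGE`, `RUNTIME_VERSION`, `PYTHON_VERSION`, `COMPONENT_URL_SEARCH_PREFIX`, `USE_KFP_SA`) have already been exported by the `%env` cells above and that the `pipeline/` directory is importable.

```python
# Minimal sketch: compile the pipeline with the KFP SDK instead of dsl-compile.
import sys

import kfp.compiler

# covertype_training_pipeline.py imports helper_components, which lives in pipeline/.
sys.path.append('pipeline')

from covertype_training_pipeline import covertype_train

kfp.compiler.Compiler().compile(
    pipeline_func=covertype_train,
    package_path='covertype_training_pipeline.yaml')
```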
###Code
!head covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
Deploy the pipeline package
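The next cell uploads the compiled package with the `kfp` CLI. A roughly equivalent path through the KFP SDK client is sketched below; it is an optional alternative and assumes `ENDPOINT` is set as above and that the pipeline name is not already taken.

```python
# Minimal sketch: upload the compiled package with the KFP SDK client.
import kfp

client = kfp.Client(host=ENDPOINT)
uploaded = client.upload_pipeline(
    pipeline_package_path='covertype_training_pipeline.yaml',
    pipeline_name='covertype_continuous_training')
print(uploaded.id)  # pipeline ID to use when submitting runs
```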
###Code
PIPELINE_NAME='covertype_continuous_training'
!kfp --endpoint $ENDPOINT pipeline upload \
-p $PIPELINE_NAME \
covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
Submitting pipeline runsYou can trigger pipeline runs using an API from the KFP SDK or using the KFP CLI; a sketch of the SDK approach follows below. To submit the run using the KFP CLI, execute the subsequent commands. Notice how the pipeline's parameters are passed to the pipeline run. List the pipelines in AI Platform Pipelines
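The SDK route is sketched here as an optional alternative to the CLI cells that follow. It assumes `ENDPOINT`, `PROJECT_ID`, `ARTIFACT_STORE_URI`, and `REGION` are set as in the earlier cells; the parameter values mirror the CLI invocation further down.

```python
# Minimal sketch: look up the uploaded pipeline and submit a run with the KFP SDK client.
import kfp

client = kfp.Client(host=ENDPOINT)

# Find the pipeline uploaded in the previous step by its name.
pipelines = client.list_pipelines(page_size=100).pipelines or []
pipeline_id = next(p.id for p in pipelines if p.name == 'covertype_continuous_training')

# Group the run under an experiment and start it with the pipeline parameters.
experiment = client.create_experiment(name='Covertype_Classifier_Training')
run = client.run_pipeline(
    experiment_id=experiment.id,
    job_name='Run_001',
    pipeline_id=pipeline_id,
    params={
        'project_id': PROJECT_ID,
        'gcs_root': '{}/staging'.format(ARTIFACT_STORE_URI),
        'region': REGION,
        'source_table_name': 'covertype_dataset.covertype',
        'dataset_id': 'splits',
        'evaluation_metric_name': 'accuracy',
        'evaluation_metric_threshold': '0.69',
        'model_id': 'covertype_classifier',
        'version_id': 'v01',
        'replace_existing_version': 'True',
    })
```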
###Code
!kfp --endpoint $ENDPOINT pipeline list
###Output
_____no_output_____
###Markdown
Submit a runFind the ID of the `covertype_continuous_training` pipeline you uploaded in the previous step and update the value of `PIPELINE_ID` .
###Code
PIPELINE_ID='0918568d-758c-46cf-9752-e04a4403cd84' # TO DO: REPLACE WITH YOUR PIPELINE ID
EXPERIMENT_NAME = 'Covertype_Classifier_Training'
RUN_ID = 'Run_001'
SOURCE_TABLE = 'covertype_dataset.covertype'
DATASET_ID = 'splits'
EVALUATION_METRIC = 'accuracy'
EVALUATION_METRIC_THRESHOLD = '0.69'
MODEL_ID = 'covertype_classifier'
VERSION_ID = 'v01'
REPLACE_EXISTING_VERSION = 'True'
GCS_STAGING_PATH = '{}/staging'.format(ARTIFACT_STORE_URI)
###Output
_____no_output_____
###Markdown
Run the pipeline using the `kfp` command line by retrieving the variables from the environment to pass to the pipeline where:- EXPERIMENT_NAME is set to the experiment used to run the pipeline. You can choose any name you want. If the experiment does not exist it will be created by the command- RUN_ID is the name of the run. You can use an arbitrary name- PIPELINE_ID is the id of your pipeline. Use the value retrieved by the `kfp pipeline list` command- GCS_STAGING_PATH is the URI to the Cloud Storage location used by the pipeline to store intermediate files. By default, it is set to the `staging` folder in your artifact store.- REGION is a compute region for AI Platform Training and Prediction. You should already be familiar with these and other parameters passed to the command. If not, go back and review the pipeline code.
###Code
!kfp --endpoint $ENDPOINT run submit \
-e $EXPERIMENT_NAME \
-r $RUN_ID \
-p $PIPELINE_ID \
project_id=$PROJECT_ID \
gcs_root=$GCS_STAGING_PATH \
region=$REGION \
source_table_name=$SOURCE_TABLE \
dataset_id=$DATASET_ID \
evaluation_metric_name=$EVALUATION_METRIC \
evaluation_metric_threshold=$EVALUATION_METRIC_THRESHOLD \
model_id=$MODEL_ID \
version_id=$VERSION_ID \
replace_existing_version=$REPLACE_EXISTING_VERSION
###Output
_____no_output_____
###Markdown
Continuous training pipeline with KFP and Cloud AI Platform **Learning Objectives:**1. Learn how to use KFP pre-built components (BigQuery, CAIP training and prediction)1. Learn how to use KFP lightweight Python components1. Learn how to build a KFP pipeline with these components1. Learn how to compile, upload, and run a KFP pipeline with the command lineIn this lab, you will build, deploy, and run a KFP pipeline that orchestrates **BigQuery** and **Cloud AI Platform** services to train, tune, and deploy a **scikit-learn** model. Understanding the pipeline design The workflow implemented by the pipeline is defined using a Python-based Domain Specific Language (DSL). The pipeline's DSL is in the `covertype_training_pipeline.py` file that we will generate below.The pipeline's DSL has been designed to avoid hardcoding any environment-specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
###Code
!grep 'BASE_IMAGE =' -A 5 pipeline/covertype_training_pipeline.py
###Output
_____no_output_____
###Markdown
The pipeline uses a mix of custom and pre-built components.- Pre-built components. The pipeline uses the following pre-built components that are included with the KFP distribution: - [BigQuery query component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/bigquery/query) - [AI Platform Training component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/train) - [AI Platform Deploy component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/deploy)- Custom components. The pipeline uses two custom helper components that encapsulate functionality not available in any of the pre-built components. The components are implemented using the KFP SDK's [Lightweight Python Components](https://www.kubeflow.org/docs/pipelines/sdk/lightweight-python-components/) mechanism. The code for the components is in the `helper_components.py` file: - **Retrieve Best Run**. This component retrieves a tuning metric and hyperparameter values for the best run of an AI Platform Training hyperparameter tuning job. - **Evaluate Model**. This component evaluates a *sklearn* trained model using a provided metric and a testing dataset. A toy sketch of the lightweight-component mechanism is shown below.
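To make the mechanism concrete, here is a toy, self-contained sketch (not used by the pipeline): a plain Python function with type annotations is turned into a pipeline op that runs inside a container image. The image URI below is a placeholder; in this lab the `BASE_IMAGE` built later plays that role.

```python
# Toy sketch of a Lightweight Python Component.
from kfp.components import func_to_container_op

def add(a: float, b: float) -> float:
    """Returns the sum of two numbers."""
    return a + b

# Placeholder image URI; substitute an image that has the function's dependencies.
add_op = func_to_container_op(add, base_image='gcr.io/<your-project>/base_image:latest')
```

Inside a pipeline function, calling `add_op(1, 2)` would then produce a containerized step, which is exactly how `retrieve_best_run_op` and `evaluate_model_op` are created below.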
###Code
%%writefile ./pipeline/covertype_training_pipeline.py
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""KFP pipeline orchestrating BigQuery and Cloud AI Platform services."""
import os
from helper_components import evaluate_model
from helper_components import retrieve_best_run
from jinja2 import Template
import kfp
from kfp.components import func_to_container_op
from kfp.dsl.types import Dict
from kfp.dsl.types import GCPProjectID
from kfp.dsl.types import GCPRegion
from kfp.dsl.types import GCSPath
from kfp.dsl.types import String
from kfp.gcp import use_gcp_secret
# Defaults and environment settings
BASE_IMAGE = os.getenv('BASE_IMAGE')
TRAINER_IMAGE = os.getenv('TRAINER_IMAGE')
RUNTIME_VERSION = os.getenv('RUNTIME_VERSION')
PYTHON_VERSION = os.getenv('PYTHON_VERSION')
COMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')
USE_KFP_SA = os.getenv('USE_KFP_SA')
TRAINING_FILE_PATH = 'datasets/training/data.csv'
VALIDATION_FILE_PATH = 'datasets/validation/data.csv'
TESTING_FILE_PATH = 'datasets/testing/data.csv'
# Parameter defaults
SPLITS_DATASET_ID = 'splits'
HYPERTUNE_SETTINGS = """
{
"hyperparameters": {
"goal": "MAXIMIZE",
"maxTrials": 6,
"maxParallelTrials": 3,
"hyperparameterMetricTag": "accuracy",
"enableTrialEarlyStopping": True,
"params": [
{
"parameterName": "max_iter",
"type": "DISCRETE",
"discreteValues": [500, 1000]
},
{
"parameterName": "alpha",
"type": "DOUBLE",
"minValue": 0.0001,
"maxValue": 0.001,
"scaleType": "UNIT_LINEAR_SCALE"
}
]
}
}
"""
# Helper functions
def generate_sampling_query(source_table_name, num_lots, lots):
"""Prepares the data sampling query."""
sampling_query_template = """
SELECT *
FROM
`{{ source_table }}` AS cover
WHERE
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), {{ num_lots }}) IN ({{ lots }})
"""
query = Template(sampling_query_template).render(
source_table=source_table_name, num_lots=num_lots, lots=str(lots)[1:-1])
return query
# Create component factories
component_store = kfp.components.ComponentStore(
local_search_paths=None, url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX])
bigquery_query_op = component_store.load_component('bigquery/query')
mlengine_train_op = component_store.load_component('ml_engine/train')
mlengine_deploy_op = component_store.load_component('ml_engine/deploy')
retrieve_best_run_op = func_to_container_op(
retrieve_best_run, base_image=BASE_IMAGE)
evaluate_model_op = func_to_container_op(evaluate_model, base_image=BASE_IMAGE)
@kfp.dsl.pipeline(
name='Covertype Classifier Training',
    description='The pipeline training and deploying the Covertype classifier'
)
def covertype_train(project_id,
region,
source_table_name,
gcs_root,
dataset_id,
evaluation_metric_name,
evaluation_metric_threshold,
model_id,
version_id,
replace_existing_version,
hypertune_settings=HYPERTUNE_SETTINGS,
dataset_location='US'):
"""Orchestrates training and deployment of an sklearn model."""
# Create the training split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[1, 2, 3, 4])
training_file_path = '{}/{}'.format(gcs_root, TRAINING_FILE_PATH)
create_training_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=training_file_path,
dataset_location=dataset_location)
# Create the validation split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[8])
validation_file_path = '{}/{}'.format(gcs_root, VALIDATION_FILE_PATH)
create_validation_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=validation_file_path,
dataset_location=dataset_location)
# Create the testing split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[9])
testing_file_path = '{}/{}'.format(gcs_root, TESTING_FILE_PATH)
create_testing_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=testing_file_path,
dataset_location=dataset_location)
# Tune hyperparameters
tune_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--hptune', 'True'
]
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir/hypertune',
kfp.dsl.RUN_ID_PLACEHOLDER)
hypertune = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=tune_args,
training_input=hypertune_settings)
# Retrieve the best trial
get_best_trial = retrieve_best_run_op(
project_id, hypertune.outputs['job_id'])
# Train the model on a combined training and validation datasets
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir', kfp.dsl.RUN_ID_PLACEHOLDER)
train_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--alpha',
get_best_trial.outputs['alpha'], '--max_iter',
get_best_trial.outputs['max_iter'], '--hptune', 'False'
]
train_model = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=train_args)
# Evaluate the model on the testing split
eval_model = evaluate_model_op(
dataset_path=str(create_testing_split.outputs['output_gcs_path']),
model_path=str(train_model.outputs['job_dir']),
metric_name=evaluation_metric_name)
# Deploy the model if the primary metric is better than threshold
with kfp.dsl.Condition(eval_model.outputs['metric_value'] > evaluation_metric_threshold):
deploy_model = mlengine_deploy_op(
model_uri=train_model.outputs['job_dir'],
project_id=project_id,
model_id=model_id,
version_id=version_id,
runtime_version=RUNTIME_VERSION,
python_version=PYTHON_VERSION,
replace_existing_version=replace_existing_version)
# Configure the pipeline to run using the service account defined
# in the user-gcp-sa k8s secret
if USE_KFP_SA == 'True':
kfp.dsl.get_pipeline_conf().add_op_transformer(
use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
The custom components execute in a container image defined in `base_image/Dockerfile`.
###Code
!cat base_image/Dockerfile
###Output
_____no_output_____
###Markdown
The training step in the pipeline employs the AI Platform Training component to schedule an AI Platform Training job in a custom training container. The custom training image is defined in `trainer_image/Dockerfile`.
###Code
!cat trainer_image/Dockerfile
###Output
_____no_output_____
###Markdown
Building and deploying the pipelineBefore deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also referred to as a pipeline package. The runtime format is based on [Argo Workflow](https://github.com/argoproj/argo), which is expressed in YAML. Configure environment settingsUpdate the below constants with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will be similar to `qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default`.- `ENDPOINT` - set the `ENDPOINT` constant to the endpoint of your AI Platform Pipelines instance. The endpoint to the AI Platform Pipelines instance can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.1. Open the **SETTINGS** for your instance2. Use the value of the `host` variable in the **Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK** section of the **SETTINGS** window.
###Code
REGION = 'us-central1'
ENDPOINT = '337dd39580cbcbd2-dot-us-central2.pipelines.googleusercontent.com' #Change
ARTIFACT_STORE_URI = 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default' #Change
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
###Output
_____no_output_____
###Markdown
Build the trainer image
###Code
IMAGE_NAME='trainer_image'
TAG='latest'
TRAINER_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
###Output
_____no_output_____
###Markdown
**Note**: Please ignore any **incompatibility ERROR** that may appear for packages such as `visions`, as it will not affect the lab's functionality.
###Code
!gcloud builds submit --timeout 15m --tag $TRAINER_IMAGE trainer_image
###Output
_____no_output_____
###Markdown
Build the base image for custom components
###Code
IMAGE_NAME='base_image'
TAG='latest'
BASE_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!gcloud builds submit --timeout 15m --tag $BASE_IMAGE base_image
###Output
_____no_output_____
###Markdown
Compile the pipelineYou can compile the DSL using an API from the **KFP SDK** or using the **KFP** compiler. In this lab you compile the pipeline DSL with the **KFP** CLI compiler. Set the pipeline's compile time settingsThe pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the `user-gcp-sa` secret of the Kubernetes namespace hosting Kubeflow Pipelines. If you want to use the `user-gcp-sa` service account, change the value of `USE_KFP_SA` to `True`.Note that the default AI Platform Pipelines configuration does not define the `user-gcp-sa` secret.
###Code
USE_KFP_SA = False
COMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'
RUNTIME_VERSION = '1.15'
PYTHON_VERSION = '3.7'
%env USE_KFP_SA={USE_KFP_SA}
%env BASE_IMAGE={BASE_IMAGE}
%env TRAINER_IMAGE={TRAINER_IMAGE}
%env COMPONENT_URL_SEARCH_PREFIX={COMPONENT_URL_SEARCH_PREFIX}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
###Output
_____no_output_____
###Markdown
Use the CLI compiler to compile the pipeline
###Code
!dsl-compile --py pipeline/covertype_training_pipeline.py --output covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
The result is the `covertype_training_pipeline.yaml` file.
###Code
!head covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
Deploy the pipeline package
###Code
PIPELINE_NAME='covertype_continuous_training'
!kfp --endpoint $ENDPOINT pipeline upload \
-p $PIPELINE_NAME \
covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
Continuous training pipeline with KFP and Cloud AI Platform **Learning Objectives:**1. Learn how to use KFP pre-built components (BigQuery, CAIP training and prediction)1. Learn how to use KFP lightweight Python components1. Learn how to build a KFP pipeline with these components1. Learn how to compile, upload, and run a KFP pipeline with the command lineIn this lab, you will build, deploy, and run a KFP pipeline that orchestrates **BigQuery** and **Cloud AI Platform** services to train, tune, and deploy a **scikit-learn** model. Understanding the pipeline design The workflow implemented by the pipeline is defined using a Python-based Domain Specific Language (DSL). The pipeline's DSL is in the `covertype_training_pipeline.py` file that we will generate below.The pipeline's DSL has been designed to avoid hardcoding any environment-specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
###Code
!grep 'BASE_IMAGE =' -A 5 pipeline/covertype_training_pipeline.py
###Output
BASE_IMAGE = os.getenv('BASE_IMAGE')
TRAINER_IMAGE = os.getenv('TRAINER_IMAGE')
RUNTIME_VERSION = os.getenv('RUNTIME_VERSION')
PYTHON_VERSION = os.getenv('PYTHON_VERSION')
COMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')
USE_KFP_SA = os.getenv('USE_KFP_SA')
###Markdown
The pipeline uses a mix of custom and pre-built components.- Pre-built components. The pipeline uses the following pre-built components that are included with the KFP distribution: - [BigQuery query component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/bigquery/query) - [AI Platform Training component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/train) - [AI Platform Deploy component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/deploy)- Custom components. The pipeline uses two custom helper components that encapsulate functionality not available in any of the pre-built components. The components are implemented using the KFP SDK's [Lightweight Python Components](https://www.kubeflow.org/docs/pipelines/sdk/lightweight-python-components/) mechanism. The code for the components is in the `helper_components.py` file: - **Retrieve Best Run**. This component retrieves a tuning metric and hyperparameter values for the best run of an AI Platform Training hyperparameter tuning job. - **Evaluate Model**. This component evaluates a *sklearn* trained model using a provided metric and a testing dataset.
###Code
%%writefile ./pipeline/covertype_training_pipeline.py
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""KFP pipeline orchestrating BigQuery and Cloud AI Platform services."""
import os
from helper_components import evaluate_model
from helper_components import retrieve_best_run
from jinja2 import Template
import kfp
import kfp.components as comp
from kfp.components import func_to_container_op
from kfp.dsl.types import Dict
from kfp.dsl.types import GCPProjectID
from kfp.dsl.types import GCPRegion
from kfp.dsl.types import GCSPath
from kfp.dsl.types import String
from kfp.gcp import use_gcp_secret
# Defaults and environment settings
BASE_IMAGE = os.getenv('BASE_IMAGE')
TRAINER_IMAGE = os.getenv('TRAINER_IMAGE')
RUNTIME_VERSION = os.getenv('RUNTIME_VERSION')
PYTHON_VERSION = os.getenv('PYTHON_VERSION')
COMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')
USE_KFP_SA = os.getenv('USE_KFP_SA')
TRAINING_FILE_PATH = 'datasets/training/data.csv'
VALIDATION_FILE_PATH = 'datasets/validation/data.csv'
TESTING_FILE_PATH = 'datasets/testing/data.csv'
# Parameter defaults
SPLITS_DATASET_ID = 'splits'
HYPERTUNE_SETTINGS = """
{
"hyperparameters": {
"goal": "MAXIMIZE",
"maxTrials": 6,
"maxParallelTrials": 3,
"hyperparameterMetricTag": "accuracy",
"enableTrialEarlyStopping": True,
"params": [
{
"parameterName": "max_iter",
"type": "DISCRETE",
"discreteValues": [500, 1000]
},
{
"parameterName": "alpha",
"type": "DOUBLE",
"minValue": 0.0001,
"maxValue": 0.001,
"scaleType": "UNIT_LINEAR_SCALE"
}
]
}
}
"""
# Helper functions
def generate_sampling_query(source_table_name, num_lots, lots):
"""Prepares the data sampling query."""
sampling_query_template = """
SELECT *
FROM
`{{ source_table }}` AS cover
WHERE
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), {{ num_lots }}) IN ({{ lots }})
"""
query = Template(sampling_query_template).render(
source_table=source_table_name, num_lots=num_lots, lots=str(lots)[1:-1])
return query
# Create component factories
component_store = kfp.components.ComponentStore(
local_search_paths=None, url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX])
bigquery_query_op = comp.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/bigquery/query/component.yaml')
mlengine_train_op = comp.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/ml_engine/train/component.yaml')
mlengine_deploy_op = comp.load_component_from_url('https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/ml_engine/deploy/component.yaml')
retrieve_best_run_op = func_to_container_op(
retrieve_best_run, base_image=BASE_IMAGE)
evaluate_model_op = func_to_container_op(evaluate_model, base_image=BASE_IMAGE)
@kfp.dsl.pipeline(
name='Covertype Classifier Training',
    description='The pipeline training and deploying the Covertype classifier'
)
def covertype_train(project_id,
region,
source_table_name,
gcs_root,
dataset_id,
evaluation_metric_name,
evaluation_metric_threshold,
model_id,
version_id,
replace_existing_version,
hypertune_settings=HYPERTUNE_SETTINGS,
dataset_location='US'):
"""Orchestrates training and deployment of an sklearn model."""
# Create the training split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[1, 2, 3, 4])
training_file_path = '{}/{}'.format(gcs_root, TRAINING_FILE_PATH)
create_training_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=training_file_path,
dataset_location=dataset_location)
# Create the validation split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[8])
validation_file_path = '{}/{}'.format(gcs_root, VALIDATION_FILE_PATH)
create_validation_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=validation_file_path,
dataset_location=dataset_location)
# Create the testing split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[9])
testing_file_path = '{}/{}'.format(gcs_root, TESTING_FILE_PATH)
create_testing_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=testing_file_path,
dataset_location=dataset_location)
# Tune hyperparameters
tune_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--hptune', 'True'
]
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir/hypertune',
kfp.dsl.RUN_ID_PLACEHOLDER)
# NOTE: Based on [this](https://github.com/kubeflow/pipelines/blob/0.2.5/components/gcp/ml_engine/train/component.yaml#L47)
# 'job_dir' is passed to the program as 'job-dir' CLI argument. Now, 'fire' module automatically converts
# 'job-dir' CLI argument to 'job_dir' and passes it to the 'train_evaluate' function as argument. Hence, we are not explictly
# passing 'job_dir' argument while we invoke the 'train_evaluate' function in trainer_image/train.py
hypertune = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=tune_args,
training_input=hypertune_settings)
# Retrieve the best trial
get_best_trial = retrieve_best_run_op(
project_id, hypertune.outputs['job_id'])
# Train the model on a combined training and validation datasets
# NOTE: kfp.dsl.RUN_ID_PLACEHOLDER returns the runId of the current run. It is the
# same ID returned when you run pipeline with 'kfp --endpoint $ENDPOINT run submit' command.
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir', kfp.dsl.RUN_ID_PLACEHOLDER)
train_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--alpha',
get_best_trial.outputs['alpha'], '--max_iter',
get_best_trial.outputs['max_iter'], '--hptune', 'False'
]
train_model = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=train_args)
# Evaluate the model on the testing split
eval_model = evaluate_model_op(
dataset_path=str(create_testing_split.outputs['output_gcs_path']),
model_path=str(train_model.outputs['job_dir']),
metric_name=evaluation_metric_name)
# Deploy the model if the primary metric is better than threshold
with kfp.dsl.Condition(eval_model.outputs['metric_value'] > evaluation_metric_threshold):
deploy_model = mlengine_deploy_op(
model_uri=train_model.outputs['job_dir'],
project_id=project_id,
model_id=model_id,
version_id=version_id,
runtime_version=RUNTIME_VERSION,
python_version=PYTHON_VERSION,
replace_existing_version=replace_existing_version)
# Configure the pipeline to run using the service account defined
# in the user-gcp-sa k8s secret
if USE_KFP_SA == 'True':
kfp.dsl.get_pipeline_conf().add_op_transformer(
use_gcp_secret('user-gcp-sa'))
###Output
Overwriting ./pipeline/covertype_training_pipeline.py
###Markdown
The custom components execute in a container image defined in `base_image/Dockerfile`.
###Code
!cat base_image/Dockerfile
###Output
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire scikit-learn==0.20.4 pandas==0.24.2 kfp==0.2.5
###Markdown
The training step in the pipeline employs the AI Platform Training component to schedule an AI Platform Training job in a custom training container. The custom training image is defined in `trainer_image/Dockerfile`.
###Code
!cat trainer_image/Dockerfile
###Output
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
###Markdown
Building and deploying the pipelineBefore deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also referred to as a pipeline package. The runtime format is based on [Argo Workflow](https://github.com/argoproj/argo), which is expressed in YAML. Configure environment settingsUpdate the below constants with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will be similar to `qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default`.- `ENDPOINT` - set the `ENDPOINT` constant to the endpoint of your AI Platform Pipelines instance. The endpoint to the AI Platform Pipelines instance can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.1. Open the **SETTINGS** for your instance2. Use the value of the `host` variable in the **Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK** section of the **SETTINGS** window.
###Code
REGION = 'us-central1'
ENDPOINT = '797bf278628f63a9-dot-us-central2.pipelines.googleusercontent.com' # change
ARTIFACT_STORE_URI = 'gs://mlops-ai-platform-kubeflowpipelines-default' # change
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
print(PROJECT_ID)
###Output
mlops-ai-platform
###Markdown
Build the trainer image
###Code
IMAGE_NAME='trainer_image_kfp_caip_sklearn_lab02'
TAG='latest'
TRAINER_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
###Output
_____no_output_____
###Markdown
**Note**: Please ignore any **incompatibility ERROR** that may appear for packages such as `visions`, as it will not affect the lab's functionality.
###Code
!gcloud builds submit --timeout 15m --tag $TRAINER_IMAGE trainer_image
###Output
Creating temporary tarball archive of 4 file(s) totalling 6.7 KiB before compression.
Uploading tarball of [trainer_image] to [gs://mlops-ai-platform_cloudbuild/source/1613292784.416231-57420b97c01c4270b12133f6ff547b97.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/mlops-ai-platform/locations/global/builds/6801a738-4da9-4910-9a09-d3e88927d134].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/6801a738-4da9-4910-9a09-d3e88927d134?project=15641782362].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "6801a738-4da9-4910-9a09-d3e88927d134"
FETCHSOURCE
Fetching storage object: gs://mlops-ai-platform_cloudbuild/source/1613292784.416231-57420b97c01c4270b12133f6ff547b97.tgz#1613292784929807
Copying gs://mlops-ai-platform_cloudbuild/source/1613292784.416231-57420b97c01c4270b12133f6ff547b97.tgz#1613292784929807...
/ [1 files][ 1.8 KiB/ 1.8 KiB]
Operation completed over 1 objects/1.8 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon 11.78kB
Step 1/5 : FROM gcr.io/deeplearning-platform-release/base-cpu
latest: Pulling from deeplearning-platform-release/base-cpu
d519e2592276: Pulling fs layer
d22d2dfcfa9c: Pulling fs layer
b3afe92c540b: Pulling fs layer
42499980e339: Pulling fs layer
5cc6f3cb2c4a: Pulling fs layer
264016c313db: Pulling fs layer
3049a6851b27: Pulling fs layer
f364009b5525: Pulling fs layer
ceb8710fb121: Pulling fs layer
60dd84bd5a31: Pulling fs layer
a4ab234100c0: Pulling fs layer
323ade0d04aa: Pulling fs layer
4e0e566fd2a8: Pulling fs layer
cc71efc47f44: Pulling fs layer
1cb247765bd9: Pulling fs layer
85bfe947ef8b: Pulling fs layer
cfba0db75741: Pulling fs layer
0803f0431169: Pulling fs layer
3049a6851b27: Waiting
f364009b5525: Waiting
ceb8710fb121: Waiting
60dd84bd5a31: Waiting
a4ab234100c0: Waiting
323ade0d04aa: Waiting
4e0e566fd2a8: Waiting
cc71efc47f44: Waiting
1cb247765bd9: Waiting
85bfe947ef8b: Waiting
cfba0db75741: Waiting
0803f0431169: Waiting
42499980e339: Waiting
5cc6f3cb2c4a: Waiting
264016c313db: Waiting
b3afe92c540b: Verifying Checksum
b3afe92c540b: Download complete
d22d2dfcfa9c: Verifying Checksum
d22d2dfcfa9c: Download complete
42499980e339: Verifying Checksum
42499980e339: Download complete
d519e2592276: Verifying Checksum
d519e2592276: Download complete
3049a6851b27: Verifying Checksum
3049a6851b27: Download complete
264016c313db: Verifying Checksum
264016c313db: Download complete
ceb8710fb121: Verifying Checksum
ceb8710fb121: Download complete
f364009b5525: Verifying Checksum
f364009b5525: Download complete
60dd84bd5a31: Verifying Checksum
60dd84bd5a31: Download complete
a4ab234100c0: Verifying Checksum
a4ab234100c0: Download complete
323ade0d04aa: Verifying Checksum
323ade0d04aa: Download complete
cc71efc47f44: Verifying Checksum
cc71efc47f44: Download complete
4e0e566fd2a8: Verifying Checksum
4e0e566fd2a8: Download complete
1cb247765bd9: Verifying Checksum
1cb247765bd9: Download complete
85bfe947ef8b: Verifying Checksum
85bfe947ef8b: Download complete
0803f0431169: Verifying Checksum
0803f0431169: Download complete
5cc6f3cb2c4a: Verifying Checksum
5cc6f3cb2c4a: Download complete
d519e2592276: Pull complete
d22d2dfcfa9c: Pull complete
b3afe92c540b: Pull complete
42499980e339: Pull complete
cfba0db75741: Verifying Checksum
cfba0db75741: Download complete
5cc6f3cb2c4a: Pull complete
264016c313db: Pull complete
3049a6851b27: Pull complete
f364009b5525: Pull complete
ceb8710fb121: Pull complete
60dd84bd5a31: Pull complete
a4ab234100c0: Pull complete
323ade0d04aa: Pull complete
4e0e566fd2a8: Pull complete
cc71efc47f44: Pull complete
1cb247765bd9: Pull complete
85bfe947ef8b: Pull complete
cfba0db75741: Pull complete
0803f0431169: Pull complete
Digest: sha256:9dbaf9b5c23151fbaae3f8479c1ba2382936af933d371459c110782b86c983ad
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest
---> 86632554702c
Step 2/5 : RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
---> Running in 5d43d5de5b88
Collecting fire
Downloading fire-0.4.0.tar.gz (87 kB)
Collecting cloudml-hypertune
Downloading cloudml-hypertune-0.1.0.dev6.tar.gz (3.2 kB)
Collecting scikit-learn==0.20.4
Downloading scikit_learn-0.20.4-cp37-cp37m-manylinux1_x86_64.whl (5.4 MB)
Collecting pandas==0.24.2
Downloading pandas-0.24.2-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
Requirement already satisfied: python-dateutil>=2.5.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2.8.1)
Requirement already satisfied: numpy>=1.12.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (1.19.5)
Requirement already satisfied: pytz>=2011k in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2021.1)
Requirement already satisfied: scipy>=0.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.6.0)
Requirement already satisfied: six>=1.5 in /opt/conda/lib/python3.7/site-packages (from python-dateutil>=2.5.0->pandas==0.24.2) (1.15.0)
Collecting termcolor
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Building wheels for collected packages: cloudml-hypertune, fire, termcolor
Building wheel for cloudml-hypertune (setup.py): started
Building wheel for cloudml-hypertune (setup.py): finished with status 'done'
Created wheel for cloudml-hypertune: filename=cloudml_hypertune-0.1.0.dev6-py2.py3-none-any.whl size=3988 sha256=01bd1268af84328555134c2158c7908e24bcbad4e4147d333746eff21364b5fb
Stored in directory: /root/.cache/pip/wheels/a7/ff/87/e7bed0c2741fe219b3d6da67c2431d7f7fedb183032e00f81e
Building wheel for fire (setup.py): started
Building wheel for fire (setup.py): finished with status 'done'
Created wheel for fire: filename=fire-0.4.0-py2.py3-none-any.whl size=115928 sha256=c820b5a6847ac914754687a74880f0fbd40dc196248ce91fae0c8b37b4c3faa4
Stored in directory: /root/.cache/pip/wheels/8a/67/fb/2e8a12fa16661b9d5af1f654bd199366799740a85c64981226
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4829 sha256=97f761ce063f2276f4340dba0f7ee9fa97e0d56d50a8de5529a839b6b3040197
Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
Successfully built cloudml-hypertune fire termcolor
Installing collected packages: termcolor, scikit-learn, pandas, fire, cloudml-hypertune
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 0.24.1
Uninstalling scikit-learn-0.24.1:
Successfully uninstalled scikit-learn-0.24.1
Attempting uninstall: pandas
Found existing installation: pandas 1.2.1
Uninstalling pandas-1.2.1:
Successfully uninstalled pandas-1.2.1
Successfully installed cloudml-hypertune-0.1.0.dev6 fire-0.4.0 pandas-0.24.2 scikit-learn-0.20.4 termcolor-1.1.0
[91mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
visions 0.7.0 requires pandas>=0.25.3, but you have pandas 0.24.2 which is incompatible.
pandas-profiling 2.8.0 requires pandas!=1.0.0,!=1.0.1,!=1.0.2,>=0.25.3, but you have pandas 0.24.2 which is incompatible.
pandas-profiling 2.8.0 requires visions[type_image_path]==0.4.4, but you have visions 0.7.0 which is incompatible.
[0mRemoving intermediate container 5d43d5de5b88
---> 9f49e2c68d53
Step 3/5 : WORKDIR /app
---> Running in 045baa19f59b
Removing intermediate container 045baa19f59b
---> eceb5e13292d
Step 4/5 : COPY train.py .
---> 39bfd4d2ace1
Step 5/5 : ENTRYPOINT ["python", "train.py"]
---> Running in 062544db78db
Removing intermediate container 062544db78db
---> a8731711bc81
Successfully built a8731711bc81
Successfully tagged gcr.io/mlops-ai-platform/trainer_image_kfp_caip_sklearn_lab02:latest
PUSH
Pushing gcr.io/mlops-ai-platform/trainer_image_kfp_caip_sklearn_lab02:latest
The push refers to repository [gcr.io/mlops-ai-platform/trainer_image_kfp_caip_sklearn_lab02]
bef6d1fd1169: Preparing
d8dc54f8a10a: Preparing
885390e5d213: Preparing
0f0532eed74a: Preparing
615e303004c8: Preparing
6ad6e8fd4ff0: Preparing
945f0370cab4: Preparing
289ab6c33408: Preparing
034a4b160541: Preparing
27b18b7fb87e: Preparing
ae18d372a1da: Preparing
cc450d62afb9: Preparing
d7d0fb2f7eb0: Preparing
3e75deadeefa: Preparing
c77962bfc51d: Preparing
caef3b0fe7f1: Preparing
c39d9f02e96e: Preparing
3a88efae17e5: Preparing
9f10818f1f96: Preparing
27502392e386: Preparing
c95d2191d777: Preparing
cc450d62afb9: Waiting
d7d0fb2f7eb0: Waiting
3e75deadeefa: Waiting
c77962bfc51d: Waiting
caef3b0fe7f1: Waiting
c39d9f02e96e: Waiting
6ad6e8fd4ff0: Waiting
945f0370cab4: Waiting
289ab6c33408: Waiting
034a4b160541: Waiting
27b18b7fb87e: Waiting
ae18d372a1da: Waiting
3a88efae17e5: Waiting
9f10818f1f96: Waiting
27502392e386: Waiting
c95d2191d777: Waiting
0f0532eed74a: Layer already exists
615e303004c8: Layer already exists
945f0370cab4: Layer already exists
6ad6e8fd4ff0: Layer already exists
289ab6c33408: Layer already exists
034a4b160541: Layer already exists
ae18d372a1da: Layer already exists
27b18b7fb87e: Layer already exists
d7d0fb2f7eb0: Layer already exists
cc450d62afb9: Layer already exists
3e75deadeefa: Layer already exists
c77962bfc51d: Layer already exists
c39d9f02e96e: Layer already exists
caef3b0fe7f1: Layer already exists
9f10818f1f96: Layer already exists
3a88efae17e5: Layer already exists
c95d2191d777: Layer already exists
27502392e386: Layer already exists
bef6d1fd1169: Pushed
d8dc54f8a10a: Pushed
885390e5d213: Pushed
latest: digest: sha256:71f93b79939b2e242ee44090fb74e2b92e8c556a69a7e4b800cf3ebdef21d79f size: 4708
DONE
--------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
6801a738-4da9-4910-9a09-d3e88927d134 2021-02-14T08:53:05+00:00 2M24S gs://mlops-ai-platform_cloudbuild/source/1613292784.416231-57420b97c01c4270b12133f6ff547b97.tgz gcr.io/mlops-ai-platform/trainer_image_kfp_caip_sklearn_lab02 (+1 more) SUCCESS
###Markdown
Build the base image for custom components Our custom containers will run on this image.
###Code
IMAGE_NAME='base_image_kfp_caip_sklearn_lab02'
TAG='latest'
BASE_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!gcloud builds submit --timeout 15m --tag $BASE_IMAGE base_image
###Output
Creating temporary tarball archive of 2 file(s) totalling 244 bytes before compression.
Uploading tarball of [base_image] to [gs://mlops-ai-platform_cloudbuild/source/1613293227.432482-31715e43adf14df9b1297d82cc846602.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/mlops-ai-platform/locations/global/builds/c6dd4b1e-f4f3-4be9-a0b8-3bc857c043f6].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/c6dd4b1e-f4f3-4be9-a0b8-3bc857c043f6?project=15641782362].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "c6dd4b1e-f4f3-4be9-a0b8-3bc857c043f6"
FETCHSOURCE
Fetching storage object: gs://mlops-ai-platform_cloudbuild/source/1613293227.432482-31715e43adf14df9b1297d82cc846602.tgz#1613293227851225
Copying gs://mlops-ai-platform_cloudbuild/source/1613293227.432482-31715e43adf14df9b1297d82cc846602.tgz#1613293227851225...
/ [1 files][ 285.0 B/ 285.0 B]
Operation completed over 1 objects/285.0 B.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon 3.584kB
Step 1/2 : FROM gcr.io/deeplearning-platform-release/base-cpu
latest: Pulling from deeplearning-platform-release/base-cpu
d519e2592276: Pulling fs layer
d22d2dfcfa9c: Pulling fs layer
b3afe92c540b: Pulling fs layer
42499980e339: Pulling fs layer
5cc6f3cb2c4a: Pulling fs layer
264016c313db: Pulling fs layer
3049a6851b27: Pulling fs layer
f364009b5525: Pulling fs layer
ceb8710fb121: Pulling fs layer
60dd84bd5a31: Pulling fs layer
a4ab234100c0: Pulling fs layer
323ade0d04aa: Pulling fs layer
4e0e566fd2a8: Pulling fs layer
cc71efc47f44: Pulling fs layer
1cb247765bd9: Pulling fs layer
85bfe947ef8b: Pulling fs layer
cfba0db75741: Pulling fs layer
0803f0431169: Pulling fs layer
42499980e339: Waiting
5cc6f3cb2c4a: Waiting
264016c313db: Waiting
3049a6851b27: Waiting
f364009b5525: Waiting
ceb8710fb121: Waiting
60dd84bd5a31: Waiting
a4ab234100c0: Waiting
323ade0d04aa: Waiting
4e0e566fd2a8: Waiting
cc71efc47f44: Waiting
1cb247765bd9: Waiting
85bfe947ef8b: Waiting
cfba0db75741: Waiting
0803f0431169: Waiting
b3afe92c540b: Verifying Checksum
b3afe92c540b: Download complete
d22d2dfcfa9c: Verifying Checksum
d22d2dfcfa9c: Download complete
42499980e339: Verifying Checksum
42499980e339: Download complete
d519e2592276: Download complete
3049a6851b27: Verifying Checksum
3049a6851b27: Download complete
264016c313db: Verifying Checksum
264016c313db: Download complete
ceb8710fb121: Verifying Checksum
ceb8710fb121: Download complete
60dd84bd5a31: Verifying Checksum
60dd84bd5a31: Download complete
a4ab234100c0: Verifying Checksum
a4ab234100c0: Download complete
323ade0d04aa: Verifying Checksum
323ade0d04aa: Download complete
4e0e566fd2a8: Verifying Checksum
4e0e566fd2a8: Download complete
cc71efc47f44: Verifying Checksum
cc71efc47f44: Download complete
1cb247765bd9: Verifying Checksum
1cb247765bd9: Download complete
85bfe947ef8b: Download complete
f364009b5525: Verifying Checksum
f364009b5525: Download complete
0803f0431169: Verifying Checksum
0803f0431169: Download complete
5cc6f3cb2c4a: Verifying Checksum
5cc6f3cb2c4a: Download complete
d519e2592276: Pull complete
d22d2dfcfa9c: Pull complete
b3afe92c540b: Pull complete
42499980e339: Pull complete
cfba0db75741: Verifying Checksum
cfba0db75741: Download complete
5cc6f3cb2c4a: Pull complete
264016c313db: Pull complete
3049a6851b27: Pull complete
f364009b5525: Pull complete
ceb8710fb121: Pull complete
60dd84bd5a31: Pull complete
a4ab234100c0: Pull complete
323ade0d04aa: Pull complete
4e0e566fd2a8: Pull complete
cc71efc47f44: Pull complete
1cb247765bd9: Pull complete
85bfe947ef8b: Pull complete
cfba0db75741: Pull complete
0803f0431169: Pull complete
Digest: sha256:9dbaf9b5c23151fbaae3f8479c1ba2382936af933d371459c110782b86c983ad
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest
---> 86632554702c
Step 2/2 : RUN pip install -U fire scikit-learn==0.20.4 pandas==0.24.2 kfp==0.2.5
---> Running in 0687988be165
Collecting fire
Downloading fire-0.4.0.tar.gz (87 kB)
Collecting scikit-learn==0.20.4
Downloading scikit_learn-0.20.4-cp37-cp37m-manylinux1_x86_64.whl (5.4 MB)
Collecting pandas==0.24.2
Downloading pandas-0.24.2-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
Collecting kfp==0.2.5
Downloading kfp-0.2.5.tar.gz (116 kB)
Collecting urllib3<1.25,>=1.15
Downloading urllib3-1.24.3-py2.py3-none-any.whl (118 kB)
Requirement already satisfied: six>=1.10 in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (1.15.0)
Requirement already satisfied: certifi in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (2020.12.5)
Requirement already satisfied: python-dateutil in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (2.8.1)
Requirement already satisfied: PyYAML in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (5.4.1)
Requirement already satisfied: google-cloud-storage>=1.13.0 in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (1.30.0)
Collecting kubernetes<=10.0.0,>=8.0.0
Downloading kubernetes-10.0.0-py2.py3-none-any.whl (1.5 MB)
Requirement already satisfied: PyJWT>=1.6.4 in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (2.0.1)
Requirement already satisfied: cryptography>=2.4.2 in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (3.3.1)
Requirement already satisfied: google-auth>=1.6.1 in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (1.24.0)
Collecting requests_toolbelt>=0.8.0
Downloading requests_toolbelt-0.9.1-py2.py3-none-any.whl (54 kB)
Collecting cloudpickle==1.1.1
Downloading cloudpickle-1.1.1-py2.py3-none-any.whl (17 kB)
Collecting kfp-server-api<=0.1.40,>=0.1.18
Downloading kfp-server-api-0.1.40.tar.gz (38 kB)
Collecting argo-models==2.2.1a
Downloading argo-models-2.2.1a0.tar.gz (28 kB)
Requirement already satisfied: jsonschema>=3.0.1 in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (3.2.0)
Collecting tabulate==0.8.3
Downloading tabulate-0.8.3.tar.gz (46 kB)
Collecting click==7.0
Downloading Click-7.0-py2.py3-none-any.whl (81 kB)
Collecting Deprecated
Downloading Deprecated-1.2.11-py2.py3-none-any.whl (9.1 kB)
Collecting strip-hints
Downloading strip-hints-0.1.9.tar.gz (30 kB)
Requirement already satisfied: pytz>=2011k in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2021.1)
Requirement already satisfied: numpy>=1.12.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (1.19.5)
Requirement already satisfied: scipy>=0.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.6.0)
Requirement already satisfied: cffi>=1.12 in /opt/conda/lib/python3.7/site-packages (from cryptography>=2.4.2->kfp==0.2.5) (1.14.4)
Requirement already satisfied: pycparser in /opt/conda/lib/python3.7/site-packages (from cffi>=1.12->cryptography>=2.4.2->kfp==0.2.5) (2.20)
Requirement already satisfied: setuptools>=40.3.0 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==0.2.5) (49.6.0.post20210108)
Requirement already satisfied: rsa<5,>=3.1.4 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==0.2.5) (4.7)
Requirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==0.2.5) (4.2.1)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==0.2.5) (0.2.7)
Requirement already satisfied: google-cloud-core<2.0dev,>=1.2.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage>=1.13.0->kfp==0.2.5) (1.3.0)
Requirement already satisfied: google-resumable-media<2.0dev,>=0.6.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage>=1.13.0->kfp==0.2.5) (1.2.0)
Requirement already satisfied: google-api-core<2.0.0dev,>=1.16.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==0.2.5) (1.22.4)
Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==0.2.5) (2.25.1)
Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==0.2.5) (1.52.0)
Requirement already satisfied: protobuf>=3.12.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==0.2.5) (3.14.0)
Requirement already satisfied: google-crc32c<2.0dev,>=1.0 in /opt/conda/lib/python3.7/site-packages (from google-resumable-media<2.0dev,>=0.6.0->google-cloud-storage>=1.13.0->kfp==0.2.5) (1.1.2)
Requirement already satisfied: pyrsistent>=0.14.0 in /opt/conda/lib/python3.7/site-packages (from jsonschema>=3.0.1->kfp==0.2.5) (0.17.3)
Requirement already satisfied: importlib-metadata in /opt/conda/lib/python3.7/site-packages (from jsonschema>=3.0.1->kfp==0.2.5) (3.4.0)
Requirement already satisfied: attrs>=17.4.0 in /opt/conda/lib/python3.7/site-packages (from jsonschema>=3.0.1->kfp==0.2.5) (20.3.0)
Requirement already satisfied: requests-oauthlib in /opt/conda/lib/python3.7/site-packages (from kubernetes<=10.0.0,>=8.0.0->kfp==0.2.5) (1.3.0)
Requirement already satisfied: websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0 in /opt/conda/lib/python3.7/site-packages (from kubernetes<=10.0.0,>=8.0.0->kfp==0.2.5) (0.57.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /opt/conda/lib/python3.7/site-packages (from pyasn1-modules>=0.2.1->google-auth>=1.6.1->kfp==0.2.5) (0.4.8)
Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==0.2.5) (2.10)
Requirement already satisfied: chardet<5,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==0.2.5) (3.0.4)
Collecting termcolor
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Requirement already satisfied: wrapt<2,>=1.10 in /opt/conda/lib/python3.7/site-packages (from Deprecated->kfp==0.2.5) (1.12.1)
Requirement already satisfied: typing-extensions>=3.6.4 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata->jsonschema>=3.0.1->kfp==0.2.5) (3.7.4.3)
Requirement already satisfied: zipp>=0.5 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata->jsonschema>=3.0.1->kfp==0.2.5) (3.4.0)
Requirement already satisfied: oauthlib>=3.0.0 in /opt/conda/lib/python3.7/site-packages (from requests-oauthlib->kubernetes<=10.0.0,>=8.0.0->kfp==0.2.5) (3.0.1)
Requirement already satisfied: wheel in /opt/conda/lib/python3.7/site-packages (from strip-hints->kfp==0.2.5) (0.36.2)
Building wheels for collected packages: kfp, argo-models, tabulate, kfp-server-api, fire, strip-hints, termcolor
Building wheel for kfp (setup.py): started
Building wheel for kfp (setup.py): finished with status 'done'
Created wheel for kfp: filename=kfp-0.2.5-py3-none-any.whl size=159979 sha256=fc9dda81fa40c59d2346044656b48cedc87af14a35e35a9836c318818b4c128a
Stored in directory: /root/.cache/pip/wheels/98/74/7e/0a882d654bdf82d039460ab5c6adf8724ae56e277de7c0eaea
Building wheel for argo-models (setup.py): started
Building wheel for argo-models (setup.py): finished with status 'done'
Created wheel for argo-models: filename=argo_models-2.2.1a0-py3-none-any.whl size=57308 sha256=cbcf81f3c6fe3162aae5f510065b88f60303167279d2205f6356af9b6cce84bb
Stored in directory: /root/.cache/pip/wheels/a9/4b/fd/cdd013bd2ad1a7162ecfaf954e9f1bb605174a20e3c02016b7
Building wheel for tabulate (setup.py): started
Building wheel for tabulate (setup.py): finished with status 'done'
Created wheel for tabulate: filename=tabulate-0.8.3-py3-none-any.whl size=23379 sha256=7824a217aa23352e642ee83c401d10a30894b0a386edc4b758226eb9ea19afb4
Stored in directory: /root/.cache/pip/wheels/b8/a2/a6/812a8a9735b090913e109133c7c20aaca4cf07e8e18837714f
Building wheel for kfp-server-api (setup.py): started
Building wheel for kfp-server-api (setup.py): finished with status 'done'
Created wheel for kfp-server-api: filename=kfp_server_api-0.1.40-py3-none-any.whl size=102470 sha256=8109cc950bceb69494134f60ba24c60ab8a92f2ed08da2db74d63de9284453e3
Stored in directory: /root/.cache/pip/wheels/01/e3/43/3972dea76ee89e35f090b313817089043f2609236cf560069d
Building wheel for fire (setup.py): started
Building wheel for fire (setup.py): finished with status 'done'
Created wheel for fire: filename=fire-0.4.0-py2.py3-none-any.whl size=115928 sha256=85a14098e9d581a9b5a7285d5810ef0ad87fee6637227d54ed63920395ee4504
Stored in directory: /root/.cache/pip/wheels/8a/67/fb/2e8a12fa16661b9d5af1f654bd199366799740a85c64981226
Building wheel for strip-hints (setup.py): started
Building wheel for strip-hints (setup.py): finished with status 'done'
Created wheel for strip-hints: filename=strip_hints-0.1.9-py2.py3-none-any.whl size=20993 sha256=492349f50568408c526d12a89a016d54edf2dac62134d97b072c340728df845a
Stored in directory: /root/.cache/pip/wheels/2d/b8/4e/a3ec111d2db63cec88121bd7c0ab1a123bce3b55dd19dda5c1
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4829 sha256=a59d80982cbf7eed377a885e50c65725ea1a557d952431d1a967190a947591e3
Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
Successfully built kfp argo-models tabulate kfp-server-api fire strip-hints termcolor
Installing collected packages: urllib3, kubernetes, termcolor, tabulate, strip-hints, requests-toolbelt, kfp-server-api, Deprecated, cloudpickle, click, argo-models, scikit-learn, pandas, kfp, fire
Attempting uninstall: urllib3
Found existing installation: urllib3 1.26.3
Uninstalling urllib3-1.26.3:
Successfully uninstalled urllib3-1.26.3
Attempting uninstall: kubernetes
Found existing installation: kubernetes 12.0.1
Uninstalling kubernetes-12.0.1:
Successfully uninstalled kubernetes-12.0.1
Attempting uninstall: cloudpickle
Found existing installation: cloudpickle 1.6.0
Uninstalling cloudpickle-1.6.0:
Successfully uninstalled cloudpickle-1.6.0
Attempting uninstall: click
Found existing installation: click 7.1.2
Uninstalling click-7.1.2:
Successfully uninstalled click-7.1.2
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 0.24.1
Uninstalling scikit-learn-0.24.1:
Successfully uninstalled scikit-learn-0.24.1
Attempting uninstall: pandas
Found existing installation: pandas 1.2.1
Uninstalling pandas-1.2.1:
Successfully uninstalled pandas-1.2.1
[91mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
conda 4.9.2 requires ruamel_yaml>=0.11.14, which is not installed.
visions 0.7.0 requires pandas>=0.25.3, but you have pandas 0.24.2 which is incompatible.
pandas-profiling 2.8.0 requires pandas!=1.0.0,!=1.0.1,!=1.0.2,>=0.25.3, but you have pandas 0.24.2 which is incompatible.
pandas-profiling 2.8.0 requires visions[type_image_path]==0.4.4, but you have visions 0.7.0 which is incompatible.
jupyterlab-git 0.11.0 requires nbdime<2.0.0,>=1.1.0, but you have nbdime 2.1.0 which is incompatible.
black 20.8b1 requires click>=7.1.2, but you have click 7.0 which is incompatible.
Successfully installed Deprecated-1.2.11 argo-models-2.2.1a0 click-7.0 cloudpickle-1.1.1 fire-0.4.0 kfp-0.2.5 kfp-server-api-0.1.40 kubernetes-10.0.0 pandas-0.24.2 requests-toolbelt-0.9.1 scikit-learn-0.20.4 strip-hints-0.1.9 tabulate-0.8.3 termcolor-1.1.0 urllib3-1.24.3
Removing intermediate container 0687988be165
---> 4401ecd7a29e
Successfully built 4401ecd7a29e
Successfully tagged gcr.io/mlops-ai-platform/base_image_kfp_caip_sklearn_lab02:latest
PUSH
Pushing gcr.io/mlops-ai-platform/base_image_kfp_caip_sklearn_lab02:latest
The push refers to repository [gcr.io/mlops-ai-platform/base_image_kfp_caip_sklearn_lab02]
aee583347e9c: Preparing
0f0532eed74a: Preparing
615e303004c8: Preparing
6ad6e8fd4ff0: Preparing
945f0370cab4: Preparing
289ab6c33408: Preparing
034a4b160541: Preparing
27b18b7fb87e: Preparing
ae18d372a1da: Preparing
cc450d62afb9: Preparing
d7d0fb2f7eb0: Preparing
3e75deadeefa: Preparing
c77962bfc51d: Preparing
caef3b0fe7f1: Preparing
c39d9f02e96e: Preparing
3a88efae17e5: Preparing
9f10818f1f96: Preparing
27502392e386: Preparing
c95d2191d777: Preparing
289ab6c33408: Waiting
034a4b160541: Waiting
27b18b7fb87e: Waiting
ae18d372a1da: Waiting
cc450d62afb9: Waiting
d7d0fb2f7eb0: Waiting
3e75deadeefa: Waiting
c77962bfc51d: Waiting
caef3b0fe7f1: Waiting
c39d9f02e96e: Waiting
3a88efae17e5: Waiting
9f10818f1f96: Waiting
27502392e386: Waiting
c95d2191d777: Waiting
615e303004c8: Layer already exists
6ad6e8fd4ff0: Layer already exists
945f0370cab4: Layer already exists
0f0532eed74a: Layer already exists
289ab6c33408: Layer already exists
27b18b7fb87e: Layer already exists
ae18d372a1da: Layer already exists
034a4b160541: Layer already exists
3e75deadeefa: Layer already exists
c77962bfc51d: Layer already exists
d7d0fb2f7eb0: Layer already exists
cc450d62afb9: Layer already exists
3a88efae17e5: Layer already exists
9f10818f1f96: Layer already exists
caef3b0fe7f1: Layer already exists
c39d9f02e96e: Layer already exists
27502392e386: Layer already exists
c95d2191d777: Layer already exists
aee583347e9c: Pushed
latest: digest: sha256:d7a44454b72f0fb10c54f552d89035898884daa255ba22a5bddc0a3eb21ce6d8 size: 4293
DONE
--------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
c6dd4b1e-f4f3-4be9-a0b8-3bc857c043f6 2021-02-14T09:00:28+00:00 2M50S gs://mlops-ai-platform_cloudbuild/source/1613293227.432482-31715e43adf14df9b1297d82cc846602.tgz gcr.io/mlops-ai-platform/base_image_kfp_caip_sklearn_lab02 (+1 more) SUCCESS
###Markdown
Compile the pipeline. You can compile the DSL using an API from the **KFP SDK** or using the **KFP** compiler; here we use the **KFP** compiler. Set the pipeline's compile time settings. The pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the `user-gcp-sa` secret of the Kubernetes namespace hosting Kubeflow Pipelines. If you want to use the `user-gcp-sa` service account, change the value of `USE_KFP_SA` to `True`. Note that the default AI Platform Pipelines configuration does not define the `user-gcp-sa` secret.
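If you prefer the programmatic route, the same package can also be produced with the compiler API from the KFP SDK. A minimal sketch, assuming the `%env` settings above have been applied and the module written to `pipeline/covertype_training_pipeline.py` is importable (its module-level code loads components over the network at import time):

```python
import sys
sys.path.append('pipeline')  # makes covertype_training_pipeline and helper_components importable

import kfp.compiler
from covertype_training_pipeline import covertype_train  # pipeline function defined above

# Compile the pipeline function into the same Argo workflow package the CLI produces.
kfp.compiler.Compiler().compile(covertype_train, 'covertype_training_pipeline.yaml')
```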
###Code
USE_KFP_SA = False
COMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'
RUNTIME_VERSION = '1.15'
PYTHON_VERSION = '3.7'
%env USE_KFP_SA={USE_KFP_SA}
%env BASE_IMAGE={BASE_IMAGE}
%env TRAINER_IMAGE={TRAINER_IMAGE}
%env COMPONENT_URL_SEARCH_PREFIX={COMPONENT_URL_SEARCH_PREFIX}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
###Output
env: USE_KFP_SA=False
env: BASE_IMAGE=gcr.io/mlops-ai-platform/base_image_kfp_caip_sklearn_lab02:latest
env: TRAINER_IMAGE=gcr.io/mlops-ai-platform/trainer_image_kfp_caip_sklearn_lab02:latest
env: COMPONENT_URL_SEARCH_PREFIX=https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/
env: RUNTIME_VERSION=1.15
env: PYTHON_VERSION=3.7
###Markdown
Use the CLI compiler to compile the pipeline
###Code
# !python3 pipeline/covertype_training_pipeline.py
!dsl-compile --py pipeline/covertype_training_pipeline.py --output covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
The result is the `covertype_training_pipeline.yaml` file.
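Before deploying it, you can optionally load the compiled Argo workflow with PyYAML and check which runtime parameters it exposes. A rough sketch; the exact document layout assumed here follows the Argo Workflow schema and may vary between KFP versions:

```python
import yaml

# Load the compiled Argo workflow package produced by dsl-compile.
with open('covertype_training_pipeline.yaml') as f:
    workflow = yaml.safe_load(f)

# Print the pipeline parameters declared in the workflow spec.
for param in workflow.get('spec', {}).get('arguments', {}).get('parameters', []):
    print(param.get('name'), '=', param.get('value'))
```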
###Code
!head covertype_training_pipeline.yaml
###Output
"apiVersion": |-
argoproj.io/v1alpha1
"kind": |-
Workflow
"metadata":
"annotations":
"pipelines.kubeflow.org/pipeline_spec": |-
{"description": "The pipeline training and deploying the Covertype classifierpipeline_yaml", "inputs": [{"name": "project_id"}, {"name": "region"}, {"name": "source_table_name"}, {"name": "gcs_root"}, {"name": "dataset_id"}, {"name": "evaluation_metric_name"}, {"name": "evaluation_metric_threshold"}, {"name": "model_id"}, {"name": "version_id"}, {"name": "replace_existing_version"}, {"default": "\n{\n \"hyperparameters\": {\n \"goal\": \"MAXIMIZE\",\n \"maxTrials\": 6,\n \"maxParallelTrials\": 3,\n \"hyperparameterMetricTag\": \"accuracy\",\n \"enableTrialEarlyStopping\": True,\n \"params\": [\n {\n \"parameterName\": \"max_iter\",\n \"type\": \"DISCRETE\",\n \"discreteValues\": [500, 1000]\n },\n {\n \"parameterName\": \"alpha\",\n \"type\": \"DOUBLE\",\n \"minValue\": 0.0001,\n \"maxValue\": 0.001,\n \"scaleType\": \"UNIT_LINEAR_SCALE\"\n }\n ]\n }\n}\n", "name": "hypertune_settings", "optional": true}, {"default": "US", "name": "dataset_location", "optional": true}], "name": "Covertype Classifier Training"}
"generateName": |-
covertype-classifier-training-
###Markdown
Deploy the pipeline package
###Code
PIPELINE_NAME='covertype_continuous_training'
!kfp --endpoint $ENDPOINT pipeline upload \
-p $PIPELINE_NAME \
covertype_training_pipeline.yaml
###Output
Pipeline da559a55-223d-4409-a564-6d35d57016f8 has been submitted
Pipeline Details
------------------
ID da559a55-223d-4409-a564-6d35d57016f8
Name covertype_continuous_training
Description
Uploaded at 2021-02-14T09:36:21+00:00
+-----------------------------+--------------------------------------------------+
| Parameter Name | Default Value |
+=============================+==================================================+
| project_id | |
+-----------------------------+--------------------------------------------------+
| region | |
+-----------------------------+--------------------------------------------------+
| source_table_name | |
+-----------------------------+--------------------------------------------------+
| gcs_root | |
+-----------------------------+--------------------------------------------------+
| dataset_id | |
+-----------------------------+--------------------------------------------------+
| evaluation_metric_name | |
+-----------------------------+--------------------------------------------------+
| evaluation_metric_threshold | |
+-----------------------------+--------------------------------------------------+
| model_id | |
+-----------------------------+--------------------------------------------------+
| version_id | |
+-----------------------------+--------------------------------------------------+
| replace_existing_version | |
+-----------------------------+--------------------------------------------------+
| hypertune_settings | { |
| | "hyperparameters": { |
| | "goal": "MAXIMIZE", |
| | "maxTrials": 6, |
| | "maxParallelTrials": 3, |
| | "hyperparameterMetricTag": "accuracy", |
| | "enableTrialEarlyStopping": True, |
| | "params": [ |
| | { |
| | "parameterName": "max_iter", |
| | "type": "DISCRETE", |
| | "discreteValues": [500, 1000] |
| | }, |
| | { |
| | "parameterName": "alpha", |
| | "type": "DOUBLE", |
| | "minValue": 0.0001, |
| | "maxValue": 0.001, |
| | "scaleType": "UNIT_LINEAR_SCALE" |
| | } |
| | ] |
| | } |
| | } |
+-----------------------------+--------------------------------------------------+
| dataset_location | US |
+-----------------------------+--------------------------------------------------+
###Markdown
Submitting pipeline runs. You can trigger pipeline runs using an API from the KFP SDK or using the KFP CLI. To submit the run using the KFP CLI, execute the following commands; notice how the pipeline's parameters are passed to the pipeline run. First, list the pipelines in AI Platform Pipelines.
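For reference, the SDK equivalent of the CLI submission looks roughly like the sketch below. It assumes `ENDPOINT`, `PIPELINE_ID` and the parameter values are defined as in the cells below, and client method signatures can differ slightly between KFP SDK versions:

```python
import kfp

# Connect to the AI Platform Pipelines (KFP) endpoint.
client = kfp.Client(host=ENDPOINT)

# Create (or look up) an experiment to group the runs.
experiment = client.create_experiment('Covertype_Classifier_Training')

# Submit a run of the uploaded pipeline with its runtime parameters.
run = client.run_pipeline(
    experiment_id=experiment.id,
    job_name='Run_003_sdk',          # hypothetical run name
    pipeline_id=PIPELINE_ID,
    params={
        'project_id': PROJECT_ID,
        'gcs_root': GCS_STAGING_PATH,
        'region': REGION,
        'source_table_name': 'covertype_dataset.covertype',
        'dataset_id': 'splits',
        'evaluation_metric_name': 'accuracy',
        'evaluation_metric_threshold': '0.69',
        'model_id': 'covertype_classifier',
        'version_id': 'v01',
        'replace_existing_version': 'True',
    },
)
print(run.id)
```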
###Code
!kfp --endpoint $ENDPOINT pipeline list
###Output
+--------------------------------------+-------------------------------------------------+---------------------------+
| Pipeline ID | Name | Uploaded at |
+======================================+=================================================+===========================+
| da559a55-223d-4409-a564-6d35d57016f8 | covertype_continuous_training | 2021-02-14T09:36:21+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| 195c50c3-155e-4ede-ac26-a57a14e52822 | [Tutorial] DSL - Control structures | 2021-02-14T08:10:34+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| 87a99a32-8757-4e24-80d0-9500c39cd4cc | [Tutorial] Data passing in python components | 2021-02-14T08:10:33+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| b1dc50d0-c870-4718-b10c-c537882eba1b | [Demo] TFX - Iris classification pipeline | 2021-02-14T08:10:31+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| 676842ef-e125-47fe-a738-2f0f3de8ec7f | [Demo] TFX - Taxi tip prediction model trainer | 2021-02-14T08:10:30+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| 79d2d5cb-55d3-40e8-9750-436e3940436d | [Demo] XGBoost - Training with confusion matrix | 2021-02-14T08:10:29+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
###Markdown
Submit a run. Find the ID of the `covertype_continuous_training` pipeline you uploaded in the previous step and update the value of `PIPELINE_ID`.
###Code
PIPELINE_ID='da559a55-223d-4409-a564-6d35d57016f8' #Change
EXPERIMENT_NAME = 'Covertype_Classifier_Training'
RUN_ID = 'Run_003'
SOURCE_TABLE = 'covertype_dataset.covertype'
DATASET_ID = 'splits'
EVALUATION_METRIC = 'accuracy'
EVALUATION_METRIC_THRESHOLD = '0.69'
MODEL_ID = 'covertype_classifier'
VERSION_ID = 'v01'
REPLACE_EXISTING_VERSION = 'True'
GCS_STAGING_PATH = '{}/staging'.format(ARTIFACT_STORE_URI)
!echo $GCS_STAGING_PATH
!echo $PIPELINE_ID
!kfp --endpoint $ENDPOINT run submit \
-e $EXPERIMENT_NAME \
-r $RUN_ID \
-p $PIPELINE_ID \
project_id=$PROJECT_ID \
gcs_root=$GCS_STAGING_PATH \
region=$REGION \
source_table_name=$SOURCE_TABLE \
dataset_id=$DATASET_ID \
evaluation_metric_name=$EVALUATION_METRIC \
evaluation_metric_threshold=$EVALUATION_METRIC_THRESHOLD \
model_id=$MODEL_ID \
version_id=$VERSION_ID \
replace_existing_version=$REPLACE_EXISTING_VERSION
###Output
Run 2d6e3f9d-2fd8-47e6-96f7-e1ecfac69738 is submitted
+--------------------------------------+---------+----------+---------------------------+
| run id | name | status | created at |
+======================================+=========+==========+===========================+
| 2d6e3f9d-2fd8-47e6-96f7-e1ecfac69738 | Run_003 | | 2021-02-14T10:07:09+00:00 |
+--------------------------------------+---------+----------+---------------------------+
###Markdown
Continuous training pipeline with KFP and Cloud AI Platform. **Learning Objectives:** 1. Learn how to use KFP pre-built components (BigQuery, CAIP training and prediction) 2. Learn how to use KFP lightweight Python components 3. Learn how to build a KFP pipeline with these components 4. Learn how to compile, upload, and run a KFP pipeline with the command line. In this lab, you will build, deploy, and run a KFP pipeline that orchestrates **BigQuery** and **Cloud AI Platform** services to train, tune, and deploy a **scikit-learn** model. Understanding the pipeline design: The workflow implemented by the pipeline is defined using a Python-based Domain Specific Language (DSL). The pipeline's DSL is in the `covertype_training_pipeline.py` file that we will generate below. The pipeline's DSL has been designed to avoid hardcoding any environment-specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
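The environment-variable pattern is easy to preview in isolation; a minimal sketch (the variable names match the ones the pipeline module reads, the values are placeholders, not the lab's real images):

```python
import os

# Compile-time settings are exported as environment variables (the notebook
# does this later with %env) and read back inside the pipeline module.
os.environ.setdefault('BASE_IMAGE', 'gcr.io/your-project/base_image:latest')      # placeholder
os.environ.setdefault('TRAINER_IMAGE', 'gcr.io/your-project/trainer_image:latest')  # placeholder

# Inside covertype_training_pipeline.py the same settings are recovered with os.getenv.
BASE_IMAGE = os.getenv('BASE_IMAGE')
TRAINER_IMAGE = os.getenv('TRAINER_IMAGE')
print(BASE_IMAGE, TRAINER_IMAGE)
```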
###Code
!grep 'BASE_IMAGE =' -A 5 pipeline/covertype_training_pipeline.py
###Output
_____no_output_____
###Markdown
The pipeline uses a mix of custom and pre-built components. - Pre-built components. The pipeline uses the following pre-built components that are included with the KFP distribution: - [BigQuery query component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/bigquery/query) - [AI Platform Training component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/train) - [AI Platform Deploy component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/deploy) - Custom components. The pipeline uses two custom helper components that encapsulate functionality not available in any of the pre-built components. The components are implemented using the KFP SDK's [Lightweight Python Components](https://www.kubeflow.org/docs/pipelines/sdk/lightweight-python-components/) mechanism. The code for the components is in the `helper_components.py` file: - **Retrieve Best Run**. This component retrieves a tuning metric and hyperparameter values for the best run of an AI Platform Training hyperparameter tuning job. - **Evaluate Model**. This component evaluates a trained *sklearn* model using a provided metric and a testing dataset. (A minimal lightweight-component sketch follows below.)
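To illustrate the Lightweight Python Components mechanism itself (a generic toy example, not the lab's helper code), a function with type annotations can be turned into a pipeline op with `func_to_container_op`:

```python
from kfp.components import func_to_container_op

def add_numbers(a: float, b: float) -> float:
    """Toy component: returns the sum of two numbers."""
    # Lightweight components must be self-contained: any imports they need
    # have to happen inside the function body.
    return a + b

# Wrap the function into a component factory. base_image is optional; if omitted,
# the SDK falls back to a generic default image.
add_numbers_op = func_to_container_op(add_numbers)
```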
###Code
%%writefile ./pipeline/covertype_training_pipeline.py
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""KFP pipeline orchestrating BigQuery and Cloud AI Platform services."""
import os
from helper_components import evaluate_model
from helper_components import retrieve_best_run
from jinja2 import Template
import kfp
from kfp.components import func_to_container_op
from kfp.dsl.types import Dict
from kfp.dsl.types import GCPProjectID
from kfp.dsl.types import GCPRegion
from kfp.dsl.types import GCSPath
from kfp.dsl.types import String
from kfp.gcp import use_gcp_secret
# Defaults and environment settings
BASE_IMAGE = os.getenv('BASE_IMAGE')
TRAINER_IMAGE = os.getenv('TRAINER_IMAGE')
RUNTIME_VERSION = os.getenv('RUNTIME_VERSION')
PYTHON_VERSION = os.getenv('PYTHON_VERSION')
COMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')
USE_KFP_SA = os.getenv('USE_KFP_SA')
TRAINING_FILE_PATH = 'datasets/training/data.csv'
VALIDATION_FILE_PATH = 'datasets/validation/data.csv'
TESTING_FILE_PATH = 'datasets/testing/data.csv'
# Parameter defaults
SPLITS_DATASET_ID = 'splits'
HYPERTUNE_SETTINGS = """
{
"hyperparameters": {
"goal": "MAXIMIZE",
"maxTrials": 6,
"maxParallelTrials": 3,
"hyperparameterMetricTag": "accuracy",
"enableTrialEarlyStopping": True,
"params": [
{
"parameterName": "max_iter",
"type": "DISCRETE",
"discreteValues": [500, 1000]
},
{
"parameterName": "alpha",
"type": "DOUBLE",
"minValue": 0.0001,
"maxValue": 0.001,
"scaleType": "UNIT_LINEAR_SCALE"
}
]
}
}
"""
# Helper functions
def generate_sampling_query(source_table_name, num_lots, lots):
"""Prepares the data sampling query."""
sampling_query_template = """
SELECT *
FROM
`{{ source_table }}` AS cover
WHERE
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), {{ num_lots }}) IN ({{ lots }})
"""
query = Template(sampling_query_template).render(
source_table=source_table_name, num_lots=num_lots, lots=str(lots)[1:-1])
return query
# Create component factories
component_store = kfp.components.ComponentStore(
local_search_paths=None, url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX])
bigquery_query_op = component_store.load_component('bigquery/query')
mlengine_train_op = component_store.load_component('ml_engine/train')
mlengine_deploy_op = component_store.load_component('ml_engine/deploy')
retrieve_best_run_op = func_to_container_op(
retrieve_best_run, base_image=BASE_IMAGE)
evaluate_model_op = func_to_container_op(evaluate_model, base_image=BASE_IMAGE)
@kfp.dsl.pipeline(
name='Covertype Classifier Training',
description='The pipeline training and deploying the Covertype classifierpipeline_yaml'
)
def covertype_train(project_id,
region,
source_table_name,
gcs_root,
dataset_id,
evaluation_metric_name,
evaluation_metric_threshold,
model_id,
version_id,
replace_existing_version,
hypertune_settings=HYPERTUNE_SETTINGS,
dataset_location='US'):
"""Orchestrates training and deployment of an sklearn model."""
# Create the training split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[1, 2, 3, 4])
training_file_path = '{}/{}'.format(gcs_root, TRAINING_FILE_PATH)
create_training_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=training_file_path,
dataset_location=dataset_location)
# Create the validation split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[8])
validation_file_path = '{}/{}'.format(gcs_root, VALIDATION_FILE_PATH)
create_validation_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=validation_file_path,
dataset_location=dataset_location)
# Create the testing split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[9])
testing_file_path = '{}/{}'.format(gcs_root, TESTING_FILE_PATH)
create_testing_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=testing_file_path,
dataset_location=dataset_location)
# Tune hyperparameters
tune_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--hptune', 'True'
]
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir/hypertune',
kfp.dsl.RUN_ID_PLACEHOLDER)
hypertune = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=tune_args,
training_input=hypertune_settings)
# Retrieve the best trial
get_best_trial = retrieve_best_run_op(
project_id, hypertune.outputs['job_id'])
# Train the model on a combined training and validation datasets
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir', kfp.dsl.RUN_ID_PLACEHOLDER)
train_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--alpha',
get_best_trial.outputs['alpha'], '--max_iter',
get_best_trial.outputs['max_iter'], '--hptune', 'False'
]
train_model = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=train_args)
# Evaluate the model on the testing split
eval_model = evaluate_model_op(
dataset_path=str(create_testing_split.outputs['output_gcs_path']),
model_path=str(train_model.outputs['job_dir']),
metric_name=evaluation_metric_name)
# Deploy the model if the primary metric is better than threshold
with kfp.dsl.Condition(eval_model.outputs['metric_value'] > evaluation_metric_threshold):
deploy_model = mlengine_deploy_op(
model_uri=train_model.outputs['job_dir'],
project_id=project_id,
model_id=model_id,
version_id=version_id,
runtime_version=RUNTIME_VERSION,
python_version=PYTHON_VERSION,
replace_existing_version=replace_existing_version)
# Configure the pipeline to run using the service account defined
# in the user-gcp-sa k8s secret
if USE_KFP_SA == 'True':
kfp.dsl.get_pipeline_conf().add_op_transformer(
use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
The custom components execute in a container image defined in `base_image/Dockerfile`.
###Code
!cat base_image/Dockerfile
###Output
_____no_output_____
###Markdown
The training step in the pipeline employs the AI Platform Training component to schedule an AI Platform Training job in a custom training container. The custom training image is defined in `trainer_image/Dockerfile`.
###Code
!cat trainer_image/Dockerfile
###Output
_____no_output_____
###Markdown
Building and deploying the pipeline. Before deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also referred to as a pipeline package. The runtime format is based on [Argo Workflow](https://github.com/argoproj/argo), which is expressed in YAML. Configure environment settings: update the constants below with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction. - `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the `hostedkfp-default-` prefix. - `ENDPOINT` - set the `ENDPOINT` constant to the endpoint of your AI Platform Pipelines instance. The endpoint can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console: 1. Open the *SETTINGS* for your instance. 2. Use the value of the `host` variable in the *Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK* section of the *SETTINGS* window.
###Code
REGION = 'us-central1'
ENDPOINT = '337dd39580cbcbd2-dot-us-central2.pipelines.googleusercontent.com'
ARTIFACT_STORE_URI = 'gs://hostedkfp-default-e8c59nl4zo'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
###Output
_____no_output_____
###Markdown
Build the trainer image
###Code
IMAGE_NAME='trainer_image'
TAG='latest'
TRAINER_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!gcloud builds submit --timeout 15m --tag $TRAINER_IMAGE trainer_image
###Output
_____no_output_____
###Markdown
Build the base image for custom components
###Code
IMAGE_NAME='base_image'
TAG='latest'
BASE_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!gcloud builds submit --timeout 15m --tag $BASE_IMAGE base_image
###Output
_____no_output_____
###Markdown
Compile the pipeline. You can compile the DSL using an API from the **KFP SDK** or using the **KFP** compiler; here we use the **KFP** compiler. Set the pipeline's compile time settings. The pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the `user-gcp-sa` secret of the Kubernetes namespace hosting Kubeflow Pipelines. If you want to use the `user-gcp-sa` service account, change the value of `USE_KFP_SA` to `True`. Note that the default AI Platform Pipelines configuration does not define the `user-gcp-sa` secret.
###Code
USE_KFP_SA = False
COMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'
RUNTIME_VERSION = '1.15'
PYTHON_VERSION = '3.7'
%env USE_KFP_SA={USE_KFP_SA}
%env BASE_IMAGE={BASE_IMAGE}
%env TRAINER_IMAGE={TRAINER_IMAGE}
%env COMPONENT_URL_SEARCH_PREFIX={COMPONENT_URL_SEARCH_PREFIX}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
###Output
_____no_output_____
###Markdown
Use the CLI compiler to compile the pipeline
###Code
!dsl-compile --py pipeline/covertype_training_pipeline.py --output covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
The result is the `covertype_training_pipeline.yaml` file.
###Code
!head covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
Deploy the pipeline package
###Code
PIPELINE_NAME='covertype_continuous_training'
!kfp --endpoint $ENDPOINT pipeline upload \
-p $PIPELINE_NAME \
covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
Submitting pipeline runs. You can trigger pipeline runs using an API from the KFP SDK or using the KFP CLI. To submit the run using the KFP CLI, execute the following commands; notice how the pipeline's parameters are passed to the pipeline run. First, list the pipelines in AI Platform Pipelines.
###Code
!kfp --endpoint $ENDPOINT pipeline list
###Output
_____no_output_____
###Markdown
Submit a run. Find the ID of the `covertype_continuous_training` pipeline you uploaded in the previous step and update the value of `PIPELINE_ID`.
###Code
PIPELINE_ID='0918568d-758c-46cf-9752-e04a4403cd84'
EXPERIMENT_NAME = 'Covertype_Classifier_Training'
RUN_ID = 'Run_001'
SOURCE_TABLE = 'covertype_dataset.covertype'
DATASET_ID = 'splits'
EVALUATION_METRIC = 'accuracy'
EVALUATION_METRIC_THRESHOLD = '0.69'
MODEL_ID = 'covertype_classifier'
VERSION_ID = 'v01'
REPLACE_EXISTING_VERSION = 'True'
GCS_STAGING_PATH = '{}/staging'.format(ARTIFACT_STORE_URI)
!kfp --endpoint $ENDPOINT run submit \
-e $EXPERIMENT_NAME \
-r $RUN_ID \
-p $PIPELINE_ID \
project_id=$PROJECT_ID \
gcs_root=$GCS_STAGING_PATH \
region=$REGION \
source_table_name=$SOURCE_TABLE \
dataset_id=$DATASET_ID \
evaluation_metric_name=$EVALUATION_METRIC \
evaluation_metric_threshold=$EVALUATION_METRIC_THRESHOLD \
model_id=$MODEL_ID \
version_id=$VERSION_ID \
replace_existing_version=$REPLACE_EXISTING_VERSION
###Output
_____no_output_____
###Markdown
Continuous training pipeline with Kubeflow Pipelines (KFP) and Cloud AI Platform. **Learning Objectives:** 1. Learn how to use KFP pre-built components (BigQuery, AI Platform training and prediction) 2. Learn how to use KFP lightweight Python components 3. Learn how to build a KFP pipeline with these components 4. Learn how to compile, upload, and run a KFP pipeline with the command line. In this lab, you will build, deploy, and run a KFP pipeline that orchestrates **BigQuery** and **Cloud AI Platform** services to train, tune, and deploy a **scikit-learn** model. Understanding the pipeline design: The workflow implemented by the pipeline is defined using a Python-based Domain Specific Language (DSL). The pipeline's DSL is in the `covertype_training_pipeline.py` file that we will generate below. The pipeline's DSL has been designed to avoid hardcoding any environment-specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
###Code
!grep 'BASE_IMAGE =' -A 5 pipeline/covertype_training_pipeline.py
###Output
_____no_output_____
###Markdown
The pipeline uses a mix of custom and pre-built components. - Pre-built components. The pipeline uses the following pre-built components that are included with the KFP distribution: - [BigQuery query component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/bigquery/query) - [AI Platform Training component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/train) - [AI Platform Deploy component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/deploy) - Custom components. The pipeline uses two custom helper components that encapsulate functionality not available in any of the pre-built components. The components are implemented using the KFP SDK's [Lightweight Python Components](https://www.kubeflow.org/docs/pipelines/sdk/lightweight-python-components/) mechanism. The code for the components is in the `helper_components.py` file: - **Retrieve Best Run**. This component retrieves a tuning metric and hyperparameter values for the best run of an AI Platform Training hyperparameter tuning job. - **Evaluate Model**. This component evaluates a trained *sklearn* model using a provided metric and a testing dataset.
###Code
%%writefile ./pipeline/covertype_training_pipeline.py
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""KFP orchestrating BigQuery and Cloud AI Platform services."""
import os
from helper_components import evaluate_model
from helper_components import retrieve_best_run
from jinja2 import Template
import kfp
from kfp.components import func_to_container_op
from kfp.dsl.types import Dict
from kfp.dsl.types import GCPProjectID
from kfp.dsl.types import GCPRegion
from kfp.dsl.types import GCSPath
from kfp.dsl.types import String
from kfp.gcp import use_gcp_secret
# Defaults and environment settings
BASE_IMAGE = os.getenv('BASE_IMAGE')
TRAINER_IMAGE = os.getenv('TRAINER_IMAGE')
RUNTIME_VERSION = os.getenv('RUNTIME_VERSION')
PYTHON_VERSION = os.getenv('PYTHON_VERSION')
COMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')
USE_KFP_SA = os.getenv('USE_KFP_SA')
TRAINING_FILE_PATH = 'datasets/training/data.csv'
VALIDATION_FILE_PATH = 'datasets/validation/data.csv'
TESTING_FILE_PATH = 'datasets/testing/data.csv'
# Parameter defaults
SPLITS_DATASET_ID = 'splits'
HYPERTUNE_SETTINGS = """
{
"hyperparameters": {
"goal": "MAXIMIZE",
"maxTrials": 6,
"maxParallelTrials": 3,
"hyperparameterMetricTag": "accuracy",
"enableTrialEarlyStopping": True,
"params": [
{
"parameterName": "max_iter",
"type": "DISCRETE",
"discreteValues": [500, 1000]
},
{
"parameterName": "alpha",
"type": "DOUBLE",
"minValue": 0.0001,
"maxValue": 0.001,
"scaleType": "UNIT_LINEAR_SCALE"
}
]
}
}
"""
# Helper functions
def generate_sampling_query(source_table_name, num_lots, lots):
"""Prepares the data sampling query."""
sampling_query_template = """
SELECT *
FROM
`{{ source_table }}` AS cover
WHERE
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), {{ num_lots }}) IN ({{ lots }})
"""
query = Template(sampling_query_template).render(
source_table=source_table_name, num_lots=num_lots, lots=str(lots)[1:-1])
return query
# Create component factories
component_store = kfp.components.ComponentStore(
local_search_paths=None, url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX])
bigquery_query_op = component_store.load_component('bigquery/query')
mlengine_train_op = component_store.load_component('ml_engine/train')
mlengine_deploy_op = component_store.load_component('ml_engine/deploy')
retrieve_best_run_op = func_to_container_op(
retrieve_best_run, base_image=BASE_IMAGE)
evaluate_model_op = func_to_container_op(evaluate_model, base_image=BASE_IMAGE)
@kfp.dsl.pipeline(
name='Covertype Classifier Training',
description='The pipeline training and deploying the Covertype classifierpipeline_yaml'
)
def covertype_train(project_id,
region,
source_table_name,
gcs_root,
dataset_id,
evaluation_metric_name,
evaluation_metric_threshold,
model_id,
version_id,
replace_existing_version,
hypertune_settings=HYPERTUNE_SETTINGS,
dataset_location='US'):
"""Orchestrates training and deployment of an sklearn model."""
# Create the training split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[1, 2, 3, 4])
training_file_path = '{}/{}'.format(gcs_root, TRAINING_FILE_PATH)
create_training_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=training_file_path,
dataset_location=dataset_location)
# Create the validation split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[8])
validation_file_path = '{}/{}'.format(gcs_root, VALIDATION_FILE_PATH)
create_validation_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=validation_file_path,
dataset_location=dataset_location)
# Create the testing split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[9])
testing_file_path = '{}/{}'.format(gcs_root, TESTING_FILE_PATH)
create_testing_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=testing_file_path,
dataset_location=dataset_location)
# Tune hyperparameters
tune_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--hptune', 'True'
]
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir/hypertune',
kfp.dsl.RUN_ID_PLACEHOLDER)
hypertune = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=tune_args,
training_input=hypertune_settings)
# Retrieve the best trial
get_best_trial = retrieve_best_run_op(
project_id, hypertune.outputs['job_id'])
# Train the model on a combined training and validation datasets
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir', kfp.dsl.RUN_ID_PLACEHOLDER)
train_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--alpha',
get_best_trial.outputs['alpha'], '--max_iter',
get_best_trial.outputs['max_iter'], '--hptune', 'False'
]
train_model = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=train_args)
# Evaluate the model on the testing split
eval_model = evaluate_model_op(
dataset_path=str(create_testing_split.outputs['output_gcs_path']),
model_path=str(train_model.outputs['job_dir']),
metric_name=evaluation_metric_name)
# Deploy the model if the primary metric is better than threshold
with kfp.dsl.Condition(eval_model.outputs['metric_value'] > evaluation_metric_threshold):
deploy_model = mlengine_deploy_op(
model_uri=train_model.outputs['job_dir'],
project_id=project_id,
model_id=model_id,
version_id=version_id,
runtime_version=RUNTIME_VERSION,
python_version=PYTHON_VERSION,
replace_existing_version=replace_existing_version)
# Configure the pipeline to run using the service account defined
# in the user-gcp-sa k8s secret
if USE_KFP_SA == 'True':
kfp.dsl.get_pipeline_conf().add_op_transformer(
use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
The custom components execute in a container image defined in `base_image/Dockerfile`.
###Code
!cat base_image/Dockerfile
###Output
_____no_output_____
###Markdown
The training step in the pipeline employs the AI Platform Training component to schedule an AI Platform Training job in a custom training container. The custom training image is defined in `trainer_image/Dockerfile`.
###Code
!cat trainer_image/Dockerfile
###Output
_____no_output_____
###Markdown
Building and deploying the pipeline. Before deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also referred to as a pipeline package. The runtime format is based on [Argo Workflow](https://github.com/argoproj/argo), which is expressed in YAML. Configure environment settings: update the constants below with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction. - `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will be similar to `qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default`. - `ENDPOINT` - set the `ENDPOINT` constant to the endpoint of your AI Platform Pipelines instance. The endpoint can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console: 1. Open the **SETTINGS** for your instance. 2. Use the value of the `host` variable in the **Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK** section of the **SETTINGS** window. Run `gsutil ls` without URLs to list all of the Cloud Storage buckets under your default project ID.
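Rather than copying the bucket name from the `gsutil ls` output by hand, you can also resolve it in the notebook; a small sketch, assuming exactly one bucket carries the default `-kubeflowpipelines-default` suffix:

```python
# Capture the bucket listing (IPython returns shell output as a list of lines).
buckets = !gsutil ls

# Pick the AI Platform Pipelines artifact-store bucket by its default suffix.
candidates = [b.rstrip('/') for b in buckets
              if b.rstrip('/').endswith('-kubeflowpipelines-default')]
if candidates:
    ARTIFACT_STORE_URI = candidates[0]
    print(ARTIFACT_STORE_URI)
```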
###Code
!gsutil ls
REGION = 'us-central1'
ENDPOINT = '337dd39580cbcbd2-dot-us-central2.pipelines.googleusercontent.com' # TO DO: REPLACE WITH YOUR ENDPOINT
# Use the value of the `host` variable in the **Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK** section of the **SETTINGS** window.
ARTIFACT_STORE_URI = 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME
# (HINT: Copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output.
# Your copied value should look like 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default')
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
###Output
_____no_output_____
###Markdown
Build the trainer image
###Code
IMAGE_NAME='trainer_image'
TAG='latest'
TRAINER_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
###Output
_____no_output_____
###Markdown
**Note**: Please ignore any **incompatibility ERROR** that may appear for packages such as `visions`, as it will not affect the lab's functionality.
###Code
!gcloud builds submit --timeout 15m --tag $TRAINER_IMAGE trainer_image
###Output
_____no_output_____
###Markdown
Build the base image for custom components
###Code
IMAGE_NAME='base_image'
TAG='latest'
BASE_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!gcloud builds submit --timeout 15m --tag $BASE_IMAGE base_image
###Output
_____no_output_____
###Markdown
Compile the pipeline. You can compile the DSL using an API from the **KFP SDK** or using the **KFP** compiler; here we use the **KFP** compiler. Set the pipeline's compile time settings. The pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the `user-gcp-sa` secret of the Kubernetes namespace hosting KFP. If you want to use the `user-gcp-sa` service account, change the value of `USE_KFP_SA` to `True`. Note that the default AI Platform Pipelines configuration does not define the `user-gcp-sa` secret.
###Code
USE_KFP_SA = False
COMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'
RUNTIME_VERSION = '1.15'
PYTHON_VERSION = '3.7'
%env USE_KFP_SA={USE_KFP_SA}
%env BASE_IMAGE={BASE_IMAGE}
%env TRAINER_IMAGE={TRAINER_IMAGE}
%env COMPONENT_URL_SEARCH_PREFIX={COMPONENT_URL_SEARCH_PREFIX}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
###Output
_____no_output_____
###Markdown
Use the CLI compiler to compile the pipeline
###Code
!dsl-compile --py pipeline/covertype_training_pipeline.py --output covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
The result is the `covertype_training_pipeline.yaml` file.
###Code
!head covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
Deploy the pipeline package
###Code
PIPELINE_NAME='covertype_continuous_training'
!kfp --endpoint $ENDPOINT pipeline upload \
-p $PIPELINE_NAME \
covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
Submitting pipeline runs. You can trigger pipeline runs using an API from the KFP SDK or using the KFP CLI. To submit the run using the KFP CLI, execute the following commands; notice how the pipeline's parameters are passed to the pipeline run. First, list the pipelines in AI Platform Pipelines.
###Code
!kfp --endpoint $ENDPOINT pipeline list
###Output
_____no_output_____
###Markdown
Submit a run. Find the ID of the `covertype_continuous_training` pipeline you uploaded in the previous step and update the value of `PIPELINE_ID`.
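If you prefer not to copy the ID manually, it can also be looked up by name through the KFP SDK; a sketch, assuming the client methods of the KFP SDK version used in this lab (0.2.x) and that `ENDPOINT` is already set:

```python
import kfp

client = kfp.Client(host=ENDPOINT)

# Page through the registered pipelines and match on the name we uploaded.
response = client.list_pipelines(page_size=100)
matches = [p for p in (response.pipelines or [])
           if p.name == 'covertype_continuous_training']
if matches:
    PIPELINE_ID = matches[0].id
    print(PIPELINE_ID)
```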
###Code
PIPELINE_ID='0918568d-758c-46cf-9752-e04a4403cd84' # TO DO: REPLACE WITH YOUR PIPELINE ID
# HINT: Copy the PIPELINE ID from the previous cell output.
EXPERIMENT_NAME = 'Covertype_Classifier_Training'
RUN_ID = 'Run_001'
SOURCE_TABLE = 'covertype_dataset.covertype'
DATASET_ID = 'splits'
EVALUATION_METRIC = 'accuracy'
EVALUATION_METRIC_THRESHOLD = '0.69'
MODEL_ID = 'covertype_classifier'
VERSION_ID = 'v01'
REPLACE_EXISTING_VERSION = 'True'
GCS_STAGING_PATH = '{}/staging'.format(ARTIFACT_STORE_URI)
###Output
_____no_output_____
###Markdown
Run the pipeline using the `kfp` command line by retrieving the variables from the environment to pass to the pipeline, where: - EXPERIMENT_NAME is set to the experiment used to run the pipeline. You can choose any name you want; if the experiment does not exist, it will be created by the command. - RUN_ID is the name of the run. You can use an arbitrary name. - PIPELINE_ID is the ID of your pipeline. Use the value retrieved by the `kfp pipeline list` command. - GCS_STAGING_PATH is the URI of the GCS location used by the pipeline to store intermediate files. By default, it is set to the `staging` folder in your artifact store. - REGION is a compute region for AI Platform Training and Prediction. You should already be familiar with these and other parameters passed to the command; if not, go back and review the pipeline code.
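Once the run has been submitted by the cell below, its status can also be polled from the KFP SDK instead of the UI. A minimal sketch, assuming you paste the run ID printed by the CLI into `run_id` (the placeholder below is hypothetical) and that response shapes may differ slightly between KFP SDK versions:

```python
import kfp

client = kfp.Client(host=ENDPOINT)

# Paste the run ID printed by `kfp run submit` (placeholder value shown here).
run_id = '<RUN_ID_FROM_CLI_OUTPUT>'

# Block until the run finishes or the timeout (in seconds) expires.
result = client.wait_for_run_completion(run_id, timeout=3600)
print(result.run.status)
```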
###Code
!kfp --endpoint $ENDPOINT run submit \
-e $EXPERIMENT_NAME \
-r $RUN_ID \
-p $PIPELINE_ID \
project_id=$PROJECT_ID \
gcs_root=$GCS_STAGING_PATH \
region=$REGION \
source_table_name=$SOURCE_TABLE \
dataset_id=$DATASET_ID \
evaluation_metric_name=$EVALUATION_METRIC \
evaluation_metric_threshold=$EVALUATION_METRIC_THRESHOLD \
model_id=$MODEL_ID \
version_id=$VERSION_ID \
replace_existing_version=$REPLACE_EXISTING_VERSION
###Output
_____no_output_____
###Markdown
Continuous training pipeline with KFP and Cloud AI Platform. **Learning Objectives:** 1. Learn how to use KFP pre-built components (BigQuery, CAIP training and prediction) 2. Learn how to use KFP lightweight Python components 3. Learn how to build a KFP pipeline with these components 4. Learn how to compile, upload, and run a KFP pipeline with the command line. In this lab, you will build, deploy, and run a KFP pipeline that orchestrates **BigQuery** and **Cloud AI Platform** services to train, tune, and deploy a **scikit-learn** model. Understanding the pipeline design: The workflow implemented by the pipeline is defined using a Python-based Domain Specific Language (DSL). The pipeline's DSL is in the `covertype_training_pipeline.py` file that we will generate below. The pipeline's DSL has been designed to avoid hardcoding any environment-specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
###Code
!grep 'BASE_IMAGE =' -A 5 pipeline/covertype_training_pipeline.py
###Output
BASE_IMAGE = os.getenv('BASE_IMAGE')
TRAINER_IMAGE = os.getenv('TRAINER_IMAGE')
RUNTIME_VERSION = os.getenv('RUNTIME_VERSION')
PYTHON_VERSION = os.getenv('PYTHON_VERSION')
COMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')
USE_KFP_SA = os.getenv('USE_KFP_SA')
###Markdown
The pipeline uses a mix of custom and pre-built components. - Pre-built components. The pipeline uses the following pre-built components that are included with the KFP distribution: - [BigQuery query component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/bigquery/query) - [AI Platform Training component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/train) - [AI Platform Deploy component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/deploy) - Custom components. The pipeline uses two custom helper components that encapsulate functionality not available in any of the pre-built components. The components are implemented using the KFP SDK's [Lightweight Python Components](https://www.kubeflow.org/docs/pipelines/sdk/lightweight-python-components/) mechanism. The code for the components is in the `helper_components.py` file: - **Retrieve Best Run**. This component retrieves a tuning metric and hyperparameter values for the best run of an AI Platform Training hyperparameter tuning job. - **Evaluate Model**. This component evaluates a trained *sklearn* model using a provided metric and a testing dataset. (A sketch of a multi-output lightweight component follows below.)
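Helper components such as **Retrieve Best Run** need to return more than one value; with lightweight components this is done by annotating the function with a `NamedTuple` return type. The following is a generic sketch of that pattern (the function body and values are made up for illustration, not the lab's actual helper code):

```python
from typing import NamedTuple

from kfp.components import func_to_container_op

def best_trial_demo(metric_value: float) -> NamedTuple(
        'Outputs', [('alpha', float), ('max_iter', int)]):
    """Toy multi-output component: returns two named outputs."""
    # Imports and logic must live inside the function body so the component
    # stays self-contained when it is serialized into the pipeline.
    from collections import namedtuple
    outputs = namedtuple('Outputs', ['alpha', 'max_iter'])
    return outputs(alpha=0.001, max_iter=500)

# Wrap it into a component factory; base_image could point at the lab's BASE_IMAGE,
# but is optional for this self-contained toy example.
best_trial_demo_op = func_to_container_op(best_trial_demo)
```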
###Code
%%writefile ./pipeline/covertype_training_pipeline.py
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""KFP pipeline orchestrating BigQuery and Cloud AI Platform services."""
import os
from helper_components import evaluate_model
from helper_components import retrieve_best_run
from jinja2 import Template
import kfp
from kfp.components import func_to_container_op
from kfp.dsl.types import Dict
from kfp.dsl.types import GCPProjectID
from kfp.dsl.types import GCPRegion
from kfp.dsl.types import GCSPath
from kfp.dsl.types import String
from kfp.gcp import use_gcp_secret
# Defaults and environment settings
BASE_IMAGE = os.getenv('BASE_IMAGE')
TRAINER_IMAGE = os.getenv('TRAINER_IMAGE')
RUNTIME_VERSION = os.getenv('RUNTIME_VERSION')
PYTHON_VERSION = os.getenv('PYTHON_VERSION')
COMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')
USE_KFP_SA = os.getenv('USE_KFP_SA')
TRAINING_FILE_PATH = 'datasets/training/data.csv'
VALIDATION_FILE_PATH = 'datasets/validation/data.csv'
TESTING_FILE_PATH = 'datasets/testing/data.csv'
# Parameter defaults
SPLITS_DATASET_ID = 'splits'
HYPERTUNE_SETTINGS = """
{
"hyperparameters": {
"goal": "MAXIMIZE",
"maxTrials": 6,
"maxParallelTrials": 3,
"hyperparameterMetricTag": "accuracy",
"enableTrialEarlyStopping": True,
"params": [
{
"parameterName": "max_iter",
"type": "DISCRETE",
"discreteValues": [500, 1000]
},
{
"parameterName": "alpha",
"type": "DOUBLE",
"minValue": 0.0001,
"maxValue": 0.001,
"scaleType": "UNIT_LINEAR_SCALE"
}
]
}
}
"""
# Helper functions
def generate_sampling_query(source_table_name, num_lots, lots):
"""Prepares the data sampling query."""
sampling_query_template = """
SELECT *
FROM
`{{ source_table }}` AS cover
WHERE
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), {{ num_lots }}) IN ({{ lots }})
"""
query = Template(sampling_query_template).render(
source_table=source_table_name, num_lots=num_lots, lots=str(lots)[1:-1])
return query
# Create component factories
component_store = kfp.components.ComponentStore(
local_search_paths=None, url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX])
bigquery_query_op = component_store.load_component('bigquery/query')
mlengine_train_op = component_store.load_component('ml_engine/train')
mlengine_deploy_op = component_store.load_component('ml_engine/deploy')
retrieve_best_run_op = func_to_container_op(
retrieve_best_run, base_image=BASE_IMAGE)
evaluate_model_op = func_to_container_op(evaluate_model, base_image=BASE_IMAGE)
@kfp.dsl.pipeline(
name='Covertype Classifier Training',
description='The pipeline training and deploying the Covertype classifierpipeline_yaml'
)
def covertype_train(project_id,
region,
source_table_name,
gcs_root,
dataset_id,
evaluation_metric_name,
evaluation_metric_threshold,
model_id,
version_id,
replace_existing_version,
hypertune_settings=HYPERTUNE_SETTINGS,
dataset_location='US'):
"""Orchestrates training and deployment of an sklearn model."""
# Create the training split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[1, 2, 3, 4])
training_file_path = '{}/{}'.format(gcs_root, TRAINING_FILE_PATH)
create_training_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=training_file_path,
dataset_location=dataset_location)
# Create the validation split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[8])
validation_file_path = '{}/{}'.format(gcs_root, VALIDATION_FILE_PATH)
create_validation_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=validation_file_path,
dataset_location=dataset_location)
# Create the testing split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[9])
testing_file_path = '{}/{}'.format(gcs_root, TESTING_FILE_PATH)
create_testing_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=testing_file_path,
dataset_location=dataset_location)
# Tune hyperparameters
tune_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--hptune', 'True'
]
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir/hypertune',
kfp.dsl.RUN_ID_PLACEHOLDER)
hypertune = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=tune_args,
training_input=hypertune_settings)
# Retrieve the best trial
get_best_trial = retrieve_best_run_op(
project_id, hypertune.outputs['job_id'])
# Train the model on a combined training and validation datasets
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir', kfp.dsl.RUN_ID_PLACEHOLDER)
train_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--alpha',
get_best_trial.outputs['alpha'], '--max_iter',
get_best_trial.outputs['max_iter'], '--hptune', 'False'
]
train_model = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=train_args)
# Evaluate the model on the testing split
eval_model = evaluate_model_op(
dataset_path=str(create_testing_split.outputs['output_gcs_path']),
model_path=str(train_model.outputs['job_dir']),
metric_name=evaluation_metric_name)
# Deploy the model if the primary metric is better than threshold
with kfp.dsl.Condition(eval_model.outputs['metric_value'] > evaluation_metric_threshold):
deploy_model = mlengine_deploy_op(
model_uri=train_model.outputs['job_dir'],
project_id=project_id,
model_id=model_id,
version_id=version_id,
runtime_version=RUNTIME_VERSION,
python_version=PYTHON_VERSION,
replace_existing_version=replace_existing_version)
# Configure the pipeline to run using the service account defined
# in the user-gcp-sa k8s secret
if USE_KFP_SA == 'True':
kfp.dsl.get_pipeline_conf().add_op_transformer(
use_gcp_secret('user-gcp-sa'))
###Output
Overwriting ./pipeline/covertype_training_pipeline.py
###Markdown
The custom components execute in a container image defined in `base_image/Dockerfile`.
###Code
!cat base_image/Dockerfile
###Output
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire scikit-learn==0.20.4 pandas==0.24.2 kfp==0.2.5
###Markdown
The training step in the pipeline employs the AI Platform Training component to schedule an AI Platform Training job in a custom training container. The custom training image is defined in `trainer_image/Dockerfile`.
###Code
!cat trainer_image/Dockerfile
###Output
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
###Markdown
Building and deploying the pipeline. Before deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also referred to as a pipeline package. The runtime format is based on [Argo Workflow](https://github.com/argoproj/argo), which is expressed in YAML. Configure environment settings: update the constants below with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction. - `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the `hostedkfp-default-` prefix. - `ENDPOINT` - set the `ENDPOINT` constant to the endpoint of your AI Platform Pipelines instance. The endpoint can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console: 1. Open the *SETTINGS* for your instance. 2. Use the value of the `host` variable in the *Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK* section of the *SETTINGS* window.
###Code
REGION = 'us-central1'
ENDPOINT = 'https://71a54605ca951f8a-dot-us-central2.pipelines.googleusercontent.com/'
ARTIFACT_STORE_URI = 'gs://notebooks-project-kubeflowpipelines-default'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
###Output
_____no_output_____
###Markdown
Build the trainer image
###Code
IMAGE_NAME='trainer_image'
TAG='latest'
TRAINER_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!gcloud builds submit --timeout 15m --tag $TRAINER_IMAGE trainer_image
###Output
Creating temporary tarball archive of 2 file(s) totalling 3.4 KiB before compression.
Uploading tarball of [trainer_image] to [gs://notebooks-project_cloudbuild/source/1591846773.8-b09a0d7fa65a4fdfbc4faf93d5ebc5d9.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/notebooks-project/builds/1cf04f3f-5647-4406-89f1-c5c620a0ffcd].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/1cf04f3f-5647-4406-89f1-c5c620a0ffcd?project=57195341408].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "1cf04f3f-5647-4406-89f1-c5c620a0ffcd"
FETCHSOURCE
Fetching storage object: gs://notebooks-project_cloudbuild/source/1591846773.8-b09a0d7fa65a4fdfbc4faf93d5ebc5d9.tgz#1591846774195452
Copying gs://notebooks-project_cloudbuild/source/1591846773.8-b09a0d7fa65a4fdfbc4faf93d5ebc5d9.tgz#1591846774195452...
/ [1 files][ 1.6 KiB/ 1.6 KiB]
Operation completed over 1 objects/1.6 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
***** NOTICE *****
Alternative official `docker` images, including multiple versions across
multiple platforms, are maintained by the Docker Team. For details, please
visit https://hub.docker.com/_/docker.
***** END OF NOTICE *****
Sending build context to Docker daemon 6.144kB
Step 1/5 : FROM gcr.io/deeplearning-platform-release/base-cpu
latest: Pulling from deeplearning-platform-release/base-cpu
23884877105a: Already exists
bc38caa0f5b9: Already exists
2910811b6c42: Already exists
36505266dcc6: Already exists
849ad9beac6e: Pulling fs layer
a64c327529cb: Pulling fs layer
dccdba305be8: Pulling fs layer
2fb40d36dab0: Pulling fs layer
8474173a1351: Pulling fs layer
ed21aabcd3c4: Pulling fs layer
d28a873c3e04: Pulling fs layer
3af36d90a2a7: Pulling fs layer
785cf3cd5146: Pulling fs layer
ffa164d92e08: Pulling fs layer
cd15290c07a4: Pulling fs layer
9377ddac7bc2: Pulling fs layer
2fb40d36dab0: Waiting
8474173a1351: Waiting
ed21aabcd3c4: Waiting
d28a873c3e04: Waiting
3af36d90a2a7: Waiting
785cf3cd5146: Waiting
ffa164d92e08: Waiting
cd15290c07a4: Waiting
9377ddac7bc2: Waiting
dccdba305be8: Verifying Checksum
dccdba305be8: Download complete
a64c327529cb: Verifying Checksum
a64c327529cb: Download complete
8474173a1351: Verifying Checksum
8474173a1351: Download complete
ed21aabcd3c4: Verifying Checksum
ed21aabcd3c4: Download complete
d28a873c3e04: Verifying Checksum
d28a873c3e04: Download complete
2fb40d36dab0: Verifying Checksum
2fb40d36dab0: Download complete
785cf3cd5146: Verifying Checksum
785cf3cd5146: Download complete
3af36d90a2a7: Verifying Checksum
3af36d90a2a7: Download complete
ffa164d92e08: Verifying Checksum
ffa164d92e08: Download complete
9377ddac7bc2: Verifying Checksum
9377ddac7bc2: Download complete
849ad9beac6e: Verifying Checksum
849ad9beac6e: Download complete
cd15290c07a4: Verifying Checksum
cd15290c07a4: Download complete
849ad9beac6e: Pull complete
a64c327529cb: Pull complete
dccdba305be8: Pull complete
2fb40d36dab0: Pull complete
8474173a1351: Pull complete
ed21aabcd3c4: Pull complete
d28a873c3e04: Pull complete
3af36d90a2a7: Pull complete
785cf3cd5146: Pull complete
ffa164d92e08: Pull complete
cd15290c07a4: Pull complete
9377ddac7bc2: Pull complete
Digest: sha256:c1c502ed4f1611f79104f39cc954c4905b734f55f132b88bdceef39993d3a67a
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest
---> a8d0f992657a
Step 2/5 : RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
---> Running in 78dc420b8704
Collecting fire
Downloading fire-0.3.1.tar.gz (81 kB)
Collecting cloudml-hypertune
Downloading cloudml-hypertune-0.1.0.dev6.tar.gz (3.2 kB)
Collecting scikit-learn==0.20.4
Downloading scikit_learn-0.20.4-cp37-cp37m-manylinux1_x86_64.whl (5.4 MB)
Collecting pandas==0.24.2
Downloading pandas-0.24.2-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
Requirement already satisfied, skipping upgrade: six in /opt/conda/lib/python3.7/site-packages (from fire) (1.15.0)
Collecting termcolor
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Requirement already satisfied, skipping upgrade: scipy>=0.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.4.1)
Requirement already satisfied, skipping upgrade: numpy>=1.8.2 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.18.1)
Requirement already satisfied, skipping upgrade: python-dateutil>=2.5.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2.8.1)
Requirement already satisfied, skipping upgrade: pytz>=2011k in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2020.1)
Building wheels for collected packages: fire, cloudml-hypertune, termcolor
Building wheel for fire (setup.py): started
Building wheel for fire (setup.py): finished with status 'done'
Created wheel for fire: filename=fire-0.3.1-py2.py3-none-any.whl size=111005 sha256=fbc9f81e5a939879b97d89aecbd37334812ff55d3d02c81914dfd809a964bff2
Stored in directory: /root/.cache/pip/wheels/95/38/e1/8b62337a8ecf5728bdc1017e828f253f7a9cf25db999861bec
Building wheel for cloudml-hypertune (setup.py): started
Building wheel for cloudml-hypertune (setup.py): finished with status 'done'
Created wheel for cloudml-hypertune: filename=cloudml_hypertune-0.1.0.dev6-py2.py3-none-any.whl size=3986 sha256=a5794cca601753ff905481c166c6003fe397315ad8ed02809e3bc31759d0c9cd
Stored in directory: /root/.cache/pip/wheels/a7/ff/87/e7bed0c2741fe219b3d6da67c2431d7f7fedb183032e00f81e
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4830 sha256=3d861b4bd5908cf2e7b0c9d703ad16731488c9b1f3807deadb5fcc0198b84cbc
Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
Successfully built fire cloudml-hypertune termcolor
ERROR: visions 0.4.4 has requirement pandas>=0.25.3, but you'll have pandas 0.24.2 which is incompatible.
ERROR: pandas-profiling 2.8.0 has requirement pandas!=1.0.0,!=1.0.1,!=1.0.2,>=0.25.3, but you'll have pandas 0.24.2 which is incompatible.
Installing collected packages: termcolor, fire, cloudml-hypertune, scikit-learn, pandas
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 0.23.1
Uninstalling scikit-learn-0.23.1:
Successfully uninstalled scikit-learn-0.23.1
Attempting uninstall: pandas
Found existing installation: pandas 1.0.4
Uninstalling pandas-1.0.4:
Successfully uninstalled pandas-1.0.4
Successfully installed cloudml-hypertune-0.1.0.dev6 fire-0.3.1 pandas-0.24.2 scikit-learn-0.20.4 termcolor-1.1.0
Removing intermediate container 78dc420b8704
---> 55fb554cae7b
Step 3/5 : WORKDIR /app
---> Running in 4bca7d654a95
Removing intermediate container 4bca7d654a95
---> d868d1df928c
Step 4/5 : COPY train.py .
---> 903d9ff5e0e5
Step 5/5 : ENTRYPOINT ["python", "train.py"]
---> Running in 1820eb1b40d6
Removing intermediate container 1820eb1b40d6
---> c09ac239a27a
Successfully built c09ac239a27a
Successfully tagged gcr.io/notebooks-project/trainer_image:latest
PUSH
Pushing gcr.io/notebooks-project/trainer_image:latest
***** NOTICE *****
Alternative official `docker` images, including multiple versions across
multiple platforms, are maintained by the Docker Team. For details, please
visit https://hub.docker.com/_/docker.
***** END OF NOTICE *****
The push refers to repository [gcr.io/notebooks-project/trainer_image]
43e64d99cd1a: Preparing
9affa8619465: Preparing
c4235e9e7b22: Preparing
fefb87f62fdb: Preparing
e0df03809e31: Preparing
943d61a75b08: Preparing
9f9fba54787e: Preparing
8909b952983a: Preparing
b4752dfe51c5: Preparing
a5708141c1ce: Preparing
f4d992136ce9: Preparing
29946d6f4552: Preparing
42ff99c2af8e: Preparing
4457871e1192: Preparing
be5ae40b3f47: Preparing
28ba7458d04b: Preparing
838a37a24627: Preparing
a6ebef4a95c3: Preparing
b7f7d2967507: Preparing
943d61a75b08: Waiting
9f9fba54787e: Waiting
8909b952983a: Waiting
b4752dfe51c5: Waiting
a5708141c1ce: Waiting
f4d992136ce9: Waiting
29946d6f4552: Waiting
42ff99c2af8e: Waiting
4457871e1192: Waiting
be5ae40b3f47: Waiting
28ba7458d04b: Waiting
838a37a24627: Waiting
a6ebef4a95c3: Waiting
b7f7d2967507: Waiting
e0df03809e31: Layer already exists
fefb87f62fdb: Layer already exists
943d61a75b08: Layer already exists
9f9fba54787e: Layer already exists
8909b952983a: Layer already exists
b4752dfe51c5: Layer already exists
a5708141c1ce: Layer already exists
f4d992136ce9: Layer already exists
43e64d99cd1a: Pushed
9affa8619465: Pushed
42ff99c2af8e: Layer already exists
29946d6f4552: Layer already exists
4457871e1192: Layer already exists
28ba7458d04b: Layer already exists
be5ae40b3f47: Layer already exists
838a37a24627: Layer already exists
a6ebef4a95c3: Layer already exists
b7f7d2967507: Layer already exists
c4235e9e7b22: Pushed
latest: digest: sha256:1cc9ce24a72215a0cfc03a6cf13aa36b0f0531761843acb3f29e2c7444e541a2 size: 4293
DONE
--------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
1cf04f3f-5647-4406-89f1-c5c620a0ffcd 2020-06-11T03:39:34+00:00 3M32S gs://notebooks-project_cloudbuild/source/1591846773.8-b09a0d7fa65a4fdfbc4faf93d5ebc5d9.tgz gcr.io/notebooks-project/trainer_image (+1 more) SUCCESS
###Markdown
Build the base image for custom components
###Code
IMAGE_NAME='base_image'
TAG='latest'
BASE_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!gcloud builds submit --timeout 15m --tag $BASE_IMAGE base_image
###Output
Creating temporary tarball archive of 1 file(s) totalling 122 bytes before compression.
Uploading tarball of [base_image] to [gs://notebooks-project_cloudbuild/source/1591847003.79-5e83bfbd4f9646419f6624ebcd1b6e51.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/notebooks-project/builds/655fee5f-6db6-444a-8e94-68899cc7baf9].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/655fee5f-6db6-444a-8e94-68899cc7baf9?project=57195341408].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "655fee5f-6db6-444a-8e94-68899cc7baf9"
FETCHSOURCE
Fetching storage object: gs://notebooks-project_cloudbuild/source/1591847003.79-5e83bfbd4f9646419f6624ebcd1b6e51.tgz#1591847004226672
Copying gs://notebooks-project_cloudbuild/source/1591847003.79-5e83bfbd4f9646419f6624ebcd1b6e51.tgz#1591847004226672...
/ [1 files][ 228.0 B/ 228.0 B]
Operation completed over 1 objects/228.0 B.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
***** NOTICE *****
Alternative official `docker` images, including multiple versions across
multiple platforms, are maintained by the Docker Team. For details, please
visit https://hub.docker.com/_/docker.
***** END OF NOTICE *****
Sending build context to Docker daemon 2.048kB
Step 1/2 : FROM gcr.io/deeplearning-platform-release/base-cpu
latest: Pulling from deeplearning-platform-release/base-cpu
23884877105a: Already exists
bc38caa0f5b9: Already exists
2910811b6c42: Already exists
36505266dcc6: Already exists
849ad9beac6e: Pulling fs layer
a64c327529cb: Pulling fs layer
dccdba305be8: Pulling fs layer
2fb40d36dab0: Pulling fs layer
8474173a1351: Pulling fs layer
ed21aabcd3c4: Pulling fs layer
d28a873c3e04: Pulling fs layer
3af36d90a2a7: Pulling fs layer
785cf3cd5146: Pulling fs layer
ffa164d92e08: Pulling fs layer
cd15290c07a4: Pulling fs layer
9377ddac7bc2: Pulling fs layer
2fb40d36dab0: Waiting
8474173a1351: Waiting
ed21aabcd3c4: Waiting
d28a873c3e04: Waiting
3af36d90a2a7: Waiting
785cf3cd5146: Waiting
ffa164d92e08: Waiting
cd15290c07a4: Waiting
9377ddac7bc2: Waiting
dccdba305be8: Verifying Checksum
dccdba305be8: Download complete
a64c327529cb: Verifying Checksum
a64c327529cb: Download complete
8474173a1351: Verifying Checksum
8474173a1351: Download complete
2fb40d36dab0: Verifying Checksum
2fb40d36dab0: Download complete
ed21aabcd3c4: Verifying Checksum
ed21aabcd3c4: Download complete
d28a873c3e04: Verifying Checksum
d28a873c3e04: Download complete
3af36d90a2a7: Verifying Checksum
3af36d90a2a7: Download complete
ffa164d92e08: Verifying Checksum
ffa164d92e08: Download complete
785cf3cd5146: Verifying Checksum
785cf3cd5146: Download complete
9377ddac7bc2: Verifying Checksum
9377ddac7bc2: Download complete
849ad9beac6e: Verifying Checksum
849ad9beac6e: Download complete
cd15290c07a4: Verifying Checksum
cd15290c07a4: Download complete
849ad9beac6e: Pull complete
a64c327529cb: Pull complete
dccdba305be8: Pull complete
2fb40d36dab0: Pull complete
8474173a1351: Pull complete
ed21aabcd3c4: Pull complete
d28a873c3e04: Pull complete
3af36d90a2a7: Pull complete
785cf3cd5146: Pull complete
ffa164d92e08: Pull complete
cd15290c07a4: Pull complete
9377ddac7bc2: Pull complete
Digest: sha256:c1c502ed4f1611f79104f39cc954c4905b734f55f132b88bdceef39993d3a67a
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest
---> a8d0f992657a
Step 2/2 : RUN pip install -U fire scikit-learn==0.20.4 pandas==0.24.2 kfp==0.2.5
---> Running in c50ca92fc732
Collecting fire
Downloading fire-0.3.1.tar.gz (81 kB)
Collecting scikit-learn==0.20.4
Downloading scikit_learn-0.20.4-cp37-cp37m-manylinux1_x86_64.whl (5.4 MB)
Collecting pandas==0.24.2
Downloading pandas-0.24.2-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
Collecting kfp==0.2.5
Downloading kfp-0.2.5.tar.gz (116 kB)
Requirement already satisfied, skipping upgrade: six in /opt/conda/lib/python3.7/site-packages (from fire) (1.15.0)
Collecting termcolor
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Requirement already satisfied, skipping upgrade: numpy>=1.8.2 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.18.1)
Requirement already satisfied, skipping upgrade: scipy>=0.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.4.1)
Requirement already satisfied, skipping upgrade: pytz>=2011k in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2020.1)
Requirement already satisfied, skipping upgrade: python-dateutil>=2.5.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2.8.1)
Collecting urllib3<1.25,>=1.15
Downloading urllib3-1.24.3-py2.py3-none-any.whl (118 kB)
Requirement already satisfied, skipping upgrade: certifi in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (2020.4.5.1)
Requirement already satisfied, skipping upgrade: PyYAML in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (5.3.1)
Requirement already satisfied, skipping upgrade: google-cloud-storage>=1.13.0 in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (1.28.1)
Collecting kubernetes<=10.0.0,>=8.0.0
Downloading kubernetes-10.0.0-py2.py3-none-any.whl (1.5 MB)
Requirement already satisfied, skipping upgrade: PyJWT>=1.6.4 in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (1.7.1)
Requirement already satisfied, skipping upgrade: cryptography>=2.4.2 in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (2.9.2)
Requirement already satisfied, skipping upgrade: google-auth>=1.6.1 in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (1.14.3)
Collecting requests_toolbelt>=0.8.0
Downloading requests_toolbelt-0.9.1-py2.py3-none-any.whl (54 kB)
Collecting cloudpickle==1.1.1
Downloading cloudpickle-1.1.1-py2.py3-none-any.whl (17 kB)
Collecting kfp-server-api<=0.1.40,>=0.1.18
Downloading kfp-server-api-0.1.40.tar.gz (38 kB)
Collecting argo-models==2.2.1a
Downloading argo-models-2.2.1a0.tar.gz (28 kB)
Requirement already satisfied, skipping upgrade: jsonschema>=3.0.1 in /opt/conda/lib/python3.7/site-packages (from kfp==0.2.5) (3.2.0)
Collecting tabulate==0.8.3
Downloading tabulate-0.8.3.tar.gz (46 kB)
Collecting click==7.0
Downloading Click-7.0-py2.py3-none-any.whl (81 kB)
Collecting Deprecated
Downloading Deprecated-1.2.10-py2.py3-none-any.whl (8.7 kB)
Collecting strip-hints
Downloading strip-hints-0.1.9.tar.gz (30 kB)
Requirement already satisfied, skipping upgrade: google-resumable-media<0.6dev,>=0.5.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage>=1.13.0->kfp==0.2.5) (0.5.0)
Requirement already satisfied, skipping upgrade: google-cloud-core<2.0dev,>=1.2.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage>=1.13.0->kfp==0.2.5) (1.3.0)
Requirement already satisfied, skipping upgrade: requests-oauthlib in /opt/conda/lib/python3.7/site-packages (from kubernetes<=10.0.0,>=8.0.0->kfp==0.2.5) (1.2.0)
Requirement already satisfied, skipping upgrade: websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0 in /opt/conda/lib/python3.7/site-packages (from kubernetes<=10.0.0,>=8.0.0->kfp==0.2.5) (0.57.0)
Requirement already satisfied, skipping upgrade: setuptools>=21.0.0 in /opt/conda/lib/python3.7/site-packages (from kubernetes<=10.0.0,>=8.0.0->kfp==0.2.5) (47.1.1.post20200529)
Requirement already satisfied, skipping upgrade: requests in /opt/conda/lib/python3.7/site-packages (from kubernetes<=10.0.0,>=8.0.0->kfp==0.2.5) (2.23.0)
Requirement already satisfied, skipping upgrade: cffi!=1.11.3,>=1.8 in /opt/conda/lib/python3.7/site-packages (from cryptography>=2.4.2->kfp==0.2.5) (1.14.0)
Requirement already satisfied, skipping upgrade: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==0.2.5) (4.1.0)
Requirement already satisfied, skipping upgrade: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==0.2.5) (0.2.7)
Requirement already satisfied, skipping upgrade: rsa<4.1,>=3.1.4 in /opt/conda/lib/python3.7/site-packages (from google-auth>=1.6.1->kfp==0.2.5) (4.0)
Requirement already satisfied, skipping upgrade: importlib-metadata; python_version < "3.8" in /opt/conda/lib/python3.7/site-packages (from jsonschema>=3.0.1->kfp==0.2.5) (1.6.0)
Requirement already satisfied, skipping upgrade: attrs>=17.4.0 in /opt/conda/lib/python3.7/site-packages (from jsonschema>=3.0.1->kfp==0.2.5) (19.3.0)
Requirement already satisfied, skipping upgrade: pyrsistent>=0.14.0 in /opt/conda/lib/python3.7/site-packages (from jsonschema>=3.0.1->kfp==0.2.5) (0.16.0)
Requirement already satisfied, skipping upgrade: wrapt<2,>=1.10 in /opt/conda/lib/python3.7/site-packages (from Deprecated->kfp==0.2.5) (1.11.2)
Requirement already satisfied, skipping upgrade: wheel in /opt/conda/lib/python3.7/site-packages (from strip-hints->kfp==0.2.5) (0.34.2)
Requirement already satisfied, skipping upgrade: google-api-core<2.0.0dev,>=1.16.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==0.2.5) (1.17.0)
Requirement already satisfied, skipping upgrade: oauthlib>=3.0.0 in /opt/conda/lib/python3.7/site-packages (from requests-oauthlib->kubernetes<=10.0.0,>=8.0.0->kfp==0.2.5) (3.0.1)
Requirement already satisfied, skipping upgrade: chardet<4,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests->kubernetes<=10.0.0,>=8.0.0->kfp==0.2.5) (3.0.4)
Requirement already satisfied, skipping upgrade: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests->kubernetes<=10.0.0,>=8.0.0->kfp==0.2.5) (2.9)
Requirement already satisfied, skipping upgrade: pycparser in /opt/conda/lib/python3.7/site-packages (from cffi!=1.11.3,>=1.8->cryptography>=2.4.2->kfp==0.2.5) (2.20)
Requirement already satisfied, skipping upgrade: pyasn1<0.5.0,>=0.4.6 in /opt/conda/lib/python3.7/site-packages (from pyasn1-modules>=0.2.1->google-auth>=1.6.1->kfp==0.2.5) (0.4.8)
Requirement already satisfied, skipping upgrade: zipp>=0.5 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata; python_version < "3.8"->jsonschema>=3.0.1->kfp==0.2.5) (3.1.0)
Requirement already satisfied, skipping upgrade: protobuf>=3.4.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==0.2.5) (3.11.4)
Requirement already satisfied, skipping upgrade: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2.0.0dev,>=1.16.0->google-cloud-core<2.0dev,>=1.2.0->google-cloud-storage>=1.13.0->kfp==0.2.5) (1.51.0)
Building wheels for collected packages: fire, kfp, termcolor, kfp-server-api, argo-models, tabulate, strip-hints
Building wheel for fire (setup.py): started
Building wheel for fire (setup.py): finished with status 'done'
Created wheel for fire: filename=fire-0.3.1-py2.py3-none-any.whl size=111005 sha256=d7c79c163eb0d55510b05de517f4cfbfa21d03e09cdaa9d3f30ce382c773760e
Stored in directory: /root/.cache/pip/wheels/95/38/e1/8b62337a8ecf5728bdc1017e828f253f7a9cf25db999861bec
Building wheel for kfp (setup.py): started
Building wheel for kfp (setup.py): finished with status 'done'
Created wheel for kfp: filename=kfp-0.2.5-py3-none-any.whl size=159978 sha256=3bc3f3ebc18508acc0631db09c22183fe29759f370af3997044c2af97e6bfe23
Stored in directory: /root/.cache/pip/wheels/98/74/7e/0a882d654bdf82d039460ab5c6adf8724ae56e277de7c0eaea
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4830 sha256=77ee7325511093f39dfc231ac1f4d1d40a6d7f7a7f31ce037a04242a795e74df
Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
Building wheel for kfp-server-api (setup.py): started
Building wheel for kfp-server-api (setup.py): finished with status 'done'
Created wheel for kfp-server-api: filename=kfp_server_api-0.1.40-py3-none-any.whl size=102468 sha256=e016c321806d17772b2f4cceb28abea0366acc97924c2d297a9f809b3c01c95d
Stored in directory: /root/.cache/pip/wheels/01/e3/43/3972dea76ee89e35f090b313817089043f2609236cf560069d
Building wheel for argo-models (setup.py): started
Building wheel for argo-models (setup.py): finished with status 'done'
Created wheel for argo-models: filename=argo_models-2.2.1a0-py3-none-any.whl size=57307 sha256=8766702cbd1937fe4bbf4e8ea17a14b06b475be8918a406794b5b97e1bb3b8ab
Stored in directory: /root/.cache/pip/wheels/a9/4b/fd/cdd013bd2ad1a7162ecfaf954e9f1bb605174a20e3c02016b7
Building wheel for tabulate (setup.py): started
Building wheel for tabulate (setup.py): finished with status 'done'
Created wheel for tabulate: filename=tabulate-0.8.3-py3-none-any.whl size=23378 sha256=5c78b9c8b74f972166970d044df23c4c19b6069c14837b522a295e03bbb7c7b0
Stored in directory: /root/.cache/pip/wheels/b8/a2/a6/812a8a9735b090913e109133c7c20aaca4cf07e8e18837714f
Building wheel for strip-hints (setup.py): started
Building wheel for strip-hints (setup.py): finished with status 'done'
Created wheel for strip-hints: filename=strip_hints-0.1.9-py2.py3-none-any.whl size=20993 sha256=e336715437f0901e51c7da8c4d3501a67cc19822a786bf699a39730e46fa6f8a
Stored in directory: /root/.cache/pip/wheels/2d/b8/4e/a3ec111d2db63cec88121bd7c0ab1a123bce3b55dd19dda5c1
Successfully built fire kfp termcolor kfp-server-api argo-models tabulate strip-hints
ERROR: visions 0.4.4 has requirement pandas>=0.25.3, but you'll have pandas 0.24.2 which is incompatible.
ERROR: pandas-profiling 2.8.0 has requirement pandas!=1.0.0,!=1.0.1,!=1.0.2,>=0.25.3, but you'll have pandas 0.24.2 which is incompatible.
ERROR: jupyterlab-git 0.10.1 has requirement nbdime<2.0.0,>=1.1.0, but you'll have nbdime 2.0.0 which is incompatible.
ERROR: distributed 2.17.0 has requirement cloudpickle>=1.3.0, but you'll have cloudpickle 1.1.1 which is incompatible.
Installing collected packages: termcolor, fire, scikit-learn, pandas, urllib3, kubernetes, requests-toolbelt, cloudpickle, kfp-server-api, argo-models, tabulate, click, Deprecated, strip-hints, kfp
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 0.23.1
Uninstalling scikit-learn-0.23.1:
Successfully uninstalled scikit-learn-0.23.1
Attempting uninstall: pandas
Found existing installation: pandas 1.0.4
Uninstalling pandas-1.0.4:
Successfully uninstalled pandas-1.0.4
Attempting uninstall: urllib3
Found existing installation: urllib3 1.25.9
Uninstalling urllib3-1.25.9:
Successfully uninstalled urllib3-1.25.9
Attempting uninstall: kubernetes
Found existing installation: kubernetes 11.0.0
Uninstalling kubernetes-11.0.0:
Successfully uninstalled kubernetes-11.0.0
Attempting uninstall: cloudpickle
Found existing installation: cloudpickle 1.4.1
Uninstalling cloudpickle-1.4.1:
Successfully uninstalled cloudpickle-1.4.1
Attempting uninstall: click
Found existing installation: click 7.1.2
Uninstalling click-7.1.2:
Successfully uninstalled click-7.1.2
Successfully installed Deprecated-1.2.10 argo-models-2.2.1a0 click-7.0 cloudpickle-1.1.1 fire-0.3.1 kfp-0.2.5 kfp-server-api-0.1.40 kubernetes-10.0.0 pandas-0.24.2 requests-toolbelt-0.9.1 scikit-learn-0.20.4 strip-hints-0.1.9 tabulate-0.8.3 termcolor-1.1.0 urllib3-1.24.3
Removing intermediate container c50ca92fc732
---> 9b741d6ffdae
Successfully built 9b741d6ffdae
Successfully tagged gcr.io/notebooks-project/base_image:latest
PUSH
Pushing gcr.io/notebooks-project/base_image:latest
***** NOTICE *****
Alternative official `docker` images, including multiple versions across
multiple platforms, are maintained by the Docker Team. For details, please
visit https://hub.docker.com/_/docker.
***** END OF NOTICE *****
The push refers to repository [gcr.io/notebooks-project/base_image]
ebe49f7a1a65: Preparing
fefb87f62fdb: Preparing
e0df03809e31: Preparing
943d61a75b08: Preparing
9f9fba54787e: Preparing
8909b952983a: Preparing
b4752dfe51c5: Preparing
a5708141c1ce: Preparing
f4d992136ce9: Preparing
29946d6f4552: Preparing
42ff99c2af8e: Preparing
4457871e1192: Preparing
be5ae40b3f47: Preparing
28ba7458d04b: Preparing
838a37a24627: Preparing
a6ebef4a95c3: Preparing
b7f7d2967507: Preparing
8909b952983a: Waiting
b4752dfe51c5: Waiting
a5708141c1ce: Waiting
f4d992136ce9: Waiting
29946d6f4552: Waiting
42ff99c2af8e: Waiting
4457871e1192: Waiting
be5ae40b3f47: Waiting
28ba7458d04b: Waiting
838a37a24627: Waiting
a6ebef4a95c3: Waiting
b7f7d2967507: Waiting
e0df03809e31: Layer already exists
943d61a75b08: Layer already exists
9f9fba54787e: Layer already exists
fefb87f62fdb: Layer already exists
f4d992136ce9: Layer already exists
b4752dfe51c5: Layer already exists
a5708141c1ce: Layer already exists
8909b952983a: Layer already exists
4457871e1192: Layer already exists
be5ae40b3f47: Layer already exists
29946d6f4552: Layer already exists
42ff99c2af8e: Layer already exists
b7f7d2967507: Layer already exists
838a37a24627: Layer already exists
a6ebef4a95c3: Layer already exists
28ba7458d04b: Layer already exists
ebe49f7a1a65: Pushed
latest: digest: sha256:97858b86be8ed63eeb0cff2e9faec4146762a340aeadd2fab21c848cfebe67df size: 3878
DONE
--------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
655fee5f-6db6-444a-8e94-68899cc7baf9 2020-06-11T03:43:24+00:00 3M49S gs://notebooks-project_cloudbuild/source/1591847003.79-5e83bfbd4f9646419f6624ebcd1b6e51.tgz gcr.io/notebooks-project/base_image (+1 more) SUCCESS
###Markdown
Compile the pipelineYou can compile the DSL using an API from the **KFP SDK** or using the **KFP** compiler. In this lab, you compile the pipeline DSL using the **KFP** compiler. Set the pipeline's compile time settingsThe pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the `user-gcp-sa` secret of the Kubernetes namespace hosting Kubeflow Pipelines. If you want to use the `user-gcp-sa` service account, change the value of `USE_KFP_SA` to `True`. Note that the default AI Platform Pipelines configuration does not define the `user-gcp-sa` secret.
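For reference, a minimal sketch of the SDK route (not used in this lab) might look like the following. It assumes the `%env` settings below have already been applied, since the pipeline module reads them at import time, and that the `pipeline` folder is importable from the notebook's working directory:

```python
# Hypothetical sketch: compile with the KFP SDK instead of the dsl-compile CLI.
# Assumes BASE_IMAGE, TRAINER_IMAGE, etc. are already set in the environment.
import sys
sys.path.append('pipeline')  # so covertype_training_pipeline and helper_components import cleanly

from kfp import compiler
from covertype_training_pipeline import covertype_train

compiler.Compiler().compile(covertype_train, 'covertype_training_pipeline.yaml')
```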
###Code
USE_KFP_SA = False
COMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'
RUNTIME_VERSION = '1.15'
PYTHON_VERSION = '3.7'
%env USE_KFP_SA={USE_KFP_SA}
%env BASE_IMAGE={BASE_IMAGE}
%env TRAINER_IMAGE={TRAINER_IMAGE}
%env COMPONENT_URL_SEARCH_PREFIX={COMPONENT_URL_SEARCH_PREFIX}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
###Output
env: USE_KFP_SA=False
env: BASE_IMAGE=gcr.io/notebooks-project/base_image:latest
env: TRAINER_IMAGE=gcr.io/notebooks-project/trainer_image:latest
env: COMPONENT_URL_SEARCH_PREFIX=https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/
env: RUNTIME_VERSION=1.15
env: PYTHON_VERSION=3.7
###Markdown
Use the CLI compiler to compile the pipeline
###Code
!dsl-compile --py pipeline/covertype_training_pipeline.py --output covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
The result is the `covertype_training_pipeline.yaml` file.
###Code
!head covertype_training_pipeline.yaml
###Output
"apiVersion": |-
argoproj.io/v1alpha1
"kind": |-
Workflow
"metadata":
"annotations":
"pipelines.kubeflow.org/pipeline_spec": |-
{"description": "The pipeline training and deploying the Covertype classifierpipeline_yaml", "inputs": [{"name": "project_id"}, {"name": "region"}, {"name": "source_table_name"}, {"name": "gcs_root"}, {"name": "dataset_id"}, {"name": "evaluation_metric_name"}, {"name": "evaluation_metric_threshold"}, {"name": "model_id"}, {"name": "version_id"}, {"name": "replace_existing_version"}, {"default": "\n{\n \"hyperparameters\": {\n \"goal\": \"MAXIMIZE\",\n \"maxTrials\": 6,\n \"maxParallelTrials\": 3,\n \"hyperparameterMetricTag\": \"accuracy\",\n \"enableTrialEarlyStopping\": True,\n \"params\": [\n {\n \"parameterName\": \"max_iter\",\n \"type\": \"DISCRETE\",\n \"discreteValues\": [500, 1000]\n },\n {\n \"parameterName\": \"alpha\",\n \"type\": \"DOUBLE\",\n \"minValue\": 0.0001,\n \"maxValue\": 0.001,\n \"scaleType\": \"UNIT_LINEAR_SCALE\"\n }\n ]\n }\n}\n", "name": "hypertune_settings", "optional": true}, {"default": "US", "name": "dataset_location", "optional": true}], "name": "Covertype Classifier Training"}
"generateName": |-
covertype-classifier-training-
###Markdown
Deploy the pipeline package
###Code
PIPELINE_NAME='covertype_continuous_training'
!kfp --endpoint $ENDPOINT pipeline upload \
-p $PIPELINE_NAME \
covertype_training_pipeline.yaml
###Output
Pipeline ad9617fc-d3fa-48da-8efa-d73a64bb033b has been submitted
Pipeline Details
------------------
ID ad9617fc-d3fa-48da-8efa-d73a64bb033b
Name covertype_continuous_training
Description
Uploaded at 2020-06-11T03:48:32+00:00
+-----------------------------+--------------------------------------------------+
| Parameter Name | Default Value |
+=============================+==================================================+
| project_id | |
+-----------------------------+--------------------------------------------------+
| region | |
+-----------------------------+--------------------------------------------------+
| source_table_name | |
+-----------------------------+--------------------------------------------------+
| gcs_root | |
+-----------------------------+--------------------------------------------------+
| dataset_id | |
+-----------------------------+--------------------------------------------------+
| evaluation_metric_name | |
+-----------------------------+--------------------------------------------------+
| evaluation_metric_threshold | |
+-----------------------------+--------------------------------------------------+
| model_id | |
+-----------------------------+--------------------------------------------------+
| version_id | |
+-----------------------------+--------------------------------------------------+
| replace_existing_version | |
+-----------------------------+--------------------------------------------------+
| hypertune_settings | { |
| | "hyperparameters": { |
| | "goal": "MAXIMIZE", |
| | "maxTrials": 6, |
| | "maxParallelTrials": 3, |
| | "hyperparameterMetricTag": "accuracy", |
| | "enableTrialEarlyStopping": True, |
| | "params": [ |
| | { |
| | "parameterName": "max_iter", |
| | "type": "DISCRETE", |
| | "discreteValues": [500, 1000] |
| | }, |
| | { |
| | "parameterName": "alpha", |
| | "type": "DOUBLE", |
| | "minValue": 0.0001, |
| | "maxValue": 0.001, |
| | "scaleType": "UNIT_LINEAR_SCALE" |
| | } |
| | ] |
| | } |
| | } |
+-----------------------------+--------------------------------------------------+
| dataset_location | US |
+-----------------------------+--------------------------------------------------+
###Markdown
Submitting pipeline runsYou can trigger pipeline runs using an API from the KFP SDK or using the KFP CLI. To submit the run using the KFP CLI, execute the following commands. Notice how the pipeline's parameters are passed to the pipeline run. List the pipelines in AI Platform Pipelines
###Code
!kfp --endpoint $ENDPOINT pipeline list
###Output
+--------------------------------------+-------------------------------------------------+---------------------------+
| Pipeline ID | Name | Uploaded at |
+======================================+=================================================+===========================+
| ad9617fc-d3fa-48da-8efa-d73a64bb033b | covertype_continuous_training | 2020-06-11T03:48:32+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| 50523945-306c-4383-9437-1e317b9ea1c2 | my_pipeline | 2020-06-04T14:33:58+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| 8b2535b6-b492-43c2-8dff-069ab1a49161 | [Demo] TFX - Iris classification pipeline | 2020-06-04T14:03:00+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| cbb9ee4d-b0d0-4045-9bc3-daa11b0611ce | [Tutorial] DSL - Control structures | 2020-06-04T14:02:59+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| 8bb371dc-46ef-46d1-affb-c5931974dde3 | [Tutorial] Data passing in python components | 2020-06-04T14:02:58+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| 261c6275-609c-4f01-abfa-6bd5e8e97e2b | [Demo] TFX - Taxi tip prediction model trainer | 2020-06-04T14:02:57+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
| 08868046-77ab-4399-8c42-9349c81f701b | [Demo] XGBoost - Training with confusion matrix | 2020-06-04T14:02:56+00:00 |
+--------------------------------------+-------------------------------------------------+---------------------------+
###Markdown
Submit a runFind the ID of the `covertype_continuous_training` pipeline you uploaded in the previous step and update the value of `PIPELINE_ID`.
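For reference, the same run could be submitted programmatically. A hedged sketch using the KFP SDK client follows; it assumes the constants defined in the cell below and a KFP SDK version whose `run_pipeline` accepts a `pipeline_id` argument (as the 0.2.x SDK does):

```python
# Hypothetical sketch: submit the run through the KFP SDK client instead of the kfp CLI.
import kfp

client = kfp.Client(host=ENDPOINT)
experiment = client.create_experiment(EXPERIMENT_NAME)  # returns the experiment, creating it if needed
run = client.run_pipeline(
    experiment_id=experiment.id,
    job_name=RUN_ID,
    pipeline_id=PIPELINE_ID,
    params={
        'project_id': PROJECT_ID,
        'gcs_root': GCS_STAGING_PATH,
        'region': REGION,
        'source_table_name': SOURCE_TABLE,
        'dataset_id': DATASET_ID,
        'evaluation_metric_name': EVALUATION_METRIC,
        'evaluation_metric_threshold': EVALUATION_METRIC_THRESHOLD,
        'model_id': MODEL_ID,
        'version_id': VERSION_ID,
        'replace_existing_version': REPLACE_EXISTING_VERSION,
    },
)
print(run.id)
```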
###Code
PIPELINE_ID='ad9617fc-d3fa-48da-8efa-d73a64bb033b'
EXPERIMENT_NAME = 'Covertype_Classifier_Training'
RUN_ID = 'Run_001'
SOURCE_TABLE = 'covertype_dataset.covertype'
DATASET_ID = 'splits'
EVALUATION_METRIC = 'accuracy'
EVALUATION_METRIC_THRESHOLD = '0.69'
MODEL_ID = 'covertype_classifier'
VERSION_ID = 'v01'
REPLACE_EXISTING_VERSION = 'True'
GCS_STAGING_PATH = '{}/staging'.format(ARTIFACT_STORE_URI)
!kfp --endpoint $ENDPOINT run submit \
-e $EXPERIMENT_NAME \
-r $RUN_ID \
-p $PIPELINE_ID \
project_id=$PROJECT_ID \
gcs_root=$GCS_STAGING_PATH \
region=$REGION \
source_table_name=$SOURCE_TABLE \
dataset_id=$DATASET_ID \
evaluation_metric_name=$EVALUATION_METRIC \
evaluation_metric_threshold=$EVALUATION_METRIC_THRESHOLD \
model_id=$MODEL_ID \
version_id=$VERSION_ID \
replace_existing_version=$REPLACE_EXISTING_VERSION
###Output
Creating experiment Covertype_Classifier_Training.
Run 67e6ede2-40ad-436b-b5fa-372dfdda406d is submitted
+--------------------------------------+---------+----------+---------------------------+
| run id | name | status | created at |
+======================================+=========+==========+===========================+
| 67e6ede2-40ad-436b-b5fa-372dfdda406d | Run_001 | | 2020-06-11T03:50:07+00:00 |
+--------------------------------------+---------+----------+---------------------------+
###Markdown
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0). Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Continuous training pipeline with KFP and Cloud AI Platform In this lab, you will build, deploy, and run a KFP pipeline that orchestrates **BigQuery** and **Cloud AI Platform** services to train a **scikit-learn** model. Understanding the pipeline designThe pipeline source code can be found in the `pipeline` folder.
###Code
!ls -la pipeline
###Output
_____no_output_____
###Markdown
The workflow implemented by the pipeline is defined using a Python-based Domain Specific Language (DSL). The pipeline's DSL is in the `covertype_training_pipeline.py` file. The pipeline's DSL has been designed to avoid hardcoding any environment-specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
###Code
!grep 'BASE_IMAGE =' -A 5 pipeline/covertype_training_pipeline.py
###Output
_____no_output_____
###Markdown
The pipeline uses a mix of custom and pre-built components.- Pre-built components. The pipeline uses the following pre-built components that are included with the KFP distribution: - [BigQuery query component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/bigquery/query) - [AI Platform Training component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/train) - [AI Platform Deploy component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/deploy)- Custom components. The pipeline uses two custom helper components that encapsulate functionality not available in any of the pre-built components. The components are implemented using the KFP SDK's [Lightweight Python Components](https://www.kubeflow.org/docs/pipelines/sdk/lightweight-python-components/) mechanism. The code for the components is in the `helper_components.py` file: - **Retrieve Best Run**. This component retrieves a tuning metric and hyperparameter values for the best run of an AI Platform Training hyperparameter tuning job. - **Evaluate Model**. This component evaluates a trained *sklearn* model using a provided metric and a testing dataset. The custom components execute in a container image defined in `base_image/Dockerfile`.
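For orientation, a minimal, hypothetical sketch of the Lightweight Python Components mechanism is shown below; the toy `add_one` function and the image URI are placeholders, not part of this lab, and the real helpers live in `helper_components.py`:

```python
# Hypothetical sketch of a lightweight Python component (not the lab's actual helper code).
from kfp.components import func_to_container_op

def add_one(x: int) -> int:
    # The function body must be self-contained: any imports it needs go inside it.
    return x + 1

# Wrap the function into a pipeline op; base_image here is a placeholder image reference.
add_one_op = func_to_container_op(add_one, base_image='gcr.io/your-project/base_image:latest')
```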
###Code
!cat base_image/Dockerfile
###Output
_____no_output_____
###Markdown
The training step in the pipeline employs the AI Platform Training component to schedule an AI Platform Training job in a custom training container. The custom training image is defined in `trainer_image/Dockerfile`.
###Code
!cat trainer_image/Dockerfile
###Output
_____no_output_____
###Markdown
Building and deploying the pipelineBefore deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also referred to as a pipeline package. The runtime format is based on [Argo Workflow](https://github.com/argoproj/argo), which is expressed in YAML. Configure environment settingsUpdate the constants below with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the `hostedkfp-default-` prefix.- `ENDPOINT` - set the `ENDPOINT` constant to the endpoint of your AI Platform Pipelines instance. The endpoint to the AI Platform Pipelines instance can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.1. Open the *SETTINGS* for your instance2. Use the value of the `host` variable in the *Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK* section of the *SETTINGS* window.
###Code
REGION = 'us-central1'
ENDPOINT = '337dd39580cbcbd2-dot-us-central2.pipelines.googleusercontent.com'
ARTIFACT_STORE_URI = 'gs://env-test200-artifact-store'
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
###Output
_____no_output_____
###Markdown
Build the trainer image
###Code
IMAGE_NAME='trainer_image'
TAG='latest'
TRAINER_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!gcloud builds submit --timeout 15m --tag $TRAINER_IMAGE trainer_image
###Output
_____no_output_____
###Markdown
Build the base image for custom components
###Code
IMAGE_NAME='base_image'
TAG='latest'
BASE_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!gcloud builds submit --timeout 15m --tag $BASE_IMAGE base_image
###Output
_____no_output_____
###Markdown
Compile the pipelineYou can compile the DSL using an API from the **KFP SDK** or using the **KFP** compiler. In this lab, you compile the pipeline DSL using the **KFP** compiler. Set the pipeline's compile time settingsThe pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the `user-gcp-sa` secret of the Kubernetes namespace hosting Kubeflow Pipelines. If you want to use the `user-gcp-sa` service account, change the value of `USE_KFP_SA` to `True`. Note that the default AI Platform Pipelines configuration does not define the `user-gcp-sa` secret.
###Code
USE_KFP_SA = False
COMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'
RUNTIME_VERSION = '1.15'
PYTHON_VERSION = '3.7'
%env USE_KFP_SA={USE_KFP_SA}
%env BASE_IMAGE={BASE_IMAGE}
%env TRAINER_IMAGE={TRAINER_IMAGE}
%env COMPONENT_URL_SEARCH_PREFIX={COMPONENT_URL_SEARCH_PREFIX}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
###Output
_____no_output_____
###Markdown
Use the CLI compiler to compile the pipeline
###Code
!dsl-compile --py pipeline/covertype_training_pipeline.py --output covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
The result is the `covertype_training_pipeline.yaml` file.
###Code
!head covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
Deploy the pipeline package
###Code
PIPELINE_NAME='covertype_continuous_training'
!kfp --endpoint $ENDPOINT pipeline upload \
-p $PIPELINE_NAME \
covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
Submitting pipeline runsYou can trigger pipeline runs using an API from the KFP SDK or using the KFP CLI. To submit the run using the KFP CLI, execute the following commands. Notice how the pipeline's parameters are passed to the pipeline run. List the pipelines in AI Platform Pipelines
###Code
!kfp --endpoint $ENDPOINT pipeline list
###Output
_____no_output_____
###Markdown
Submit a runFind the ID of the `covertype_continuous_training` pipeline you uploaded in the previous step and update the value of `PIPELINE_ID`.
###Code
PIPELINE_ID='0918568d-758c-46cf-9752-e04a4403cd84'
EXPERIMENT_NAME = 'Covertype_Classifier_Training'
RUN_ID = 'Run_001'
SOURCE_TABLE = 'covertype_dataset.covertype'
DATASET_ID = 'splits'
EVALUATION_METRIC = 'accuracy'
EVALUATION_METRIC_THRESHOLD = '0.69'
MODEL_ID = 'covertype_classifier'
VERSION_ID = 'v01'
REPLACE_EXISTING_VERSION = 'True'
GCS_STAGING_PATH = '{}/staging'.format(ARTIFACT_STORE_URI)
!kfp --endpoint $ENDPOINT run submit \
-e $EXPERIMENT_NAME \
-r $RUN_ID \
-p $PIPELINE_ID \
project_id=$PROJECT_ID \
gcs_root=$GCS_STAGING_PATH \
region=$REGION \
source_table_name=$SOURCE_TABLE \
dataset_id=$DATASET_ID \
evaluation_metric_name=$EVALUATION_METRIC \
evaluation_metric_threshold=$EVALUATION_METRIC_THRESHOLD \
model_id=$MODEL_ID \
version_id=$VERSION_ID \
replace_existing_version=$REPLACE_EXISTING_VERSION
###Output
_____no_output_____
###Markdown
Continuous training pipeline with Kubeflow Pipelines and AI Platform **Learning Objectives:**1. Learn how to use Kubeflow Pipelines (KFP) pre-built components (BigQuery, AI Platform training and predictions)1. Learn how to use KFP lightweight Python components1. Learn how to build a KFP pipeline with these components1. Learn how to compile, upload, and run a KFP pipeline with the command lineIn this lab, you will build, deploy, and run a KFP pipeline that orchestrates **BigQuery** and **AI Platform** services to train, tune, and deploy a **scikit-learn** model. Understanding the pipeline design The workflow implemented by the pipeline is defined using a Python-based Domain Specific Language (DSL). The pipeline's DSL is in the `covertype_training_pipeline.py` file that we will generate below. The pipeline's DSL has been designed to avoid hardcoding any environment-specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
###Code
!grep 'BASE_IMAGE =' -A 5 pipeline/covertype_training_pipeline.py
###Output
_____no_output_____
###Markdown
The pipeline uses a mix of custom and pre-built components.- Pre-built components. The pipeline uses the following pre-built components that are included with the KFP distribution: - [BigQuery query component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/bigquery/query) - [AI Platform Training component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/train) - [AI Platform Deploy component](https://github.com/kubeflow/pipelines/tree/0.2.5/components/gcp/ml_engine/deploy)- Custom components. The pipeline uses two custom helper components that encapsulate functionality not available in any of the pre-built components. The components are implemented using the KFP SDK's [Lightweight Python Components](https://www.kubeflow.org/docs/pipelines/sdk/lightweight-python-components/) mechanism. The code for the components is in the `helper_components.py` file: - **Retrieve Best Run**. This component retrieves a tuning metric and hyperparameter values for the best run of an AI Platform Training hyperparameter tuning job. - **Evaluate Model**. This component evaluates a trained *sklearn* model using a provided metric and a testing dataset.
###Code
%%writefile ./pipeline/covertype_training_pipeline.py
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""KFP orchestrating BigQuery and Cloud AI Platform services."""
import os
from helper_components import evaluate_model
from helper_components import retrieve_best_run
from jinja2 import Template
import kfp
from kfp.components import func_to_container_op
from kfp.dsl.types import Dict
from kfp.dsl.types import GCPProjectID
from kfp.dsl.types import GCPRegion
from kfp.dsl.types import GCSPath
from kfp.dsl.types import String
from kfp.gcp import use_gcp_secret
# Defaults and environment settings
BASE_IMAGE = os.getenv('BASE_IMAGE')
TRAINER_IMAGE = os.getenv('TRAINER_IMAGE')
RUNTIME_VERSION = os.getenv('RUNTIME_VERSION')
PYTHON_VERSION = os.getenv('PYTHON_VERSION')
COMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')
USE_KFP_SA = os.getenv('USE_KFP_SA')
TRAINING_FILE_PATH = 'datasets/training/data.csv'
VALIDATION_FILE_PATH = 'datasets/validation/data.csv'
TESTING_FILE_PATH = 'datasets/testing/data.csv'
# Parameter defaults
SPLITS_DATASET_ID = 'splits'
HYPERTUNE_SETTINGS = """
{
"hyperparameters": {
"goal": "MAXIMIZE",
"maxTrials": 6,
"maxParallelTrials": 3,
"hyperparameterMetricTag": "accuracy",
"enableTrialEarlyStopping": True,
"params": [
{
"parameterName": "max_iter",
"type": "DISCRETE",
"discreteValues": [500, 1000]
},
{
"parameterName": "alpha",
"type": "DOUBLE",
"minValue": 0.0001,
"maxValue": 0.001,
"scaleType": "UNIT_LINEAR_SCALE"
}
]
}
}
"""
# Helper functions
def generate_sampling_query(source_table_name, num_lots, lots):
"""Prepares the data sampling query."""
sampling_query_template = """
SELECT *
FROM
`{{ source_table }}` AS cover
WHERE
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), {{ num_lots }}) IN ({{ lots }})
"""
query = Template(sampling_query_template).render(
source_table=source_table_name, num_lots=num_lots, lots=str(lots)[1:-1])
return query
# Create component factories
component_store = kfp.components.ComponentStore(
local_search_paths=None, url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX])
bigquery_query_op = component_store.load_component('bigquery/query')
mlengine_train_op = component_store.load_component('ml_engine/train')
mlengine_deploy_op = component_store.load_component('ml_engine/deploy')
retrieve_best_run_op = func_to_container_op(
retrieve_best_run, base_image=BASE_IMAGE)
evaluate_model_op = func_to_container_op(evaluate_model, base_image=BASE_IMAGE)
@kfp.dsl.pipeline(
name='Covertype Classifier Training',
description='The pipeline training and deploying the Covertype classifierpipeline_yaml'
)
def covertype_train(project_id,
region,
source_table_name,
gcs_root,
dataset_id,
evaluation_metric_name,
evaluation_metric_threshold,
model_id,
version_id,
replace_existing_version,
hypertune_settings=HYPERTUNE_SETTINGS,
dataset_location='US'):
"""Orchestrates training and deployment of an sklearn model."""
# Create the training split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[1, 2, 3, 4])
training_file_path = '{}/{}'.format(gcs_root, TRAINING_FILE_PATH)
create_training_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=training_file_path,
dataset_location=dataset_location)
# Create the validation split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[8])
validation_file_path = '{}/{}'.format(gcs_root, VALIDATION_FILE_PATH)
create_validation_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=validation_file_path,
dataset_location=dataset_location)
# Create the testing split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[9])
testing_file_path = '{}/{}'.format(gcs_root, TESTING_FILE_PATH)
create_testing_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=testing_file_path,
dataset_location=dataset_location)
# Tune hyperparameters
tune_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--hptune', 'True'
]
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir/hypertune',
kfp.dsl.RUN_ID_PLACEHOLDER)
hypertune = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=tune_args,
training_input=hypertune_settings)
# Retrieve the best trial
get_best_trial = retrieve_best_run_op(
project_id, hypertune.outputs['job_id'])
# Train the model on a combined training and validation datasets
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir', kfp.dsl.RUN_ID_PLACEHOLDER)
train_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--alpha',
get_best_trial.outputs['alpha'], '--max_iter',
get_best_trial.outputs['max_iter'], '--hptune', 'False'
]
train_model = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=train_args)
# Evaluate the model on the testing split
eval_model = evaluate_model_op(
dataset_path=str(create_testing_split.outputs['output_gcs_path']),
model_path=str(train_model.outputs['job_dir']),
metric_name=evaluation_metric_name)
# Deploy the model if the primary metric is better than threshold
with kfp.dsl.Condition(eval_model.outputs['metric_value'] > evaluation_metric_threshold):
deploy_model = mlengine_deploy_op(
model_uri=train_model.outputs['job_dir'],
project_id=project_id,
model_id=model_id,
version_id=version_id,
runtime_version=RUNTIME_VERSION,
python_version=PYTHON_VERSION,
replace_existing_version=replace_existing_version)
# Configure the pipeline to run using the service account defined
# in the user-gcp-sa k8s secret
if USE_KFP_SA == 'True':
kfp.dsl.get_pipeline_conf().add_op_transformer(
use_gcp_secret('user-gcp-sa'))
###Output
_____no_output_____
###Markdown
The custom components execute in a container image defined in `base_image/Dockerfile`.
###Code
!cat base_image/Dockerfile
###Output
_____no_output_____
###Markdown
The training step in the pipeline employs the AI Platform Training component to schedule an AI Platform Training job in a custom training container. The custom training image is defined in `trainer_image/Dockerfile`.
###Code
!cat trainer_image/Dockerfile
###Output
_____no_output_____
###Markdown
Building and deploying the pipelineBefore deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also referred to as a pipeline package. The runtime format is based on [Argo Workflow](https://github.com/argoproj/argo), which is expressed in YAML. Configure environment settingsUpdate the constants below with the settings reflecting your lab environment. - `REGION` - the compute region for AI Platform Training and Prediction- `ARTIFACT_STORE` - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will be similar to `qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default`.- `ENDPOINT` - set the `ENDPOINT` constant to the endpoint of your AI Platform Pipelines instance. The endpoint to the AI Platform Pipelines instance can be found on the [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines/clusters) page in the Google Cloud Console.1. Open the **SETTINGS** for your instance2. Use the value of the `host` variable in the **Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK** section of the **SETTINGS** window. Run `gsutil ls` without URLs to list all of the Cloud Storage buckets under your default project ID.
###Code
!gsutil ls
###Output
_____no_output_____
###Markdown
**HINT:** For **ENDPOINT**, use the value of the `host` variable in the **Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK** section of the **SETTINGS** window. For **ARTIFACT_STORE_URI**, copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output. Your copied value should look like **'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default'**
###Code
REGION = 'us-central1'
ENDPOINT = '337dd39580cbcbd2-dot-us-central2.pipelines.googleusercontent.com' # TO DO: REPLACE WITH YOUR ENDPOINT
ARTIFACT_STORE_URI = 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
###Output
_____no_output_____
###Markdown
Build the trainer image
###Code
IMAGE_NAME='trainer_image'
TAG='latest'
TRAINER_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
###Output
_____no_output_____
###Markdown
**Note**: Please ignore any **incompatibility ERROR** that may appear for packages such as `visions`, as it will not affect the lab's functionality.
###Code
!gcloud builds submit --timeout 15m --tag $TRAINER_IMAGE trainer_image
###Output
_____no_output_____
###Markdown
Build the base image for custom components
###Code
IMAGE_NAME='base_image'
TAG='latest'
BASE_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!gcloud builds submit --timeout 15m --tag $BASE_IMAGE base_image
###Output
_____no_output_____
###Markdown
Compile the pipelineYou can compile the DSL using an API from the **KFP SDK** or using the **KFP** compiler. In this lab, you compile the pipeline DSL using the **KFP** compiler. Set the pipeline's compile time settingsThe pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the `user-gcp-sa` secret of the Kubernetes namespace hosting KFP. If you want to use the `user-gcp-sa` service account, change the value of `USE_KFP_SA` to `True`. Note that the default AI Platform Pipelines configuration does not define the `user-gcp-sa` secret.
###Code
USE_KFP_SA = False
COMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'
RUNTIME_VERSION = '1.15'
PYTHON_VERSION = '3.7'
%env USE_KFP_SA={USE_KFP_SA}
%env BASE_IMAGE={BASE_IMAGE}
%env TRAINER_IMAGE={TRAINER_IMAGE}
%env COMPONENT_URL_SEARCH_PREFIX={COMPONENT_URL_SEARCH_PREFIX}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
###Output
_____no_output_____
###Markdown
Use the CLI compiler to compile the pipeline
###Code
!dsl-compile --py pipeline/covertype_training_pipeline.py --output covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
The result is the `covertype_training_pipeline.yaml` file.
###Code
!head covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
Deploy the pipeline package
###Code
PIPELINE_NAME='covertype_continuous_training'
!kfp --endpoint $ENDPOINT pipeline upload \
-p $PIPELINE_NAME \
covertype_training_pipeline.yaml
###Output
_____no_output_____
###Markdown
Submitting pipeline runsYou can trigger pipeline runs using an API from the KFP SDK or using the KFP CLI. To submit the run using the KFP CLI, execute the following commands. Notice how the pipeline's parameters are passed to the pipeline run. List the pipelines in AI Platform Pipelines
###Code
!kfp --endpoint $ENDPOINT pipeline list
###Output
_____no_output_____
###Markdown
Submit a runFind the ID of the `covertype_continuous_training` pipeline you uploaded in the previous step and update the value of `PIPELINE_ID`.
###Code
PIPELINE_ID='0918568d-758c-46cf-9752-e04a4403cd84' # TO DO: REPLACE WITH YOUR PIPELINE ID
EXPERIMENT_NAME = 'Covertype_Classifier_Training'
RUN_ID = 'Run_001'
SOURCE_TABLE = 'covertype_dataset.covertype'
DATASET_ID = 'splits'
EVALUATION_METRIC = 'accuracy'
EVALUATION_METRIC_THRESHOLD = '0.69'
MODEL_ID = 'covertype_classifier'
VERSION_ID = 'v01'
REPLACE_EXISTING_VERSION = 'True'
GCS_STAGING_PATH = '{}/staging'.format(ARTIFACT_STORE_URI)
###Output
_____no_output_____
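###Markdown
Before the CLI submission below, here is a minimal sketch of the equivalent submission through the KFP SDK client. It assumes the upper-case variables defined in the previous cells and a v1-style KFP SDK; whether `create_experiment` reuses an existing experiment of the same name depends on the SDK version.
###Code
# Sketch: submit the run via the KFP SDK client instead of the kfp CLI.
import kfp

client = kfp.Client(host=ENDPOINT)
experiment = client.create_experiment(EXPERIMENT_NAME)
run = client.run_pipeline(
    experiment_id=experiment.id,
    job_name=RUN_ID,
    pipeline_id=PIPELINE_ID,
    params={
        'project_id': PROJECT_ID,
        'gcs_root': GCS_STAGING_PATH,
        'region': REGION,
        'source_table_name': SOURCE_TABLE,
        'dataset_id': DATASET_ID,
        'evaluation_metric_name': EVALUATION_METRIC,
        'evaluation_metric_threshold': EVALUATION_METRIC_THRESHOLD,
        'model_id': MODEL_ID,
        'version_id': VERSION_ID,
        'replace_existing_version': REPLACE_EXISTING_VERSION,
    },
)
print(run.id)
###Output
_____no_output_____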
###Markdown
Run the pipeline using the `kfp` command line by retrieving the variables from the environment to pass to the pipeline where:- EXPERIMENT_NAME is set to the experiment used to run the pipeline. You can choose any name you want. If the experiment does not exist it will be created by the command- RUN_ID is the name of the run. You can use an arbitrary name- PIPELINE_ID is the id of your pipeline. Use the value retrieved by the `kfp pipeline list` command- GCS_STAGING_PATH is the URI to the Cloud Storage location used by the pipeline to store intermediate files. By default, it is set to the `staging` folder in your artifact store.- REGION is a compute region for AI Platform Training and Prediction. You should be already familiar with these and other parameters passed to the command. If not, go back and review the pipeline code.
###Code
!kfp --endpoint $ENDPOINT run submit \
-e $EXPERIMENT_NAME \
-r $RUN_ID \
-p $PIPELINE_ID \
project_id=$PROJECT_ID \
gcs_root=$GCS_STAGING_PATH \
region=$REGION \
source_table_name=$SOURCE_TABLE \
dataset_id=$DATASET_ID \
evaluation_metric_name=$EVALUATION_METRIC \
evaluation_metric_threshold=$EVALUATION_METRIC_THRESHOLD \
model_id=$MODEL_ID \
version_id=$VERSION_ID \
replace_existing_version=$REPLACE_EXISTING_VERSION
###Output
_____no_output_____ |
sine_function.ipynb | ###Markdown
###Code
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import SGD
import tensorflow as tf
import matplotlib.pyplot as plt
from numpy.random import seed
from tensorflow import set_random_seed
seed(1)
set_random_seed(2)
x = np.random.uniform(low=0,high=360,size=10000)
y = 1+np.sin(np.deg2rad(x))
model = Sequential()
model.add(Dense(4, input_shape=(1,), kernel_initializer='uniform', activation='relu'))
model.add(Dense(60,kernel_initializer='uniform', activation='relu'))
## CHANGING THE ACTIVATION TO ANYTHING OTHER THAN linear CAUSES THE MODEL TO NOT CONVERGE; WHY?
model.add(Dense(1, kernel_initializer='uniform', activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(x,y, epochs=100, batch_size=32, verbose=0)
plt.plot(history.history['loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.show()
# print(model.summary())
loss_and_metrics = model.evaluate(x, y)
print(loss_and_metrics)
y1 = model.predict(x)
plt.scatter(x, y,label='test data')
plt.scatter(x, y1,label="predicted")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Approximating a sine Function Import
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, SimpleRNN
np.random.seed(0)
plt.figure(figsize=(12,8))
t = np.arange(0,1500)
x = np.sin(0.015*t) + np.random.uniform(low=-1, high=1, size=(1500,))
x_actual = np.sin(0.015*t)
plt.plot(x)
plt.plot(x_actual)
plt.show()
###Output
_____no_output_____
###Markdown
Normalize
###Code
normalizer = MinMaxScaler(feature_range=(0, 1))
x = (np.reshape(x, (-1, 1)))
x = normalizer.fit_transform(x)
print(x)
###Output
_____no_output_____
###Markdown
Create Dataset
###Code
train = x[0:1000]
test = x[1000:]
print(train.shape)
def createDataset(data, step):
X, Y =[], []
for i in range(len(data)-step):
X.append(data[i:i+step])
Y.append(data[i+step])
return np.array(X), np.array(Y)
step = 10
trainX,trainY = createDataset(train,step)
testX,testY = createDataset(test,step)
print(trainX[0])
print(trainY[0])
print(trainX.shape)
###Output
(990, 10, 1)
###Markdown
Model Creation
###Code
model = Sequential()
model.add(SimpleRNN(units=1, activation="tanh"))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='rmsprop')
history = model.fit(trainX,trainY, epochs=500, batch_size=16, verbose=2)
plt.figure(figsize=(12,8))
loss = history.history['loss']
plt.plot(loss)
plt.show()
###Output
_____no_output_____
###Markdown
Prediction
###Code
trainPredict = normalizer.inverse_transform(model.predict(trainX))
testPredict= normalizer.inverse_transform(model.predict(testX))
predicted= np.concatenate((trainPredict,testPredict))
x = normalizer.inverse_transform(x)
plt.figure(figsize=(12,8))
plt.plot(x)
plt.plot(predicted)
plt.axvline(len(trainX), c="r")
plt.show()
###Output
_____no_output_____ |
Bayesian_discrete_filter.ipynb | ###Markdown
Bayesian discrete filter. Belief -> update based on conditional probability. Ref: https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master
###Code
import numpy as np
import copy
from filterpy.discrete_bayes import normalize
#import ipympl for interactive plot
%matplotlib inline
import matplotlib.pyplot as plt
def barplot(x,y,ylim=(0,1)):
plt.bar(x,y)
plt.ylim(ylim)
plt.xticks(x)
plt.xlabel('Positions')
plt.ylabel('Probability')
plt.show()
positions = np.arange(0,10)
print(positions)
###Output
[0 1 2 3 4 5 6 7 8 9]
###Markdown
Sensor1 - triggering Sensor1 signifies that the tracked object is at any of the following positions (0,1,8) - Sensor2 is triggered for other positions - Probability of the event at any of the above points is 1/3
###Code
possible_sensor1_positions = np.array([1, 1, 0, 0, 0, 0, 0, 0, 1, 0])
total_possible_positions = np.sum(possible_sensor1_positions)
proablity_target_at_positions = possible_sensor1_positions*(1/total_possible_positions)
barplot(positions,proablity_target_at_positions)
###Output
_____no_output_____
###Markdown
Conditional probability - Assuming extra data about the target is known: - Initial sensor trigger is Sensor 1 - The target has moved 1 step right - Final sensor trigger is Sensor 1. How can the probability distribution be updated? - Only one possible solution is available based on the above conditions, i.e., position 1
###Code
updated_possible_sensor1_positions = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 0])
total_possible_positions = np.sum(updated_possible_sensor1_positions)
proablity_target_at_positions = updated_possible_sensor1_positions*(1/total_possible_positions)
barplot(positions,proablity_target_at_positions)
###Output
_____no_output_____
###Markdown
Based on the conditions, the probability of the target has been pinpointed. Uncertainty - accounting for sensor noise. Assumptions - the sensor causes a faulty reading once in every 4 readings (uncertainty = 0.25). Triggering sensor 1 doesn't produce a probability of 1/3 at every position because of this uncertainty
###Code
# belief of event is a uniform distribution
belief = np.array([1./10]*10)
barplot(positions,belief)
###Output
_____no_output_____
###Markdown
Triggering sensor1 - probability of the tracked object being at 0,1,8 when sensor1 is triggered = 1/3 - probability of sensor1 being triggered correctly = 3/4 - updated probability of the tracked object at 0,1,8 when sensor1 is triggered = (1/3)*(3/4)
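For reference, the cells below implement this as the standard discrete Bayes update (prior = current belief; likelihood = 3/4 at the sensor-1 positions and 1/4 elsewhere; the later normalize step supplies the denominator): $P(x_i \mid z) = \frac{P(z \mid x_i)\,P(x_i)}{\sum_j P(z \mid x_j)\,P(x_j)}$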
###Code
def update_belief(possible_sensor_positions, belief, value, likelyhood):
updated_belief = copy.copy(belief)
for i, val in enumerate(possible_sensor_positions):
if val == value:
updated_belief[i] *= likelyhood
else:
updated_belief[i] *= (1-likelyhood)
return updated_belief
possible_sensor1_probablity = update_belief(possible_sensor1_positions, belief, 1, (3/4))
barplot(positions,possible_sensor1_probablity)
print('total probability = {}'.format(sum(possible_sensor1_probablity)))
###Output
_____no_output_____
###Markdown
- Normalize the distribution so that the total probability is 1
###Code
normalized_belief = copy.copy(possible_sensor1_probablity)
normalized_belief = normalize(normalized_belief)
barplot(positions,normalized_belief)
print('total probability = {}'.format(sum(normalized_belief)))
def move_predict(belief, move):
""" move the position by `move` spaces, where positive is
to the right, and negative is to the left
"""
n = len(belief)
result = np.zeros(n)
for i in range(n):
result[i] = belief[(i-move) % n]
return result
###Output
_____no_output_____
###Markdown
Condition - movement to the right is considered 100% certain - the probability distribution is shifted to the right
###Code
moved_belief = copy.copy(normalized_belief)
moved_belief = move_predict(moved_belief,1)
barplot(positions,moved_belief)
###Output
_____no_output_____
###Markdown
Sensor 1 triggered again
###Code
new_position_likelyhood = possible_sensor1_probablity*moved_belief
new_position_probablity = normalize(new_position_likelyhood)
barplot(positions,new_position_probablity)
###Output
_____no_output_____ |
BERT_base_nonlinearlayers.ipynb | ###Markdown
Imports
###Code
!pip install transformers==3.0.0
!pip install emoji
import gc
import os
import emoji as emoji
import re
import string
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, accuracy_score
from transformers import AutoModel
from transformers import BertModel, BertTokenizer
import warnings
warnings.filterwarnings('ignore')
!git clone https://github.com/hafezgh/Hate-Speech-Detection-in-Social-Media
###Output
Cloning into 'Hate-Speech-Detection-in-Social-Media'...
remote: Enumerating objects: 20, done.
remote: Counting objects: 100% (12/12), done.
remote: Compressing objects: 100% (12/12), done.
remote: Total 20 (delta 5), reused 0 (delta 0), pack-reused 8
Unpacking objects: 100% (20/20), done.
###Markdown
Model
###Code
class BERT_Arch(nn.Module):
def __init__(self, bert, mode='deep_fc'):
super(BERT_Arch, self).__init__()
self.bert = BertModel.from_pretrained('bert-base-uncased')
self.mode = mode
if mode == 'cnn':
# CNN
self.conv = nn.Conv2d(in_channels=13, out_channels=13, kernel_size=(3, 768), padding='valid')
self.relu = nn.ReLU()
# change the kernel size either to (3,1), e.g. 1D max pooling
# or remove it altogether
self.pool = nn.MaxPool2d(kernel_size=(3, 1), stride=1)
self.dropout = nn.Dropout(0.1)
# be careful here, this needs to be changed according to your max pooling
# without pooling: 443, with 3x1 pooling: 416
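# (Sketch of where 416 comes from, assuming the tokenizer max_length of 36 used later in this notebook:
#  a (3, 768) 'valid' conv over 36 tokens leaves 34 positions, the (3, 1) max pool with stride 1 leaves 32,
#  and 13 hidden-state channels * 32 = 416.)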
# FC
self.fc = nn.Linear(416, 3)
self.flat = nn.Flatten()
elif mode == 'rnn':
### RNN
self.lstm = nn.LSTM(768, 256, batch_first=True, bidirectional=True)
## FC
self.fc = nn.Linear(256*2, 3)
elif mode == 'shallow_fc':
self.fc = nn.Linear(768, 3)
elif mode == 'deep_fc':
self.leaky_relu = nn.LeakyReLU()
self.fc1 = nn.Linear(768, 768)
self.fc2 = nn.Linear(768, 768)
self.fc3 = nn.Linear(768, 3)
else:
raise NotImplementedError("Unsupported extension!")
self.softmax = nn.LogSoftmax(dim=1)
def forward(self, sent_id, mask):
sequence_output, _, all_layers = self.bert(sent_id, attention_mask=mask, output_hidden_states=True)
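# Note: for bert-base, all_layers is a tuple of 13 hidden states (embedding output + 12 encoder layers),
# which is why the CNN branch above stacks them into 13 input channels.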
if self.mode == 'cnn':
x = torch.transpose(torch.cat(tuple([t.unsqueeze(0) for t in all_layers]), 0), 0, 1)
x = self.pool(self.dropout(self.relu(self.conv(self.dropout(x)))))
x = self.fc(self.dropout(self.flat(self.dropout(x))))
elif self.mode == 'rnn':
lstm_output, (h,c) = self.lstm(sequence_output)
hidden = torch.cat((lstm_output[:,-1, :256],lstm_output[:,0, 256:]),dim=-1)
x = self.fc(hidden.view(-1,256*2))
elif self.mode == 'shallow_fc':
x = self.fc(sequence_output[:,0,:])
elif self.mode == 'deep_fc':
x = self.fc1(sequence_output[:,0,:])
x = self.leaky_relu(x)
x = self.fc2(x)
x = self.leaky_relu(x)
x = self.fc3(x)
else:
raise NotImplementedError("Unsupported extension!")
gc.collect()
torch.cuda.empty_cache()
del all_layers
c = self.softmax(x)
return c
def read_dataset():
data = pd.read_csv("Hate-Speech-Detection-in-Social-Media/labeled_data.csv")
data = data.drop(['count', 'hate_speech', 'offensive_language', 'neither'], axis=1)
#data = data.loc[0:9599,:]
print(len(data))
return data['tweet'].tolist(), data['class']
def pre_process_dataset(values):
new_values = list()
# Emoticons
emoticons = [':-)', ':)', '(:', '(-:', ':))', '((:', ':-D', ':D', 'X-D', 'XD', 'xD', 'xD', '<3', '</3', ':\*',
';-)',
';)', ';-D', ';D', '(;', '(-;', ':-(', ':(', '(:', '(-:', ':,(', ':\'(', ':"(', ':((', ':D', '=D',
'=)',
'(=', '=(', ')=', '=-O', 'O-=', ':o', 'o:', 'O:', 'O:', ':-o', 'o-:', ':P', ':p', ':S', ':s', ':@',
':>',
':<', '^_^', '^.^', '>.>', 'T_T', 'T-T', '-.-', '*.*', '~.~', ':*', ':-*', 'xP', 'XP', 'XP', 'Xp',
':-|',
':->', ':-<', '$_$', '8-)', ':-P', ':-p', '=P', '=p', ':*)', '*-*', 'B-)', 'O.o', 'X-(', ')-X']
for value in values:
# Remove dots
text = value.replace(".", "").lower()
text = re.sub(r"[^a-zA-Z?.!,ยฟ]+", " ", text)
users = re.findall("[@]\w+", text)
for user in users:
text = text.replace(user, "<user>")
urls = re.findall(r'(https?://[^\s]+)', text)
if len(urls) != 0:
for url in urls:
text = text.replace(url, "<url >")
for emo in text:
if emo in emoji.UNICODE_EMOJI:
text = text.replace(emo, "<emoticon >")
for emo in emoticons:
text = text.replace(emo, "<emoticon >")
numbers = re.findall('[0-9]+', text)
for number in numbers:
text = text.replace(number, "<number >")
text = text.replace('#', "<hashtag >")
text = re.sub(r"([?.!,ยฟ])", r" ", text)
text = "".join(l for l in text if l not in string.punctuation)
text = re.sub(r'[" "]+', " ", text)
new_values.append(text)
return new_values
def data_process(data, labels):
input_ids = []
attention_masks = []
bert_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
for sentence in data:
bert_inp = bert_tokenizer.__call__(sentence, max_length=36,
padding='max_length', pad_to_max_length=True,
truncation=True, return_token_type_ids=False)
input_ids.append(bert_inp['input_ids'])
attention_masks.append(bert_inp['attention_mask'])
#del bert_tokenizer
#gc.collect()
#torch.cuda.empty_cache()
input_ids = np.asarray(input_ids)
attention_masks = np.array(attention_masks)
labels = np.array(labels)
return input_ids, attention_masks, labels
def load_and_process():
data, labels = read_dataset()
num_of_labels = len(labels.unique())
input_ids, attention_masks, labels = data_process(pre_process_dataset(data), labels)
return input_ids, attention_masks, labels
# function to train the model
def train():
model.train()
total_loss, total_accuracy = 0, 0
# empty list to save model predictions
total_preds = []
# iterate over batches
total = len(train_dataloader)
for i, batch in enumerate(train_dataloader):
step = i+1
percent = "{0:.2f}".format(100 * (step / float(total)))
lossp = "{0:.2f}".format(total_loss/(total*batch_size))
filledLength = int(100 * step // total)
bar = '█' * filledLength + '>' * (filledLength < 100) + '.' * (99 - filledLength)
print(f'\rBatch {step}/{total} |{bar}| {percent}% complete, loss={lossp}, accuracy={total_accuracy}', end='')
# push the batch to gpu
batch = [r.to(device) for r in batch]
sent_id, mask, labels = batch
del batch
gc.collect()
torch.cuda.empty_cache()
# clear previously calculated gradients
model.zero_grad()
# get model predictions for the current batch
#sent_id = torch.tensor(sent_id).to(device).long()
preds = model(sent_id, mask)
# compute the loss between actual and predicted values
loss = cross_entropy(preds, labels)
# add on to the total loss
total_loss += float(loss.item())
# backward pass to calculate the gradients
loss.backward()
# clip the the gradients to 1.0. It helps in preventing the exploding gradient problem
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
# update parameters
optimizer.step()
# model predictions are stored on GPU. So, push it to CPU
#preds = preds.detach().cpu().numpy()
# append the model predictions
#total_preds.append(preds)
total_preds.append(preds.detach().cpu().numpy())
gc.collect()
torch.cuda.empty_cache()
# compute the training loss of the epoch
avg_loss = total_loss / (len(train_dataloader)*batch_size)
# predictions are in the form of (no. of batches, size of batch, no. of classes).
# reshape the predictions in form of (number of samples, no. of classes)
total_preds = np.concatenate(total_preds, axis=0)
# returns the loss and predictions
return avg_loss, total_preds
# function for evaluating the model
def evaluate():
print("\n\nEvaluating...")
# deactivate dropout layers
model.eval()
total_loss, total_accuracy = 0, 0
# empty list to save the model predictions
total_preds = []
# iterate over batches
total = len(val_dataloader)
for i, batch in enumerate(val_dataloader):
step = i+1
percent = "{0:.2f}".format(100 * (step / float(total)))
lossp = "{0:.2f}".format(total_loss/(total*batch_size))
filledLength = int(100 * step // total)
bar = '█' * filledLength + '>' * (filledLength < 100) + '.' * (99 - filledLength)
print(f'\rBatch {step}/{total} |{bar}| {percent}% complete, loss={lossp}, accuracy={total_accuracy}', end='')
# push the batch to gpu
batch = [t.to(device) for t in batch]
sent_id, mask, labels = batch
del batch
gc.collect()
torch.cuda.empty_cache()
# deactivate autograd
with torch.no_grad():
# model predictions
preds = model(sent_id, mask)
# compute the validation loss between actual and predicted values
loss = cross_entropy(preds, labels)
total_loss += float(loss.item())
#preds = preds.detach().cpu().numpy()
#total_preds.append(preds)
total_preds.append(preds.detach().cpu().numpy())
gc.collect()
torch.cuda.empty_cache()
# compute the validation loss of the epoch
avg_loss = total_loss / (len(val_dataloader)*batch_size)
# reshape the predictions in form of (number of samples, no. of classes)
total_preds = np.concatenate(total_preds, axis=0)
return avg_loss, total_preds
###Output
_____no_output_____
###Markdown
Train
###Code
### Extension mode
MODE = 'deep_fc'
# Specify the GPU
# Setting up the device for GPU usage
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(device)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Load Data-set ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
input_ids, attention_masks, labels = load_and_process()
df = pd.DataFrame(list(zip(input_ids, attention_masks)), columns=['input_ids', 'attention_masks'])
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ class distribution ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# class = class label for majority of CF users. 0 - hate speech 1 - offensive language 2 - neither
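# Optional sketch to inspect the label balance (pandas is already imported as pd):
# print(pd.Series(labels).value_counts())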
# ~~~~~~~~~~ Split train data-set into train, validation and test sets ~~~~~~~~~~#
train_text, temp_text, train_labels, temp_labels = train_test_split(df, labels,
random_state=2018, test_size=0.2, stratify=labels)
val_text, test_text, val_labels, test_labels = train_test_split(temp_text, temp_labels,
random_state=2018, test_size=0.5, stratify=temp_labels)
del temp_text
gc.collect()
torch.cuda.empty_cache()
train_count = len(train_labels)
test_count = len(test_labels)
val_count = len(val_labels)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# ~~~~~~~~~~~~~~~~~~~~~ Import BERT Model and BERT Tokenizer ~~~~~~~~~~~~~~~~~~~~~#
# import BERT-base pretrained model
bert = AutoModel.from_pretrained('bert-base-uncased')
# bert = AutoModel.from_pretrained('bert-base-uncased')
# Load the BERT tokenizer
#tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Tokenization ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# for train set
train_seq = torch.tensor(train_text['input_ids'].tolist())
train_mask = torch.tensor(train_text['attention_masks'].tolist())
train_y = torch.tensor(train_labels.tolist())
# for validation set
val_seq = torch.tensor(val_text['input_ids'].tolist())
val_mask = torch.tensor(val_text['attention_masks'].tolist())
val_y = torch.tensor(val_labels.tolist())
# for test set
test_seq = torch.tensor(test_text['input_ids'].tolist())
test_mask = torch.tensor(test_text['attention_masks'].tolist())
test_y = torch.tensor(test_labels.tolist())
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Create DataLoaders ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
# define a batch size
batch_size = 32
# wrap tensors
train_data = TensorDataset(train_seq, train_mask, train_y)
# sampler for sampling the data during training
train_sampler = RandomSampler(train_data)
# dataLoader for train set
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)
# wrap tensors
val_data = TensorDataset(val_seq, val_mask, val_y)
# sampler for sampling the data during training
val_sampler = SequentialSampler(val_data)
# dataLoader for validation set
val_dataloader = DataLoader(val_data, sampler=val_sampler, batch_size=batch_size)
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Freeze BERT Parameters ~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# freeze all the parameters
for param in bert.parameters():
param.requires_grad = False
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~#
# pass the pre-trained BERT to our define architecture
model = BERT_Arch(bert, mode=MODE)
# push the model to GPU
model = model.to(device)
# optimizer from hugging face transformers
from transformers import AdamW
# define the optimizer
optimizer = AdamW(model.parameters(), lr=2e-5)
#from sklearn.utils.class_weight import compute_class_weight
# compute the class weights
#class_wts = compute_class_weight('balanced', np.unique(train_labels), train_labels)
#print(class_wts)
# convert class weights to tensor
#weights = torch.tensor(class_wts, dtype=torch.float)
#weights = weights.to(device)
# loss function
#cross_entropy = nn.NLLLoss(weight=weights)
cross_entropy = nn.NLLLoss()
# set initial loss to infinite
best_valid_loss = float('inf')
# empty lists to store training and validation loss of each epoch
#train_losses = []
#valid_losses = []
#if os.path.isfile("/content/drive/MyDrive/saved_weights.pth") == False:
#if os.path.isfile("saved_weights.pth") == False:
# number of training epochs
epochs = 3
current = 1
# for each epoch
while current <= epochs:
print(f'\nEpoch {current} / {epochs}:')
# train model
train_loss, _ = train()
# evaluate model
valid_loss, _ = evaluate()
# save the best model
if valid_loss < best_valid_loss:
best_valid_loss = valid_loss
#torch.save(model.state_dict(), 'saved_weights.pth')
# append training and validation loss
#train_losses.append(train_loss)
#valid_losses.append(valid_loss)
print(f'\n\nTraining Loss: {train_loss:.3f}')
print(f'Validation Loss: {valid_loss:.3f}')
current = current + 1
#else:
#print("Got weights!")
# load weights of best model
#model.load_state_dict(torch.load("saved_weights.pth"))
#model.load_state_dict(torch.load("/content/drive/MyDrive/saved_weights.pth"), strict=False)
# get predictions for test data
gc.collect()
torch.cuda.empty_cache()
with torch.no_grad():
preds = model(test_seq.to(device), test_mask.to(device))
#preds = model(test_seq, test_mask)
preds = preds.detach().cpu().numpy()
print("Performance:")
# model's performance
preds = np.argmax(preds, axis=1)
print('Classification Report')
print(classification_report(test_y, preds))
print("Accuracy: " + str(accuracy_score(test_y, preds)))
###Output
Epoch 1 / 3:
Batch 620/620 |โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 100.00% complete, loss=0.01, accuracy=0
Evaluating...
Batch 78/78 |โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 100.00% complete, loss=0.01, accuracy=0
Training Loss: 0.010
Validation Loss: 0.008
Epoch 2 / 3:
Batch 620/620 |โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 100.00% complete, loss=0.01, accuracy=0
Evaluating...
Batch 78/78 |โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 100.00% complete, loss=0.01, accuracy=0
Training Loss: 0.007
Validation Loss: 0.008
Epoch 3 / 3:
Batch 620/620 |โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 100.00% complete, loss=0.01, accuracy=0
Evaluating...
Batch 78/78 |โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 100.00% complete, loss=0.01, accuracy=0
Training Loss: 0.006
Validation Loss: 0.008
Performance:
Classification Report
precision recall f1-score support
0 0.48 0.32 0.39 143
1 0.93 0.97 0.95 1919
2 0.91 0.84 0.87 417
accuracy 0.91 2479
macro avg 0.77 0.71 0.74 2479
weighted avg 0.90 0.91 0.90 2479
Accuracy: 0.9108511496571198
|
modules/autoencoder_mednist.ipynb | ###Markdown
Autoencoder network with MedNIST DatasetThis notebook illustrates the use of an autoencoder in MONAI for the purpose of image deblurring/denoising. Learning objectivesThis will go through the steps of:* Loading the data from a remote source* Using a lambda to create a dictionary of images* Using MONAI's in-built AutoEncoder[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/autoencoder_mednist.ipynb) Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[pillow, tqdm]"
###Output
_____no_output_____
###Markdown
1. Imports and configuration
###Code
import logging
import os
import shutil
import sys
import tempfile
import random
import numpy as np
from tqdm import trange
import matplotlib.pyplot as plt
import torch
from skimage.util import random_noise
from monai.apps import download_and_extract
from monai.config import print_config
from monai.data import CacheDataset, DataLoader
from monai.networks.nets import AutoEncoder
from monai.transforms import (
AddChannelD,
Compose,
LoadImageD,
RandFlipD,
RandRotateD,
RandZoomD,
ScaleIntensityD,
ToTensorD,
Lambda,
)
from monai.utils import set_determinism
print_config()
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
set_determinism(0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Create small visualisation function
def plot_ims(ims, shape=None, figsize=(10, 10), titles=None):
shape = (1, len(ims)) if shape is None else shape
plt.subplots(*shape, figsize=figsize)
for i, im in enumerate(ims):
plt.subplot(*shape, i + 1)
im = plt.imread(im) if isinstance(im, str) else torch.squeeze(im)
plt.imshow(im, cmap='gray')
if titles is not None:
plt.title(titles[i])
plt.axis('off')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
2. Get the dataThe MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions),[the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4),and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).The dataset is kindly made available by [Dr. Bradley J. Erickson M.D., Ph.D.](https://www.mayo.edu/research/labs/radiology-informatics/overview) (Department of Radiology, Mayo Clinic)under the Creative Commons [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
resource = "https://www.dropbox.com/s/5wwskxctvcxiuea/MedNIST.tar.gz?dl=1"
md5 = "0bc7306e7427e00ad1c5526a6677552d"
compressed_file = os.path.join(root_dir, "MedNIST.tar.gz")
data_dir = os.path.join(root_dir, "MedNIST")
if not os.path.exists(data_dir):
download_and_extract(resource, compressed_file, root_dir, md5)
# scan_type could be AbdomenCT BreastMRI CXR ChestCT Hand HeadCT
scan_type = "Hand"
im_dir = os.path.join(data_dir, scan_type)
all_filenames = [os.path.join(im_dir, filename)
for filename in os.listdir(im_dir)]
random.shuffle(all_filenames)
# Visualise a few of them
rand_images = np.random.choice(all_filenames, 8, replace=False)
plot_ims(rand_images, shape=(2, 4))
# Split into training and testing
test_frac = 0.2
num_test = int(len(all_filenames) * test_frac)
num_train = len(all_filenames) - num_test
train_datadict = [{"im": fname} for fname in all_filenames[:num_train]]
test_datadict = [{"im": fname} for fname in all_filenames[-num_test:]]
print(f"total number of images: {len(all_filenames)}")
print(f"number of images for training: {len(train_datadict)}")
print(f"number of images for testing: {len(test_datadict)}")
###Output
total number of images: 10000
number of images for training: 8000
number of images for testing: 2000
###Markdown
3. Create the image transform chain. To train the autoencoder to de-blur/de-noise our images, we'll want to pass the degraded image into the encoder, but in the loss function, we'll do the comparison with the original, undegraded version. In this sense, the loss function will be minimised when the encode and decode steps manage to remove the degradation. Other than the fact that one version of the image is degraded and the other is not, we want them to be identical, meaning they need to be generated from the same transforms. The easiest way to do this is via dictionary transforms, where at the end, we have a lambda function that will return a dictionary containing the three images: the original, the Gaussian blurred and the noisy (salt and pepper).
###Code
NoiseLambda = Lambda(lambda d: {
"orig": d["im"],
"gaus": torch.tensor(
random_noise(d["im"], mode='gaussian'), dtype=torch.float32),
"s&p": torch.tensor(random_noise(d["im"], mode='s&p', salt_vs_pepper=0.1)),
})
train_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
RandRotateD(keys=["im"], range_x=np.pi / 12, prob=0.5, keep_size=True),
RandFlipD(keys=["im"], spatial_axis=0, prob=0.5),
RandZoomD(keys=["im"], min_zoom=0.9, max_zoom=1.1, prob=0.5),
ToTensorD(keys=["im"]),
NoiseLambda,
]
)
test_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
ToTensorD(keys=["im"]),
NoiseLambda,
]
)
###Output
_____no_output_____
###Markdown
Create dataset and dataloaderHold data and present batches during training.
###Code
batch_size = 300
num_workers = 10
train_ds = CacheDataset(train_datadict, train_transforms,
num_workers=num_workers)
train_loader = DataLoader(train_ds, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
test_ds = CacheDataset(test_datadict, test_transforms, num_workers=num_workers)
test_loader = DataLoader(test_ds, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
# Get image original and its degraded versions
def get_single_im(ds):
loader = torch.utils.data.DataLoader(
ds, batch_size=1, num_workers=10, shuffle=True)
itera = iter(loader)
return next(itera)
data = get_single_im(train_ds)
plot_ims([data['orig'], data['gaus'], data['s&p']],
titles=['orig', 'Gaussian', 's&p'])
def train(dict_key_for_training, max_epochs=10, learning_rate=1e-3):
model = AutoEncoder(
dimensions=2,
in_channels=1,
out_channels=1,
channels=(4, 8, 16, 32),
strides=(2, 2, 2, 2),
).to(device)
# Create loss fn and optimiser
loss_function = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), learning_rate)
epoch_loss_values = []
t = trange(
max_epochs,
desc=f"{dict_key_for_training} -- epoch 0, avg loss: inf", leave=True)
for epoch in t:
model.train()
epoch_loss = 0
step = 0
for batch_data in train_loader:
step += 1
inputs = batch_data[dict_key_for_training].to(device)
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_function(outputs, batch_data['orig'].to(device))
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
t.set_description(
f"{dict_key_for_training} -- epoch {epoch + 1}"
+ f", average loss: {epoch_loss:.4f}")
return model, epoch_loss_values
max_epochs = 50
training_types = ['orig', 'gaus', 's&p']
models = []
epoch_losses = []
for training_type in training_types:
model, epoch_loss = train(training_type, max_epochs=max_epochs)
models.append(model)
epoch_losses.append(epoch_loss)
plt.figure()
plt.title("Epoch Average Loss")
plt.xlabel("epoch")
for y, label in zip(epoch_losses, training_types):
x = list(range(1, len(y) + 1))
line, = plt.plot(x, y)
line.set_label(label)
plt.legend()
data = get_single_im(test_ds)
recons = []
for model, training_type in zip(models, training_types):
im = data[training_type]
recon = model(im.to(device)).detach().cpu()
recons.append(recon)
plot_ims(
[data['orig'], data['gaus'], data['s&p']] + recons,
titles=['orig', 'Gaussian', 'S&P'] +
["recon w/\n" + x for x in training_types],
shape=(2, len(training_types)))
###Output
_____no_output_____
###Markdown
Cleanup data directoryRemove directory if a temporary was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
Autoencoder network with MedNIST DatasetThis notebook illustrates the use of an autoencoder in MONAI for the purpose of image deblurring/denoising. Learning objectivesThis will go through the steps of:* Loading the data from a remote source* Using a lambda to create a dictionary of images* Using MONAI's in-built AutoEncoder[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/autoencoder_mednist.ipynb) Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[pillow, tqdm]"
###Output
_____no_output_____
###Markdown
1. Imports and configuration
###Code
import logging
import os
import shutil
import sys
import tempfile
import random
import numpy as np
from tqdm import trange
import matplotlib.pyplot as plt
import torch
from skimage.util import random_noise
from monai.apps import download_and_extract
from monai.config import print_config
from monai.data import CacheDataset, DataLoader
from monai.networks.nets import AutoEncoder
from monai.transforms import (
AddChannelD,
Compose,
LoadImageD,
RandFlipD,
RandRotateD,
RandZoomD,
ScaleIntensityD,
EnsureTypeD,
Lambda,
)
from monai.utils import set_determinism
print_config()
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
set_determinism(0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Create small visualisation function
def plot_ims(ims, shape=None, figsize=(10, 10), titles=None):
shape = (1, len(ims)) if shape is None else shape
plt.subplots(*shape, figsize=figsize)
for i, im in enumerate(ims):
plt.subplot(*shape, i + 1)
im = plt.imread(im) if isinstance(im, str) else torch.squeeze(im)
plt.imshow(im, cmap='gray')
if titles is not None:
plt.title(titles[i])
plt.axis('off')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
2. Get the dataThe MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions),[the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4),and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).The dataset is kindly made available by [Dr. Bradley J. Erickson M.D., Ph.D.](https://www.mayo.edu/research/labs/radiology-informatics/overview) (Department of Radiology, Mayo Clinic)under the Creative Commons [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
resource = "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/MedNIST.tar.gz"
md5 = "0bc7306e7427e00ad1c5526a6677552d"
compressed_file = os.path.join(root_dir, "MedNIST.tar.gz")
data_dir = os.path.join(root_dir, "MedNIST")
if not os.path.exists(data_dir):
download_and_extract(resource, compressed_file, root_dir, md5)
# scan_type could be AbdomenCT BreastMRI CXR ChestCT Hand HeadCT
scan_type = "Hand"
im_dir = os.path.join(data_dir, scan_type)
all_filenames = [os.path.join(im_dir, filename)
for filename in os.listdir(im_dir)]
random.shuffle(all_filenames)
# Visualise a few of them
rand_images = np.random.choice(all_filenames, 8, replace=False)
plot_ims(rand_images, shape=(2, 4))
# Split into training and testing
test_frac = 0.2
num_test = int(len(all_filenames) * test_frac)
num_train = len(all_filenames) - num_test
train_datadict = [{"im": fname} for fname in all_filenames[:num_train]]
test_datadict = [{"im": fname} for fname in all_filenames[-num_test:]]
print(f"total number of images: {len(all_filenames)}")
print(f"number of images for training: {len(train_datadict)}")
print(f"number of images for testing: {len(test_datadict)}")
###Output
total number of images: 10000
number of images for training: 8000
number of images for testing: 2000
###Markdown
3. Create the image transform chain. To train the autoencoder to de-blur/de-noise our images, we'll want to pass the degraded image into the encoder, but in the loss function, we'll do the comparison with the original, undegraded version. In this sense, the loss function will be minimised when the encode and decode steps manage to remove the degradation. Other than the fact that one version of the image is degraded and the other is not, we want them to be identical, meaning they need to be generated from the same transforms. The easiest way to do this is via dictionary transforms, where at the end, we have a lambda function that will return a dictionary containing the three images: the original, the Gaussian blurred and the noisy (salt and pepper).
###Code
NoiseLambda = Lambda(lambda d: {
"orig": d["im"],
"gaus": torch.tensor(
random_noise(d["im"], mode='gaussian'), dtype=torch.float32),
"s&p": torch.tensor(random_noise(d["im"], mode='s&p', salt_vs_pepper=0.1)),
})
train_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
RandRotateD(keys=["im"], range_x=np.pi / 12, prob=0.5, keep_size=True),
RandFlipD(keys=["im"], spatial_axis=0, prob=0.5),
RandZoomD(keys=["im"], min_zoom=0.9, max_zoom=1.1, prob=0.5),
EnsureTypeD(keys=["im"]),
NoiseLambda,
]
)
test_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
EnsureTypeD(keys=["im"]),
NoiseLambda,
]
)
###Output
_____no_output_____
###Markdown
Create dataset and dataloaderHold data and present batches during training.
###Code
batch_size = 300
num_workers = 10
train_ds = CacheDataset(train_datadict, train_transforms,
num_workers=num_workers)
train_loader = DataLoader(train_ds, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
test_ds = CacheDataset(test_datadict, test_transforms, num_workers=num_workers)
test_loader = DataLoader(test_ds, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
# Get image original and its degraded versions
def get_single_im(ds):
loader = torch.utils.data.DataLoader(
ds, batch_size=1, num_workers=10, shuffle=True)
itera = iter(loader)
return next(itera)
data = get_single_im(train_ds)
plot_ims([data['orig'], data['gaus'], data['s&p']],
titles=['orig', 'Gaussian', 's&p'])
def train(dict_key_for_training, max_epochs=10, learning_rate=1e-3):
model = AutoEncoder(
spatial_dims=2,
in_channels=1,
out_channels=1,
channels=(4, 8, 16, 32),
strides=(2, 2, 2, 2),
).to(device)
# Create loss fn and optimiser
loss_function = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), learning_rate)
epoch_loss_values = []
t = trange(
max_epochs,
desc=f"{dict_key_for_training} -- epoch 0, avg loss: inf", leave=True)
for epoch in t:
model.train()
epoch_loss = 0
step = 0
for batch_data in train_loader:
step += 1
inputs = batch_data[dict_key_for_training].to(device)
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_function(outputs, batch_data['orig'].to(device))
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
t.set_description(
f"{dict_key_for_training} -- epoch {epoch + 1}"
+ f", average loss: {epoch_loss:.4f}")
return model, epoch_loss_values
max_epochs = 50
training_types = ['orig', 'gaus', 's&p']
models = []
epoch_losses = []
for training_type in training_types:
model, epoch_loss = train(training_type, max_epochs=max_epochs)
models.append(model)
epoch_losses.append(epoch_loss)
plt.figure()
plt.title("Epoch Average Loss")
plt.xlabel("epoch")
for y, label in zip(epoch_losses, training_types):
x = list(range(1, len(y) + 1))
line, = plt.plot(x, y)
line.set_label(label)
plt.legend()
data = get_single_im(test_ds)
recons = []
for model, training_type in zip(models, training_types):
im = data[training_type]
recon = model(im.to(device)).detach().cpu()
recons.append(recon)
plot_ims(
[data['orig'], data['gaus'], data['s&p']] + recons,
titles=['orig', 'Gaussian', 'S&P'] +
["recon w/\n" + x for x in training_types],
shape=(2, len(training_types)))
###Output
_____no_output_____
###Markdown
Cleanup data directoryRemove directory if a temporary was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
Autoencoder network with MedNIST DatasetThis notebook illustrates the use of an autoencoder in MONAI for the purpose of image deblurring/denoising. Learning objectivesThis will go through the steps of:* Loading the data from a remote source* Using a lambda to create a dictionary of images* Using MONAI's in-built AutoEncoder [](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/autoencoder_mednist.ipynb) 1. Imports and configuration
###Code
import logging
import os
import shutil
import sys
import tempfile
import random
import numpy as np
from tqdm import trange
import matplotlib.pyplot as plt
import torch
from skimage.util import random_noise
from monai.apps import download_and_extract
from monai.config import print_config
from monai.data import CacheDataset, DataLoader
from monai.networks.nets import AutoEncoder
from monai.transforms import (
AddChannelD,
Compose,
LoadPNGD,
RandFlipD,
RandRotateD,
RandZoomD,
ScaleIntensityD,
ToTensorD,
Lambda,
)
from monai.utils import set_determinism
print_config()
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
set_determinism(0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Create small visualisation function
def plot_ims(ims, shape=None, figsize=(10,10), titles=None):
shape = (1,len(ims)) if shape is None else shape
plt.subplots(*shape, figsize=figsize)
for i,im in enumerate(ims):
plt.subplot(*shape,i+1)
im = plt.imread(im) if isinstance(im, str) else torch.squeeze(im)
plt.imshow(im, cmap='gray')
if titles is not None:
plt.title(titles[i])
plt.axis('off')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
2. Get the dataThe MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions),[the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4),and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).The dataset is kindly made available by [Dr. Bradley J. Erickson M.D., Ph.D.](https://www.mayo.edu/research/labs/radiology-informatics/overview) (Department of Radiology, Mayo Clinic)under the Creative Commons [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
resource = "https://www.dropbox.com/s/5wwskxctvcxiuea/MedNIST.tar.gz?dl=1"
md5 = "0bc7306e7427e00ad1c5526a6677552d"
compressed_file = os.path.join(root_dir, "MedNIST.tar.gz")
data_dir = os.path.join(root_dir, "MedNIST")
if not os.path.exists(data_dir):
download_and_extract(resource, compressed_file, root_dir, md5)
scan_type = "Hand" # could be AbdomenCT BreastMRI CXR ChestCT Hand HeadCT
im_dir = os.path.join(data_dir, scan_type)
all_filenames = [os.path.join(im_dir, filename) for filename in os.listdir(im_dir)]
random.shuffle(all_filenames)
# Visualise a few of them
rand_images = np.random.choice(all_filenames, 8, replace=False)
plot_ims(rand_images, shape=(2,4))
# Split into training and testing
test_frac = 0.2
num_test = int(len(all_filenames) * test_frac)
num_train = len(all_filenames) - num_test
train_datadict = [{"im": fname} for fname in all_filenames[:num_train]]
test_datadict = [{"im": fname} for fname in all_filenames[-num_test:]]
print(f"total number of images: {len(all_filenames)}")
print(f"number of images for training: {len(train_datadict)}")
print(f"number of images for testing: {len(test_datadict)}")
###Output
total number of images: 10000
number of images for training: 8000
number of images for testing: 2000
###Markdown
3. Create the image transform chain. To train the autoencoder to de-blur/de-noise our images, we'll want to pass the degraded image into the encoder, but in the loss function, we'll do the comparison with the original, undegraded version. In this sense, the loss function will be minimised when the encode and decode steps manage to remove the degradation. Other than the fact that one version of the image is degraded and the other is not, we want them to be identical, meaning they need to be generated from the same transforms. The easiest way to do this is via dictionary transforms, where at the end, we have a lambda function that will return a dictionary containing the three images: the original, the Gaussian blurred and the noisy (salt and pepper).
###Code
NoiseLambda = Lambda(lambda d: {
"orig":d["im"],
"gaus":torch.tensor(random_noise(d["im"], mode='gaussian'), dtype=torch.float32),
"s&p":torch.tensor(random_noise(d["im"], mode='s&p', salt_vs_pepper=0.1)),
})
train_transforms = Compose(
[
LoadPNGD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
RandRotateD(keys=["im"], range_x=np.pi / 12, prob=0.5, keep_size=True),
RandFlipD(keys=["im"], spatial_axis=0, prob=0.5),
RandZoomD(keys=["im"], min_zoom=0.9, max_zoom=1.1, prob=0.5),
ToTensorD(keys=["im"]),
NoiseLambda,
]
)
test_transforms = Compose(
[
LoadPNGD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
ToTensorD(keys=["im"]),
NoiseLambda,
]
)
###Output
_____no_output_____
###Markdown
Create dataset and dataloaderHold data and present batches during training.
###Code
batch_size = 300
num_workers = 10
train_ds = CacheDataset(train_datadict, train_transforms, num_workers=num_workers)
train_loader = DataLoader(train_ds, batch_size=batch_size, shuffle=True, num_workers=num_workers)
test_ds = CacheDataset(test_datadict, test_transforms, num_workers=num_workers)
test_loader = DataLoader(test_ds, batch_size=batch_size, shuffle=True, num_workers=num_workers)
# Get image original and its degraded versions
def get_single_im(ds):
loader = torch.utils.data.DataLoader(ds, batch_size=1, num_workers=10, shuffle=True)
itera = iter(loader)
return next(itera)
data = get_single_im(train_ds)
plot_ims([data['orig'], data['gaus'], data['s&p']], titles=['orig', 'Gaussian', 's&p'])
def train(dict_key_for_training, epoch_num=10, learning_rate=1e-3):
model = AutoEncoder(
dimensions=2,
in_channels=1,
out_channels=1,
channels=(4, 8, 16, 32),
strides=(2, 2, 2, 2),
).to(device)
# Create loss fn and optimiser
loss_function = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), learning_rate)
epoch_loss_values = list()
t = trange(epoch_num, desc=f"{dict_key_for_training} -- epoch 0, avg loss: inf", leave=True)
for epoch in t:
model.train()
epoch_loss = 0
step = 0
for batch_data in train_loader:
step += 1
inputs = batch_data[dict_key_for_training].to(device)
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_function(outputs, batch_data['orig'].to(device))
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_len = len(train_ds) // train_loader.batch_size
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
t.set_description(f"{dict_key_for_training} -- epoch {epoch + 1}, average loss: {epoch_loss:.4f}")
return model, epoch_loss_values
epoch_num = 50
training_types = ['orig', 'gaus', 's&p']
models = []
epoch_losses = []
for training_type in training_types:
model, epoch_loss = train(training_type, epoch_num=epoch_num)
models.append(model)
epoch_losses.append(epoch_loss)
plt.figure()
plt.title("Epoch Average Loss")
plt.xlabel("epoch")
for y, label in zip(epoch_losses, training_types):
x = list(range(1, len(y)+1))
line, = plt.plot(x, y)
line.set_label(label)
plt.legend();
data = get_single_im(test_ds)
recons = []
for model, training_type in zip(models, training_types):
im = data[training_type]
recon = model(im.to(device)).detach().cpu()
recons.append(recon)
plot_ims(
[data['orig'], data['gaus'], data['s&p']] + recons,
titles=['orig', 'Gaussian', 'S&P'] + ["recon w/\n" + x for x in training_types],
shape=(2,len(training_types)))
###Output
_____no_output_____
###Markdown
Cleanup data directoryRemove directory if a temporary was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
Autoencoder network with MedNIST DatasetThis notebook illustrates the use of an autoencoder in MONAI for the purpose of image deblurring/denoising. Learning objectivesThis will go through the steps of:* Loading the data from a remote source* Using a lambda to create a dictionary of images* Using MONAI's in-built AutoEncoder[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/main/modules/autoencoder_mednist.ipynb) Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[pillow, tqdm]"
###Output
_____no_output_____
###Markdown
1. Imports and configuration
###Code
import logging
import os
import shutil
import sys
import tempfile
import random
import numpy as np
from tqdm import trange
import matplotlib.pyplot as plt
import torch
from skimage.util import random_noise
from monai.apps import download_and_extract
from monai.config import print_config
from monai.data import CacheDataset, DataLoader
from monai.networks.nets import AutoEncoder
from monai.transforms import (
AddChannelD,
Compose,
LoadImageD,
RandFlipD,
RandRotateD,
RandZoomD,
ScaleIntensityD,
EnsureTypeD,
Lambda,
)
from monai.utils import set_determinism
print_config()
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
set_determinism(0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Create small visualisation function
def plot_ims(ims, shape=None, figsize=(10, 10), titles=None):
shape = (1, len(ims)) if shape is None else shape
plt.subplots(*shape, figsize=figsize)
for i, im in enumerate(ims):
plt.subplot(*shape, i + 1)
im = plt.imread(im) if isinstance(im, str) else torch.squeeze(im)
plt.imshow(im, cmap='gray')
if titles is not None:
plt.title(titles[i])
plt.axis('off')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
2. Get the dataThe MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions),[the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4),and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).The dataset is kindly made available by [Dr. Bradley J. Erickson M.D., Ph.D.](https://www.mayo.edu/research/labs/radiology-informatics/overview) (Department of Radiology, Mayo Clinic)under the Creative Commons [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
resource = "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/MedNIST.tar.gz"
md5 = "0bc7306e7427e00ad1c5526a6677552d"
compressed_file = os.path.join(root_dir, "MedNIST.tar.gz")
data_dir = os.path.join(root_dir, "MedNIST")
if not os.path.exists(data_dir):
download_and_extract(resource, compressed_file, root_dir, md5)
# scan_type could be AbdomenCT BreastMRI CXR ChestCT Hand HeadCT
scan_type = "Hand"
im_dir = os.path.join(data_dir, scan_type)
all_filenames = [os.path.join(im_dir, filename)
for filename in os.listdir(im_dir)]
random.shuffle(all_filenames)
# Visualise a few of them
rand_images = np.random.choice(all_filenames, 8, replace=False)
plot_ims(rand_images, shape=(2, 4))
# Split into training and testing
test_frac = 0.2
num_test = int(len(all_filenames) * test_frac)
num_train = len(all_filenames) - num_test
train_datadict = [{"im": fname} for fname in all_filenames[:num_train]]
test_datadict = [{"im": fname} for fname in all_filenames[-num_test:]]
print(f"total number of images: {len(all_filenames)}")
print(f"number of images for training: {len(train_datadict)}")
print(f"number of images for testing: {len(test_datadict)}")
###Output
total number of images: 10000
number of images for training: 8000
number of images for testing: 2000
###Markdown
3. Create the image transform chain. To train the autoencoder to de-blur/de-noise our images, we'll want to pass the degraded image into the encoder, but in the loss function, we'll do the comparison with the original, undegraded version. In this sense, the loss function will be minimised when the encode and decode steps manage to remove the degradation. Other than the fact that one version of the image is degraded and the other is not, we want them to be identical, meaning they need to be generated from the same transforms. The easiest way to do this is via dictionary transforms, where at the end, we have a lambda function that will return a dictionary containing the three images: the original, the Gaussian blurred and the noisy (salt and pepper).
###Code
NoiseLambda = Lambda(lambda d: {
"orig": d["im"],
"gaus": torch.tensor(
random_noise(d["im"], mode='gaussian'), dtype=torch.float32),
"s&p": torch.tensor(random_noise(d["im"], mode='s&p', salt_vs_pepper=0.1)),
})
train_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
RandRotateD(keys=["im"], range_x=np.pi / 12, prob=0.5, keep_size=True),
RandFlipD(keys=["im"], spatial_axis=0, prob=0.5),
RandZoomD(keys=["im"], min_zoom=0.9, max_zoom=1.1, prob=0.5),
EnsureTypeD(keys=["im"]),
NoiseLambda,
]
)
test_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
EnsureTypeD(keys=["im"]),
NoiseLambda,
]
)
###Output
_____no_output_____
###Markdown
Create dataset and dataloaderHold data and present batches during training.
###Code
batch_size = 300
num_workers = 10
train_ds = CacheDataset(train_datadict, train_transforms,
num_workers=num_workers)
train_loader = DataLoader(train_ds, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
test_ds = CacheDataset(test_datadict, test_transforms, num_workers=num_workers)
test_loader = DataLoader(test_ds, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
# Get image original and its degraded versions
def get_single_im(ds):
loader = torch.utils.data.DataLoader(
ds, batch_size=1, num_workers=10, shuffle=True)
itera = iter(loader)
return next(itera)
data = get_single_im(train_ds)
plot_ims([data['orig'], data['gaus'], data['s&p']],
titles=['orig', 'Gaussian', 's&p'])
def train(dict_key_for_training, max_epochs=10, learning_rate=1e-3):
model = AutoEncoder(
spatial_dims=2,
in_channels=1,
out_channels=1,
channels=(4, 8, 16, 32),
strides=(2, 2, 2, 2),
).to(device)
# Create loss fn and optimiser
loss_function = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), learning_rate)
epoch_loss_values = []
t = trange(
max_epochs,
desc=f"{dict_key_for_training} -- epoch 0, avg loss: inf", leave=True)
for epoch in t:
model.train()
epoch_loss = 0
step = 0
for batch_data in train_loader:
step += 1
inputs = batch_data[dict_key_for_training].to(device)
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_function(outputs, batch_data['orig'].to(device))
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
t.set_description(
f"{dict_key_for_training} -- epoch {epoch + 1}"
+ f", average loss: {epoch_loss:.4f}")
return model, epoch_loss_values
max_epochs = 50
training_types = ['orig', 'gaus', 's&p']
models = []
epoch_losses = []
for training_type in training_types:
model, epoch_loss = train(training_type, max_epochs=max_epochs)
models.append(model)
epoch_losses.append(epoch_loss)
plt.figure()
plt.title("Epoch Average Loss")
plt.xlabel("epoch")
for y, label in zip(epoch_losses, training_types):
x = list(range(1, len(y) + 1))
line, = plt.plot(x, y)
line.set_label(label)
plt.legend()
data = get_single_im(test_ds)
recons = []
for model, training_type in zip(models, training_types):
im = data[training_type]
recon = model(im.to(device)).detach().cpu()
recons.append(recon)
plot_ims(
[data['orig'], data['gaus'], data['s&p']] + recons,
titles=['orig', 'Gaussian', 'S&P'] +
["recon w/\n" + x for x in training_types],
shape=(2, len(training_types)))
###Output
_____no_output_____
###Markdown
Cleanup data directoryRemove directory if a temporary was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
Autoencoder network with MedNIST DatasetThis notebook illustrates the use of an autoencoder in MONAI for the purpose of image deblurring/denoising. Learning objectivesThis will go through the steps of:* Loading the data from a remote source* Using a lambda to create a dictionary of images* Using MONAI's in-built AutoEncoder [](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/autoencoder_mednist.ipynb) Setup environment
###Code
%pip install -q "monai[pillow, tqdm]"
###Output
_____no_output_____
###Markdown
1. Imports and configuration
###Code
import logging
import os
import shutil
import sys
import tempfile
import random
import numpy as np
from tqdm import trange
import matplotlib.pyplot as plt
import torch
from skimage.util import random_noise
from monai.apps import download_and_extract
from monai.config import print_config
from monai.data import CacheDataset, DataLoader
from monai.networks.nets import AutoEncoder
from monai.transforms import (
AddChannelD,
Compose,
LoadImageD,
RandFlipD,
RandRotateD,
RandZoomD,
ScaleIntensityD,
ToTensorD,
Lambda,
)
from monai.utils import set_determinism
print_config()
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
set_determinism(0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Create a small visualisation function
def plot_ims(ims, shape=None, figsize=(10,10), titles=None):
shape = (1,len(ims)) if shape is None else shape
plt.subplots(*shape, figsize=figsize)
for i,im in enumerate(ims):
plt.subplot(*shape,i+1)
im = plt.imread(im) if isinstance(im, str) else torch.squeeze(im)
plt.imshow(im, cmap='gray')
if titles is not None:
plt.title(titles[i])
plt.axis('off')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
2. Get the dataThe MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions),[the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4),and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).The dataset is kindly made available by [Dr. Bradley J. Erickson M.D., Ph.D.](https://www.mayo.edu/research/labs/radiology-informatics/overview) (Department of Radiology, Mayo Clinic)under the Creative Commons [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
resource = "https://www.dropbox.com/s/5wwskxctvcxiuea/MedNIST.tar.gz?dl=1"
md5 = "0bc7306e7427e00ad1c5526a6677552d"
compressed_file = os.path.join(root_dir, "MedNIST.tar.gz")
data_dir = os.path.join(root_dir, "MedNIST")
if not os.path.exists(data_dir):
download_and_extract(resource, compressed_file, root_dir, md5)
scan_type = "Hand" # could be AbdomenCT BreastMRI CXR ChestCT Hand HeadCT
im_dir = os.path.join(data_dir, scan_type)
all_filenames = [os.path.join(im_dir, filename) for filename in os.listdir(im_dir)]
random.shuffle(all_filenames)
# Visualise a few of them
rand_images = np.random.choice(all_filenames, 8, replace=False)
plot_ims(rand_images, shape=(2,4))
# Split into training and testing
test_frac = 0.2
num_test = int(len(all_filenames) * test_frac)
num_train = len(all_filenames) - num_test
train_datadict = [{"im": fname} for fname in all_filenames[:num_train]]
test_datadict = [{"im": fname} for fname in all_filenames[-num_test:]]
print(f"total number of images: {len(all_filenames)}")
print(f"number of images for training: {len(train_datadict)}")
print(f"number of images for testing: {len(test_datadict)}")
###Output
total number of images: 10000
number of images for training: 8000
number of images for testing: 2000
###Markdown
3. Create the image transform chainTo train the autoencoder to de-blur/de-noise our images, we'll want to pass the degraded image into the encoder, but in the loss function, we'll do the comparison with the original, undegraded version. In this sense, the loss function will be minimised when the encode and decode steps manage to remove the degradation.Other than the fact that one version of the image is degraded and the other is not, we want them to be identical, meaning they need to be generated from the same transforms. The easiest way to do this is via dictionary transforms, where at the end, we have a lambda function that will return a dictionary containing the three images โ the original, the Gaussian blurred and the noisy (salt and pepper).
###Code
NoiseLambda = Lambda(lambda d: {
"orig":d["im"],
"gaus":torch.tensor(random_noise(d["im"], mode='gaussian'), dtype=torch.float32),
"s&p":torch.tensor(random_noise(d["im"], mode='s&p', salt_vs_pepper=0.1)),
})
train_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
RandRotateD(keys=["im"], range_x=np.pi / 12, prob=0.5, keep_size=True),
RandFlipD(keys=["im"], spatial_axis=0, prob=0.5),
RandZoomD(keys=["im"], min_zoom=0.9, max_zoom=1.1, prob=0.5),
ToTensorD(keys=["im"]),
NoiseLambda,
]
)
test_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
ToTensorD(keys=["im"]),
NoiseLambda,
]
)
###Output
_____no_output_____
###Markdown
Create dataset and dataloaderHold data and present batches during training.
###Code
batch_size = 300
num_workers = 10
train_ds = CacheDataset(train_datadict, train_transforms, num_workers=num_workers)
train_loader = DataLoader(train_ds, batch_size=batch_size, shuffle=True, num_workers=num_workers)
test_ds = CacheDataset(test_datadict, test_transforms, num_workers=num_workers)
test_loader = DataLoader(test_ds, batch_size=batch_size, shuffle=True, num_workers=num_workers)
# Get image original and its degraded versions
def get_single_im(ds):
loader = torch.utils.data.DataLoader(ds, batch_size=1, num_workers=10, shuffle=True)
itera = iter(loader)
return next(itera)
data = get_single_im(train_ds)
plot_ims([data['orig'], data['gaus'], data['s&p']], titles=['orig', 'Gaussian', 's&p'])
def train(dict_key_for_training, epoch_num=10, learning_rate=1e-3):
model = AutoEncoder(
dimensions=2,
in_channels=1,
out_channels=1,
channels=(4, 8, 16, 32),
strides=(2, 2, 2, 2),
).to(device)
# Create loss fn and optimiser
loss_function = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), learning_rate)
epoch_loss_values = list()
t = trange(epoch_num, desc=f"{dict_key_for_training} -- epoch 0, avg loss: inf", leave=True)
for epoch in t:
model.train()
epoch_loss = 0
step = 0
for batch_data in train_loader:
step += 1
inputs = batch_data[dict_key_for_training].to(device)
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_function(outputs, batch_data['orig'].to(device))
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
t.set_description(f"{dict_key_for_training} -- epoch {epoch + 1}, average loss: {epoch_loss:.4f}")
return model, epoch_loss_values
epoch_num = 50
training_types = ['orig', 'gaus', 's&p']
models = []
epoch_losses = []
for training_type in training_types:
model, epoch_loss = train(training_type, epoch_num=epoch_num)
models.append(model)
epoch_losses.append(epoch_loss)
plt.figure()
plt.title("Epoch Average Loss")
plt.xlabel("epoch")
for y, label in zip(epoch_losses, training_types):
x = list(range(1, len(y)+1))
line, = plt.plot(x, y)
line.set_label(label)
plt.legend();
data = get_single_im(test_ds)
recons = []
for model, training_type in zip(models, training_types):
im = data[training_type]
recon = model(im.to(device)).detach().cpu()
recons.append(recon)
plot_ims(
[data['orig'], data['gaus'], data['s&p']] + recons,
titles=['orig', 'Gaussian', 'S&P'] + ["recon w/\n" + x for x in training_types],
shape=(2,len(training_types)))
###Output
_____no_output_____
###Markdown
Cleanup data directoryRemove directory if a temporary was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
Autoencoder network with MedNIST DatasetThis notebook illustrates the use of an autoencoder in MONAI for the purpose of image deblurring/denoising. Learning objectivesThis will go through the steps of:* Loading the data from a remote source* Using a lambda to create a dictionary of images* Using MONAI's in-built AutoEncoder[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/autoencoder_mednist.ipynb) Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[pillow, tqdm]"
###Output
_____no_output_____
###Markdown
1. Imports and configuration
###Code
import logging
import os
import shutil
import sys
import tempfile
import random
import numpy as np
from tqdm import trange
import matplotlib.pyplot as plt
import torch
from skimage.util import random_noise
from monai.apps import download_and_extract
from monai.config import print_config
from monai.data import CacheDataset, DataLoader
from monai.networks.nets import AutoEncoder
from monai.transforms import (
AddChannelD,
Compose,
LoadImageD,
RandFlipD,
RandRotateD,
RandZoomD,
ScaleIntensityD,
EnsureTypeD,
Lambda,
)
from monai.utils import set_determinism
print_config()
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
set_determinism(0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Create a small visualisation function
def plot_ims(ims, shape=None, figsize=(10, 10), titles=None):
shape = (1, len(ims)) if shape is None else shape
plt.subplots(*shape, figsize=figsize)
for i, im in enumerate(ims):
plt.subplot(*shape, i + 1)
im = plt.imread(im) if isinstance(im, str) else torch.squeeze(im)
plt.imshow(im, cmap='gray')
if titles is not None:
plt.title(titles[i])
plt.axis('off')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
2. Get the dataThe MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions),[the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4),and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).The dataset is kindly made available by [Dr. Bradley J. Erickson M.D., Ph.D.](https://www.mayo.edu/research/labs/radiology-informatics/overview) (Department of Radiology, Mayo Clinic)under the Creative Commons [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
resource = "https://drive.google.com/uc?id=1QsnnkvZyJPcbRoV_ArW8SnE1OTuoVbKE"
md5 = "0bc7306e7427e00ad1c5526a6677552d"
compressed_file = os.path.join(root_dir, "MedNIST.tar.gz")
data_dir = os.path.join(root_dir, "MedNIST")
if not os.path.exists(data_dir):
download_and_extract(resource, compressed_file, root_dir, md5)
# scan_type could be AbdomenCT BreastMRI CXR ChestCT Hand HeadCT
scan_type = "Hand"
im_dir = os.path.join(data_dir, scan_type)
all_filenames = [os.path.join(im_dir, filename)
for filename in os.listdir(im_dir)]
random.shuffle(all_filenames)
# Visualise a few of them
rand_images = np.random.choice(all_filenames, 8, replace=False)
plot_ims(rand_images, shape=(2, 4))
# Split into training and testing
test_frac = 0.2
num_test = int(len(all_filenames) * test_frac)
num_train = len(all_filenames) - num_test
train_datadict = [{"im": fname} for fname in all_filenames[:num_train]]
test_datadict = [{"im": fname} for fname in all_filenames[-num_test:]]
print(f"total number of images: {len(all_filenames)}")
print(f"number of images for training: {len(train_datadict)}")
print(f"number of images for testing: {len(test_datadict)}")
###Output
total number of images: 10000
number of images for training: 8000
number of images for testing: 2000
###Markdown
3. Create the image transform chainTo train the autoencoder to de-blur/de-noise our images, we'll want to pass the degraded image into the encoder, but in the loss function, we'll do the comparison with the original, undegraded version. In this sense, the loss function will be minimised when the encode and decode steps manage to remove the degradation.Other than the fact that one version of the image is degraded and the other is not, we want them to be identical, meaning they need to be generated from the same transforms. The easiest way to do this is via dictionary transforms, where at the end, we have a lambda function that will return a dictionary containing the three images โ the original, the Gaussian blurred and the noisy (salt and pepper).
###Code
NoiseLambda = Lambda(lambda d: {
"orig": d["im"],
"gaus": torch.tensor(
random_noise(d["im"], mode='gaussian'), dtype=torch.float32),
"s&p": torch.tensor(random_noise(d["im"], mode='s&p', salt_vs_pepper=0.1)),
})
train_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
RandRotateD(keys=["im"], range_x=np.pi / 12, prob=0.5, keep_size=True),
RandFlipD(keys=["im"], spatial_axis=0, prob=0.5),
RandZoomD(keys=["im"], min_zoom=0.9, max_zoom=1.1, prob=0.5),
EnsureTypeD(keys=["im"]),
NoiseLambda,
]
)
test_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
EnsureTypeD(keys=["im"]),
NoiseLambda,
]
)
###Output
_____no_output_____
###Markdown
Create dataset and dataloaderHold data and present batches during training.
###Code
batch_size = 300
num_workers = 10
train_ds = CacheDataset(train_datadict, train_transforms,
num_workers=num_workers)
train_loader = DataLoader(train_ds, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
test_ds = CacheDataset(test_datadict, test_transforms, num_workers=num_workers)
test_loader = DataLoader(test_ds, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
# Get image original and its degraded versions
def get_single_im(ds):
loader = torch.utils.data.DataLoader(
ds, batch_size=1, num_workers=10, shuffle=True)
itera = iter(loader)
return next(itera)
data = get_single_im(train_ds)
plot_ims([data['orig'], data['gaus'], data['s&p']],
titles=['orig', 'Gaussian', 's&p'])
def train(dict_key_for_training, max_epochs=10, learning_rate=1e-3):
model = AutoEncoder(
spatial_dims=2,
in_channels=1,
out_channels=1,
channels=(4, 8, 16, 32),
strides=(2, 2, 2, 2),
).to(device)
# Create loss fn and optimiser
loss_function = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), learning_rate)
epoch_loss_values = []
t = trange(
max_epochs,
desc=f"{dict_key_for_training} -- epoch 0, avg loss: inf", leave=True)
for epoch in t:
model.train()
epoch_loss = 0
step = 0
for batch_data in train_loader:
step += 1
inputs = batch_data[dict_key_for_training].to(device)
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_function(outputs, batch_data['orig'].to(device))
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
t.set_description(
f"{dict_key_for_training} -- epoch {epoch + 1}"
+ f", average loss: {epoch_loss:.4f}")
return model, epoch_loss_values
max_epochs = 50
training_types = ['orig', 'gaus', 's&p']
models = []
epoch_losses = []
for training_type in training_types:
model, epoch_loss = train(training_type, max_epochs=max_epochs)
models.append(model)
epoch_losses.append(epoch_loss)
plt.figure()
plt.title("Epoch Average Loss")
plt.xlabel("epoch")
for y, label in zip(epoch_losses, training_types):
x = list(range(1, len(y) + 1))
line, = plt.plot(x, y)
line.set_label(label)
plt.legend()
data = get_single_im(test_ds)
recons = []
for model, training_type in zip(models, training_types):
im = data[training_type]
recon = model(im.to(device)).detach().cpu()
recons.append(recon)
plot_ims(
[data['orig'], data['gaus'], data['s&p']] + recons,
titles=['orig', 'Gaussian', 'S&P'] +
["recon w/\n" + x for x in training_types],
shape=(2, len(training_types)))
###Output
_____no_output_____
###Markdown
Cleanup data directoryRemove directory if a temporary was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
Autoencoder network with MedNIST DatasetThis notebook illustrates the use of an autoencoder in MONAI for the purpose of image deblurring/denoising. Learning objectivesThis will go through the steps of:* Loading the data from a remote source* Using a lambda to create a dictionary of images* Using MONAI's in-built AutoEncoder[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/autoencoder_mednist.ipynb) Setup environment
###Code
!python -c "import monai" || pip install -q monai[pillow, tqdm]
###Output
_____no_output_____
###Markdown
1. Imports and configuration
###Code
import logging
import os
import shutil
import sys
import tempfile
import random
import numpy as np
from tqdm import trange
import matplotlib.pyplot as plt
import torch
from skimage.util import random_noise
from monai.apps import download_and_extract
from monai.config import print_config
from monai.data import CacheDataset, DataLoader
from monai.networks.nets import AutoEncoder
from monai.transforms import (
AddChannelD,
Compose,
LoadImageD,
RandFlipD,
RandRotateD,
RandZoomD,
ScaleIntensityD,
ToTensorD,
Lambda,
)
from monai.utils import set_determinism
print_config()
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
set_determinism(0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Create a small visualisation function
def plot_ims(ims, shape=None, figsize=(10, 10), titles=None):
shape = (1, len(ims)) if shape is None else shape
plt.subplots(*shape, figsize=figsize)
for i, im in enumerate(ims):
plt.subplot(*shape, i + 1)
im = plt.imread(im) if isinstance(im, str) else torch.squeeze(im)
plt.imshow(im, cmap='gray')
if titles is not None:
plt.title(titles[i])
plt.axis('off')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
2. Get the dataThe MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions),[the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4),and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).The dataset is kindly made available by [Dr. Bradley J. Erickson M.D., Ph.D.](https://www.mayo.edu/research/labs/radiology-informatics/overview) (Department of Radiology, Mayo Clinic)under the Creative Commons [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
resource = "https://www.dropbox.com/s/5wwskxctvcxiuea/MedNIST.tar.gz?dl=1"
md5 = "0bc7306e7427e00ad1c5526a6677552d"
compressed_file = os.path.join(root_dir, "MedNIST.tar.gz")
data_dir = os.path.join(root_dir, "MedNIST")
if not os.path.exists(data_dir):
download_and_extract(resource, compressed_file, root_dir, md5)
# scan_type could be AbdomenCT BreastMRI CXR ChestCT Hand HeadCT
scan_type = "Hand"
im_dir = os.path.join(data_dir, scan_type)
all_filenames = [os.path.join(im_dir, filename)
for filename in os.listdir(im_dir)]
random.shuffle(all_filenames)
# Visualise a few of them
rand_images = np.random.choice(all_filenames, 8, replace=False)
plot_ims(rand_images, shape=(2, 4))
# Split into training and testing
test_frac = 0.2
num_test = int(len(all_filenames) * test_frac)
num_train = len(all_filenames) - num_test
train_datadict = [{"im": fname} for fname in all_filenames[:num_train]]
test_datadict = [{"im": fname} for fname in all_filenames[-num_test:]]
print(f"total number of images: {len(all_filenames)}")
print(f"number of images for training: {len(train_datadict)}")
print(f"number of images for testing: {len(test_datadict)}")
###Output
total number of images: 10000
number of images for training: 8000
number of images for testing: 2000
###Markdown
3. Create the image transform chainTo train the autoencoder to de-blur/de-noise our images, we'll want to pass the degraded image into the encoder, but in the loss function, we'll do the comparison with the original, undegraded version. In this sense, the loss function will be minimised when the encode and decode steps manage to remove the degradation.Other than the fact that one version of the image is degraded and the other is not, we want them to be identical, meaning they need to be generated from the same transforms. The easiest way to do this is via dictionary transforms, where at the end, we have a lambda function that will return a dictionary containing the three images โ the original, the Gaussian blurred and the noisy (salt and pepper).
###Code
NoiseLambda = Lambda(lambda d: {
"orig": d["im"],
"gaus": torch.tensor(
random_noise(d["im"], mode='gaussian'), dtype=torch.float32),
"s&p": torch.tensor(random_noise(d["im"], mode='s&p', salt_vs_pepper=0.1)),
})
train_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
RandRotateD(keys=["im"], range_x=np.pi / 12, prob=0.5, keep_size=True),
RandFlipD(keys=["im"], spatial_axis=0, prob=0.5),
RandZoomD(keys=["im"], min_zoom=0.9, max_zoom=1.1, prob=0.5),
ToTensorD(keys=["im"]),
NoiseLambda,
]
)
test_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
ToTensorD(keys=["im"]),
NoiseLambda,
]
)
###Output
_____no_output_____
###Markdown
Create dataset and dataloaderHold data and present batches during training.
###Code
batch_size = 300
num_workers = 10
train_ds = CacheDataset(train_datadict, train_transforms,
num_workers=num_workers)
train_loader = DataLoader(train_ds, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
test_ds = CacheDataset(test_datadict, test_transforms, num_workers=num_workers)
test_loader = DataLoader(test_ds, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
# Get image original and its degraded versions
def get_single_im(ds):
loader = torch.utils.data.DataLoader(
ds, batch_size=1, num_workers=10, shuffle=True)
itera = iter(loader)
return next(itera)
data = get_single_im(train_ds)
plot_ims([data['orig'], data['gaus'], data['s&p']],
titles=['orig', 'Gaussian', 's&p'])
def train(dict_key_for_training, max_epochs=10, learning_rate=1e-3):
model = AutoEncoder(
dimensions=2,
in_channels=1,
out_channels=1,
channels=(4, 8, 16, 32),
strides=(2, 2, 2, 2),
).to(device)
# Create loss fn and optimiser
loss_function = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), learning_rate)
epoch_loss_values = []
t = trange(
max_epochs,
desc=f"{dict_key_for_training} -- epoch 0, avg loss: inf", leave=True)
for epoch in t:
model.train()
epoch_loss = 0
step = 0
for batch_data in train_loader:
step += 1
inputs = batch_data[dict_key_for_training].to(device)
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_function(outputs, batch_data['orig'].to(device))
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
t.set_description(
f"{dict_key_for_training} -- epoch {epoch + 1}"
+ f", average loss: {epoch_loss:.4f}")
return model, epoch_loss_values
max_epochs = 50
training_types = ['orig', 'gaus', 's&p']
models = []
epoch_losses = []
for training_type in training_types:
model, epoch_loss = train(training_type, max_epochs=max_epochs)
models.append(model)
epoch_losses.append(epoch_loss)
plt.figure()
plt.title("Epoch Average Loss")
plt.xlabel("epoch")
for y, label in zip(epoch_losses, training_types):
x = list(range(1, len(y) + 1))
line, = plt.plot(x, y)
line.set_label(label)
plt.legend()
data = get_single_im(test_ds)
recons = []
for model, training_type in zip(models, training_types):
im = data[training_type]
recon = model(im.to(device)).detach().cpu()
recons.append(recon)
plot_ims(
[data['orig'], data['gaus'], data['s&p']] + recons,
titles=['orig', 'Gaussian', 'S&P'] +
["recon w/\n" + x for x in training_types],
shape=(2, len(training_types)))
###Output
_____no_output_____
###Markdown
Cleanup data directoryRemove directory if a temporary was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
Autoencoder network with MedNIST DatasetThis notebook illustrates the use of an autoencoder in MONAI for the purpose of image deblurring/denoising. Learning objectivesThis will go through the steps of:* Loading the data from a remote source* Using a lambda to create a dictionary of images* Using MONAI's in-built AutoEncoder[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/autoencoder_mednist.ipynb) Setup environment
###Code
!python -c "import monai" || pip install -q "monai-weekly[pillow, tqdm]"
###Output
_____no_output_____
###Markdown
1. Imports and configuration
###Code
import logging
import os
import shutil
import sys
import tempfile
import random
import numpy as np
from tqdm import trange
import matplotlib.pyplot as plt
import torch
from skimage.util import random_noise
from monai.apps import download_and_extract
from monai.config import print_config
from monai.data import CacheDataset, DataLoader
from monai.networks.nets import AutoEncoder
from monai.transforms import (
AddChannelD,
Compose,
LoadImageD,
RandFlipD,
RandRotateD,
RandZoomD,
ScaleIntensityD,
EnsureTypeD,
Lambda,
)
from monai.utils import set_determinism
print_config()
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
set_determinism(0)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# Create a small visualisation function
def plot_ims(ims, shape=None, figsize=(10, 10), titles=None):
shape = (1, len(ims)) if shape is None else shape
plt.subplots(*shape, figsize=figsize)
for i, im in enumerate(ims):
plt.subplot(*shape, i + 1)
im = plt.imread(im) if isinstance(im, str) else torch.squeeze(im)
plt.imshow(im, cmap='gray')
if titles is not None:
plt.title(titles[i])
plt.axis('off')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
2. Get the dataThe MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions),[the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4),and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).The dataset is kindly made available by [Dr. Bradley J. Erickson M.D., Ph.D.](https://www.mayo.edu/research/labs/radiology-informatics/overview) (Department of Radiology, Mayo Clinic)under the Creative Commons [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
resource = "https://drive.google.com/uc?id=1QsnnkvZyJPcbRoV_ArW8SnE1OTuoVbKE"
md5 = "0bc7306e7427e00ad1c5526a6677552d"
compressed_file = os.path.join(root_dir, "MedNIST.tar.gz")
data_dir = os.path.join(root_dir, "MedNIST")
if not os.path.exists(data_dir):
download_and_extract(resource, compressed_file, root_dir, md5)
# scan_type could be AbdomenCT BreastMRI CXR ChestCT Hand HeadCT
scan_type = "Hand"
im_dir = os.path.join(data_dir, scan_type)
all_filenames = [os.path.join(im_dir, filename)
for filename in os.listdir(im_dir)]
random.shuffle(all_filenames)
# Visualise a few of them
rand_images = np.random.choice(all_filenames, 8, replace=False)
plot_ims(rand_images, shape=(2, 4))
# Split into training and testing
test_frac = 0.2
num_test = int(len(all_filenames) * test_frac)
num_train = len(all_filenames) - num_test
train_datadict = [{"im": fname} for fname in all_filenames[:num_train]]
test_datadict = [{"im": fname} for fname in all_filenames[-num_test:]]
print(f"total number of images: {len(all_filenames)}")
print(f"number of images for training: {len(train_datadict)}")
print(f"number of images for testing: {len(test_datadict)}")
###Output
total number of images: 10000
number of images for training: 8000
number of images for testing: 2000
###Markdown
3. Create the image transform chainTo train the autoencoder to de-blur/de-noise our images, we'll want to pass the degraded image into the encoder, but in the loss function, we'll do the comparison with the original, undegraded version. In this sense, the loss function will be minimised when the encode and decode steps manage to remove the degradation.Other than the fact that one version of the image is degraded and the other is not, we want them to be identical, meaning they need to be generated from the same transforms. The easiest way to do this is via dictionary transforms, where at the end, we have a lambda function that will return a dictionary containing the three images โ the original, the Gaussian blurred and the noisy (salt and pepper).
###Code
NoiseLambda = Lambda(lambda d: {
"orig": d["im"],
"gaus": torch.tensor(
random_noise(d["im"], mode='gaussian'), dtype=torch.float32),
"s&p": torch.tensor(random_noise(d["im"], mode='s&p', salt_vs_pepper=0.1)),
})
train_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
RandRotateD(keys=["im"], range_x=np.pi / 12, prob=0.5, keep_size=True),
RandFlipD(keys=["im"], spatial_axis=0, prob=0.5),
RandZoomD(keys=["im"], min_zoom=0.9, max_zoom=1.1, prob=0.5),
EnsureTypeD(keys=["im"]),
NoiseLambda,
]
)
test_transforms = Compose(
[
LoadImageD(keys=["im"]),
AddChannelD(keys=["im"]),
ScaleIntensityD(keys=["im"]),
EnsureTypeD(keys=["im"]),
NoiseLambda,
]
)
###Output
_____no_output_____
###Markdown
Create dataset and dataloaderHold data and present batches during training.
###Code
batch_size = 300
num_workers = 10
train_ds = CacheDataset(train_datadict, train_transforms,
num_workers=num_workers)
train_loader = DataLoader(train_ds, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
test_ds = CacheDataset(test_datadict, test_transforms, num_workers=num_workers)
test_loader = DataLoader(test_ds, batch_size=batch_size,
shuffle=True, num_workers=num_workers)
# Get image original and its degraded versions
def get_single_im(ds):
loader = torch.utils.data.DataLoader(
ds, batch_size=1, num_workers=10, shuffle=True)
itera = iter(loader)
return next(itera)
data = get_single_im(train_ds)
plot_ims([data['orig'], data['gaus'], data['s&p']],
titles=['orig', 'Gaussian', 's&p'])
def train(dict_key_for_training, max_epochs=10, learning_rate=1e-3):
model = AutoEncoder(
dimensions=2,
in_channels=1,
out_channels=1,
channels=(4, 8, 16, 32),
strides=(2, 2, 2, 2),
).to(device)
# Create loss fn and optimiser
loss_function = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), learning_rate)
epoch_loss_values = []
t = trange(
max_epochs,
desc=f"{dict_key_for_training} -- epoch 0, avg loss: inf", leave=True)
for epoch in t:
model.train()
epoch_loss = 0
step = 0
for batch_data in train_loader:
step += 1
inputs = batch_data[dict_key_for_training].to(device)
optimizer.zero_grad()
outputs = model(inputs)
loss = loss_function(outputs, batch_data['orig'].to(device))
loss.backward()
optimizer.step()
epoch_loss += loss.item()
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
t.set_description(
f"{dict_key_for_training} -- epoch {epoch + 1}"
+ f", average loss: {epoch_loss:.4f}")
return model, epoch_loss_values
max_epochs = 50
training_types = ['orig', 'gaus', 's&p']
models = []
epoch_losses = []
for training_type in training_types:
model, epoch_loss = train(training_type, max_epochs=max_epochs)
models.append(model)
epoch_losses.append(epoch_loss)
plt.figure()
plt.title("Epoch Average Loss")
plt.xlabel("epoch")
for y, label in zip(epoch_losses, training_types):
x = list(range(1, len(y) + 1))
line, = plt.plot(x, y)
line.set_label(label)
plt.legend()
data = get_single_im(test_ds)
recons = []
for model, training_type in zip(models, training_types):
im = data[training_type]
recon = model(im.to(device)).detach().cpu()
recons.append(recon)
plot_ims(
[data['orig'], data['gaus'], data['s&p']] + recons,
titles=['orig', 'Gaussian', 'S&P'] +
["recon w/\n" + x for x in training_types],
shape=(2, len(training_types)))
###Output
_____no_output_____
###Markdown
Cleanup data directoryRemove directory if a temporary was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____ |
0.15/_downloads/plot_introduction.ipynb | ###Markdown
Basic MEG and EEG data processing=================================MNE-Python reimplements most of MNE-C's (the original MNE command line utils)functionality and offers transparent scripting.On top of that it extends MNE-C's functionality considerably(customize events, compute contrasts, group statistics, time-frequencyanalysis, EEG-sensor space analyses, etc.) It uses the same files as standardMNE unix commands: no need to convert your files to a new system or database.What you can do with MNE Python------------------------------- - **Raw data visualization** to visualize recordings, can also use *mne_browse_raw* for extended functionality (see `ch_browse`) - **Epoching**: Define epochs, baseline correction, handle conditions etc. - **Averaging** to get Evoked data - **Compute SSP projectors** to remove ECG and EOG artifacts - **Compute ICA** to remove artifacts or select latent sources. - **Maxwell filtering** to remove environmental noise. - **Boundary Element Modeling**: single and three-layer BEM model creation and solution computation. - **Forward modeling**: BEM computation and mesh creation (see `ch_forward`) - **Linear inverse solvers** (dSPM, sLORETA, MNE, LCMV, DICS) - **Sparse inverse solvers** (L1/L2 mixed norm MxNE, Gamma Map, Time-Frequency MxNE) - **Connectivity estimation** in sensor and source space - **Visualization of sensor and source space data** - **Time-frequency** analysis with Morlet wavelets (induced power, intertrial coherence, phase lock value) also in the source space - **Spectrum estimation** using multi-taper method - **Mixed Source Models** combining cortical and subcortical structures - **Dipole Fitting** - **Decoding** multivariate pattern analyis of M/EEG topographies - **Compute contrasts** between conditions, between sensors, across subjects etc. - **Non-parametric statistics** in time, space and frequency (including cluster-level) - **Scripting** (batch and parallel computing)What you're not supposed to do with MNE Python---------------------------------------------- - **Brain and head surface segmentation** for use with BEM models -- use Freesurfer.NoteThis package is based on the FIF file format from Neuromag. It can read and convert CTF, BTI/4D, KIT and various EEG formats to FIF.Installation of the required materials---------------------------------------See `install_python_and_mne_python`.NoteThe expected location for the MNE-sample data is ``~/mne_data``. If you downloaded data and an example asks you whether to download it again, make sure the data reside in the examples directory and you run the script from its current directory. From IPython e.g. say:: cd examples/preprocessing %run plot_find_ecg_artifacts.pyFrom raw data to evoked data----------------------------Now, launch `ipython`_ (Advanced Python shell) using the QT backend, whichis best supported across systems:: $ ipython --matplotlib=qtFirst, load the mne package:NoteIn IPython, you can press **shift-enter** with a given cell selected to execute it and advance to the next cell:
###Code
import mne
###Output
_____no_output_____
###Markdown
If you'd like to turn information status messages off:
###Code
mne.set_log_level('WARNING')
###Output
_____no_output_____
###Markdown
But it's generally a good idea to leave them on:
###Code
mne.set_log_level('INFO')
###Output
_____no_output_____
###Markdown
You can set the default level by setting the environment variable"MNE_LOGGING_LEVEL", or by having mne-python write preferences to a file:
###Code
mne.set_config('MNE_LOGGING_LEVEL', 'WARNING', set_env=True)
###Output
_____no_output_____
###Markdown
Note that the location of the mne-python preferences file (for easier manualediting) can be found using:
###Code
mne.get_config_path()
###Output
_____no_output_____
###Markdown
By default logging messages print to the console, but look at :func:`mne.set_log_file` to save output to a file, for example:
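###Code
# A minimal illustration (not part of the original tutorial): write status
# messages to a log file; 'mne_output.log' is just a placeholder filename.
mne.set_log_file('mne_output.log')
# Revert to console output so the rest of the tutorial behaves as before
mne.set_log_file(None)
###Output
_____no_output_____
###Markdown
Access raw data^^^^^^^^^^^^^^^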
###Code
from mne.datasets import sample # noqa
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
print(raw_fname)
###Output
_____no_output_____
###Markdown
NoteThe MNE sample dataset should be downloaded automatically but be patient (approx. 2GB)Read data from file:
###Code
raw = mne.io.read_raw_fif(raw_fname)
print(raw)
print(raw.info)
###Output
_____no_output_____
###Markdown
Look at the channels in raw:
###Code
print(raw.ch_names)
###Output
_____no_output_____
###Markdown
Read and plot a segment of raw data
###Code
start, stop = raw.time_as_index([100, 115]) # 100 s to 115 s data segment
data, times = raw[:, start:stop]
print(data.shape)
print(times.shape)
data, times = raw[2:20:3, start:stop] # access underlying data
raw.plot()
###Output
_____no_output_____
###Markdown
Save a segment of 150s of raw data (MEG only):
###Code
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True,
exclude='bads')
raw.save('sample_audvis_meg_raw.fif', tmin=0, tmax=150, picks=picks,
overwrite=True)
###Output
_____no_output_____
###Markdown
Define and read epochs^^^^^^^^^^^^^^^^^^^^^^First extract events:
###Code
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5])
###Output
_____no_output_____
###Markdown
Note that, by default, we use stim_channel='STI 014'. If you have a differentsystem (e.g., a newer system that uses channel 'STI101' by default), you canuse the following to set the default stim channel to use for finding events:
###Code
mne.set_config('MNE_STIM_CHANNEL', 'STI101', set_env=True)
###Output
_____no_output_____
###Markdown
Events are stored as a 2D NumPy array where the first column is the time instant (in samples) and the last one is the event number. This makes them easy to manipulate, for example:
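###Code
# A quick illustration (not in the original tutorial): keep only the
# left-auditory events (trigger code 1) and convert their sample indices
# to seconds using the recording's sampling frequency.
aud_l_events = events[events[:, 2] == 1]
print(aud_l_events[:5])
print(aud_l_events[:5, 0] / raw.info['sfreq'])
###Output
_____no_output_____
###Markdown
Define epochs parameters: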
###Code
event_id = dict(aud_l=1, aud_r=2) # event trigger and conditions
tmin = -0.2 # start of each epoch (200ms before the trigger)
tmax = 0.5 # end of each epoch (500ms after the trigger)
###Output
_____no_output_____
###Markdown
Exclude some channels (original bads + 2 more):
###Code
raw.info['bads'] += ['MEG 2443', 'EEG 053']
###Output
_____no_output_____
###Markdown
The variable raw.info['bads'] is just a python list.Pick the good channels, excluding raw.info['bads']:
###Code
picks = mne.pick_types(raw.info, meg=True, eeg=True, eog=True, stim=False,
exclude='bads')
###Output
_____no_output_____
###Markdown
Alternatively one can restrict to magnetometers or gradiometers with:
###Code
mag_picks = mne.pick_types(raw.info, meg='mag', eog=True, exclude='bads')
grad_picks = mne.pick_types(raw.info, meg='grad', eog=True, exclude='bads')
###Output
_____no_output_____
###Markdown
Define the baseline period:
###Code
baseline = (None, 0) # means from the first instant to t = 0
###Output
_____no_output_____
###Markdown
Define peak-to-peak rejection parameters for gradiometers, magnetometersand EOG:
###Code
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
###Output
_____no_output_____
###Markdown
Read epochs:
###Code
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=baseline, preload=False, reject=reject)
print(epochs)
###Output
_____no_output_____
###Markdown
Get single epochs for one condition:
###Code
epochs_data = epochs['aud_l'].get_data()
print(epochs_data.shape)
###Output
_____no_output_____
###Markdown
epochs_data is a 3D array of dimension (55 epochs, 365 channels, 106 timeinstants).Scipy supports read and write of matlab files. You can save your singletrials with:
###Code
from scipy import io # noqa
io.savemat('epochs_data.mat', dict(epochs_data=epochs_data), oned_as='row')
###Output
_____no_output_____
###Markdown
or if you want to keep all the information about the data you can save yourepochs in a fif file:
###Code
epochs.save('sample-epo.fif')
###Output
_____no_output_____
###Markdown
and read them later with:
###Code
saved_epochs = mne.read_epochs('sample-epo.fif')
###Output
_____no_output_____
###Markdown
Compute evoked responses for auditory responses by averaging and plot it:
###Code
evoked = epochs['aud_l'].average()
print(evoked)
evoked.plot()
###Output
_____no_output_____
###Markdown
.. topic:: Exercise 1. Extract the max value of each epoch
###Code
max_in_each_epoch = [e.max() for e in epochs['aud_l']] # doctest:+ELLIPSIS
print(max_in_each_epoch[:4]) # doctest:+ELLIPSIS
###Output
_____no_output_____
###Markdown
It is also possible to read evoked data stored in a fif file:
###Code
evoked_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked1 = mne.read_evokeds(
evoked_fname, condition='Left Auditory', baseline=(None, 0), proj=True)
###Output
_____no_output_____
###Markdown
Or another one stored in the same file:
###Code
evoked2 = mne.read_evokeds(
evoked_fname, condition='Right Auditory', baseline=(None, 0), proj=True)
###Output
_____no_output_____
###Markdown
Two evoked objects can be contrasted using :func:`mne.combine_evoked`.This function can use ``weights='equal'``, which provides a simpleelement-by-element subtraction (and sets the``mne.Evoked.nave`` attribute properly based on the underlying numberof trials) using either equivalent call:
###Code
contrast = mne.combine_evoked([evoked1, evoked2], weights=[0.5, -0.5])
contrast = mne.combine_evoked([evoked1, -evoked2], weights='equal')
print(contrast)
###Output
_____no_output_____
###Markdown
To do a weighted sum based on the number of averages, which will giveyou what you would have gotten from pooling all trials together in:class:`mne.Epochs` before creating the :class:`mne.Evoked` instance,you can use ``weights='nave'``:
###Code
average = mne.combine_evoked([evoked1, evoked2], weights='nave')
print(average)
###Output
_____no_output_____
###Markdown
Instead of dealing with mismatches in the number of averages, we can usetrial-count equalization before computing a contrast, which can have somebenefits in inverse imaging (note that here ``weights='nave'`` willgive the same result as ``weights='equal'``):
###Code
epochs_eq = epochs.copy().equalize_event_counts(['aud_l', 'aud_r'])[0]
evoked1, evoked2 = epochs_eq['aud_l'].average(), epochs_eq['aud_r'].average()
print(evoked1)
print(evoked2)
contrast = mne.combine_evoked([evoked1, -evoked2], weights='equal')
print(contrast)
###Output
_____no_output_____
###Markdown
Time-Frequency: Induced power and inter trial coherence^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Define parameters:
###Code
import numpy as np # noqa
n_cycles = 2 # number of cycles in Morlet wavelet
freqs = np.arange(7, 30, 3) # frequencies of interest
###Output
_____no_output_____
###Markdown
Compute induced power and phase-locking values and plot gradiometers:
###Code
from mne.time_frequency import tfr_morlet # noqa
power, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles,
return_itc=True, decim=3, n_jobs=1)
power.plot([power.ch_names.index('MEG 1332')])
###Output
_____no_output_____
###Markdown
Inverse modeling: MNE and dSPM on evoked and raw data^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Import the required functions:
###Code
from mne.minimum_norm import apply_inverse, read_inverse_operator # noqa
###Output
_____no_output_____
###Markdown
Read the inverse operator:
###Code
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
inverse_operator = read_inverse_operator(fname_inv)
###Output
_____no_output_____
###Markdown
Define the inverse parameters:
###Code
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM"
###Output
_____no_output_____
###Markdown
Compute the inverse solution:
###Code
stc = apply_inverse(evoked, inverse_operator, lambda2, method)
###Output
_____no_output_____
###Markdown
Save the source time courses to disk:
###Code
stc.save('mne_dSPM_inverse')
###Output
_____no_output_____
###Markdown
Now, let's compute dSPM on a raw file within a label:
###Code
fname_label = data_path + '/MEG/sample/labels/Aud-lh.label'
label = mne.read_label(fname_label)
###Output
_____no_output_____
###Markdown
Compute inverse solution during the first 15s:
###Code
from mne.minimum_norm import apply_inverse_raw # noqa
start, stop = raw.time_as_index([0, 15]) # read the first 15s of data
stc = apply_inverse_raw(raw, inverse_operator, lambda2, method, label,
start, stop)
###Output
_____no_output_____
###Markdown
Save result in stc files:
###Code
stc.save('mne_dSPM_raw_inverse_Aud')
###Output
_____no_output_____
###Markdown
What else can you do?^^^^^^^^^^^^^^^^^^^^^ - detect heart beat QRS component - detect eye blinks and EOG artifacts - compute SSP projections to remove ECG or EOG artifacts - compute Independent Component Analysis (ICA) to remove artifacts or select latent sources - estimate noise covariance matrix from Raw and Epochs - visualize cross-trial response dynamics using epochs images - compute forward solutions - estimate power in the source space - estimate connectivity in sensor and source space - morph stc from one brain to another for group studies - compute mass univariate statistics base on custom contrasts - visualize source estimates - export raw, epochs, and evoked data to other python data analysis libraries e.g. pandas - and many more things ...Want to know more ?^^^^^^^^^^^^^^^^^^^Browse `the examples gallery `_.
###Code
print("Done!")
###Output
_____no_output_____ |
Notes_and_Queries_DB_PART_2.ipynb | ###Markdown
Alternatively, we can view the results as a list of Python dictionaries:
###Code
read_sql(q, db.conn).to_dict(orient="records")[:3]
###Output
_____no_output_____
###Markdown
For each file containing the search text for a particular issue, we also need a routine to extract the page level content. Which is to say, we need to chunk the content based on character indices associated with the first and last characters on each page in the corresponding search text file. This essentially boils down to:- grabbing the page index values;- grabbing the page search text;- chunking the search text according to the page index values. We can apply a page chunker at the document level, paginating the content file, and adding things to the database. The following function will load the page index values for an issue from the database, retrieve the corresponding search text, and chunk it into page level segments:
###Code
%%writefile ia_utils/chunk_page_text.py
from pandas import read_sql
from ia_utils.get_txt_from_file import get_txt_from_file
def chunk_page_text(db, id_val):
"""Chunk text according to page_index values."""
q = f'SELECT * FROM pages_metadata WHERE id="{id_val}"'
page_indexes = read_sql(q, db.conn).to_dict(orient="records")
text = get_txt_from_file(id_val)
for ix in page_indexes:
ix["page_text"] = text[ix["page_char_start"]:ix["page_char_end"]].strip()
return page_indexes
###Output
Writing ia_utils/chunk_page_text.py
###Markdown
Let's see if we've managed to pull out some page text:
###Code
from ia_utils.chunk_page_text import chunk_page_text
# Create a sample index ID
sample_id_val = sample_records[0]["id"]
# Get the chunked text back as part of the metadata record
sample_pages = chunk_page_text(db, sample_id_val)
sample_pages[:3]
###Output
_____no_output_____
###Markdown
Modifying the `pages_metadata` Table in the DatabaseUsing the `chunk_page_text()` function, we can add page content to our pages metadata *in-memory*. But what if we want to add it to the database? The `pages_metadata` table already exists, but does not include a `page_text` column. However, we can modify that table to include just such a column:
###Code
db["pages_metadata"].add_column("page_text", str)
###Output
_____no_output_____
###Markdown
We can also enable a full text search facility over the table. Our interest is primarily in searching over the `page_text`, but if we include a couple of other columns, that can help us key into records in other tables.
###Code
# Enable full text search
# This creates an extra virtual table to support the full text search
db["pages_metadata_fts"].drop(ignore=True)
db["pages_metadata"].enable_fts(["id", "page_idx", "page_text"], create_triggers=True, tokenize="porter")
###Output
_____no_output_____
###Markdown
We can now update the records in the `pages_metadata` table so they include the `page_text`:
###Code
q = f'SELECT DISTINCT(id) FROM pages_metadata;'
id_vals = read_sql(q, db.conn).to_dict(orient="records")
for sample_id_val in id_vals:
updated_pages = chunk_page_text(db, sample_id_val["id"])
db["pages_metadata"].upsert_all(updated_pages, pk=("id", "page_idx"))
###Output
_____no_output_____
###Markdown
We should now be able to search at the page level:
###Code
search_term = "customs"
q = f"""
SELECT * FROM pages_metadata_fts
WHERE pages_metadata_fts MATCH {db.quote(search_term)};
"""
read_sql(q, db.conn)
###Output
_____no_output_____
###Markdown
We can then bring in additional columns from the original `pages_metadata` table:
###Code
search_term = "customs"
q = f"""
SELECT page_num, page_leaf_num, pages_metadata_fts.* FROM pages_metadata_fts, pages_metadata
WHERE pages_metadata_fts MATCH {db.quote(search_term)}
AND pages_metadata.id = pages_metadata_fts.id
AND pages_metadata.page_idx = pages_metadata_fts.page_idx;
"""
read_sql(q, db.conn)
###Output
_____no_output_____
###Markdown
Generating a Full Text Searchable Database for *Notes & Queries*Whilst the PDF documents corresponding to each issue of *Notes and Queries* are quite large files, the searchable, OCR retrieved text documents are much smaller and can be easily added to a full-text searchable database.We can create a simple, file based SQLite database that will provide a full-text search facility over each issue of *Notes & Queries*.Recall that we previously downloaded the metadata for issues of *Notes & Queries* held by the Internet Archive to a CSV file.We can load that metadata in from the CSV file using the function we created and put into a simple Python package directory previously:
###Code
from ia_utils.open_metadata_records import open_metadata_records
data_records = open_metadata_records()
data_records[:3]
###Output
_____no_output_____
###Markdown
We also saved the data to a simple local database, so we could alternatively retrieve the data from there.First open up a connection to the database:
###Code
from sqlite_utils import Database
db_name = "nq_demo.db"
db = Database(db_name)
###Output
_____no_output_____
###Markdown
And then make a simple query onto it:
###Code
from pandas import read_sql
q = "SELECT * FROM metadata;"
data_records_from_db = read_sql(q, db.conn)
data_records_from_db.head(3)
###Output
_____no_output_____
###Markdown
Adding an `issues` Table to the DatabaseWe already have a metadata table in the database, but we can also add more tables to it.For at least the 19th century issues of *Notes & Queries*, a file is available for each issue that contains searchable text extracted from that issue. If we download those text files and add them to our own database, then we can create our own full text searchable database over the content of those issues.Let's create a simple table structure for the searchable text extracted from each issue of *Notes & Queries* containing the content and a unique identifier for each record.We can relate this table to the metadata table through a *foreign key*. What this means is that for each entry in the issues table, we also expect to find an entry in the metadata table under the same identifier value.We will also create a full text search table associated with the table:
###Code
%%writefile ia_utils/create_db_table_issues.py
def create_db_table_issues(db, drop=True):
"""Create an issues database table and an associated full-text search table."""
table_name = "issues"
# If required, drop any previously defined tables of the same name
if drop:
db[table_name].drop(ignore=True)
db[f"{table_name}_fts"].drop(ignore=True)
elif db[table_name].exists():
print(f"Table {table_name} exists...")
return
# Create the table structure for the simple issues table
db[table_name].create({
"id": str,
"content": str
}, pk=("id"), foreign_keys=[ ("id", "metadata", "id"), # local-table-id, foreign-table, foreign-table-id)
])
# Enable full text search
# This creates an extra virtual table (issues_fts) to support the full text search
# A stemmer is applied to support the efficacy of the full-text searching
db[table_name].enable_fts(["id", "content"],
create_triggers=True, tokenize="porter")
###Output
Overwriting ia_utils/create_db_table_issues.py
###Markdown
Load that function in from the local package and call it:
###Code
from ia_utils.create_db_table_issues import create_db_table_issues
create_db_table_issues(db)
###Output
_____no_output_____
###Markdown
To add the content data to the database, we need to download the searchable text associated with each record from the Internet Archive.Before we add the data in bulk, let's do a dummy run of the steps we need to follow.First, we need to download the full text file from the Internet Archive, given a record identifier. We'll use the first data record to provide us with the identifier:
###Code
data_records[0]
###Output
_____no_output_____
###Markdown
The download step takes the identifier and requests the `OCR Search Text` file.We will download the Internet Archive files to the directory we specified previously.
###Code
from pathlib import Path
# Create download dir file path, as before
dirname = "ia-downloads" # This is a default
p = Path(dirname)
###Output
_____no_output_____
###Markdown
And now download the text file for the sample record:
###Code
# Import the necessary packages
from internetarchive import download
download(data_records[0]['id'], destdir=p, silent = True,
formats=["OCR Search Text"])
###Output
_____no_output_____
###Markdown
Recall that the files are downloaded into a directory with a name that corresponds to the record identifier. The data files are actually downloaded as compressed archive files, as we can see if we review the download directory we saved our test download to:
###Code
import os
os.listdir( p / data_records[0]['id'])
###Output
_____no_output_____
###Markdown
We now need to uncompress the `.txt.gz` file to access the fully formed text file. The `gzip` package provides us with the utility we need to access the contents of the archive file. In fact, we don't need to uncompress the file into the directory at all; we can open it and extract its contents "in memory".
###Code
%%writefile ia_utils/get_txt_from_file.py
from pathlib import Path
import gzip
# Create a simple function to make it even easier to extract the full text content
def get_txt_from_file(id_val, dirname="ia-downloads", typ="searchtext"):
"""Retrieve text from downloaded text file."""
if typ=="searchtext":
p_ = Path(dirname) / id_val / f'{id_val}_hocr_searchtext.txt.gz'
f = gzip.open(p_,'rb')
content = f.read().decode('utf-8')
elif typ=="djvutxt":
p_ = Path(dirname) / id_val / f'{id_val}_djvu.txt'
content = p_.read_text()
else:
content = ""
return content
###Output
Overwriting ia_utils/get_txt_from_file.py
###Markdown
Let's see how it works, previewing the first 200 characters of the unarchived text file:
###Code
from ia_utils.get_txt_from_file import get_txt_from_file
get_txt_from_file(data_records[0]['id'])[:200]
###Output
_____no_output_____
###Markdown
If we inspect the text in more detail, we see there are various things in it that we might want to simplify. For example, quotation marks appear in various guises, such as opening and closing quotes of different flavours. We *could* normalise these to a simpler form (for example, "straight" quotes `'` and `"`). However, *if* opening and closing quotes are reliably recognised, they give us a simple way of matching text contained *within* the quotes, as the rough sketch in the next cell suggests. So for now, let's leave the originally detected quotes in place.
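###Code
# A rough sketch, not part of the original recipe: if the typographic opening
# and closing quotes (“ and ”) survive the OCR step, a simple regular
# expression is enough to pull out the passages they enclose.
import re

sample_text = get_txt_from_file(data_records[0]['id'])
quoted_passages = re.findall(r'“([^”]{3,200})”', sample_text)
quoted_passages[:5]
###Output
_____no_output_____
###Markdown
Having got a method in place, let's now download the contents of the non-index issues for 1849.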
###Code
q = """
SELECT id, title
FROM metadata
WHERE is_index = 0
AND strftime('%Y', datetime) = '1849'
"""
results = read_sql(q, db.conn)
results
###Output
_____no_output_____
###Markdown
The data is returned from the `read_sql()` function as a *pandas* dataframe. The *pandas* package provides a very powerful set of tools for working with tabular data, including being able to iterate over the rows of the table and apply a function to each one. If we define a function to download the corresponding search text file from the Internet Archive and extract the text from the downloaded archive file, we can apply that function to a particular column value taken from each row of the dataframe and add the returned content to a new column in the same dataframe. Here's an example function:
###Code
%%writefile ia_utils/download_and_extract_text.py
from internetarchive import download
from ia_utils.get_txt_from_file import get_txt_from_file
def download_and_extract_text(id_val, p="ia-downloads", typ="searchtext", verbose=False):
"""Download search text from Internet Archive, extract the text and return it."""
if verbose:
print(f"Downloading {id_val} issue text")
if typ=="searchtext":
download(id_val, destdir=p, silent = True,
formats=["OCR Search Text"])
elif typ=="djvutxt":
download(id_val, destdir=p, silent = True,
formats=["DjVuTXT"])
else:
return ''
text = get_txt_from_file(id_val, typ=typ)
return text
###Output
Overwriting ia_utils/download_and_extract_text.py
###Markdown
The Python *pandas* package natively provides an `apply()` function. However, the `tqdm` progress bar package also provides an "apply with progress bar" function, `.progress_apply()`, if we enable the appropriate extensions:
###Code
# Download the tqdm progress bar tools
from tqdm.notebook import tqdm
# And enable the pandas extensions
tqdm.pandas()
###Output
_____no_output_____
###Markdown
Let's apply our `download_and_extract_text()` function to each row of our records table for 1849, keeping track of progress with a progress bar:
###Code
from ia_utils.download_and_extract_text import download_and_extract_text
results['content'] = results["id"].progress_apply(download_and_extract_text)
results
###Output
_____no_output_____
###Markdown
We can now add that data table directly to our database using the *pandas* `.to_sql()` method:
###Code
# Add the issue database table
table_name = "issues"
results[["id", "content"]].to_sql(table_name, db.conn, index=False, if_exists="append")
###Output
_____no_output_____
###Markdown
*Note that this recipe does not represent a very efficient way of handling things: the pandas dataframe is held in memory, so as we add more rows, the memory requirements for storing the data increase. A more efficient approach might be to create a function that retrieves each file, adds its contents to the database, and then perhaps even deletes the downloaded file, rather than adding the content to the in-memory dataframe.*
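As a rough illustration only, a more streaming-style ingest might look something like the following sketch; the function name and the optional clean-up step are our own invention here, and we don't run it against the database:
###Code
# A hypothetical, more memory-friendly ingest loop (a sketch; not run here)
import shutil
from pathlib import Path

from ia_utils.download_and_extract_text import download_and_extract_text

def ingest_issue_text(db, id_vals, dirname="ia-downloads", cleanup=False):
    """Download each issue in turn, add its text to the issues table, then optionally delete the download."""
    for id_val in id_vals:
        text = download_and_extract_text(id_val, p=dirname)
        # Insert a single row rather than building up a large in-memory dataframe
        db["issues"].insert({"id": id_val, "content": text})
        if cleanup:
            # Remove the downloaded files for this record to reclaim disk space
            shutil.rmtree(Path(dirname) / id_val, ignore_errors=True)
###Output
_____no_output_____
###Markdown
Let's see if we can query it, first at the basic table level: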
###Code
q = """
SELECT id, content
FROM issues
WHERE LOWER(content) LIKE "%customs%"
"""
read_sql(q, db.conn)
###Output
_____no_output_____
###Markdown
This is not overly helpful, perhaps. We can do better with the full text search, which will also allow us to return a snippet around the first, or highest ranked, location of any matched search terms:
###Code
search_term = "customs"
q = f"""
SELECT id, snippet(issues_fts, -1, "__", "__", "...", 10) as clip
FROM issues_fts WHERE issues_fts MATCH {db.quote(search_term)} ;
"""
read_sql(q, db.conn)
###Output
_____no_output_____
###Markdown
This is okay as far as it goes: we can identify *issues* of *Notes and Queries* that contain a particular search term, retrieve the whole document, and even display a concordance for the first (or highest ranking) occurrence of the search term(s) to provide context for the response. But it's not ideal. For example, to display a concordance of each term in the full text document that matches our search term, we need to generate our own concordance, which may be difficult where matches are inexact (for example, if the match relies on stemming). There are also many pages in each issue of *Notes and Queries*, and it would be useful if we could get the result at a better level of granularity. The `ouseful_sqlite_search_utils` package includes various functions that allow us to tunnel into a text document and retrieve more granular results. The tools aren't necessarily the *fastest* utilities to run, particularly on large databases, but they get there eventually. One particular utility will split a document into sentences and return each sentence on a separate row of a newly created virtual table. We can then search within these values for our search term, although we are limited to running *exact match* queries, rather than the more forgiving full text search queries:
###Code
from ouseful_sqlite_search_utils import snippets
snippets.register_snippets(db.conn)
q = """
SELECT * FROM
(SELECT id, sentence
FROM issues, get_sentences(1, NULL, issues.content)
WHERE issues.id = "sim_notes-and-queries_1849-11-10_1_2")
WHERE sentence LIKE "% custom %"
"""
# Show the full result record in each case
read_sql(q, db.conn).to_dict(orient="records")
###Output
Couldn't import dot_parser, loading of dot files will not be possible.
###Markdown
Extracting Pages
To make for more efficient granular searching, it would be useful if our content was stored in a more granular way. Ideally, we would extract items at the "article" level, but there is no simple way of chunking the document at this level. We could process it to extract items at the sentence or paragraph level and add those to their own table, but that might be *too* granular. However, by inspection of the files available for each issue, there appears to be another level of organisation that we can access: the *page* level. *Page* metadata is provided in the form of two files:
- `OCR Page Index`: downloaded as a compressed `.gz` file; the expanded file contains a list of lists, with one inner list of four integers per page. The first and second integers in each inner list are the character offsets in the search text file of the first and last characters on the corresponding page;
- `Page Numbers JSON`: downloaded as an uncompressed JSON file; it contains a JSON object with a `"pages"` attribute that returns a list of records. Each record has four attributes: `"leafNum": int` (starting with index value 1), `"ocr_value": list` (a list of candidate OCR values), `"pageNumber": str` and `"confidence": float`. A top-level `"confidence"` attribute gives an indication of how likely it is that page numbers are available across the whole document.

We also need the `OCR Search Text` file. Let's get a complete set of the necessary files for a few sample records:
###Code
%%writefile ia_utils/download_ia_records_by_format.py
# Download the tqdm progress bar tools
from tqdm.notebook import tqdm
from pathlib import Path
from internetarchive import download
def download_ia_records_by_format(records, path=".", formats=None):
"""Download records from Internet Archive given ID and desired format(s)"""
formats = formats if formats else ["OCR Search Text", "OCR Page Index", "Page Numbers JSON"]
for record in tqdm(records):
_id = record['id']
download(_id, destdir=path,
formats=formats,
silent = True)
from ia_utils.download_ia_records_by_format import download_ia_records_by_format
# Grab page counts and page structure files
sample_records = data_records[:5]
download_ia_records_by_format(sample_records, p)
###Output
_____no_output_____
###Markdown
We now need to figure out how to open and parse the page index and page numbers files, and check that the lists are the correct lengths. The Python `zip` function lets us "zip" together elements from different, parallel lists. We can also insert the same item, repeatedly, into each row by using the `itertools.repeat()` function to generate as many repetitions of that item as are required:
###Code
import itertools
###Output
_____no_output_____
###Markdown
Example of using `itertools.repeat()`:
###Code
# Example of list
list(zip(itertools.repeat("a"), [1, 2], ["x","y"]))
###Output
_____no_output_____
###Markdown
We can now use this approach to create a zipped combination of the record ID values, page numbers and page character indexes.
###Code
import gzip
import json
import itertools
#for record in tqdm(sample_records):
record = sample_records[0]
id_val = record['id']
p_ = Path(dirname) / id_val
# Get the page numbers
with open(p_ / f'{id_val}_page_numbers.json', 'r') as f:
page_numbers = json.load(f)
# Get the page character indexes
with gzip.open(p_ / f'{id_val}_hocr_pageindex.json.gz', 'rb') as g:
# The last element seems to be redundant
page_indexes = json.loads(g.read().decode('utf-8'))[:-1]
# Optionally test that the record counts are the same for page numbers and character indexes
#assert len(page_indexes) == len(page_numbers['pages'])
# Preview the result
list(zip(itertools.repeat(id_val), page_numbers['pages'], page_indexes))[:5]
###Output
_____no_output_____
###Markdown
We could add this page-related data directly to the pages table, or we could create another simple database table to store it. Here's what a separate table might look like:
###Code
%%writefile ia_utils/create_db_table_pages_metadata.py
def create_db_table_pages_metadata(db, drop=True):
if drop:
db["pages_metadata"].drop(ignore=True)
db["pages_metadata"].create({
"id": str,
"page_idx": int, # This is just a count as we work through the pages
"page_char_start": int,
"page_char_end": int,
"page_leaf_num": int,
"page_num": str, # This is to allow for things like Roman numerals
# Should we perhaps try to cast an int for the page number
# and have a page_num_str for the original ?
"page_num_conf": float # A confidence value relating to the page number detection
}, pk=("id", "page_idx")) # compound foreign keys not currently available via sqlite_utils?
###Output
Overwriting ia_utils/create_db_table_pages_metadata.py
###Markdown
Import that function from the local package and run it:
###Code
from ia_utils.create_db_table_pages_metadata import create_db_table_pages_metadata
create_db_table_pages_metadata(db)
###Output
_____no_output_____
###Markdown
The following function "zips" together the contents of the page index and page numbers files. Each "line item" is a rather unwieldy mixmatch of elements, but we'll deal with those in a moment:
###Code
%%writefile ia_utils/raw_pages_metadata.py
import itertools
import json
import gzip
from pathlib import Path
def raw_pages_metadata(id_val, dirname="ia-downloads"):
"""Get page metadata."""
p_ = Path(dirname) / id_val
# Get the page numbers
with open(p_ / f'{id_val}_page_numbers.json', 'r') as f:
# We can ignore the last value
page_numbers = json.load(f)
# Get the page character indexes
with gzip.open(p_ / f'{id_val}_hocr_pageindex.json.gz', 'rb') as g:
# The last element seems to be redundant
page_indexes = json.loads(g.read().decode('utf-8'))[:-1]
# Add the id and an index count
return zip(itertools.repeat(id_val), range(len(page_indexes)),
page_numbers['pages'], page_indexes)
###Output
Overwriting ia_utils/raw_pages_metadata.py
###Markdown
For each line item in the zipped datastructure, we can parse out values into a more readable data object:
###Code
%%writefile ia_utils/parse_page_metadata.py
def parse_page_metadata(item):
"""Parse out page attributes from the raw page metadata construct."""
_id = item[0]
page_idx = item[1]
_page_nums = item[2]
ix = item[3]
obj = {'id': _id,
'page_idx': page_idx, # Maintain our own count, just in case; should be page_leaf_num-1
'page_char_start': ix[0],
'page_char_end': ix[1],
'page_leaf_num': _page_nums['leafNum'],
'page_num': _page_nums['pageNumber'],
'page_num_conf':_page_nums['confidence']
}
return obj
###Output
Overwriting ia_utils/parse_page_metadata.py
###Markdown
Let's see how that looks:
###Code
from ia_utils.raw_pages_metadata import raw_pages_metadata
from ia_utils.parse_page_metadata import parse_page_metadata
sample_pages_metadata_item = raw_pages_metadata(id_val)
for pmi in sample_pages_metadata_item:
print(parse_page_metadata(pmi))
break
###Output
{'id': 'sim_notes-and-queries_1849-11-03_1_1', 'page_idx': 0, 'page_char_start': 0, 'page_char_end': 301, 'page_leaf_num': 1, 'page_num': '', 'page_num_conf': 0}
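###Markdown
Before adding these records to the database, we can sanity check the character offsets by using them to slice a page's text directly out of the full issue text we downloaded earlier. (This slicing step is our own quick check rather than part of the original recipe.)
###Code
# Quick check: use a page's character offsets to slice its text out of the issue text
from ia_utils.get_txt_from_file import get_txt_from_file

page_records = [parse_page_metadata(pmi) for pmi in raw_pages_metadata(id_val)]
issue_text = get_txt_from_file(id_val)

# Preview the start of the second page's text
sample_page = page_records[1]
issue_text[sample_page["page_char_start"]:sample_page["page_char_end"]][:200]
###Output
_____no_output_____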
###Markdown
We can now trivially add the page metadata to the `pages_metadata` database table. Let's try it with our sample:
###Code
%%writefile ia_utils/add_page_metadata_to_db.py
from ia_utils.parse_page_metadata import parse_page_metadata
from ia_utils.raw_pages_metadata import raw_pages_metadata
def add_page_metadata_to_db(db, records, dirname="ia-downloads", verbose=False):
"""Add page metadata to database."""
for record in records:
id_val = record["id"]
if verbose:
print(id_val)
records = [parse_page_metadata(pmi) for pmi in raw_pages_metadata(id_val, dirname)]
# Add records to the database
db["pages_metadata"].insert_all(records)
###Output
Overwriting ia_utils/add_page_metadata_to_db.py
###Markdown
And run it with the page metadata records selected via each record's `id_val`:
###Code
from ia_utils.add_page_metadata_to_db import add_page_metadata_to_db
# Clear the db table
db["pages_metadata"].delete_where()
# Add the metadata to the table
add_page_metadata_to_db(db, sample_records)
###Output
_____no_output_____
###Markdown
Let's see how that looks:
###Code
from pandas import read_sql
q = "SELECT * FROM pages_metadata LIMIT 5"
read_sql(q, db.conn)
###Output
_____no_output_____ |
python/tkinter/05_Basic Widgets.ipynb | ###Markdown
Basic Widgets This chapter introduces you to the basic Tk widgets that you'll find in just about any user interface: frames, labels, buttons, checkbuttons, radiobuttons, entries and comboboxes. By the end, you'll know how to use all the widgets you'd ever need for a typical fill-in form type of user interface.This chapter (and those following that discuss more widgets) are meant to be read in order. Because there is so much commonality between many widgets, we'll introduce certain concepts in an earlier widget that will also apply to a later one. Rather than going over the same ground multiple times, we'll just refer back to when the concept was first introduced.At the same time, each widget will also refer to the widget roundup page for the specific widget, as well as the reference manual page, so feel free to jump around a bit too. ----------- Frame A frame is a widget that displays just as a simple rectangle. Frames are primarily used as a container for other widgets, which are under the control of a geometry manager such as grid. Frame Widgets Frames are created using the ttk.Frame function:```pythonframe = ttk.Frame(parent)```Frames can take several different configuration options which can alter how they are displayed. Requested Size Like any other widget, after creation it is added to the user interface via a (parent) geometry manager. Normally, the size that the frame will request from the geometry manager will be determined by the size and layout of any widgets that are contained in it (which are under the control of the geometry manager that manages the contents of the frame itself).If for some reason you want an empty frame that does not contain other widgets, you should instead explicitly set the size that the frame will request from its parent geometry manager using the "width" and/or "height" configuration options (otherwise you'll end up with a very small frame indeed).Normally, distances such as width and height are specified just as a number of pixels on the screen. You can also specify them via one of a number of suffixes. For example, "350" means 350 pixels, "350c" means 350 centimeters, "350i" means 350 inches, and "350p" means 350 printer's points (1/72 inch). Padding The "padding" configuration option is used to request extra space around the inside of the widget; this way if you're putting other widgets inside the frame, there will be a bit of a margin all the way around. A single number specifies the same padding all the way around, a list of two numbers lets you specify the horizontal then the vertical padding, and a list of four numbers lets you specify the left, top, right and bottom padding, in that order.```pythonframe['padding'] = (5,10)``` Borders You can display a border around the frame widget; you see this a lot where you might have a part of the user interface looking "sunken" or "raised" in relation to its surroundings. To do this, you need to set the "borderwidth" configuration option (which defaults to 0, so no border), as well as the "relief" option, which specifies the visual appearance of the border: "flat" (default), "raised", "sunken", "solid", "ridge", or "groove".```pythonframe['borderwidth'] = 2frame['relief'] = 'sunken'``` Changing Styles There is also a "style" configuration option, which is common to all of the themed widgets, which can let you control just about any aspect of their appearance or behavior. 
This is a bit more advanced, so we won't go into it right now.Styles mark a sharp departure from the way most aspects of a widget's visual appearance are changed in the "classic" Tk widgets. While in classic Tk you could provide a wide range of options to finely control every aspect of behavior (foreground color, background color, font, highlight thickness, selected foreground color, padding, etc.), in the new themed widgets these changes are done by changing styles, not adding options to each widget. As such, many of the options you may be familiar with in certain widgets are not present in their themed version. Given that overuse of such options was a key factor undermining the appearance of Tk applications, especially when moved across platforms, transitioning to themed widgets provides an opportune time to review and refine if and how such appearance changes are made.
###Code
from tkinter import *
from tkinter import ttk
root = Tk()
root.title("Frame")
# STANDARD OPTIONS
# class, cursor, style, takefocus
# WIDGET-SPECIFIC OPTIONS
# borderwidth, relief, padding, width, height
frame = ttk.Frame(root)
frame['width'] = 200 # frame width
frame['height'] = 100 # frame height
frame['padding'] = (3,3) # padding
frame['borderwidth'] = 5 # border width
frame['relief'] = 'sunken' # border relief style
frame.grid(column=0, row=0, sticky=(N, W, E, S))
frame2 = ttk.Frame(root)
frame2['width'] = 200
frame2['height'] = 100
frame2['padding'] = (5,5)
frame2['borderwidth'] = 1
frame2['relief'] = 'sunken'
frame2.grid(column=1, row=1, sticky=(N, W, E, S))
root.mainloop()
###Output
_____no_output_____
###Markdown
------------ Label A label is a widget that displays text or images, typically that the user will just view but not otherwise interact with. Labels are used for such things as identifying controls or other parts of the user interface, providing textual feedback or results, etc.  Labels are created using the ttk.Label function, and typically their contents are set up at the same time:```pythonlabel = ttk.Label(parent, text='Full name:')```Like frames, labels can take several different configuration options which can alter how they are displayed. Displaying Text The "text" configuration option shown above when creating the label is the most commonly used, particularly when the label is purely decorative or explanatory. You can of course change this option at any time, not only when first creating the label.You can also have the widget monitor a variable in your script, so that anytime the variable changes, the label will display the new value of the variable; this is done with the "textvariable" option: ```pythonresultsContents = StringVar()label['textvariable'] = resultsContentsresultsContents.set('New value to display')```Tkinter only allows you to attach to an instance of the "StringVar" class, which contains all the logic to watch for changes, communicate them back and forth between the variable and Tk, and so on. You need to read or write the current value using the "get" and "set" methods. Displaying Images You can also display an image in a label instead of text; if you just want an image sitting in your interface, this is normally the way to do it. We'll go into images in more detail in a later chapter, but for now, let's assume you want to display a GIF image that is sitting in a file on disk. This is a two-step process, first creating an image "object", and then telling the label to use that object via its "image" configuration option: ```pythonimage = PhotoImage(file='myimage.gif')label['image'] = image```You can use both an image and text, as you'll often see in toolbar buttons, via the "compound" configuration option. The default value is "none", meaning display only the image if present, otherwise the text specified by the "text" or "textvariable" options. Other options are "text" (text only), "image" (image only), "center" (text in center of image), "top" (image above text), "left", "bottom", and "right". Layout While the overall layout of the label (i.e. where it is positioned within the user interface, and how large it is) is determined by the geometry manager, several options can help you control how the label will be displayed within the box the geometry manager gives it.If the box given to the label is larger than the label requires for its contents, you can use the "anchor" option to specify what edge or corner the label should be attached to, which would leave any empty space in the opposite edge or corner. Possible values are specified as compass directions: "n" (north, or top edge), "ne", (north-east, or top right corner), "e", "se", "s", "sw", "w", "nw" or "center".Labels can be used to display more than one line of text. This can be done by embedding carriage returns ("\n") in the "text"/"textvariable" string. 
You can also let the label wrap the string into multiple lines that are no longer than a given length (with the size specified as pixels, centimeters, etc.), by using the "wraplength" option.Multi-line labels are a replacement for the older "message" widgets in classic Tk.You can also control how the text is justified, by using the "justify" option, which can have the values "left", "center" or "right". If you only have a single line of text, this is pretty much the same as just using the "anchor" option, but is more useful with multiple lines of text. Fonts, Colors and More Like with frames, normally you don't want to touch things like the font and colors directly, but if you need to change them (e.g. to create a special type of label), this would be done via creating a new style, which is then used by the widget with the "style" option.Unlike most themed widgets, the label widget also provides explicit widget-specific options as an alternative; again, you'd use this only in special one-off cases, when using a style didn't necessarily make sense.You can specify the font used to display the label's text using the "font" configuration option. While we'll go into fonts in more detail in a later chapter, here are the names of some predefined fonts you can use:```TkDefaultFont The default for all GUI items not otherwise specified.TkTextFont Used for entry widgets, listboxes, etc.TkFixedFont A standard fixed-width font.TkMenuFont The font used for menu items.TkHeadingFont A font for column headings in lists and tables.TkCaptionFont A font for window and dialog caption bars.TkSmallCaptionFont A smaller caption font for subwindows or tool dialogs.TkIconFont A font for icon captions.TkTooltipFont A font for tooltips.```Because the choice of fonts is so platform specific, be careful of hardcoding them (font families, sizes, etc.); this is something else you'll see in a lot of older Tk programs that can make them look ugly.The foreground (text) and background color can also be changed via the "foreground" and "background" options. Colors are covered in detail later, but you can specify these as either color names (e.g. "red") or hex RGB codes (e.g. "ff340a").Labels also accept the "relief" option that was discussed for frames.
###Code
from tkinter import *
from tkinter import ttk
root = Tk()
root.title("Frame")
frame = ttk.Frame(root)
frame['width'] = 200 # frame width
frame['height'] = 100 # frame height
frame['padding'] = (30,30) # padding
frame['borderwidth'] = 5 # border width
frame['relief'] = 'sunken' # border relief style
frame.grid(column=0, row=0, sticky=(N, W, E, S))
label = ttk.Label(frame, text='Full name:')
label['anchor'] = 'e'
label['compound'] = 'bottom'
resultsContents = StringVar()
label['textvariable'] = resultsContents
resultsContents.set('New value to display')
image = PhotoImage(file='images/hello_world.png')
label['image'] = image
label.grid(column=1, row=2, sticky=E)
root.mainloop()
###Output
_____no_output_____
###Markdown
Button A button, unlike a frame or label, is very much designed for the user to interact with, and in particular, press to perform some action. Like labels, they can display text or images, but also have a whole range of new options used to control their behavior.  Buttons are created using the ttk.Button function, and typically their contents and command callback are set up at the same time:```pythonbutton = ttk.Button(parent, text='Okay', command=submitForm)```As with other widgets, buttons can take several different configuration options which can alter their appearance and behavior. Text or Image Buttons take the same "text", "textvariable" (rarely used), "image" and "compound" configuration options as labels, which control whether the button displays text and/or an image.Buttons have a "default" option, which tells Tk that the button is the default button in the user interface (i.e. the one that will be invoked if the user hits Enter or Return). Some platforms and styles will draw this with a different border or highlight. Set the option to "active" to specify this is a default button; the regular state is "normal." Note that setting this option doesn't create an event binding that will make the Return or Enter key activate the button; that you have to do yourself. The Command Callback The "command" option is used to provide an interface between the button's action and your application. When the user clicks the button, the script provided by the option is evaluated by the interpreter.You can also ask the button to invoke the command callback from your application. This is useful so that you don't need to repeat the command to be invoked several times in your program; so you know if you change the option on the button, you don't need to change it elsewhere too.```pythonbutton.invoke()``` Button State Buttons and many other widgets can be in a normal state where they can be pressed, but can also be put into a disabled state, where the button is greyed out and cannot be pressed. This is done when the button's command is not applicable at a given point in time.All themed widgets carry with them an internal state, which is a series of binary flags. You can set or clear these different flags, as well as check the current setting using the "state" and "instate" methods. Buttons make use of the "disabled" flag to control whether or not the user can press the button. For example: ```pythonbutton.state(['disabled']) set the disabled flag, disabling the buttonbutton.state(['!disabled']) clear the disabled flagbutton.instate(['disabled']) return true if the button is disabled, else falsebutton.instate(['!disabled']) return true if the button is not disabled, else falsebutton.instate(['!disabled'], cmd) execute 'cmd' if the button is not disabled```Note that these commands accept an array of state flags as their argument. Using "state"/"instate" replaces the older "state" configuration option (which took the values "normal" or "disabled"). This configuration option is actually still available in Tk 8.5, but "write-only", which means that changing the option calls the appropriate "state" command, but other changes made using the "state" command are not reflected in the option. This is only for compatibility reasons; you should change your code to use the new state vector.The full list of state flags available to themed widgets is: "active", "disabled", "focus", "pressed", "selected", "background", "readonly", "alternate", and "invalid". 
These are described in the themed widget reference; not all states are meaningful for all widgets. It's also possible to get fancy in the "state" and "instate" methods and specify multiple state flags at the same time.
###Code
from tkinter import *
from tkinter import ttk
import random
root = Tk()
root.title("Frame")
frame = ttk.Frame(root)
frame['width'] = 200 # frame width
frame['height'] = 100 # frame height
frame['padding'] = (30,30) # padding
frame['borderwidth'] = 5 # border width
frame['relief'] = 'sunken' # border relief style
frame.grid(column=0, row=0, sticky=(N, W, E, S))
resultsContents = StringVar() # string variable used for the displayed text
def clicked():
"""็ๆ้ๆบๆฐ๏ผๅนถๆพ็คบๅฐๆ้ฎไธใ
ๅฝ้ๆบๆฐไธบ0๏ผๆ้ฎ่ฎพไธบไธๅฏ็จ๏ผ้ๆบๆฐไธไธบ0๏ผๆพ็คบ้ๆบๆฐ
"""
rand_int = random.randint(0, 9)
if rand_int == 0:
button.state(['disabled'])
resultsContents.set('disabled')
else:
resultsContents.set('>>> {} <<<'.format(rand_int))
button = ttk.Button(frame, text='Full name:')
button['command'] = clicked # function executed when the button is clicked
button['textvariable'] = resultsContents # bind the string variable to the button
resultsContents.set('random by click') # set the initial string
button.grid(column=0, row=0, sticky=E)
root.mainloop()
###Output
_____no_output_____
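###Markdown
As noted in the discussion above, marking a button as the default does not by itself bind the Return key; you have to add that binding yourself. The following is a minimal, self-contained sketch of one way to do that (the widget names and the callback are our own, not from the text above):
###Code
# Sketch: make a button the default and bind Return/Enter so it invokes the button
from tkinter import *
from tkinter import ttk

root = Tk()
root.title("Default button")
frame = ttk.Frame(root, padding=(30, 30))
frame.grid(column=0, row=0, sticky=(N, W, E, S))

message = StringVar(value='press Return')
button = ttk.Button(frame, textvariable=message,
                    command=lambda: message.set('pressed'))
button['default'] = 'active'  # mark this as the default button (a visual hint only)
button.grid(column=0, row=0, sticky=E)

# The binding that actually makes Return/Enter press the button
root.bind('<Return>', lambda event: button.invoke())

root.mainloop()
###Output
_____no_output_____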
###Markdown
Checkbutton A checkbutton is like a regular button, except that not only can the user press it, which will invoke a command callback, but it also holds a binary value of some kind (i.e. a toggle). Checkbuttons are used all the time when a user is asked to choose between, e.g. two different values for an option.  Checkbuttons are created using the ttk.Checkbutton function, and typically set up at the same time:```pythonmeasureSystem = StringVar()check = ttk.Checkbutton(parent, text='Use Metric', command=metricChanged, variable=measureSystem, onvalue='metric', offvalue='imperial')```Checkbuttons use many of the same options as regular buttons, but add a few more. The "text", "textvariable", "image", and "compound" options control the display of the label (next to the checkbox itself), and the "state" and "instate" methods allow you to manipulate the "disabled" state flag to enable or disable the checkbutton. Similarly, the "command" option lets you specify a script to be called every time the user toggles the checkbutton, and the "invoke" method will also execute the same callback. Widget Value Unlike buttons, checkbuttons also hold a value. We've seen before how the "textvariable" option can be used to tie the label of a widget to a variable in your program; the "variable" option for checkbuttons behaves similarly, except it is used to read or change the current value of the widget, and updates whenever the widget is toggled. By default, checkbuttons use a value of "1" when the widget is checked, and "0" when not checked, but these can be changed to just about anything using the "onvalue" and "offvalue" options.What happens when the linked variable contains neither the on value or the off value (or even doesn't exist)? In that case, the checkbutton is put into a special "tristate" or indeterminate mode; you'll sometimes see this in user interfaces where the checkbox holds a single dash rather than being empty or holding a check mark. When in this state, the state flag "alternate" is set, so you can check for it with the "instate" method: ```pythoncheck.instate(['alternate'])```Because the checkbutton won't automatically set (or create) the linked variable, your program needs to make sure it sets the variable to the appropriate starting value.
###Code
from tkinter import *
from tkinter import ttk
import random
root = Tk()
root.title("Frame")
frame = ttk.Frame(root)
frame['width'] = 200 # frame width
frame['height'] = 100 # frame height
frame['padding'] = (30,30) # padding
frame['borderwidth'] = 5 # border width
frame['relief'] = 'sunken' # border relief style
frame.grid(column=0, row=0, sticky=(N, W, E, S))
resultsContents = StringVar() # string variable used for the displayed text
measureSystem = StringVar()
measureSystem.set("---")
def clicked():
"""็ๆ้ๆบๆฐ๏ผๅนถๆพ็คบๅฐๆ้ฎไธใ
ๅฝ้ๆบๆฐไธบ0๏ผๆ้ฎ่ฎพไธบไธๅฏ็จ๏ผ้ๆบๆฐไธไธบ0๏ผๆพ็คบ้ๆบๆฐ
"""
checkbutton_on = checkbutton.instate(['alternate'])
if checkbutton_on:
rand_int = random.randint(1, 9)
else:
rand_int = random.randint(-9, 1)
resultsContents.set('>>> {} <<<'.format(rand_int))
print(measureSystem.get())
checkbutton = ttk.Checkbutton(frame)
checkbutton['text'] = 'create int' # label text shown next to the checkbox
checkbutton['onvalue'] = 'metric' # when checked, the linked variable is set to this 'onvalue'
checkbutton['offvalue'] = 'imperial' # when unchecked, the linked variable is set to this 'offvalue'
checkbutton['variable'] = measureSystem # linked variable; its initial value matches neither onvalue nor offvalue, so the widget starts in the indeterminate state
checkbutton.grid(column=1, row=1, sticky=E)
button = ttk.Button(frame, text='Full name:')
button['command'] = clicked # function executed when the button is clicked
button['textvariable'] = resultsContents # bind the string variable to the button
resultsContents.set('random by click') # set the initial string
button.grid(column=0, row=0, sticky=E)
label = ttk.Label(frame, text='Full name:')
label['textvariable'] = measureSystem
label.grid(column=1, row=0, sticky=E)
root.mainloop()
###Output
metric
metric
imperial
metric
imperial
imperial
imperial
###Markdown
Radiobutton A radiobutton lets you choose between one of a number of mutually exclusive choices; unlike a checkbutton, it is not limited to just two choices. Radiobuttons are always used together in a set and are a good option when the number of choices is fairly small, e.g. 3-5.  Radiobuttons are created using the ttk.Radiobutton function, and typically as a set:```pythonphone = StringVar()home = ttk.Radiobutton(parent, text='Home', variable=phone, value='home')office = ttk.Radiobutton(parent, text='Office', variable=phone, value='office')cell = ttk.Radiobutton(parent, text='Mobile', variable=phone, value='cell')```Radiobuttons share most of the same configuration options as checkbuttons. One exception is that the "onvalue" and "offvalue" options are replaced with a single "value" option. Each of the radiobuttons of the set will have the same linked variable, but a different value; when the variable has the given value, the radiobutton will be selected, otherwise unselected. When the linked variable does not exist, radiobuttons also display a "tristate" or indeterminate, which can be checked via the "alternate" state flag.
###Code
from tkinter import *
from tkinter import ttk
import random
root = Tk()
root.title("Frame")
frame = ttk.Frame(root)
frame['width'] = 200 # frame width
frame['height'] = 100 # frame height
frame['padding'] = (30,30) # padding
frame['borderwidth'] = 5 # border width
frame['relief'] = 'sunken' # border relief style
frame.grid(column=0, row=0, sticky=(N, W, E, S))
phone = StringVar() # string variable shared by the radiobuttons and the label
phone.set("please select")
home = ttk.Radiobutton(frame, text='Home', variable=phone, value='home')
home.grid(column=1, row=1, sticky=E)
office = ttk.Radiobutton(frame, text='Office', variable=phone, value='office')
office.grid(column=1, row=2, sticky=E)
cell = ttk.Radiobutton(frame, text='Mobile', variable=phone, value='cell')
cell.grid(column=1, row=3, sticky=E)
label = ttk.Label(frame, text='Full name:')
label['textvariable'] = phone
label.grid(column=1, row=0, sticky=E)
root.mainloop()
###Output
_____no_output_____
###Markdown
Entry An entry presents the user with a single line text field that they can use to type in a string value. These can be just about anything: their name, a city, a password, social security number, and so on.  Entries are created using the ttk.Entry function:```pythonusername = StringVar()name = ttk.Entry(parent, textvariable=username)```A "width" configuration option may be specified to provide the number of characters wide the entry should be, allowing you for example to provide a shorter entry for a zip or postal code. We've seen how checkbutton and radiobutton widgets have a value associated with them. Entries do as well, and that value is normally accessed through a linked variable specified by the "textvariable" configuration option. Note that unlike the various buttons, entries don't have a separate text or image beside them to identify them; use a separate label widget for that.You can also get or change the value of the entry widget directly, without going through the linked variable. The "get" method returns the current value, and the "delete" and "insert" methods let you change the contents, e.g. ```pythonprint('current value is %s' % name.get())name.delete(0,'end') delete between two indices, 0-basedname.insert(0, 'your name') insert new text at a given index```Note that entry widgets do not have a "command" option which will invoke a callback whenever the entry is changed. To watch for changes, you should watch for changes on the linked variable. See also "Validation", below. Passwords Entries can be used for passwords, where the actual contents are displayed as a bullet or other symbol. To do this, set the "show" configuration option to the character you'd like to display, e.g. "*". Widget States Like the various buttons, entries can also be put into a disabled state via the "state" command (and queried with "instate"). Entries can also use the state flag "readonly"; if set, users cannot change the entry, though they can still select the text in it (and copy it to the clipboard). There is also an "invalid" state, set if the entry widget fails validation, which leads us to... Validation validate (controls overall validation behavior) - none (default), key (on each keystroke, runs before - prevalidation), focus/focusin/focusout (runs after.. revalidation), all```* validatecommand script (script must return 1 or 0)* invalidcommand script (runs when validate command returns 0)- various substitutions in scripts.. most useful %P (new value of entry), %s (value of entry prior to editing)- the callbacks can also modify the entry using insert/delete, or modify -textvariable, which means the in progress edit is rejected in any case (since it would overwrite what we just set)* .e validate to force validation now```
###Code
from tkinter import *
from tkinter import ttk
import random
root = Tk()
root.title("Frame")
frame = ttk.Frame(root)
frame['width'] = 200 # frame width
frame['height'] = 100 # frame height
frame['padding'] = (30,30) # padding
frame['borderwidth'] = 5 # border width
frame['relief'] = 'sunken' # border relief style
frame.grid(column=0, row=0, sticky=(N, W, E, S))
username = StringVar()
name = ttk.Entry(frame, textvariable=username)
name.grid(column=0, row=1, sticky=(N, W, E, S))
def print_name():
print('current value is %s' % name.get())
def end():
name.delete(0, 'end') # delete between two indices, 0-based
name.insert(0, 'your name') # insert new text at a given index
button = ttk.Button(frame)
button['text'] = 'print name'
button['command'] = print_name
button.grid(column=0, row=2, sticky=(N, W, E, S))
button_end = ttk.Button(frame)
button_end['text'] = 'end'
button_end['command'] = end
button_end.grid(column=0, row=3, sticky=(N, W, E, S))
root.mainloop()
###Output
current value is 222
current value is 222
current value is your name
current value is your name
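###Markdown
The validation notes above are fairly terse, so here is a small sketch of how the per-keystroke ("key") validation hooks can be wired up. The digits-only rule is just an example of ours; the `register()`/`%P` mechanism is the standard Tk substitution described above:
###Code
# Sketch: per-keystroke validation that only accepts digits in the entry
from tkinter import *
from tkinter import ttk

root = Tk()
root.title("Validated entry")
frame = ttk.Frame(root, padding=(30, 30))
frame.grid(column=0, row=0, sticky=(N, W, E, S))

def check_digits(new_value):
    """%P passes the value the entry will have if the edit is allowed."""
    return new_value == "" or new_value.isdigit()

# register() wraps the Python callable so Tk can call it; '%P' is substituted at call time
vcmd = (root.register(check_digits), '%P')
digits_only = ttk.Entry(frame, validate='key', validatecommand=vcmd)
digits_only.grid(column=0, row=0, sticky=(W, E))

root.mainloop()
###Output
_____no_output_____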
###Markdown
Combobox A combobox combines an entry with a list of choices available to the user. This lets them either choose from a set of values you've provided (e.g. typical settings), but also put in their own value (e.g. for less common cases you don't want to include in the list).  Comboboxes are created using the ttk.Combobox function:```pythoncountryvar = StringVar()country = ttk.Combobox(parent, textvariable=countryvar)```Like entries, the "textvariable" option links a variable in your program to the current value of the combobox. As with other widgets, you should initialize the linked variable in your own code. You can also get the current value using the "get" method, and change the current value using the "set" method (which takes a single argument, the new value).A combobox will generate a "" virtual event that you can bind to whenever its value changes.```pythoncountry.bind('>', function)``` Predefined Values You can provide a list of values the user can choose from using the "values" configuration option:```pythoncountry['values'] = ('USA', 'Canada', 'Australia')``` If set, the "readonly" state flag will restrict the user to making choices only from the list of predefined values, but not be able to enter their own (though if the current value of the combobox is not in the list, it won't be changed).If you're using the combobox in "readonly" mode, I'd recommend that when the value changes (i.e. on a ComboboxSelected event), that you call the "selection clear" method. It looks a bit odd visually without doing that.As a complement to the "get" and "set" methods, you can also use the "current" method to determine which item in the predefined values list is selected (call "current" with no arguments, it will return a 0-based index into the list, or -1 if the current value is not in the list), or select one of the items in the list (call "current" with a single 0-based index argument).Want to associate some other value with each item in the list, so that your program can refer to some actual meaningful value, but it gets displayed in the combobox as something else? You'll want to have a look at the section entitled "Keeping Extra Item Data" when we get to the discussion of listboxes in a couple of chapters from now.
###Code
from tkinter import *
from tkinter import ttk
import random
root = Tk()
root.title("Frame")
frame = ttk.Frame(root)
frame['width'] = 200 # frame width
frame['height'] = 100 # frame height
frame['padding'] = (30,30) # padding
frame['borderwidth'] = 5 # border width
frame['relief'] = 'sunken' # border relief style
frame.grid(column=0, row=0, sticky=(N, W, E, S))
def print_country(value):
print('current %s is %s' % (value, country.get()))
countryvar = StringVar()
country = ttk.Combobox(frame, textvariable=countryvar)
country.bind('<<ComboboxSelected>>', print_country)
country['values'] = ('USA', 'Canada', 'Australia')
country.grid(column=0, row=1, sticky=(N, W, E, S))
root.mainloop()
###Output
current <VirtualEvent event x=0 y=0> is USA
current <VirtualEvent event x=0 y=0> is Canada
current <VirtualEvent event x=0 y=0> is Australia
|
02_numpy/numpy.ipynb | ###Markdown
Generating arrays of 0s and 1s
###Code
import numpy as np

np.zeros(5)
np.zeros([2,3]) # generate a 2-row x 3-column array; the element dtype is float
np.ones([2,3])
###Output
_____no_output_____
###Markdown
Creating an array from an existing array
###Code
a = np.array([[1,2,3],[4,5,6]])
b = np.array(a) # deep copy
c = np.asarray(a) # shallow copy; effectively a reference to a
a
b
c
a[0]=1
a
b
c
###Output
_____no_output_____
###Markdown
Generating arrays over a fixed range
###Code
np.linspace(1,10,5) # arithmetic sequence: equal difference between adjacent values
np.arange(1,10,3) # generate values using the specified step
np.logspace(1,5,5) # geometric sequence: equal ratio between adjacent values (10^1 ... 10^5)
x1 = np.random.uniform(-1, 1, 100000000)
import matplotlib.pyplot as plt
plt.figure(figsize=(20,8),dpi=80)
plt.hist(x1,bins=1000)
plt.show()
x2 = np.random.normal(1.75, 1, 100000000)
plt.figure(figsize=(20,8),dpi=80)
plt.hist(x2,bins=1000)
plt.show()
x2 = np.random.normal(1.75, 1, 100000000)
x3 = np.random.normal(1.75, 5, 100000000)
plt.figure(figsize=(20,8),dpi=80)
plt.hist(x2,bins=1000,normed=True)
plt.hist(x3,bins=1000,normed=True,alpha=0.5)
plt.show()
stock_change = np.random.normal(0, 1, (8, 10))
stock_change
stock_change[0]
a=np.array([[1,2,3],[4,5,6]])
a
a[0]
a[0,0]
a[:,1]
a[0,0:2]
a=np.array([[[1,2,3],[4,5,6]],
[[3,2,1],[6,5,4]]])
a
a[0,0,0]
a[:,:,0]
a1=np.array([[1,2,3],[4,5,6]]) # [1,2,3,4,5,6]
a1
a2=a1.reshape([3,2]) # does not modify the original array; returns a new array
a2
a1
a3=a1.resize([3,2]) # modifies the original array in place (returns None)
a3
a1
a1.T
a1
a1.dtype
a4 = a1.astype(np.int32) # does not modify the original array; returns a new array
a4
a1.dtype
s=a1.tostring()
s
a5=np.fromstring(s,dtype=np.int64).reshape(3,2) # when converting, specify the size of each element via dtype; the result is 1-D before the reshape
a5
b=np.array([[6, 3, 5],
[5, 4, 6]])
np.unique(b) # de-duplicate; returns a sorted 1-D array
stock_change = np.random.normal(0, 1, (8, 10))
stock_change=stock_change[:4,:4]
stock_change
# Logical test: mark True where the change is greater than 0.5, otherwise False
stock_change > 0.5
stock_change[stock_change > 0.5] = 1
stock_change
np.all(stock_change>-2)
np.any(stock_change>0.9)
# Where the stock change is greater than 0, set 1, otherwise 0
np.where(stock_change >0 ,1 ,0)
np.where(np.logical_and(stock_change > 0.4, stock_change < 1), 1, 0)
np.where(np.logical_or(stock_change > 0.4, stock_change < 0), 1, 0)
stock_change
np.max(stock_change,axis=0) # axis=0: aggregate down the columns
np.min(stock_change,axis=1) # axis=1: aggregate along the rows
np.argmin(stock_change,axis=0) # indices of the minimum values (here along axis 0)
arr = np.array([[1, 2, 3, 2, 1, 4], [5, 6, 1, 2, 3, 1]])
arr+1
[1, 2, 3, 2, 1, 4]+[1]
arr1 = np.array([[1, 2, 3, 2, 1, 4], [5, 6, 1, 2, 3, 1]])
arr2 = np.array([[1, 2, 3, 4], [3, 4, 5, 6]])
arr1+arr2
arr1 = np.array([[1], [2]])
arr2 = np.array([1,2,3])
arr1+arr2
arr1 = np.array([[[1,2,3], [2,3,3]],[[1,2,3], [2,3,3]]])
arr2 = np.array([[1,2,3], [2,3,3],[2,3,3]])
arr1.shape
arr2.shape
arr1+arr2
a = np.array([[80, 86], # (8,2)
[82, 80],
[85, 78],
[90, 90],
[86, 82],
[82, 90],
[78, 80],
[92, 94]])
b = np.array([[0.7,0.3]]) # (1,2)
a*b
a1=np.array([[1,2,3],[4,5,6]])
a2=np.array([[1],[2],[3]])
np.matmul(a1,a2)
a = np.array([[80, 86], # (8,2)
[82, 80],
[85, 78],
[90, 90],
[86, 82],
[82, 90],
[78, 80],
[92, 94]])
b = np.array([[0.7],[0.3]]) # (2,1)
np.dot(a,b)
sum(a[0]*b[:,0])
a = np.array([[1, 2],
[3, 4]])
b = np.array([[5, 6]])
np.concatenate([a,b],axis=0) # concatenate along axis 0 (rows); column counts must match
b.T
np.concatenate([a,b.T],axis=1) # concatenate along axis 1 (columns); row counts must match
np.vstack([a,b])
np.hstack([a,b.T])
np.arange(1,10,2)
a=np.arange(9) # generate 9 numbers, 0 to 8
a
np.split(a,3) # split into equal-sized pieces
np.split(a,[3,5,8])
test = np.genfromtxt("test.csv", delimiter=',')
test
def fill_nan_by_column_mean(t):
for i in range(t.shape[1]):
        # count the NaN values in this column
nan_num = np.count_nonzero(t[:, i][t[:, i] != t[:, i]])
if nan_num > 0:
now_col = t[:, i]
            # sum of the non-NaN values
now_col_not_nan = now_col[np.isnan(now_col) == False].sum()
            # sum divided by the number of non-NaN values gives the column mean
now_col_mean = now_col_not_nan / (t.shape[0] - nan_num)
            # assign the mean to the NaN positions of now_col
now_col[np.isnan(now_col)] = now_col_mean
            # write now_col back to t, i.e. update the current column of t
t[:, i] = now_col
return t
fill_nan_by_column_mean(test)
###Output
_____no_output_____ |
Pyber Analysis.ipynb | ###Markdown
PYBER ANALYSIS
**The analysis gives insights into the demographics and revenue-makers for Pyber across three region types (Urban, Suburban and Rural) comprising a total of 120 cities: there are 66 Urban, 36 Suburban and 18 Rural cities in the data.** The urban areas have the highest share of the total number of rides per city, with a huge 68.4% of total rides by city type, and 80.9% of total drivers by city type belong to the urban areas. The urban areas also contribute 62.7% of total fares, which is the highest share, followed by the suburban areas at 30.5%, with the rural areas third at 6.8%. This clearly confirms that the urban areas are the company's biggest demographic, with the maximum number of rides, the largest number of drivers, and the biggest revenue contribution. Interestingly, the urban areas have a much lower average fare, between 20 and 28 USD, compared to the rural and suburban average fares. The highest average fare belongs to rural cities at 44 USD, whereas the lowest average fare belongs to urban cities at 20 USD. The suburban areas have an average fare between 24 and 36 USD. **This trend is consistent with the total percentage of drivers, which is 16.5% in suburban areas and 2.6% in rural areas. Similarly, for total rides per city, the suburban areas account for 26.3% and the rural areas for 5.3%.**
* The city with the highest average fare is "Taylorhaven" at 42.26 USD and lies within the rural area.
* The city with the lowest average fare is "South Latoya" at 20 USD and lies within the urban area.
* The city with the maximum number of rides is "West Angela" at 39 rides, which lies in the urban area and has an average fare of 29 USD; the minimum number of rides were given in the city "South Saramouth" at 6 rides, which lies within the rural area and has an average fare of 36 USD.
###Code
%matplotlib inline
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
# Files to Load
city_data = "./Resources/city_data.csv"
ride_data = "./Resources/ride_data.csv"
# Read the City and Ride Data
city_data = pd.read_csv(city_data)
ride_data = pd.read_csv(ride_data)
# Combine the data into a single dataset
combined_df = pd.merge(ride_data, city_data, on="city", how ="left", indicator=True)
combined_df.head(10)
# 2375 rows x 7 columns
#Take a count and check for any NaN values
combined_df.count()
combined_df = combined_df.dropna(how = "any")
combined_df.count()
# Row count remains 2375 before and after dropping NaN values, so no NaN values were found.
#BUBBLE PLOT OF RIDE SHARING DATA
# Obtain the x and y coordinates for each of the three city types
# Build the scatter plots for each city types
# Incorporate the other graph properties
# Create a legend
# Incorporate a text label regarding circle size
# Save Figure
#check how many unique cities are there
total_cities = combined_df["city"].unique()
print(len(total_cities)) # 120 cities
# Group the data acc to cities
city_grouped = combined_df.groupby(["city"])
#Average Fare ($) Per City( y axis)
avg_fare = city_grouped["fare"].mean()
city_grouped.count().head()
#Total Number of Rides Per City( x axis )
total_rides = combined_df.groupby(["city"])["ride_id"].count()
total_rides.head()
#Total Number of Drivers Per City
total_drivers = combined_df.groupby(["city"])["driver_count"].mean()
total_drivers.head()
city_types = city_data.set_index(["city"])["type"]
city_types.value_counts()
combined_df2 = pd.DataFrame({"Average Fare per City":avg_fare,
"Number of Rides": total_rides,
"Number of Drivers": total_drivers,
"City Type": city_types
})
combined_df2.head()
#Filtering on the basis of city type and creating Urban , Suburban and Rural dataframes.
urban_df = combined_df2.loc[combined_df2["City Type"]== "Urban"]
suburban_df = combined_df2.loc[combined_df2["City Type"]== "Suburban"]
rural_df = combined_df2.loc[combined_df2["City Type"]== "Rural"]
urban_df.head()
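# As a quick cross-check of the figures quoted in the summary above, look up the
# cities with the highest/lowest average fare and ride counts directly.
# (This check is our own addition; the column names come from combined_df2 built above.)
print("Highest average fare:", combined_df2["Average Fare per City"].idxmax(),
      round(combined_df2["Average Fare per City"].max(), 2))
print("Lowest average fare:", combined_df2["Average Fare per City"].idxmin(),
      round(combined_df2["Average Fare per City"].min(), 2))
print("Most rides:", combined_df2["Number of Rides"].idxmax(),
      combined_df2["Number of Rides"].max())
print("Fewest rides:", combined_df2["Number of Rides"].idxmin(),
      combined_df2["Number of Rides"].min())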
#Create a Scatterplot
plt.scatter(urban_df["Number of Rides"], urban_df["Average Fare per City"], marker="o", color = "lightcoral",
edgecolors="black", s = urban_df["Number of Drivers"]*20, label = "Urban", alpha = 0.5, linewidth = 1.5)
plt.scatter(suburban_df["Number of Rides"], suburban_df["Average Fare per City"], marker="o", color = "lightskyblue",
edgecolors="black", s = suburban_df["Number of Drivers"]*20, label = "SubUrban", alpha = 0.5,
linewidth = 1.5)
plt.scatter(rural_df["Number of Rides"], rural_df["Average Fare per City"], marker="o", color = "gold",
edgecolors="black", s = rural_df["Number of Drivers"]*20, label = "Rural", alpha = 0.5, linewidth = 1.5)
textstr = 'Note: Circle size correlates with driver count per city'
plt.text(42, 30, textstr, fontsize=12)
plt.subplots_adjust(right=1)
plt.xlim(0 , 41)
plt.ylim(18, 45)
plt.xlabel("Total number of rides(Per City)")
plt.ylabel("Average Fare($)")
plt.title("Pyber Ride sharing data(2016)")
legend = plt.legend(loc= "best", title="City Types")
legend.legendHandles[0]._sizes = [30]
legend.legendHandles[1]._sizes = [30]
legend.legendHandles[2]._sizes = [30]
plt.grid()
plt.savefig("./Resources/pyberimage")
plt.show()
# Total Fares by City Type
# Calculate Type Percents
# Build Pie Chart
# Save Figure
total_fares = combined_df.groupby(["type"])["fare"].sum()
Urban_fare= 39854.38
Suburban_fare = 19356.33
Rural_fare = 4327.93
fare_sum = (Rural_fare + Suburban_fare + Urban_fare )
rural_percent = (Rural_fare / fare_sum) *100
urban_percent = (Urban_fare / fare_sum)* 100
suburban_percent = (Suburban_fare / fare_sum)*100
fare_percents = [ suburban_percent , urban_percent, rural_percent ]
labels = [ "Suburban" , "Urban", "Rural" ]
colors= [ "lightskyblue" , "lightcoral", "gold"]
explode = (0, 0.10 , 0)
plt.title("% of Total Fares By City Type")
plt.pie(fare_percents, explode=explode, labels=labels ,colors=colors, autopct="%1.1f%%", shadow=True, startangle=150)
plt.show
plt.savefig("./Resources/pyberimage2")
total_fares
# Total Rides by City Type
# Calculate Ride Percents
# Build Pie Chart
# Save Figure
total_rides = combined_df.groupby(["type"])["ride_id"].count()
Rural_rides = 125
Suburban_rides = 625
Urban_rides = 1625
sum_rides= (Rural_rides + Suburban_rides + Urban_rides )
ruralrides_percent = (Rural_rides /sum_rides) *100
urbanrides_percent = (Urban_rides / sum_rides)* 100
suburbanrides_percent = (Suburban_rides /sum_rides)*100
percent_rides = [ suburbanrides_percent , urbanrides_percent, ruralrides_percent ]
labels = [ "Suburban" ,"Urban", "Rural" ]
colors= [ "lightskyblue" , "lightcoral", "gold" ]
explode = (0, 0.10, 0)
plt.title("% of Total Rides By City Type")
plt.pie(percent_rides, explode=explode, labels=labels , colors=colors, autopct="%1.1f%%",shadow=True,startangle=160)
plt.show
plt.savefig("./Resources/pyberimage3")
#Total Drivers by City Type
# Calculate Driver Percents
# Build Pie Charts
# Save Figure
total_drivers = city_data.groupby(["type"]).sum()["driver_count"]
Rural_drivers = 78
Suburban_drivers = 490
Urban_drivers = 2405
sum_drivers= (Rural_drivers + Suburban_drivers + Urban_drivers )
ruraldrivers_percent = (Rural_drivers/sum_drivers) *100
urbandrivers_percent = (Urban_drivers / sum_drivers)* 100
suburbandrivers_percent = (Suburban_drivers /sum_drivers)*100
percent_drivers = [ suburbandrivers_percent , urbandrivers_percent, ruraldrivers_percent ]
labels = [ "Suburban" ,"Urban", "Rural" ]
colors= [ "lightskyblue" , "lightcoral", "gold" ]
explode = (0, 0.10, 0)
plt.title("% of Total Drivers By City Type")
plt.pie(percent_drivers,explode=explode, labels=labels ,colors=colors,autopct="%1.1f%%", shadow=True, startangle=150)
plt.show
plt.savefig("./Resources/pyberimage3")
###Output
_____no_output_____ |
Image_Representation/7. Accuracy and Misclassification.ipynb | ###Markdown
Day and Night Image Classifier---The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* Import resourcesBefore you get started on the project code, import the libraries and resources that you'll need.
###Code
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training and Testing Data
The 200 day/night images are separated into training and testing datasets.
* 60% of these images are training images, for you to use as you create a classifier.
* 40% are test images, which will be used to test the accuracy of your classifier.
First, we set some variables to keep track of where our images are stored:
image_dir_training: the directory where our training image data is stored
image_dir_test: the directory where our test image data is stored
###Code
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
###Output
_____no_output_____
###Markdown
Load the datasetsThese first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```.
###Code
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
###Output
_____no_output_____
###Markdown
Construct a `STANDARDIZED_LIST` of input images and output labels.This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.
###Code
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
###Output
_____no_output_____
###Markdown
Visualize the standardized dataDisplay a standardized image from STANDARDIZED_LIST.
###Code
# Display a standardized image and its label
# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]
# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
###Output
Shape: (600, 1100, 3)
Label [1 = day, 0 = night]: 1
###Markdown
Feature ExtractionCreate a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image. --- Find the average brigtness using the V channelThis function takes in a **standardized** RGB image and returns a feature (a single value) that represent the average level of brightness in the image. We'll use this value to classify the image as day or night.
###Code
# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
# Convert image to HSV
hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
# Add up all the pixel values in the V channel
sum_brightness = np.sum(hsv[:,:,2])
area = 600*1100.0 # pixels (every standardized image is 600 x 1100)
# find the avg
avg = sum_brightness/area
return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about
# what average brightness value separates the two types of images
# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]
avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
###Output
Avg brightness: 35.202807575757575
###Markdown
Classification and Visualizing ErrorIn this section, we'll turn our average brightness feature into a classifier that takes in a standardized image and returns a `predicted_label` for that image. This `estimate_label` function should return a value: 0 or 1 (night or day, respectively). --- TODO: Build a complete classifier Complete this code so that it returns an estimated class label given an input RGB image.
###Code
# This function should take in RGB image input
def estimate_label(rgb_image):
# Extract average brightness feature from an RGB image
avg = avg_brightness(rgb_image)
# Use the avg brightness feature to predict a label (0, 1)
predicted_label = 0
threshold = 120
if(avg > threshold):
# if the average brightness is above the threshold value, we classify it as "day"
predicted_label = 1
# else, the predicted_label can stay 0 (it is predicted to be "night")
return predicted_label
###Output
_____no_output_____
###Markdown
Testing the classifierHere is where we test your classification algorithm using our test set of data that we set aside at the beginning of the notebook!Since we are using a pretty simple brightness feature, we may not expect this classifier to be 100% accurate. We'll aim for around 75-85% accuracy using this one feature. Test datasetBelow, we load in the test dataset, standardize it using the `standardize` function you defined above, and then **shuffle** it; this ensures that order will not play a role in testing accuracy.
###Code
import random
# Using the load_dataset function in helpers.py
# Load test data
TEST_IMAGE_LIST = helpers.load_dataset(image_dir_test)
# Standardize the test data
STANDARDIZED_TEST_LIST = helpers.standardize(TEST_IMAGE_LIST)
# Shuffle the standardized test data
random.shuffle(STANDARDIZED_TEST_LIST)
###Output
_____no_output_____
###Markdown
Determine the AccuracyCompare the output of your classification algorithm (a.k.a. your "model") with the true labels and determine the accuracy.This code stores all the misclassified images, their predicted labels, and their true labels, in a list called `MISCLASSIFIED`.
###Code
# Constructs a list of misclassified images given a list of test images and their labels
def get_misclassified_images(test_images):
# Track misclassified images by placing them into a list
misclassified_images_labels = []
# Iterate through all the test images
# Classify each image and compare to the true label
for image in test_images:
# Get true data
im = image[0]
true_label = image[1]
# Get predicted label from your classifier
predicted_label = estimate_label(im)
# Compare true and predicted labels
if(predicted_label != true_label):
# If these labels are not equal, the image has been misclassified
misclassified_images_labels.append((im, predicted_label, true_label))
# Return the list of misclassified [image, predicted_label, true_label] values
return misclassified_images_labels
# Find all misclassified images in a given test set
MISCLASSIFIED = get_misclassified_images(STANDARDIZED_TEST_LIST)
# Accuracy calculations
total = len(STANDARDIZED_TEST_LIST)
num_correct = total - len(MISCLASSIFIED)
accuracy = num_correct/total
print('Accuracy: ' + str(accuracy))
print("Number of misclassified images = " + str(len(MISCLASSIFIED)) +' out of '+ str(total))
###Output
Accuracy: 0.86875
Number of misclassified images = 21 out of 160
###Markdown
--- Visualize the misclassified imagesVisualize some of the images you classified wrong (in the `MISCLASSIFIED` list) and note any qualities that make them difficult to classify. This will help you identify any weaknesses in your classification algorithm.
###Code
# Visualize misclassified example(s)
## TODO: Display an image in the `MISCLASSIFIED` list
## TODO: Print out its predicted label - to see what the image *was* incorrectly classified as
num = 0
test_mis_im = MISCLASSIFIED[num][0]
plt.imshow(test_mis_im)
print(str(MISCLASSIFIED[num][1]))
###Output
0
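###Markdown
A minimal sketch of one way to dig a little deeper: show a few entries from `MISCLASSIFIED` side by side with their average brightness, so you can see how close each one sits to the threshold of 120 assumed in `estimate_label`.
###Code
# Sketch: look at a few misclassified images together with their average brightness
num_to_show = min(3, len(MISCLASSIFIED))
fig, axes = plt.subplots(1, num_to_show, figsize=(15, 5))
for i in range(num_to_show):
    im, pred, true = MISCLASSIFIED[i]
    ax = axes[i] if num_to_show > 1 else axes
    ax.imshow(im)
    ax.set_title("pred: {}, true: {}, avg: {:.1f}".format(pred, true, avg_brightness(im)))
###Output
_____no_output_____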
|
book_notebooks/bootcamp_pandas_adv2-shape.ipynb | ###Markdown
Advanced Pandas: Shaping data The second in a series of notebooks that describe Pandas' powerful data management tools. This one covers shaping methods: switching rows and columns, pivoting, and stacking. We'll see that this is all about the indexes: the row and column labels. Outline: * [Example: WEO debt and deficits](wants). Something to work with. * [Indexing](index). Setting and resetting the index. Multi-indexes. * [Switching rows and columns](pivot). Transpose. Referring to variables with multi-indexes. * [Stack and unstack](stack). Managing column structure and labels. * [Pivot](pivot). Unstack shortcut if we start with wide data. * [Review](review). Apply what we've learned. More data management topics coming. **Note: requires internet access to run.** <!-- internal links http://sebastianraschka.com/Articles/2014_ipython_internal_links.html-->This Jupyter notebook was created by Dave Backus, Chase Coleman, and Spencer Lyon for the NYU Stern course [Data Bootcamp](http://nyu.data-bootcamp.com/). tl;drLet `df` be a DataFrame- We use `df.set_index` to move columns into the index of `df`- We use `df.reset_index` to move one or more levels of the index back to columns. If we set `drop=True`, the requested index levels are simply thrown away instead of made into columns- We use `df.stack` to move column index levels into the row index- We use `df.unstack` to move row index levels into the colunm index (Helpful mnemonic: `unstack` moves index levels **u**p) Preliminaries Import packages, etc
###Code
%matplotlib inline
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for Pandas
###Output
_____no_output_____
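###Markdown
Before the main example, here is a tiny, self-contained warm-up illustrating the tl;dr above on a made-up dataframe (the column names here are arbitrary): `set_index` moves columns into the row index, `unstack` moves an index level up into the columns, and `stack` moves it back down.
###Code
# Toy illustration of set_index / unstack / stack
toy = pd.DataFrame({'country': ['ARG', 'ARG', 'DEU', 'DEU'],
                    'year': [2012, 2013, 2012, 2013],
                    'debt': [1.0, 2.0, 3.0, 4.0]})
wide = toy.set_index(['country', 'year'])['debt'].unstack()   # years move up into the columns
long = wide.stack()                                           # ...and back down into the row index
print(wide, '\n')
print(long)
###Output
_____no_output_____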
###Markdown
Example: WEO debt and deficits We spend most of our time on one of the examples from the previous notebook. The problem in this example is that variables run across rows, rather than down columns. Our **want** is to flip some of the rows and columns so that we can plot the data against time. The question is how.We use a small subset of the IMF's [World Economic Outlook database](https://www.imf.org/external/ns/cs.aspx?id=28) that contains two variables and three countries.
###Code
url = 'http://www.imf.org/external/pubs/ft/weo/2016/02/weodata/WEOOct2016all.xls'
# (1) define the column indices
col_indices = [1, 2, 3, 4, 6] + list(range(9, 46))
# (2) download the dataset
weo = pd.read_csv(url,
sep = '\t',
#index_col='ISO',
usecols=col_indices,
skipfooter=1, engine='python',
na_values=['n/a', '--'],
thousands =',',encoding='windows-1252')
# (3) turn the types of year variables into float
years = [str(year) for year in range(1980, 2017)]
weo[years] = weo[years].astype(float)
print('Variable dtypes:\n', weo.dtypes, sep='')
# create debt and deficits dataframe: two variables and three countries
variables = ['GGXWDG_NGDP', 'GGXCNL_NGDP']
countries = ['ARG', 'DEU', 'GRC']
dd = weo[weo['WEO Subject Code'].isin(variables) & weo['ISO'].isin(countries)]
# change column labels to something more intuitive
dd = dd.rename(columns={'WEO Subject Code': 'Variable',
'Subject Descriptor': 'Description'})
# rename variables (i.e. values of observables)
dd['Variable'] = dd['Variable'].replace(to_replace=['GGXWDG_NGDP', 'GGXCNL_NGDP'], value=['Debt', 'Surplus'])
dd
###Output
_____no_output_____
###Markdown
RemindersWhat kind of object does each of the following produce?
###Code
dd.index
dd.columns
dd['ISO']
dd[['ISO', 'Variable']]
dd[dd['ISO'] == 'ARG']
###Output
_____no_output_____
###Markdown
Wants We might imagine doing several different things with this data:* Plot a specific variable (debt or surplus) for a given date. * Time series plots for a specific country.* Time series plots for a specific variable. Depending on which we want, we might organize the data differently. We'll focus on the last two. Here's a brute force approach to the problem: simply transpose the data. This is where that leads:
###Code
dd.T
###Output
_____no_output_____
###Markdown
**Comments.** The problem here is that the columns include both the numbers (which we want to plot) and some descriptive information (which we don't). Setting and resetting the indexWe start by setting and resetting the index. That may sound like a step backwards -- haven't we done this already? -- but it reminds us of some things that will be handy later. Take the dataframe `dd`. What would we like in the index? Eventually we'd like the dates like `[2011, 2012, 2013]`, but right now the row labels are more naturally the variable or country. Here are some variants. Setting the index
###Code
dd.set_index('Country')
# we can do the same thing with a list, which will be meaningful soon...
dd.set_index(['Country'])
###Output
_____no_output_____
###Markdown
**Exercise.** Set `Variable` as the index. **Comment.** Note that the new index brought its **name** along: `Country` in the two examples, `Variable` in the exercise. That's incredibly useful because we can refer to index levels by name. If we happen to have an index without a name, we can set it with ```pythondf.index.name = 'Whatever name we like'``` Multi-indexesWe can put more than one variable in an index, which gives us a **multi-index**. This is sometimes called a **hierarchical index** because the **levels** of the index (as they're called) are ordered. Multi-indexes are more common than you might think. One reason is that data itself is often multi-dimensional. A typical spreadsheet has two dimensions: the variable and the observation. The WEO data is naturally three dimensional: the variable, the year, and the country. (Think about that for a minute, it's deeper than it sounds.) The problem we're having is fitting this nicely into two dimensions. A multi-index allows us to manage that. A two-dimensional index would work here -- the country and the variable code -- but right now we have some redundancy. **Example.** We push all the descriptive, non-numerical columns into the index, leaving the dataframe itself with only numbers, which seems like a step in the right direction.
###Code
ddi = dd.set_index(['Variable', 'Country', 'ISO', 'Description', 'Units'])
ddi
###Output
_____no_output_____
###Markdown
Let's take a closer look at the index
###Code
ddi.index
###Output
_____no_output_____
###Markdown
That's a lot to process, so we break it into pieces. * `ddi.index.names` contains a list of level names. (Remind yourself that lists are ordered, so this tracks levels.)* `ddi.index.levels` contains the values in each level. Here's what they look like here:
###Code
# Chase and Spencer like double quotes
print("The level names are:\n", ddi.index.names, "\n", sep="")
print("The levels (aka level values) are:\n", ddi.index.levels, sep="")
###Output
The level names are:
['Variable', 'Country', 'ISO', 'Description', 'Units']
The levels (aka level values) are:
[['Debt', 'Surplus'], ['Argentina', 'Germany', 'Greece'], ['ARG', 'DEU', 'GRC'], ['General government gross debt', 'General government net lending/borrowing'], ['Percent of GDP']]
###Markdown
Knowing the order of the index components and being able to inspect their values and names is fundamental to working with a multi-index. **Exercise**: What would happen if we had switched the order of the strings in the list when we called `dd.set_index`? Try it with this list to find out: `['ISO', 'Country', 'Variable', 'Description', 'Units']` Resetting the indexWe've seen that `set_index` pushes columns into the index. Here we see that `reset_index` does the reverse: it pushes components of the index back to the columns. **Example.**
###Code
ddi.head(2)
ddi.reset_index()
# or we can reset the index by level
ddi.reset_index(level=1).head(2)
# or by name
ddi.reset_index(level='Units').head(2)
# or do more than one at a time
ddi.reset_index(level=[1, 3]).head(2)
###Output
_____no_output_____
###Markdown
**Comment.** By default, `reset_index` pushes one or more index levels into columns. If we want to discard that level of the index altogether, we use the parameter `drop=True`.
###Code
ddi.reset_index(level=[1, 3], drop=True).head(2)
###Output
_____no_output_____
###Markdown
**Exercise.** For the dataframe `ddi` do the following in separate code cells: * Use the `reset_index` method to move the `Units` level of the index to a column of the dataframe.* Use the `drop` parameter of `reset_index` to delete `Units` from the dataframe. Switching rows and columns If we take the dataframe `ddi`, we see that everything but the data itself has been put into the index. Perhaps we can get what we want if we just flip the rows and columns. Roughly speaking, we refer to this as **pivoting**. First look at switching rows and columns The simplest way to flip rows and columns is to use the `T` or transpose property. When we do that, we end up with a lot of stuff in the column labels, as the multi-index for the rows gets rotated into the columns. Other than that, we're good. We can even do a plot. The only problem is all the stuff we've pushed into the column labels -- it's kind of a mess.
###Code
ddt = ddi.T
ddt
###Output
_____no_output_____
###Markdown
**Comment.** We see here that the multi-index for the rows has been turned into a multi-index for the columns. Works the same way. The only problem here is that the column labels are more complicated than we might want. Here, for example, is what we get with the plot method. As usual, `.plot()` plots all the columns of the dataframe, but here that means we're mixing variables. And the legend contains all the levels of the column labels.
###Code
ddt.plot()
###Output
_____no_output_____
###Markdown
Referring to variables with a multi-indexCan we refer to variables in the same way? Sort of, as long as we refer to the top level of the column index. It gives us a dataframe that's a subset of the original one. Let's try each of these: * `ddt['Debt']`* `ddt['Debt']['Argentina']`* `ddt['Debt', 'Argentina']` * `ddt['ARG']`What do you see?
###Code
# indexing by variable
debt = ddt['Debt']
debt
ddt['Debt']['Argentina']
ddt['Debt', 'Argentina']
#ddt['ARG']
###Output
_____no_output_____
###Markdown
What's going on? The theme is that we can reference the top level, which in `ddi` is the `Variable`. If we try to access a lower level, it bombs. **Exercise.** With the dataframe `ddt`: * What type of object is `ddt["Debt"]`? * Construct a line plot of `Debt` over time with one line for each country. SOL<!--ddt['Debt'].dtypes--> SOL<!--ddt['Debt'].plot()--> **Example.** Let's do this together. How would we fix up the legend? What approaches cross your mind? (No code, just the general approach.)
###Code
fig, ax = plt.subplots()
ddt['Debt'].plot(ax=ax)
ax.legend(['ARG', 'DEU', 'GRE'], loc='best')
#ax.axhline(100, color='k', linestyle='--', alpha=.5)
###Output
_____no_output_____
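###Markdown
Another way to get a readable legend, sketched here under the assumption that the `ISO` level holds the three-letter codes we want: flatten the column labels of `ddt['Debt']` down to just that level before plotting, so the default legend is already clean.
###Code
# Sketch: keep only the ISO level of the column labels, then let plot() build the legend
debt_iso = ddt['Debt'].copy()
debt_iso.columns = debt_iso.columns.get_level_values('ISO')
ax = debt_iso.plot()
ax.set_ylabel('Percent of GDP')
###Output
_____no_output_____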
###Markdown
Swapping levelsSince variables refer to the first level of the column index, it's not clear how we would group data by country. Suppose, for example, we wanted to plot `Debt` and `Surplus` for a specific country. What would we do? One way to do that is to make the country the top level with the `swaplevel` method. Note the `axis` parameter. With `axis=1` we swap column levels, with `axis=0` (the default) we swap row levels.
###Code
ddts = ddt.swaplevel(0, 1, axis=1)
ddts
###Output
_____no_output_____
###Markdown
**Exercise.** Use the dataframe `ddts` to plot `Debt` and `Surplus` across time for Argentina. *Hint:* In the `plot` method, set `subplots=True` so that each variable is in a separate subplot. SOL<!--fig, ax = plt.subplots(1, 2, figsize=(12, 4))ddts['Argentina']['Surplus'].plot(ax=ax[0])ax[0].legend(['Surplus'])ddts['Argentina']['Debt'].plot(ax=ax[1])ax[1].legend(['Debt'])ax[0].axhline(0, color='k')ax[0].set_ylim([-10, 10])--> The `xs` methodAnother approach to extracting data that cuts across levels of the row or column index: the `xs` method. This is recent addition to Pandas and an extremely good method once you get the hang of it. The basic syntax is ```pythondf.xs(item, axis=X, level=N)```where `N` is the name or number of an index level and `X` describes if we are extracting from the index or column names. Setting `X=0` (so `axis=0`) will slice up the data along the index, `X=1` extracts data for column labels.Here's how we could use `xs` to get the Argentina data without swapping the level of the column labels
###Code
# ddt.xs?
ddt.xs("Argentina", axis=1, level="Country")
ddt.xs("Argentina", axis=1, level="Country")["Debt"]
###Output
_____no_output_____
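###Markdown
The examples above slice the column labels with `axis=1`; for completeness, here is a sketch of the `axis=0` case mentioned in the text, applied to `ddi`, whose row index carries the multi-index.
###Code
# Sketch: xs along the rows -- pull out the Debt rows of ddi by the Variable level
ddi.xs('Debt', axis=0, level='Variable')
###Output
_____no_output_____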
###Markdown
**Exercise.** Use a combination of `xs` and standard slicing with `[...]` to extract the variable `Debt` for Greece. SOL<!--ddt.xs("Greece", axis=1, level="Country")["Debt"]--> **Exercise.** Use the dataframe `ddt` -- and the `xs` method -- to plot `Debt` and `Surplus` across time for Argentina. SOL<!--fig, ax = plt.subplots()ddt.xs('Argentina', axis=1, level='Country').plot(ax=ax)ax.legend(['Surplus', 'Debt'])--> Stacking and unstacking The `set_index` and `reset_index` methods work on the row labels -- the index. They move columns to the index and the reverse. The `stack` and `unstack` methods move index levels to and from column levels: * `stack` moves the "inner most" (closest to the data when printed) column label into a row label. This creates a **long** dataframe. * `unstack` does the reverse, it moves the inner most level of the index `u`p to become the inner most column label. This creates a **wide** dataframe. We use both to shape (or reshape) our data. We use `set_index` to push things into the index. And then use `reset_index` to push some of them back to the columns. That gives us pretty fine-grainded control over the shape of our data. Intuitively- stacking (vertically): wide table $\rightarrow$ long table- unstacking: long table $\rightarrow$ wide table
###Code
ddi.stack?
###Output
_____no_output_____
###Markdown
**Single level index**
###Code
# example from docstring
dic = {'a': [1, 3], 'b': [2, 4]}
s = pd.DataFrame(data=dic, index=['one', 'two'])
print(s)
s.stack()
###Output
_____no_output_____
###Markdown
**Multi-index**
###Code
ddi.index
ddi.unstack() # Units variable has only one value, so this doesn't do much
ddi.unstack(level='ISO')
###Output
_____no_output_____
###Markdown
Let's get a smaller subset of this data to work with so we can see things a bit more clearly
###Code
# drop some of the index levels (think s for small)
dds = ddi.reset_index(level=[1, 3, 4], drop=True)
dds
# give a name to the column labels
dds.columns.name = 'Year'
dds
###Output
_____no_output_____
###Markdown
Let's remind ourselves **what we want.** We want to * move the column index (Year) into the row index * move the `Variable` and `ISO` levels the other way, into the column labels. The first one uses `stack`, the second one `unstack`. StackingWe stack our data, one variable on top of another, with a multi-index to keep track of what's what. In simple terms, we change the data from a **wide** format to a **long** format. The `stack` method takes the inner most column level and makes it the lowest row level.
###Code
# convert to long format. Notice printing is different... what `type` is ds?
ds = dds.stack()
ds
# same thing with explicit reference to column name
dds.stack(level='Year').head(8)
# or with level number
dds.stack(level=0).head(8)
###Output
_____no_output_____
###Markdown
Unstacking Stacking moves columns into the index, "stacking" the data up into longer columns. Unstacking does the reverse, taking levels of the row index and turning them into column labels. Roughly speaking we're rotating or **pivoting** the data.
###Code
# now go long to wide
ds.unstack() # default is the lowest (innermost) level, which is Year now
# different level
ds.unstack(level='Variable')
# or two at once
ds.unstack(level=['Variable', 'ISO'])
###Output
_____no_output_____
###Markdown
**Exercise.** Run the code below and explain what each line of code does.
###Code
# stacked dataframe
ds.head(8)
du1 = ds.unstack()
du2 = du1.unstack()
###Output
_____no_output_____
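###Markdown
One way to work through the exercise is simply to inspect what each object is: `ds` is a Series indexed by (`Variable`, `ISO`, `Year`); `du1 = ds.unstack()` moves the innermost level (`Year`) up into the columns; `du2 = du1.unstack()` then moves `ISO` up as well, leaving `Variable` alone in the row index. A quick sketch:
###Code
# Inspect the objects from the exercise to see what each unstack() moved
print(type(ds), list(ds.index.names))
print(type(du1), list(du1.index.names), list(du1.columns.names))
print(type(du2), list(du2.index.names), list(du2.columns.names))
###Output
_____no_output_____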
###Markdown
**Exercise (challenging).** Take the unstacked dataframe `dds`. Use some combination of `stack`, `unstack`, and `plot` to plot the variable `Surplus` against `Year` for all three countries. Challenging mostly because you need to work out the steps by yourself. SOL<!--ddse = dds.stack().unstack(level=['Variable', 'ISO'])ddse['Surplus'].plot()--> PivotingThe `pivot` method: a shortcut to some kinds of unstacking. In rough terms, it takes a long dataframe and constructs a wide one. The **inputs are columns, not index levels**. Example: BDS data The Census's [Business Dynamics Statistics](http://www.census.gov/ces/dataproducts/bds/data.html) collects annual information about the hiring decisions of firms by size and age. This table lists the number of firms and total employment by employment size categories: 1 to 4 employees, 5 to 9, and so on. **Apply want operator.** Our **want** is to plot total employment (the variable `Emp`) against size (variable `fsize`). Both are columns in the original data. Here we construct a subset of the data, where we look at the last few years rather than the whole 1976-2013 period.
###Code
url = 'http://www2.census.gov/ces/bds/firm/bds_f_sz_release.csv'
raw = pd.read_csv(url)
raw.head()
# Four size categories
sizes = ['a) 1 to 4', 'b) 5 to 9', 'c) 10 to 19', 'd) 20 to 49']
# only defined size categories and only period since 2012
restricted_sample = (raw['year2']>=2012) & raw['fsize'].isin(sizes)
# don't need all variables
var_names = ['year2', 'fsize', 'Firms', 'Emp']
bds = raw[restricted_sample][var_names]
bds
###Output
_____no_output_____
###Markdown
Pivoting the data Let's think specifically about what we **want**. We want to graph `Emp` against `fsize` for (say) 2013. This calls for: * The index should be the size categories `fsize`. * The column labels should be the entries of `year2`, namely `2012`, `2013` and `2014`. * The data should come from the variable `Emp`. These inputs translate directly into the following `pivot` method:
###Code
bdsp = bds.pivot(index='fsize', columns='year2', values='Emp')
# divide by a million so bars aren't too long
bdsp = bdsp/10**6
bdsp
###Output
_____no_output_____
###Markdown
**Comment.** Note that all the parameters here are columns. That's not a choice, it's the way the `pivot` method is written. We do a plot for fun:
###Code
# plot 2013 as bar chart
fig, ax = plt.subplots()
bdsp[2013].plot(ax=ax, kind='barh')
ax.set_ylabel('')
ax.set_xlabel('Number of Employees (millions)')
###Output
_____no_output_____
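###Markdown
To connect `pivot` back to the reshaping tools above, the same result (up to the division by a million) can be built by hand with `set_index` followed by `unstack` -- a sketch:
###Code
# The same reshape as bds.pivot(index='fsize', columns='year2', values='Emp'), done with set_index + unstack
bdsp_alt = bds.set_index(['fsize', 'year2'])['Emp'].unstack('year2')
bdsp_alt.head()
###Output
_____no_output_____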
###Markdown
ReviewWe return to the OECD's healthcare data, specifically a subset of their table on the number of doctors per one thousand population. This loads and cleans the data:
###Code
url1 = 'http://www.oecd.org/health/health-systems/'
url2 = 'OECD-Health-Statistics-2017-Frequently-Requested-Data.xls'
docs = pd.read_excel(url1+url2,
skiprows=3,
usecols=[0, 51, 52, 53, 54, 55, 57],
sheetname='Physicians',
na_values=['..'],
skip_footer=21)
# rename country variable
names = list(docs)
docs = docs.rename(columns={names[0]: 'Country'})
# strip footnote numbers from country names
docs['Country'] = docs['Country'].str.rsplit(n=1).str.get(0)
docs = docs.head()
docs
###Output
/Users/wasserman/anaconda3/lib/python3.6/site-packages/pandas/util/_decorators.py:118: FutureWarning: The `sheetname` keyword is deprecated, use `sheet_name` instead
return func(*args, **kwargs)
###Markdown
Use this data to: * Set the index as `Country`. * Construct a horizontal bar chart of the number of doctors in each country in "2013 (or nearest year)". * Apply the `drop` method to `docs` to create a dataframe `new` that's missing the last column. * *Challenging.* Use `stack` and `unstack` to "pivot" the data so that columns are labeled by country names and rows are labeled by year. This is challenging because we have left out the intermediate steps. * Plot the number of doctors over time in each country as a line in the same plot. *Comment.* In the last plot, the x axis labels are non-intuitive. Ignore that. ResourcesFar and away the best material on this subject is Brandon Rhodes' 2015 Pycon presentation. 2 hours and 25 minutes and worth every second. * Video: https://youtu.be/5JnMutdy6Fw* Materials: https://github.com/brandon-rhodes/pycon-pandas-tutorial* Outline: https://github.com/brandon-rhodes/pycon-pandas-tutorial/blob/master/script.txt
###Code
#
###Output
_____no_output_____ |
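###Markdown
One possible approach to the review exercises, sketched generically: the exact year labels depend on the OECD spreadsheet, so the most recent column is referenced through `docs.columns` rather than by name.
###Code
# Sketch of the review steps; column labels other than 'Country' are taken from the data itself
di = docs.set_index('Country')                                        # Country as the index
di[di.columns[-1]].plot(kind='barh')                                  # bar chart of the last (most recent) column
new = docs.drop(docs.columns[-1], axis=1)                             # drop the last column
pivoted = new.set_index('Country').stack().unstack(level='Country')   # countries as columns, years as rows
pivoted.plot()                                                        # one line per country
###Output
_____no_output_____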
Problem 017 - Number letter counts.ipynb | ###Markdown
If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
###Code
let units = [
""
"one"
"two"
"three"
"four"
"five"
"six"
"seven"
"eight"
"nine"
]
let teens = [
"ten"
"eleven"
"twelve"
"thirteen"
"fourteen"
"fifteen"
"sixteen"
"seventeen"
"eighteen"
"nineteen"
]
let tens = [
""
"ten" // handled in "teens" case, probably not needed?
"twenty"
"thirty"
"forty"
"fifty"
"sixty"
"seventy"
"eighty"
"ninety"
]
let hundred = "hundred"
let thousand = "thousand"
let breakNumberIntoPowerParts n =
let numberByPowerPosition =
n.ToString().ToCharArray()
|> Array.map (fun x -> int(string(x)))
|> Array.rev
seq {
for i = 0 to numberByPowerPosition.Length - 1 do
yield (i, (numberByPowerPosition.[i]))
}
|> Seq.toList
|> List.rev
let simpleStringify pow10 n =
match pow10 with
| 3 -> units.[n] + thousand
| 2 -> if n > 0 then units.[n] + hundred else ""
| 1 -> tens.[n]
| 0 -> units.[n]
| _ -> ""
let rec stringifyPowerParts (digitPairs:(int * int) list) =
if digitPairs.IsEmpty then [] else
let (pow10, n) = List.head digitPairs
if pow10 = 1 && n = 1 then [teens.[snd(List.head(List.tail digitPairs))]; ""]
else [(simpleStringify pow10 n)] @ (stringifyPowerParts (List.tail digitPairs))
let maybeInsertAnd (numList:string list) =
if numList.Length < 3 then numList // no "and" needed, number < 100
else
let revNumList = List.rev numList
let unitAndTen = revNumList.[0..1]
let allTheRest = revNumList.[2..(revNumList.Length-1)]
if revNumList.[0] <> "" || revNumList.[1] <> "" then
(unitAndTen @ ["and"] @ allTheRest)
|> List.rev
else numList
let countEnglishLongForm n =
breakNumberIntoPowerParts n
|> stringifyPowerParts
|> maybeInsertAnd
|> List.filter (fun s -> s <> "")
|> String.concat ""
|> String.length
[1..1000]
|> List.map countEnglishLongForm
|> List.sum
###Output
_____no_output_____ |
pca_knn_desafio/Desafio/Testes PCA/MouseBehavior - KNN - Final_02.ipynb | ###Markdown
Reading and cleaning datasets
###Code
# reading csv files and creating dataframes
#df_evandro = pd.read_csv('Evandro.csv', sep=';', encoding='latin-1')
#df_celso = pd.read_csv('Celso.csv', sep=';', encoding='latin-1')
df_eliezer = pd.read_csv('Eliezer.csv', sep=';', encoding='latin-1')
df_rafael = pd.read_csv('Rafael.csv', sep=',', encoding='latin-1')
#df_thiago = pd.read_csv('Thiago.csv', sep=';', encoding='latin-1')
# drop NaN values (if any)
#df_evandro.dropna(inplace=True)
#df_celso.dropna(inplace=True)
df_eliezer.dropna(inplace=True)
df_rafael.dropna(inplace=True)
#df_thiago.dropna(inplace=True)
# drop useless data
#df_evandro.drop(['Date', 'Time', 'Event Type'], axis=1, inplace=True)
#df_celso.drop(['Date', 'Time', 'Event Type'], axis=1, inplace=True)
df_eliezer.drop(['Date', 'Time', 'Event Type'], axis=1, inplace=True)
df_rafael.drop(['Date', 'Time', 'Event Type'], axis=1, inplace=True)
#df_thiago.drop(['Date', 'Time', 'Event Type'], axis=1, inplace=True)
# getting rid of outliers by calculating the Z-score across all columns and deleting
# rows whose any of the values is below the threshold
#df_evandro = df_evandro[(np.abs(stats.zscore(df_evandro)) < 2).all(axis=1)].reset_index(drop=True)
#df_celso = df_celso[(np.abs(stats.zscore(df_celso)) < 2).all(axis=1)].reset_index(drop=True)
df_eliezer = df_eliezer[(np.abs(stats.zscore(df_eliezer)) < 2).all(axis=1)].reset_index(drop=True)
df_rafael = df_rafael[(np.abs(stats.zscore(df_rafael)) < 2).all(axis=1)].reset_index(drop=True)
#df_thiago = df_thiago[(np.abs(stats.zscore(df_thiago)) < 2).all(axis=1)].reset_index(drop=True)
# FROM HERE ON, KEEP ONLY THE TWO USERS BEING TESTED
# set the maximum row numbers
#maxRows = [df_evandro.shape[0], df_celso.shape[0]]
#maxRows.sort()
maxRows = [df_eliezer.shape[0], df_rafael.shape[0]]
maxRows.sort()
# slice dataframes in order to equalize the length
#df_evandro = df_evandro.loc[:maxRows[0]-1,:]
#df_celso = df_celso.loc[:maxRows[0]-1,:]
df_eliezer = df_eliezer.loc[:maxRows[0]-1,:]
df_rafael = df_rafael.loc[:maxRows[0]-1,:]
#df_thiago = df_thiago.loc[:maxRows[0]-1,:]
#print(df_evandro.shape[0], df_celso.shape[0])
print(df_eliezer.shape[0], df_rafael.shape[0])
###Output
216730 216730
###Markdown
Methods for creating new variables and standardizing datasets
###Code
def createFeatures(df):
offset_list, xm_list, ym_list, xstd_list, ystd_list, distm_list, diststd_list, arct_list = ([] for i in range(8))
# deleting rows with coordinate X being 0
df = df[df['Coordinate X'] != 0]
# filtering unique id == 1
ulist = df['EventId'].unique()
for u in ulist:
df_unique = df[df['EventId'] == u]
if df_unique.shape[0] == 1: # original is "== 1"
df = df[df['EventId'] != u]
# list of unique id with occurrence > 1
ulist = df['EventId'].unique()
for u in ulist:
df_unique = df[df['EventId'] == u]
# adding mean
x_mean = df_unique['Coordinate X'].mean()
y_mean = df_unique['Coordinate Y'].mean()
xm_list.append(x_mean)
ym_list.append(y_mean)
# adding std
xstd_list.append(df_unique['Coordinate X'].std())
ystd_list.append(df_unique['Coordinate Y'].std())
# calculating euclidean distances
arr = np.array([(x, y) for x, y in zip(df_unique['Coordinate X'], df_unique['Coordinate Y'])])
dist = [np.linalg.norm(arr[i+1]-arr[i]) for i in range(arr.shape[0]-1)]
ideal_dist = np.linalg.norm(arr[arr.shape[0]-1]-arr[0])
# adding offset
offset_list.append(sum(dist)-ideal_dist)
# adding distance mean
distm_list.append(np.asarray(dist).mean())
# adding distance std deviation
diststd_list.append(np.asarray(dist).std())
# create df subset with the new features
df_subset = pd.DataFrame(ulist, columns=['EventId'])
df_subset['Dist Mean'] = distm_list
df_subset['Dist Std Dev'] = diststd_list
df_subset['Offset'] = offset_list
# drop EventId
df_subset.drop(['EventId'], axis=1, inplace=True)
return df_subset
def standardize(df):
# instanciate StandardScaler object
scaler = StandardScaler()
# compute the mean and std to be used for later scaling
scaler.fit(df)
# perform standardization by centering and scaling
scaled_features = scaler.transform(df)
return pd.DataFrame(scaled_features)
# creating new features from existing variables
#df_evandro = createFeatures(df_evandro)
#df_celso = createFeatures(df_celso)
df_eliezer = createFeatures(df_eliezer)
df_rafael = createFeatures(df_rafael)
#df_thiago = createFeatures(df_thiago)
###Output
_____no_output_____
###Markdown
Shuffling and splitting into training and testing datasets
###Code
# set the maximum row numbers
maxRows = [df_eliezer.shape[0], df_rafael.shape[0]]
# (CHANGE FOR EACH DIFFERENT TEST)
#df_evandro.shape[0], df_celso.shape[0], #df_eliezer.shape[0], #df_rafael.shape[0], #df_thiago.shape[0]
maxRows.sort()
# slice dataframes in order to equalize the length
#df_evandro = df_evandro.loc[:maxRows[0]-1,:]
#df_celso = df_celso.loc[:maxRows[0]-1,:]
df_eliezer = df_eliezer.loc[:maxRows[0]-1,:]
#df_rafael = df_rafael.loc[:maxRows[0]-1,:]
df_rafael = df_rafael.loc[:maxRows[0]-1,:]
print(df_eliezer.shape[0], df_rafael.shape[0])
# (CHANGE FOR EACH DIFFERENT TEST)
#df_evandro.shape[0], df_celso.shape[0], df_eliezer.shape[0], #df_rafael.shape[0], #df_thiago.shape[0]
###Output
42236 42236
###Markdown
RUN SEVERAL TIMES FROM THIS POINT ON
###Code
# RUN SEVERAL TIMES FROM HERE ON; EACH TIME THE DATASET WILL BE SHUFFLED AND THE ACCURACY MAY DIFFER
#df_evandro_shuffle = df_evandro.sample(frac=1).reset_index(drop=True)
#df_celso_shuffle = df_celso.sample(frac=1).reset_index(drop=True)
df_eliezer_shuffle = df_eliezer.sample(frac=1).reset_index(drop=True)
#df_rafael_shuffle = df_rafael.sample(frac=1).reset_index(drop=True)
df_rafael_shuffle = df_rafael.sample(frac=1).reset_index(drop=True)
# USER BEING VERIFIED (70% OF THE DATA FOR TRAINING AND 30% FOR TESTING)
#df_evandro_train = df_evandro_shuffle.loc[:(df_evandro_shuffle.shape[0]-1)*0.7]
#df_evandro_test = df_evandro_shuffle.loc[(df_evandro_shuffle.shape[0]*0.7):]
df_eliezer_train = df_eliezer_shuffle.loc[:(df_eliezer_shuffle.shape[0]-1)*0.7]
df_eliezer_test = df_eliezer_shuffle.loc[(df_eliezer_shuffle.shape[0]*0.7):]
# OTHER USER (NO TEST SET NEEDED, TAKE ONLY 70% FOR TRAINING)
#df_celso_train = df_celso_shuffle.loc[:(df_celso_shuffle.shape[0]-1)*0.7]
df_rafael_train = df_rafael_shuffle.loc[:(df_rafael_shuffle.shape[0]-1)*0.7]
# standardizing training datasets
# STANDARDIZE THE TRAIN AND TEST SETS OF THE USER BEING VERIFIED (CHANGE FOR EACH DIFFERENT TEST)
#df_evandro_train = standardize(df_evandro_train)
#df_evandro_test = standardize(df_evandro_test)
df_eliezer_train = standardize(df_eliezer_train)
df_eliezer_test = standardize(df_eliezer_test)
# STANDARDIZE THE OTHER USER'S TRAINING SET (CHANGE FOR EACH DIFFERENT TEST)
#df_celso_train = standardize(df_celso_train)
df_rafael_train = standardize(df_rafael_train)
###Output
_____no_output_____
###Markdown
Running PCA on training datasets
###Code
# applying PCA and concat on train datasets
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
# PCA ON THE TRAINING SET OF THE USER BEING VERIFIED (CHANGE FOR EACH DIFFERENT TEST)
#principalComponents = pca.fit_transform(df_evandro_train)
#df_evandro_train = pd.DataFrame(data = principalComponents)
#df_evandro_train['Label'] = ['Evandro' for s in range(df_evandro_train.shape[0])]
principalComponents = pca.fit_transform(df_eliezer_train)
df_eliezer_train = pd.DataFrame(data = principalComponents)
df_eliezer_train['Label'] = ['Eliezer' for s in range(df_eliezer_train.shape[0])]
# PCA ON THE TEST SET OF THE USER BEING VERIFIED (CHANGE FOR EACH DIFFERENT TEST)
#principalComponents = pca.fit_transform(df_evandro_test)
#df_evandro_test = pd.DataFrame(data = principalComponents)
#df_evandro_test['Label'] = ['Evandro' for s in range(df_evandro_test.shape[0])]
principalComponents = pca.fit_transform(df_eliezer_test)
df_eliezer_test = pd.DataFrame(data = principalComponents)
df_eliezer_test['Label'] = ['Eliezer' for s in range(df_eliezer_test.shape[0])]
# PCA ON THE TRAINING SET OF THE OTHER USERS (CHANGE FOR EACH DIFFERENT TEST)
#principalComponents = pca.fit_transform(df_celso_train)
#df_celso_train = pd.DataFrame(data = principalComponents)
#df_celso_train['Label'] = ['Celso' for s in range(df_celso_train.shape[0])]
principalComponents = pca.fit_transform(df_rafael_train)
df_rafael_train = pd.DataFrame(data = principalComponents)
df_rafael_train['Label'] = ['Rafael' for s in range(df_rafael_train.shape[0])]
# CONCATENATE THE TWO TRAINING DATASETS (CHANGE FOR EACH DIFFERENT TEST)
#df_train = pd.concat([df_evandro_train, df_celso_train]).sample(frac=1).reset_index(drop=True)
#df_test = df_evandro_test
df_train = pd.concat([df_eliezer_train, df_rafael_train]).sample(frac=1).reset_index(drop=True)
df_test = df_eliezer_test
df_train.columns = 'PC1 PC2 PC3 Label'.split()
df_test.columns = 'PC1 PC2 PC3 Label'.split()
df_train.head()
X_train = df_train.drop('Label', axis=1)
Y_train = df_train['Label']
X_test = df_test.drop('Label', axis=1)
Y_test = df_test['Label']
df_train['Label'].value_counts()
# Looking for the best k parameter
error_rate = []
for i in range(1,50,2):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
error_rate.append(np.mean(Y_pred != Y_test))
plt.figure(figsize=(10,6))
plt.plot(range(1,50,2), error_rate, color='blue', lw=1, ls='dashed', marker='o', markerfacecolor='red')
plt.title('Error Rate vs. K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')
# running KNN
knn = KNeighborsClassifier(n_neighbors=99)
knn.fit(X_train, Y_train)
pred = knn.predict(X_test)
print("Accuracy: {}%".format(round(accuracy_score(Y_test, pred)*100,2)))
###Output
Accuracy: 89.37%
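###Markdown
Accuracy alone hides which way the errors go. Since the test set here contains only the first user's samples, a confusion matrix mainly shows how often that user is mistaken for the other one -- a small sketch using scikit-learn's metrics on the predictions above:
###Code
# Break the accuracy down per class
from sklearn.metrics import confusion_matrix, classification_report
print(confusion_matrix(Y_test, pred))
print(classification_report(Y_test, pred))
###Output
_____no_output_____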
|
scripts/tema4 - graficos-R-y-Python/02-matplotlib.ipynb | ###Markdown
Matplotlib
###Code
%matplotlib inline
import matplotlib.pyplot as plt
x = [1,2,3,4]
plt.plot(x)
plt.xlabel("Eje de abcisas")
plt.ylabel("Eje de ordenadas")
plt.show()
x = [1,2,3,4]
y = [1,4,9,16]
plt.plot(x, y)
plt.plot(x,y,"ro")
plt.axis([0, 6, 0, 20])
plt.show()
import numpy as np
data = np.arange(0.0, 10.0, 0.2)
data
plt.plot(data, data, "r--", data, data**2, "bs", data, data**3, "g^")
plt.show()
plt.plot(x,y, linewidth=2.0)
line, = plt.plot(x,y,'-')
line.set_antialiased(False)
lines = plt.plot(data, data, data, data**2)
plt.setp(lines, color = "r", linewidth = 2.0)
lines = plt.plot(data, data, data, data**2)
plt.setp(lines, "color", "r", "linewidth", 2.0)
plt.plot(x,y, alpha = 0.2)
plt.plot(x,y, marker = "+", linestyle = ":", animated = True)
plt.setp(lines)
###Output
agg_filter: a filter function, which takes a (m, n, 3) float array and a dpi value, and returns a (m, n, 3) array
alpha: float (0.0 transparent through 1.0 opaque)
animated: bool
antialiased or aa: bool
clip_box: a `.Bbox` instance
clip_on: bool
clip_path: [(`~matplotlib.path.Path`, `.Transform`) | `.Patch` | None]
color or c: any matplotlib color
contains: a callable function
dash_capstyle: ['butt' | 'round' | 'projecting']
dash_joinstyle: ['miter' | 'round' | 'bevel']
dashes: sequence of on/off ink in points
drawstyle: ['default' | 'steps' | 'steps-pre' | 'steps-mid' | 'steps-post']
figure: a `.Figure` instance
fillstyle: ['full' | 'left' | 'right' | 'bottom' | 'top' | 'none']
gid: an id string
label: object
linestyle or ls: ['solid' | 'dashed', 'dashdot', 'dotted' | (offset, on-off-dash-seq) | ``'-'`` | ``'--'`` | ``'-.'`` | ``':'`` | ``'None'`` | ``' '`` | ``''``]
linewidth or lw: float value in points
marker: :mod:`A valid marker style <matplotlib.markers>`
markeredgecolor or mec: any matplotlib color
markeredgewidth or mew: float value in points
markerfacecolor or mfc: any matplotlib color
markerfacecoloralt or mfcalt: any matplotlib color
markersize or ms: float
markevery: [None | int | length-2 tuple of int | slice | list/array of int | float | length-2 tuple of float]
path_effects: `.AbstractPathEffect`
picker: float distance in points or callable pick function ``fn(artist, event)``
pickradius: float distance in points
rasterized: bool or None
sketch_params: (scale: float, length: float, randomness: float)
snap: bool or None
solid_capstyle: ['butt' | 'round' | 'projecting']
solid_joinstyle: ['miter' | 'round' | 'bevel']
transform: a :class:`matplotlib.transforms.Transform` instance
url: a url string
visible: bool
xdata: 1D array
ydata: 1D array
zorder: float
###Markdown
Multiple plots in a single figure
###Code
def f(x):
return np.exp(-x)*np.cos(2*np.pi*x)
x1 = np.arange(0, 5.0, 0.1)
x2 = np.arange(0, 5.0, 0.2)
plt.figure(1)
plt.subplot(2,1,1)
plt.plot(x1, f(x1), 'ro', x2, f(x2), 'k')
plt.subplot(2,1,2)
plt.plot(x2, f(x2), 'g--')
plt.show()
plt.figure(1)
plt.subplot(2,1,1)
plt.plot([1,2,3])
plt.subplot(2,1,2)
plt.plot([4,5,6])
plt.figure(2)
plt.plot([4,5,6])
plt.figure(1)
plt.subplot(2,1,1)
plt.title("Esto es el primer tรญtulo")
###Output
/anaconda3/lib/python3.5/site-packages/matplotlib/cbook/deprecation.py:107: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
warnings.warn(message, mplDeprecation, stacklevel=1)
###Markdown
Text in plots
###Code
mu = 100
sigma = 20
x = mu + sigma * np.random.randn(10000)
n, bins, patches = plt.hist(x, 50, normed=1, facecolor="g", alpha=0.6)
plt.xlabel("Cociente Intelectual", fontsize = 14, color = "green")
plt.ylabel("Probabilidad")
plt.title(r"Histograma de CI $N(\mu,\sigma)$")
plt.text(120, 0.015, r'$\mu = 100,\ \sigma=20$')
plt.axis([20,180,0, 0.025])
plt.grid(True)
plt.show()
plt.figure(figsize=(10,6), dpi = 90)
plt.subplot(1,1,1)
x = np.arange(0, 10*np.pi, 0.01)
y = np.cos(x)
plt.plot(x,y, lw = 2.0)
plt.annotate('Máximo Local', xy = (4*np.pi, 1), xytext = (15, 1.5),
arrowprops = dict(facecolor = "black", shrink = 0.08))
plt.ylim(-2,2)
plt.show()
###Output
_____no_output_____
###Markdown
Changing scales
###Code
from matplotlib.ticker import NullFormatter
mu = 0.5
sd = 0.3
y = mu + sd*np.random.randn(1000)
y = y[(y>0) & (y<1)]
y.sort()
x = np.arange(len(y))
plt.figure(figsize=(10, 8))
plt.subplot(2,2,1)
plt.plot(x,y)
plt.yscale("linear")
plt.xscale("linear")
plt.title("Escala Lineal")
plt.grid(True)
plt.subplot(2,2,2)
plt.plot(x,y)
plt.yscale("log")
plt.title("Escala Logarรญtmica")
plt.grid(True)
plt.subplot(2,2,3)
plt.plot(x, y - y.mean())
plt.yscale("symlog", linthreshy=0.01)
plt.title("Escala Log Simรฉtrico")
plt.grid(True)
plt.subplot(2,2,4)
plt.plot(x,y)
plt.yscale("logit")
plt.title("Escala logรญstica")
plt.gca().yaxis.set_minor_formatter(NullFormatter())
plt.grid(True)
plt.subplots_adjust(top = 0.92, bottom = 0.08, left = 0.1, right = 0.95, hspace = 0.35, wspace = 0.35)
plt.show()
###Output
_____no_output_____
###Markdown
Changes to the axes
###Code
x = np.linspace(-np.pi, np.pi, 256, endpoint=True)
S, C = np.sin(x), np.cos(x)
plt.figure(figsize=(10,8))
plt.plot(x, S, color = "blue", linewidth = 1.2, linestyle = "-", label = "seno")
plt.plot(x, C, color = "green", linewidth = 1.2, linestyle = "-", label = "coseno")
plt.xlim(x.min()*1.1, x.max()*1.1)
plt.ylim(S.min()*1.1, S.max()*1.1)
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi],
[r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'+$\pi$'])
plt.yticks(np.linspace(-1,1, 2, endpoint=True),
['-1', '+1'])
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data', 0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
plt.legend(loc="upper left")
x0 = 2*np.pi/3
plt.plot([x0,x0], [0, np.sin(x0)], color = "blue", linewidth = 2.5, linestyle = "--")
plt.scatter([x0, ], [np.sin(x0), ], 50, color = "blue")
plt.annotate(r'$\sin(\frac{2\pi}{3}) = \frac{\sqrt{3}}{2}$',
xy = (x0, np.sin(x0)), xycoords = "data",
xytext = (+20,+40), textcoords = "offset points",
fontsize = 16, arrowprops = dict(arrowstyle = "->", connectionstyle="arc3,rad=.2"))
plt.plot([x0,x0], [0, np.cos(x0)], color = "green", linewidth = 2.5, linestyle = "--")
plt.scatter([x0, ], [np.cos(x0), ], 50, color = "green")
plt.annotate(r'$\cos(\frac{2\pi}{3}) = -\frac{1}{2}$',
xy = (x0, np.cos(x0)), xycoords = "data",
xytext = (-90,-60), textcoords = "offset points",
fontsize = 16, arrowprops = dict(arrowstyle = "->", connectionstyle="arc3,rad=.2"))
for label in ax.get_xticklabels() + ax.get_yticklabels():
label.set_fontsize(16)
label.set_bbox(dict(facecolor='white', edgecolor='None', alpha = 0.6))
plt.show()
###Output
_____no_output_____ |
FeatureCollection/fusion_table.ipynb | ###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell. Uncomment these lines if you are running this notebook for the first time.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
fromFT = ee.FeatureCollection('ft:1CLldB-ULPyULBT2mxoRNv7enckVF0gCQoD2oH7XP')
# print(fromFT.getInfo())
polys = fromFT.geometry()
centroid = polys.centroid()
print(centroid.getInfo())
lng, lat = centroid.getInfo()['coordinates']
print("lng = {}, lat = {}".format(lng, lat))
collection = ee.ImageCollection('LANDSAT/LC8_L1T_TOA')
path = collection.filterBounds(fromFT)
images = path.filterDate('2016-05-01', '2016-10-31')
print(images.size().getInfo())
median = images.median()
# lat = 46.80514
# lng = -99.22023
lng_lat = ee.Geometry.Point(lng, lat)
Map.setCenter(lng, lat, 10)
vis = {'bands': ['B5', 'B4', 'B3'], 'max': 0.3}
Map.addLayer(median,vis)
Map.addLayer(fromFT)
###Output
{'type': 'Point', 'coordinates': [-99.22647697049882, 47.20444580408089]}
lng = -99.22647697049882, lat = 47.20444580408089
33
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine API and geemapInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemapdependencies), including earthengine-api, folium, and ipyleaflet.**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
###Code
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.pyL13) can be added using the `Map.add_basemap()` function.
###Code
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
# Add Earth Engine dataset
fromFT = ee.FeatureCollection('ft:1CLldB-ULPyULBT2mxoRNv7enckVF0gCQoD2oH7XP')
# print(fromFT.getInfo())
polys = fromFT.geometry()
centroid = polys.centroid()
print(centroid.getInfo())
lng, lat = centroid.getInfo()['coordinates']
print("lng = {}, lat = {}".format(lng, lat))
collection = ee.ImageCollection('LANDSAT/LC8_L1T_TOA')
path = collection.filterBounds(fromFT)
images = path.filterDate('2016-05-01', '2016-10-31')
print(images.size().getInfo())
median = images.median()
# lat = 46.80514
# lng = -99.22023
lng_lat = ee.Geometry.Point(lng, lat)
Map.setCenter(lng, lat, 10)
vis = {'bands': ['B5', 'B4', 'B3'], 'max': 0.3}
Map.addLayer(median,vis)
Map.addLayer(fromFT)
###Output
_____no_output_____
###Markdown
Display Earth Engine data layers
###Code
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The following script checks if the geehydro package has been installed. If not, it will install geehydro, which automatically installs its dependencies, including earthengine-api and folium.
###Code
import subprocess
try:
import geehydro
except ImportError:
print('geehydro package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geehydro'])
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once.
###Code
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
fromFT = ee.FeatureCollection('ft:1CLldB-ULPyULBT2mxoRNv7enckVF0gCQoD2oH7XP')
# print(fromFT.getInfo())
polys = fromFT.geometry()
centroid = polys.centroid()
print(centroid.getInfo())
lng, lat = centroid.getInfo()['coordinates']
print("lng = {}, lat = {}".format(lng, lat))
collection = ee.ImageCollection('LANDSAT/LC8_L1T_TOA')
path = collection.filterBounds(fromFT)
images = path.filterDate('2016-05-01', '2016-10-31')
print(images.size().getInfo())
median = images.median()
# lat = 46.80514
# lng = -99.22023
lng_lat = ee.Geometry.Point(lng, lat)
Map.setCenter(lng, lat, 10)
vis = {'bands': ['B5', 'B4', 'B3'], 'max': 0.3}
Map.addLayer(median,vis)
Map.addLayer(fromFT)
###Output
{'type': 'Point', 'coordinates': [-99.22647697049882, 47.20444580408089]}
lng = -99.22647697049882, lat = 47.20444580408089
33
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____
###Markdown
View source on GitHub Notebook Viewer Run in binder Run in Google Colab Install Earth Engine APIInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geehydro](https://github.com/giswqs/geehydro). The **geehydro** Python package builds on the [folium](https://github.com/python-visualization/folium) package and implements several methods for displaying Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, `Map.centerObject()`, and `Map.setOptions()`.The magic command `%%capture` can be used to hide output from a specific cell.
###Code
# %%capture
# !pip install earthengine-api
# !pip install geehydro
###Output
_____no_output_____
###Markdown
Import libraries
###Code
import ee
import folium
import geehydro
###Output
_____no_output_____
###Markdown
Authenticate and initialize Earth Engine API. You only need to authenticate the Earth Engine API once. Uncomment the line `ee.Authenticate()` if you are running this notebook for the first time or if you are getting an authentication error.
###Code
# ee.Authenticate()
ee.Initialize()
###Output
_____no_output_____
###Markdown
Create an interactive map This step creates an interactive map using [folium](https://github.com/python-visualization/folium). The default basemap is the OpenStreetMap. Additional basemaps can be added using the `Map.setOptions()` function. The optional basemaps can be `ROADMAP`, `SATELLITE`, `HYBRID`, `TERRAIN`, or `ESRI`.
###Code
Map = folium.Map(location=[40, -100], zoom_start=4)
Map.setOptions('HYBRID')
###Output
_____no_output_____
###Markdown
Add Earth Engine Python script
###Code
fromFT = ee.FeatureCollection('ft:1CLldB-ULPyULBT2mxoRNv7enckVF0gCQoD2oH7XP')
# print(fromFT.getInfo())
polys = fromFT.geometry()
centroid = polys.centroid()
print(centroid.getInfo())
lng, lat = centroid.getInfo()['coordinates']
print("lng = {}, lat = {}".format(lng, lat))
collection = ee.ImageCollection('LANDSAT/LC8_L1T_TOA')
path = collection.filterBounds(fromFT)
images = path.filterDate('2016-05-01', '2016-10-31')
print(images.size().getInfo())
median = images.median()
# lat = 46.80514
# lng = -99.22023
lng_lat = ee.Geometry.Point(lng, lat)
Map.setCenter(lng, lat, 10)
vis = {'bands': ['B5', 'B4', 'B3'], 'max': 0.3}
Map.addLayer(median,vis)
Map.addLayer(fromFT)
###Output
{'type': 'Point', 'coordinates': [-99.22647697049882, 47.20444580408089]}
lng = -99.22647697049882, lat = 47.20444580408089
33
###Markdown
Display Earth Engine data layers
###Code
Map.setControlVisibility(layerControl=True, fullscreenControl=True, latLngPopup=True)
Map
###Output
_____no_output_____ |
IPyRoot/SimmyDev.ipynb | ###Markdown
A scratch notebook for testing and developing `simmy`.
###Code
from titan import TitanConfig
from subchandra import SCConfig
#This means that modules will be automatically reloaded when changed.
#Makes for swift exporatory development, probably a performance hit if using well-tested code.
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Simulation & SimulationGrid SimConfig XRB Config SCConfig
###Code
#Test SCConfig/SimConfig
test_model_dir = '/home/ajacobs/Research/Projects/Simmy/IPyRoot/SCTestGrid/10044-090-210-4lev-full-512'
#Init from existing test
test_model_config = SCConfig(test_model_dir)
test_model_config.printConfig()
#Init from scratch test
test_model_dir = '/home/ajacobs/Research/Projects/Simmy/IPyRoot/SCTestGrid/10044-090-210-4lev-debug'
#Design
# + Get baseline inputs and im ConfigRecs
# + Make factory method smart enough to use this to gen initial model and finish off initialization of ConfigRecs
#Build up initial model ConfigRecord baseline
im_rec = SCConfig.genIMConfigRec()
im_rec.setField('M_tot', '1.0')
im_rec.setField('M_He', '0.0445')
im_rec.setField('delta', '2000000.0')
im_rec.setField('temp_core', '90000000.0')
im_rec.setField('temp_base', '210000000.0')
im_rec.setField('im_exe', '/home/ajacobs/Codebase/Maestro/Util/initial_models/sub_chandra/init_1d.Linux.gfortran.debug.exe')
print(im_rec)
#Build up inputs ConfigRecord baseline
inputs_rec = SCConfig.genInputsConfigRec()
inputs_rec.setField('max_levs', 2)
inputs_rec.setField('coarse_res', 128)
print(inputs_rec)
#Create the Config object and print
test_model_config = SCConfig(test_model_dir, config_recs=[im_rec,inputs_rec])
###Output
Initial Model Configuration
Description: Configures the initial model for this simulation. This
corresponds to the _params file used by init1d to build an initial 1D
model to be mapped to the 3D domain. The data from this model are also
stored.
Fields:
im_exe
Description: Full path to the initial model builder.
Current value: /home/ajacobs/Codebase/Maestro/Util/initial_models/sub_chandra/init_1d.Linux.gfortran.debug.exe
nx
Description: Resolution (number of cells) of the 1D model, should match Maestro base state resolution.
Current value: None
M_tot
Description: Mass of the WD core in M_sol.
Current value: 1.0
M_He
Description: Mass of He envelope in M_sol.
Current value: 0.0445
delta
Description: Transition delta from core to envelope in cm.
Current value: 2000000.0
temp_core
Description: Isothermal core temperature in K.
Current value: 90000000.0
temp_base
Description: Temperature at the base of the He envelope in K.
Current value: 210000000.0
xmin
Description: Spatial coordinate in cm the model starts at.
Current value: 0.0
xmax
Description: Spatial coordinate in cm of the last cell, should match the sidelength of domain in octant simulation, half sidelength for full star.
Current value: None
mixed_co_wd
Description: Boolean that sets if core is C/O or just C.
Current value: .false.
low_density_cutoff
Description: Density floor in the initial model (NOT for the 3D Maestro domain).
Current value: 1.d-4
temp_fluff
Description: Temperature floor, will also be temperature when below density floor.
Current value: 7.5d7
smallt
Description: An unused parameter that used to be like temp_fluff.
Current value: 1.d6
radius
Description: NumPy array of initial model radius in cm.
Current value: None
density
Description: NumPy array of initial model density in g/cm^3.
Current value: None
temperature
Description: NumPy array of initial model temperature in K.
Current value: None
pressure
Description: NumPy array of initial model pressure in dyn/cm^2.
Current value: None
soundspeed
Description: NumPy array of initial model sound speed in cm/s.
Current value: None
entropy
Description: NumPy array of initial model specific entropy in erg/(g*K).
Current value: None
species
Description: NumPy 2D array of initial model species mass fractions.
Current value: None
Inputs Configuration
Description: Configuration of the inputs file. This is the file passed
to the Maestro executable that sets various Maestro parameters,
configures the simulation, and provides the location of initial model
data.
Fields:
im_file
Description: Initial model file with data to be read into the Maestro basestate.
Current value: None
drdxfac
Description: 5
Current value: 5
job_name
Description: Description of the simulation.
Current value: None
max_levs
Description: Number of levels the AMR will refine to.
Current value: 2
coarse_res
Description: Resolution of the base (coarsest) level
Current value: 128
anelastic_cutoff
Description: Density cutoff below which the Maestro velocity constraint is simplified to the anelastic constraint.
Current value: None
octant
Description: Boolean that sets if an octant or full star should be modeled.
Current value: .false.
dim
Description: Dimensionality of the problem.
Current value: 3
physical_size
Description: Sidelength in cm of the square domain.
Current value: None
plot_deltat
Description: Time interval in s at which to save pltfiles.
Current value: 5.0
mini_plot_deltat
Description: Time interval in s at which to save minipltfiles.
Current value: 0.2
chk_int
Description: Timestep interval at which to save chkpoint files.
Current value: 10
plot_base_name
Description: Basename for pltfiles. Pltfiles will be saved with this name plus their timestep.
Current value: None
mini_plot_base_name
Description: Basename for minipltfiles. Minipltfiles will be saved with this name plus their timestep.
Current value: None
check_base_name
Description: Basename for checkpoint files. Chkfiles will be saved with this name plus their timestep.
Current value: None
bc_lo
Description: Integer flag for the lower (x=y=z=0) boundary
Current value: None
bc_hi
Description: Integer flag for the hi (x=y=z=max) boundary
Current value: None
/home/ajacobs/Codebase/Maestro/Util/initial_models/sub_chandra/init_1d.Linux.gfortran.debug.exe /home/ajacobs/Research/Projects/Simmy/IPyRoot/SCTestGrid/10044-090-210-4lev-debug/model/_params.10044-090-210-4lev-debug
###Markdown
Machine RunConfig TitanConfig
###Code
#Test TitanConfig
config_dict = {}
config_dict['nodes'] = 128
test_config = TitanConfig('/home/ajacobs/Research/Projects/Simmy/IPyRoot/TestModel', config_dict)
#Tests to formalize
# + Successfully creates file
###Output
_____no_output_____ |
assignments/course_3/week_2_assignment_2.ipynb | ###Markdown
Logistic Regression with L2 regularizationThe goal of this second notebook is to implement your own logistic regression classifier with L2 regularization. You will do the following: * Extract features from Amazon product reviews. * Convert an SFrame into a NumPy array. * Write a function to compute the derivative of log likelihood function with an L2 penalty with respect to a single coefficient. * Implement gradient ascent with an L2 penalty. * Empirically explore how the L2 penalty can ameliorate overfitting. Fire up GraphLab Create Make sure you have the latest version of GraphLab Create. Upgrade by``` pip install graphlab-create --upgrade```See [this page](https://dato.com/download/) for detailed instructions on upgrading.
###Code
# from __future__ import division
# import graphlab
import turicreate as tc
import numpy as np
import string
import re
###Output
_____no_output_____
###Markdown
Load and process review dataset For this assignment, we will use the same subset of the Amazon product review dataset that we used in Module 3 assignment. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted of mostly positive reviews.
###Code
products = tc.SFrame('amazon_baby_subset.gl/')
###Output
_____no_output_____
###Markdown
Just like we did previously, we will work with a hand-curated list of important words extracted from the review data. We will also perform 2 simple data transformations:1. Remove punctuation using [Python's built-in](https://docs.python.org/2/library/string.html) string functionality.2. Compute word counts (only for the **important_words**)Refer to Module 3 assignment for more details.
###Code
# The same feature processing (same as the previous assignments)
# ---------------------------------------------------------------
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
# def remove_punctuation(text):
# import string
# return text.translate(None, string.punctuation)
def remove_punctuation(text):
regex = re.compile('[%s]' % re.escape(string.punctuation))
return regex.sub('', text)
# Remove punctuation.
products['review_clean'] = products['review'].apply(remove_punctuation)
# Split out the words into individual columns
for word in important_words:
# products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
products[word] = products['review_clean'].apply(lambda s : sum(1 for match in re.finditer(r"\b%s\b"% word, s)))
###Output
_____no_output_____
###Markdown
Now, let us take a look at what the dataset looks like (**Note:** This may take a few minutes).
###Code
products
###Output
_____no_output_____
###Markdown
Train-Validation splitWe split the data into a train-validation split with 80% of the data in the training set and 20% of the data in the validation set. We use `seed=2` so that everyone gets the same result.**Note:** In previous assignments, we have called this a **train-test split**. However, the portion of data that we don't train on will be used to help **select model parameters**. Thus, this portion of data should be called a **validation set**. Recall that examining performance of various potential models (i.e. models with different parameters) should be on a validation set, while evaluation of selected model should always be on a test set.
###Code
train_data, validation_data = products.random_split(.8, seed=2)
print('Training set : %d data points' % len(train_data))
print('Validation set : %d data points' % len(validation_data))
###Output
Training set : 42361 data points
Validation set : 10711 data points
###Markdown
Convert SFrame to NumPy array Just like in the second assignment of the previous module, we provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. **Note:** The feature matrix includes an additional column 'intercept' filled with 1's to take account of the intercept term.
###Code
import numpy as np
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
###Output
_____no_output_____
###Markdown
We convert both the training and validation sets into NumPy arrays.**Warning**: This may take a few minutes.
###Code
feature_matrix_train, sentiment_train = get_numpy_data(train_data, important_words, 'sentiment')
feature_matrix_valid, sentiment_valid = get_numpy_data(validation_data, important_words, 'sentiment')
###Output
_____no_output_____
###Markdown
**Are you running this notebook on an Amazon EC2 t2.micro instance?** (If you are using your own machine, please skip this section)It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running `get_numpy_data` function. Instead, download the [binary file](https://s3.amazonaws.com/static.dato.com/files/coursera/course-3/numpy-arrays/module-4-assignment-numpy-arrays.npz) containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:```arrays = np.load('module-4-assignment-numpy-arrays.npz')feature_matrix_train, sentiment_train = arrays['feature_matrix_train'], arrays['sentiment_train']feature_matrix_valid, sentiment_valid = arrays['feature_matrix_valid'], arrays['sentiment_valid']``` Building on logistic regression with no L2 penalty assignmentLet us now build on Module 3 assignment. Recall from lecture that the link function for logistic regression can be defined as:$$P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},$$where the feature vector $h(\mathbf{x}_i)$ is given by the word counts of **important_words** in the review $\mathbf{x}_i$. We will use the **same code** as in this past assignment to make probability predictions since this part is not affected by the L2 penalty. (Only the way in which the coefficients are learned is affected by the addition of a regularization term.)
###Code
'''
produces a probabilistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
## YOUR CODE HERE
score = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
## YOUR CODE HERE
predictions = 1/(1+np.exp(-score))
return predictions
###Output
_____no_output_____
###Markdown
Adding L2 penalty Let us now work on extending logistic regression with L2 regularization. As discussed in the lectures, the L2 regularization is particularly useful in preventing overfitting. In this assignment, we will explore L2 regularization in detail.Recall from lecture and the previous assignment that for logistic regression without an L2 penalty, the derivative of the log likelihood function is:$$\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)$$** Adding L2 penalty to the derivative** It takes only a small modification to add a L2 penalty. All terms indicated in **red** refer to terms that were added due to an **L2 penalty**.* Recall from the lecture that the link function is still the sigmoid:$$P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},$$* We add the L2 penalty term to the per-coefficient derivative of log likelihood:$$\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }$$The **per-coefficient derivative for logistic regression with an L2 penalty** is as follows:$$\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right) \color{red}{-2\lambda w_j }$$and for the intercept term, we have$$\frac{\partial\ell}{\partial w_0} = \sum_{i=1}^N h_0(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)$$ **Note**: As we did in the Regression course, we do not apply the L2 penalty on the intercept. A large intercept does not necessarily indicate overfitting because the intercept is not associated with any particular feature. Write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. Unlike its counterpart in the last assignment, the function accepts five arguments: * `errors` vector containing $(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w}))$ for all $i$ * `feature` vector containing $h_j(\mathbf{x}_i)$ for all $i$ * `coefficient` containing the current value of coefficient $w_j$. * `l2_penalty` representing the L2 penalty constant $\lambda$ * `feature_is_constant` telling whether the $j$-th feature is constant or not.
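As a quick sanity check on the new term (a small self-contained sketch, not part of the assignment), the derivative of the penalty $-\lambda\|\mathbf{w}\|_2^2$ with respect to $w_j$ is $-2\lambda w_j$, which a finite-difference approximation confirms:
```
import numpy as np

lam = 0.5
w = np.array([0.3, -1.2, 2.0])
penalty = lambda w: -lam * np.sum(w ** 2)

j, eps = 1, 1e-6
w_plus, w_minus = w.copy(), w.copy()
w_plus[j] += eps
w_minus[j] -= eps

numeric = (penalty(w_plus) - penalty(w_minus)) / (2 * eps)
analytic = -2 * lam * w[j]
print(numeric, analytic)  # both approximately 1.2
```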
###Code
def feature_derivative_with_L2(errors, feature, coefficient, l2_penalty, feature_is_constant):
# Compute the dot product of errors and feature
## YOUR CODE HERE
derivative = np.dot(errors, feature)
# add L2 penalty term for any feature that isn't the intercept.
if not feature_is_constant:
## YOUR CODE HERE
derivative -= 2*l2_penalty*coefficient
return derivative
###Output
_____no_output_____
###Markdown
** Quiz Question:** In the code above, was the intercept term regularized? To verify the correctness of the gradient ascent algorithm, we provide a function for computing log likelihood (which we recall from the last assignment was a topic detailed in an advanced optional video, and used here for its numerical stability). $$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) \color{red}{-\lambda\|\mathbf{w}\|_2^2} $$
###Code
def compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
lp = np.sum((indicator-1)*scores - np.log(1. + np.exp(-scores))) - l2_penalty*np.sum(coefficients[1:]**2)
return lp
###Output
_____no_output_____
###Markdown
** Quiz Question:** Does the term with L2 regularization increase or decrease $\ell\ell(\mathbf{w})$? The logistic regression function looks almost like the one in the last assignment, with a minor modification to account for the L2 penalty. Fill in the code below to complete this modification.
###Code
def logistic_regression_with_L2(feature_matrix, sentiment, initial_coefficients, step_size, l2_penalty, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in range(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
## YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in range(len(coefficients)): # loop over each coefficient
is_intercept = (j == 0)
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
## YOUR CODE HERE
derivative = feature_derivative_with_L2(errors,
feature_matrix[:,j],
coefficients[j],
l2_penalty,
is_intercept)
# add the step size times the derivative to the current coefficient
## YOUR CODE HERE
coefficients[j] += step_size*derivative
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood_with_L2(feature_matrix, sentiment, coefficients, l2_penalty)
print('iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp))
return coefficients
###Output
_____no_output_____
###Markdown
Explore effects of L2 regularizationNow that we have written up all the pieces needed for regularized logistic regression, let's explore the benefits of using **L2 regularization** in analyzing sentiment for product reviews. **As iterations pass, the log likelihood should increase**.Below, we train models with increasing amounts of regularization, starting with no L2 penalty, which is equivalent to our previous logistic regression implementation.
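The next cell trains each model into its own explicitly named variable so that later cells can refer to them by name; an equivalent, more compact pattern (a sketch that only uses the arrays and the function defined above) would be:
```
l2_penalties = [0, 4, 10, 1e2, 1e3, 1e5]
coefficients_by_penalty = {
    l2: logistic_regression_with_L2(feature_matrix_train, sentiment_train,
                                    initial_coefficients=np.zeros(194),
                                    step_size=5e-6, l2_penalty=l2, max_iter=501)
    for l2 in l2_penalties
}
```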
###Code
# run with L2 = 0
coefficients_0_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=0, max_iter=501)
# run with L2 = 4
coefficients_4_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=4, max_iter=501)
# run with L2 = 10
coefficients_10_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=10, max_iter=501)
# run with L2 = 1e2
coefficients_1e2_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e2, max_iter=501)
# run with L2 = 1e3
coefficients_1e3_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e3, max_iter=501)
# run with L2 = 1e5
coefficients_1e5_penalty = logistic_regression_with_L2(feature_matrix_train, sentiment_train,
initial_coefficients=np.zeros(194),
step_size=5e-6, l2_penalty=1e5, max_iter=501)
###Output
iteration 0: log likelihood of observed labels = -29271.85955115
iteration 1: log likelihood of observed labels = -29271.71006589
iteration 2: log likelihood of observed labels = -29271.65738833
iteration 3: log likelihood of observed labels = -29271.61189923
iteration 4: log likelihood of observed labels = -29271.57079975
iteration 5: log likelihood of observed labels = -29271.53358505
iteration 6: log likelihood of observed labels = -29271.49988440
iteration 7: log likelihood of observed labels = -29271.46936584
iteration 8: log likelihood of observed labels = -29271.44172890
iteration 9: log likelihood of observed labels = -29271.41670149
iteration 10: log likelihood of observed labels = -29271.39403722
iteration 11: log likelihood of observed labels = -29271.37351294
iteration 12: log likelihood of observed labels = -29271.35492661
iteration 13: log likelihood of observed labels = -29271.33809523
iteration 14: log likelihood of observed labels = -29271.32285309
iteration 15: log likelihood of observed labels = -29271.30905015
iteration 20: log likelihood of observed labels = -29271.25729150
iteration 30: log likelihood of observed labels = -29271.20657205
iteration 40: log likelihood of observed labels = -29271.18775997
iteration 50: log likelihood of observed labels = -29271.18078247
iteration 60: log likelihood of observed labels = -29271.17819447
iteration 70: log likelihood of observed labels = -29271.17723457
iteration 80: log likelihood of observed labels = -29271.17687853
iteration 90: log likelihood of observed labels = -29271.17674648
iteration 100: log likelihood of observed labels = -29271.17669750
iteration 200: log likelihood of observed labels = -29271.17666862
iteration 300: log likelihood of observed labels = -29271.17666862
iteration 400: log likelihood of observed labels = -29271.17666862
iteration 500: log likelihood of observed labels = -29271.17666862
###Markdown
Compare coefficientsWe now compare the **coefficients** for each of the models that were trained above. We will create a table of features and learned coefficients associated with each of the different L2 penalty values.Below is a simple helper function that will help us create this table.
###Code
table = tc.SFrame({'word': ['(intercept)'] + important_words})
def add_coefficients_to_table(coefficients, column_name):
table[column_name] = coefficients
return table
###Output
_____no_output_____
###Markdown
Now, let's run the function `add_coefficients_to_table` for each of the L2 penalty strengths.
###Code
add_coefficients_to_table(coefficients_0_penalty, 'coefficients [L2=0]')
add_coefficients_to_table(coefficients_4_penalty, 'coefficients [L2=4]')
add_coefficients_to_table(coefficients_10_penalty, 'coefficients [L2=10]')
add_coefficients_to_table(coefficients_1e2_penalty, 'coefficients [L2=1e2]')
add_coefficients_to_table(coefficients_1e3_penalty, 'coefficients [L2=1e3]')
add_coefficients_to_table(coefficients_1e5_penalty, 'coefficients [L2=1e5]')
###Output
_____no_output_____
###Markdown
Using **the coefficients trained with L2 penalty 0**, find the 5 most positive words (with largest positive coefficients). Save them to **positive_words**. Similarly, find the 5 most negative words (with largest negative coefficients) and save them to **negative_words**.**Quiz Question**. Which of the following is **not** listed in either **positive_words** or **negative_words**?
###Code
positive_words = table.sort('coefficients [L2=0]', ascending=False).head(5)['word'].to_numpy()
positive_words
negative_words = table.sort('coefficients [L2=0]', ascending=True).head(5)['word'].to_numpy()
negative_words
###Output
_____no_output_____
###Markdown
Let us observe the effect of increasing L2 penalty on the 10 words just selected. We provide you with a utility function to plot the coefficient path.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
def make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list):
cmap_positive = plt.get_cmap('Reds')
cmap_negative = plt.get_cmap('Blues')
xx = l2_penalty_list
plt.plot(xx, [0.]*len(xx), '--', lw=1, color='k')
table_positive_words = table.filter_by(column_name='word', values=positive_words)
table_negative_words = table.filter_by(column_name='word', values=negative_words)
del table_positive_words['word']
del table_negative_words['word']
for i in range(len(positive_words)):
color = cmap_positive(0.8*((i+1)/(len(positive_words)*1.2)+0.15))
plt.plot(xx, table_positive_words[i:i+1].to_numpy().flatten(),
'-', label=positive_words[i], linewidth=4.0, color=color)
for i in range(len(negative_words)):
color = cmap_negative(0.8*((i+1)/(len(negative_words)*1.2)+0.15))
plt.plot(xx, table_negative_words[i:i+1].to_numpy().flatten(),
'-', label=negative_words[i], linewidth=4.0, color=color)
plt.legend(loc='best', ncol=3, prop={'size':16}, columnspacing=0.5)
plt.axis([1, 1e5, -1, 2])
plt.title('Coefficient path')
plt.xlabel(r'L2 penalty ($\lambda$)')
plt.ylabel('Coefficient value')
plt.xscale('log')
plt.rcParams.update({'font.size': 18})
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Run the following cell to generate the plot. Use the plot to answer the following quiz question.
###Code
make_coefficient_plot(table, positive_words, negative_words, l2_penalty_list=[0, 4, 10, 1e2, 1e3, 1e5])
###Output
_____no_output_____
###Markdown
**Quiz Question**: (True/False) All coefficients consistently get smaller in size as the L2 penalty is increased.**Quiz Question**: (True/False) The relative order of coefficients is preserved as the L2 penalty is increased. (For example, if the coefficient for 'cat' was more positive than that for 'dog', this remains true as the L2 penalty increases.) Measuring accuracyNow, let us compute the accuracy of the classifier model. Recall that the accuracy is given by$$\mbox{accuracy} = \frac{\mbox{ correctly classified data points}}{\mbox{ total data points}}$$Recall from lecture that the class prediction is calculated using$$\hat{y}_i = \left\{\begin{array}{ll} +1 & h(\mathbf{x}_i)^T\mathbf{w} > 0 \\ -1 & h(\mathbf{x}_i)^T\mathbf{w} \leq 0 \\\end{array} \right.$$**Note**: It is important to know that the model prediction code doesn't change even with the addition of an L2 penalty. The only thing that changes is the estimated coefficients used in this prediction.Based on the above, we will use the same code that was used in Module 3 assignment.
###Code
def get_classification_accuracy(feature_matrix, sentiment, coefficients):
scores = np.dot(feature_matrix, coefficients)
apply_threshold = np.vectorize(lambda x: 1. if x > 0 else -1.)
predictions = apply_threshold(scores)
num_correct = (predictions == sentiment).sum()
accuracy = num_correct / len(feature_matrix)
return accuracy
###Output
_____no_output_____
###Markdown
Below, we compare the accuracy on the **training data** and **validation data** for all the models that were trained in this assignment. We first calculate the accuracy values and then build a simple report summarizing the performance for the various models.
###Code
train_accuracy = {}
train_accuracy[0] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_0_penalty)
train_accuracy[4] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_4_penalty)
train_accuracy[10] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_10_penalty)
train_accuracy[1e2] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e2_penalty)
train_accuracy[1e3] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e3_penalty)
train_accuracy[1e5] = get_classification_accuracy(feature_matrix_train, sentiment_train, coefficients_1e5_penalty)
validation_accuracy = {}
validation_accuracy[0] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_0_penalty)
validation_accuracy[4] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_4_penalty)
validation_accuracy[10] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_10_penalty)
validation_accuracy[1e2] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e2_penalty)
validation_accuracy[1e3] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e3_penalty)
validation_accuracy[1e5] = get_classification_accuracy(feature_matrix_valid, sentiment_valid, coefficients_1e5_penalty)
# Build a simple report
for key in sorted(validation_accuracy.keys()):
print("L2 penalty = %g" % key)
print("train accuracy = %s, validation_accuracy = %s" % (train_accuracy[key], validation_accuracy[key]))
print("--------------------------------------------------------------------------------")
# Optional. Plot accuracy on training and validation sets over choice of L2 penalty.
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 10, 6
sorted_list = sorted(train_accuracy.items(), key=lambda x:x[0])
plt.plot([p[0] for p in sorted_list], [p[1] for p in sorted_list], 'bo-', linewidth=4, label='Training accuracy')
sorted_list = sorted(validation_accuracy.items(), key=lambda x:x[0])
plt.plot([p[0] for p in sorted_list], [p[1] for p in sorted_list], 'ro-', linewidth=4, label='Validation accuracy')
plt.xscale('symlog')
plt.axis([0, 1e3, 0.78, 0.786])
plt.legend(loc='lower left')
plt.rcParams.update({'font.size': 18})
plt.tight_layout
###Output
_____no_output_____ |
notebooks/development/grouping.ipynb | ###Markdown
Group by
###Code
import datetime
from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
data_dir = '../data/private_data'
df = pd.read_csv(data_dir+'/private_events_dev2/private_events_all_TRAIN_update.txt', header=None, sep=' ')
meta = pd.read_csv(data_dir+'/private_events_dev2/private_events_all_TRAIN_update_meta.txt', sep=',', parse_dates=['date'])
data_meta = pd.concat([meta, df], axis=1)
data_meta.head()
dog_names = ['Rex', 'Samson', 'Spike']
###Output
_____no_output_____
###Markdown
Group by date and calculate accuracy
###Code
dog0_data = data_meta[(data_meta['dog']==dog_names[0])]
grouped = dog0_data.groupby(by=['date', 'dog_result'])
group_by_result = grouped.size().unstack()
group_by_result['TPR'] = group_by_result.TP/(group_by_result.TP+group_by_result.FN)
group_by_result['TNR'] = group_by_result.TN/(group_by_result.TN+group_by_result.FP)
group_by_result['total'] = (group_by_result.TP+group_by_result.FN) + (group_by_result.TN+group_by_result.FP)
print(group_by_result)
###Output
_____no_output_____
###Markdown
Create a dataframe containing data from selected dates
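The cell below builds the filter by chaining several `!=` comparisons; an equivalent, more compact version (a sketch, assuming the same columns and listing the excluded dates once) uses `isin`:
```
excluded_dates = pd.to_datetime(['2018-08-07', '2018-08-21', '2018-09-12', '2018-10-16', '2018-10-23'])
cond = (data_meta['dog'] == dog_names[0]) & ~data_meta['date'].isin(excluded_dates)
selection_0 = data_meta[cond & (data_meta['class'] == 0)]
selection_1 = data_meta[cond & (data_meta['class'] == 1)]
```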
###Code
# dog0's "good days" dataset (unshuffled)
condd = data_meta['dog']==dog_names[0]
cond0 = data_meta['date']!='2018-08-07'
cond1 = data_meta['date']!='2018-08-21'
cond2 = data_meta['date']!='2018-09-12'
cond3 = data_meta['date']!='2018-10-16'
cond4 = data_meta['date']!='2018-10-23'
cond = condd & cond0 & cond1 & cond2 & cond3 & cond4
selection_0 = data_meta[cond & (data_meta['class']==0)]
selection_1 = data_meta[cond & (data_meta['class']==1)]
print(selection_0.iloc[:,:16].head())
print(selection_1.iloc[:,:16].head())
focus = dog0_data[(dog0_data.date == '2018-08-07') & (dog0_data.dog_result == 'FN')]
focus.iloc[:,16:].T.plot.line()
focus = dog0_data[(dog0_data.date == '2018-08-07') & (dog0_data.dog_result == 'TN')]
focus.iloc[:5,16:].T.plot.line()
focus = dog0_data[(dog0_data.date == '2018-11-06') & (dog0_data.dog_result == 'TN')]
focus.iloc[:5,16:].T.plot.line()
###Output
_____no_output_____ |
80_data.ipynb | ###Markdown
Data> Functions used to create pytorch `DataSet`s and `DataLoader`s.
###Code
# export
from typing import Optional, Tuple, Union
import multiprocessing as mp
import numpy as np
import pandas as pd
import torch
from fastai.data_block import DataBunch, DatasetType
from pandas import DataFrame
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MaxAbsScaler, StandardScaler
from torch.utils.data import DataLoader, Dataset
# hide
%load_ext autoreload
%autoreload 2
# hide
import pandas as pd
url = "https://raw.githubusercontent.com/CamDavidsonPilon/lifelines/master/lifelines/datasets/rossi.csv"
df = pd.read_csv(url)
df.rename(columns={'week':'t', 'arrest':'e'}, inplace=True)
# export
class TestData(Dataset):
"""
Create pyTorch Dataset
parameters:
- t: time elapsed
- b: (optional) breakpoints where the hazard is different to previous segment of time.
**Must include 0 as first element and the maximum time as last element**
- x: (optional) features
"""
def __init__(self, t:np.array, b:Optional[np.array]=None, x:Optional[np.array]=None,
t_scaler:MaxAbsScaler=None, x_scaler:StandardScaler=None) -> None:
super().__init__()
self.t, self.b, self.x = t, b, x
self.t_scaler = t_scaler
self.x_scaler = x_scaler
if len(t.shape) == 1:
self.t = t[:,None]
if t_scaler:
self.t_scaler = t_scaler
self.t = self.t_scaler.transform(self.t)
else:
self.t_scaler = MaxAbsScaler()
self.t = self.t_scaler.fit_transform(self.t)
if b is not None:
b = b[1:-1]
if len(b.shape) == 1:
b = b[:,None]
if t_scaler:
self.b = t_scaler.transform(b).squeeze()
else:
self.b = self.t_scaler.transform(b).squeeze()
if x is not None:
if len(x.shape) == 1:
self.x = x[None, :]
if x_scaler:
self.x_scaler = x_scaler
self.x = self.x_scaler.transform(self.x)
else:
self.x_scaler = StandardScaler()
self.x = self.x_scaler.fit_transform(self.x)
self.only_x = False
def __len__(self) -> int:
return len(self.t)
def __getitem__(self, i:int) -> Tuple:
if self.only_x:
return torch.Tensor(self.x[i])
time = torch.Tensor(self.t[i])
if self.b is None:
x_ = (time,)
else:
t_section = torch.LongTensor([np.searchsorted(self.b, self.t[i])])
x_ = (time, t_section.squeeze())
if self.x is not None:
x = torch.Tensor(self.x[i])
x_ = x_ + (x,)
return x_
# export
class Data(TestData):
"""
Create pyTorch Dataset
parameters:
- t: time elapsed
- e: (death) event observed. 1 if observed, 0 otherwise.
- b: (optional) breakpoints where the hazard is different to previous segment of time.
- x: (optional) features
"""
def __init__(self, t:np.array, e:np.array, b:Optional[np.array]=None, x:Optional[np.array]=None,
t_scaler:MaxAbsScaler=None, x_scaler:StandardScaler=None) -> None:
super().__init__(t, b, x, t_scaler, x_scaler)
self.e = e
if len(e.shape) == 1:
self.e = e[:,None]
def __getitem__(self, i) -> Tuple:
x_ = super().__getitem__(i)
e = torch.Tensor(self.e[i])
return x_, e
# hide
np.random.seed(42)
N = 100
D = 3
p = 0.1
bs = 64
x = np.random.randn(N, D)
t = np.arange(N)
e = np.random.binomial(1, p, N)
data = Data(t, e, x=x)
batch = next(iter(DataLoader(data, bs)))
assert len(batch[-1]) == bs, (f"length of batch {len(batch)} is different"
f"to intended batch size {bs}")
[b.shape for b in batch[0]], batch[1].shape
# hide
breakpoints = np.array([0, 10, 50, N-1])
data = Data(t, e, breakpoints, x)
batch2 = next(iter(DataLoader(data, bs)))
assert len(batch2[-1]) == bs, (f"length of batch {len(batch2)} is different"
f"to intended batch size {bs}")
print([b.shape for b in batch2[0]], batch2[1].shape)
assert torch.all(batch[0][0] == batch2[0][0]), ("Discrepancy between batch "
"with breakpoints and without")
# export
class TestDataFrame(TestData):
"""
Wrapper around the TestData class that takes in a dataframe instead
parameters:
- df: dataframe. **Must have a t (time) column; an e column, if present, is ignored and all other columns are used as features.**
- b: breakpoints of time (optional)
"""
def __init__(self, df:DataFrame, b:Optional[np.array]=None,
t_scaler:MaxAbsScaler=None, x_scaler:StandardScaler=None) -> None:
t = df['t'].values
remainder = list(set(df.columns) - set(['t', 'e']))
x = df[remainder].values
if x.shape[1] == 0:
x = None
super().__init__(t, b, x, t_scaler, x_scaler)
# export
class DataFrame(Data):
"""
Wrapper around Data Class that takes in a dataframe instead
parameters:
- df: dataframe. **Must have t (time) and e (event) columns; all other columns are used as features.**
- b: breakpoints of time (optional)
"""
def __init__(self, df:DataFrame, b:Optional[np.array]=None,
t_scaler:MaxAbsScaler=None, x_scaler:StandardScaler=None) -> None:
t = df['t'].values
e = df['e'].values
x = df.drop(['t', 'e'], axis=1).values
if x.shape[1] == 0:
x = None
super().__init__(t, e, b, x, t_scaler, x_scaler)
# hide
# testing with pandas dataframe
import pandas as pd
df = pd.DataFrame({'t': t, 'e': e})
df2 = DataFrame(df)
df2[1]
# hide
# testing with x
new_df = pd.concat([df, pd.DataFrame(x)], axis=1)
df3 = DataFrame(new_df)
df3[1]
# hide
# testing with breakpoints
new_df = pd.concat([df, pd.DataFrame(x)], axis=1)
df3 = DataFrame(new_df, breakpoints)
df3[1]
###Output
_____no_output_____
###Markdown
Create iterable data loaders (train and validation) using the classes above:
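Once the helpers below are defined, typical usage might look like this (a sketch, assuming a survival dataframe `df` with `t` and `e` columns, such as the rossi data loaded above):
```
b = get_breakpoints(df)   # [0, 20/40/60/80th percentiles of observed event times, max(t)]
train_dl, val_dl, t_scaler, x_scaler = create_dl(df, b=b, bs=64)
x_batch, e_batch = next(iter(train_dl))
```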
###Code
# export
def create_dl(df:pd.DataFrame, b:Optional[np.array]=None, train_size:float=0.8, random_state=None, bs:int=128)\
-> Tuple[DataLoader, DataLoader, MaxAbsScaler, StandardScaler]:
"""
Take a dataframe, split it into train and validation sets,
and return the corresponding DataLoaders plus the fitted scalers
parameters:
- df: pandas dataframe
- b(optional): breakpoints of time. **Must include 0 as first element and the maximum time as last element**
- train_size: fraction of the data used for training
- bs: batch size
"""
df.reset_index(drop=True, inplace=True)
train, val = train_test_split(df, train_size=train_size, stratify=df["e"], random_state=random_state)
train.reset_index(drop=True, inplace=True)
val.reset_index(drop=True, inplace=True)
train_ds = DataFrame(train, b)
val_ds = DataFrame(val, b, train_ds.t_scaler, train_ds.x_scaler)
train_dl = DataLoader(train_ds, bs, shuffle=True, drop_last=False, num_workers=mp.cpu_count())
val_dl = DataLoader(val_ds, bs, shuffle=False, drop_last=False, num_workers=mp.cpu_count())
return train_dl, val_dl, train_ds.t_scaler, train_ds.x_scaler
def create_test_dl(df:pd.DataFrame, b:Optional[np.array]=None,
t_scaler:MaxAbsScaler=None, x_scaler:StandardScaler=None,
bs:int=128, only_x:bool=False) -> DataLoader:
"""
Take dataframe and return a pytorch dataloader.
parameters:
- df: pandas dataframe
- b: breakpoints of time (optional)
- bs: batch size
"""
if only_x:
df["t"] = 0
df.reset_index(drop=True, inplace=True)
test_ds = TestDataFrame(df, b, t_scaler, x_scaler)
test_ds.only_x = only_x
test_dl = DataLoader(test_ds, bs, shuffle=False, drop_last=False, num_workers=mp.cpu_count())
return test_dl
# export
def get_breakpoints(df:DataFrame, percentiles:list=[20, 40, 60, 80]) -> np.array:
"""
Gives the times at which death events occur at given percentile
parameters:
df - must contain columns 't' (time) and 'e' (death event)
percentiles - list of percentages at which breakpoints occur (do not include 0 and 100)
"""
event_times = df.loc[df['e']==1, 't'].values
breakpoints = np.percentile(event_times, percentiles)
breakpoints = np.array([0] + breakpoints.tolist() + [df['t'].max()])
return breakpoints
# hide
from nbdev.export import *
notebook2script()
###Output
Converted 00_index.ipynb.
Converted 10_SAT.ipynb.
Converted 20_KaplanMeier.ipynb.
Converted 30_overall_model.ipynb.
Converted 50_hazard.ipynb.
Converted 55_hazard.PiecewiseHazard.ipynb.
Converted 59_hazard.Cox.ipynb.
Converted 60_AFT_models.ipynb.
Converted 65_AFT_error_distributions.ipynb.
Converted 80_data.ipynb.
Converted 90_model.ipynb.
Converted 95_Losses.ipynb.
|
module2-wrangle-ml-datasets/Build_Week_Project_TakeII.ipynb | ###Markdown
I. Complete Imports and Wrangle Data
###Code
# Import of packages and package classes
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from category_encoders import OneHotEncoder, OrdinalEncoder
from pandas_profiling import ProfileReport
from pdpbox.pdp import pdp_isolate, pdp_plot, pdp_interact, pdp_interact_plot
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, plot_roc_curve, roc_auc_score, plot_confusion_matrix
from sklearn.model_selection import cross_val_score, train_test_split, GridSearchCV, RandomizedSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier
# Write a wrangle function for our dataset
def wrangle(filepath):
# Establish path to data files
DATA_PATH = '../data/class-project/LoanApproval/'
# Read in the data
df = pd.read_csv(DATA_PATH + filepath)
print(df.shape)
# Drop 'Loan_ID' column, it is an identifier only and will not affect model
# No other high-cardinality columns
df.drop(columns='Loan_ID', inplace=True)
# Clean data col names
df.columns = [col.lower() for col in df.columns]
df = df.rename(columns=
{'applicantincome': 'applicant_income',
'coapplicantincome': 'coapplicant_income',
'loanamount': 'loan_amount'})
# Convert 'credit_history' to binary categorical from float (ready for OHE/ordinal encoding)
df['credit_history'].replace(to_replace={1.0: '1', 0.0: '0'},
inplace=True)
# Remove income outliers: keep only applicants with income between 1,000 and 25,000
df = df[ (df['applicant_income'] > 1_000) &
(df['applicant_income'] < 25_000)]
# Remove the low outlier(s) for 'loan_amount'; this also drops rows with missing loan_amount
df = df[df['loan_amount'] > 10]
#df = df[ (df['loan_amount'] > 80) &
# (df['loan_amount'] < 300_000)]
# Convert 'coapplicant_income' and 'loan_amount' to integers from floats
for col in ['coapplicant_income', 'loan_amount']:
df[col] = df[col].astype(int)
# Address NaN values and prepare for OHE/Ordinal Encoding
mode_cols = ['gender', 'married', 'dependents', 'self_employed', 'loan_amount_term', 'credit_history']
for col in mode_cols:
df[col].fillna(value=df[col].mode()[0], inplace=True)
df['loan_amount_term'] = df['loan_amount_term'].astype(int).astype(str)
df['dependents'] = df['dependents'].str.strip('+')
# Convert target, 'LoanStatus' to binary numeric values
df['loan_status'].replace(to_replace={'Y': 1, 'N':0}, inplace=True)
#df.drop(columns=['gender', 'property_area', 'loan_amount_term', 'self_employed', 'education', 'married'], inplace=True)
return df
train_path = 'train_data.csv'
train = wrangle(train_path)
print(train.shape)
train.head()
# No NaN values should remain; categorical features were filled with the mode in wrangle()
train.info()
train['gender'].value_counts(dropna=False)
train['credit_history'].value_counts(dropna=False)
train['dependents'].value_counts(dropna=False)
train['loan_amount_term'].value_counts(dropna=False)
train['loan_amount'].plot(kind='hist', bins=50)
# EDA
print(train.shape)
train.head()
# 'gender', 'married', 'education', 'self_employed' are binary categorical variables
# 'loan_status' is our target feature
# 'credit_history' is indicated to mean 'credit history meets guidelines, Y/N', and should be an integer
train.info()
# Categorical variables are not high cardinality
train.describe(exclude='number')
# Many more male applicants than female
print(train['gender'].value_counts(normalize=True))
train['gender'].value_counts().plot(kind='bar')
plt.xlabel('Gender')
plt.ylabel('Count')
plt.show();
# Primary number of dependents is 0, 3+ are the most infrequent
print(train['dependents'].value_counts(normalize=True))
train['dependents'].value_counts().plot(kind='bar')
plt.xlabel('Dependents')
plt.ylabel('Count')
plt.show();
# More applicants are classified as 'Graduate' vs 'Not Graduate'
print(train['education'].value_counts(normalize=True), "\n")
train['education'].value_counts().plot(kind='bar')
plt.xlabel('Graduation Status')
plt.ylabel('Count')
plt.show();
# More applicants work for someone else versus being self-employed
print(train['self_employed'].value_counts(normalize=True), "\n")
train['self_employed'].value_counts().plot(kind='bar')
plt.xlabel('Self Employed')
plt.ylabel('Frequency')
plt.show()
# After removing outliers, the applicant income distribution is right-skewed, with most values toward the lower end of the 1,000 to 25,000 range
print(train['applicant_income'].min())
print(train['applicant_income'].unique())
train['applicant_income'].plot(kind='hist', bins=50)
plt.xlabel('Applicant Income')
plt.ylabel('Frequency')
plt.show();
# Distribution of 'coapplicant_income'
print(train['coapplicant_income'].min())
print(train['coapplicant_income'].unique())
train['coapplicant_income'].plot(kind='hist')
plt.xlabel('Co-applicant Income')
plt.ylabel('Frequency')
plt.show();
# Distribution of 'loan_amount'
print(train['loan_amount'].min())
print(train['loan_amount'].unique())
train['loan_amount'].plot(kind='hist')
plt.xlabel('Loan Amount')
plt.ylabel('Frequency')
plt.show();
# Most applicants appear to be applying for loans with 30-year term lengths
print(train['loan_amount_term'].value_counts(normalize=True), "\n")
train['loan_amount_term'].value_counts().plot(kind='bar')
plt.xlabel('Loan Term')
plt.ylabel('Frequency')
plt.show()
# Most applicants 'credit_history' meet guidelines
print(train['credit_history'].value_counts(normalize=True), "\n")
print(train['credit_history'].dtypes)
train['credit_history'].value_counts().plot(kind='bar')
plt.xlabel('Credit History meets guidelines')
plt.ylabel('Frequency')
plt.show()
# Distribution of 'property_area'
print(train['property_area'].value_counts(normalize=True), "\n")
print(train['property_area'].dtypes)
train['property_area'].value_counts().plot(kind='bar')
plt.xlabel('Property Area')
plt.ylabel('Frequency')
plt.show()
train.shape
train.head()
# Examine for correlation among continuous variables
sns.pairplot(train[['applicant_income', 'coapplicant_income', 'loan_amount']])
###Output
_____no_output_____
###Markdown
II. Split the Data
###Code
# Create Feature Matrix and Target Array
target = 'loan_status'
y = train[target]
X = train.drop(columns=target)
y.shape
# Split the data
# Will use a random split; there is no datetime information included in this dataset.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
###Output
_____no_output_____
###Markdown
III. Establish a Baseline
###Code
# Establish a baseline for this classification problem - 'Was the loan approved?'
# The classes of the Target Vector are moderately imbalanced towards approval
# Since this is a classification problem we will be looking at accuracy
# You have a 69.15% chance of being correct if you always decide that the loan was approved; this is our baseline
# to beat
print(y_train.value_counts(normalize=True), "\n")
print('Baseline Accuracy: {:.4f}'.format(y_train.value_counts(normalize=True).max()))
X_train.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 462 entries, 265 to 110
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 gender 462 non-null object
1 married 462 non-null object
2 dependents 462 non-null object
3 education 462 non-null object
4 self_employed 462 non-null object
5 applicant_income 462 non-null int64
6 coapplicant_income 462 non-null int64
7 loan_amount 462 non-null int64
8 loan_amount_term 462 non-null object
9 credit_history 462 non-null object
10 property_area 462 non-null object
dtypes: int64(3), object(8)
memory usage: 43.3+ KB
###Markdown
IV. Build Models- `LogisticRegression` - `OneHotEncoder` - `StandardScaler` - `RandomForestClassifier` - `OrdinalEncoder` - `XGBClassifier` - `OrdinalEncoder`
###Code
# Model 1: Logistic Regression Model
model_lr = make_pipeline(
OneHotEncoder(use_cat_names=True),
#SimpleImputer(strategy='most_frequent'),
StandardScaler(),
LogisticRegression()
)
model_lr.fit(X_train, y_train);
# Model 2: Random Forest Classifier Model
model_rf = make_pipeline(
OrdinalEncoder(),
#SimpleImputer(strategy='most_frequent'),
RandomForestClassifier(n_jobs=-1, random_state=42)
)
model_rf.fit(X_train, y_train);
# Model 3: XGBoost Classifier
model_xgb = make_pipeline(
OrdinalEncoder(),
#SimpleImputer(strategy='most_frequent'),
XGBClassifier(random_state=42, n_jobs=-1)
)
model_xgb.fit(X_train, y_train);
###Output
_____no_output_____
###Markdown
V. Check Metrics
###Code
# Classification: Is your majority class frequency >= 50% and < 70% ?
# If so, you can just use accuracy if you want. Outside that range, accuracy could be misleading.
# What evaluation metric will you choose, in addition to or instead of accuracy?
# Our majority class is less than 70%, so we can just use accuracy. Should, however, come back and build out a
# confusion_matrix, and look at recall/precision. Will also explore Precision, Recall, and F1 Score.
# Training and Validation accuracy of our Logistic Regression model
print('Training Accuracy (LOGR):', model_lr.score(X_train, y_train))
print('Validation Accuracy (LOGR):', model_lr.score(X_val, y_val))
# Cross Validation Score for our Logistic Regression model
lr_cvs = cross_val_score(model_lr, X_train, y_train, cv=5, n_jobs=-1)
print('Cross Validation Score (LOGR):', '\n', lr_cvs[0], '\n',
lr_cvs[1], '\n', lr_cvs[2], '\n', lr_cvs[3], '\n',
lr_cvs[4])
# Training and Validation accuracy of our Random Forest Classifier model
print('Training Accuracy (RF):', model_rf.score(X_train, y_train))
print('Validation Accuracy (RF):', model_rf.score(X_val, y_val))
# Cross Validation Score for our Random Forest Classifier model
rf_cvs = cross_val_score(model_rf, X_train, y_train, cv=5, n_jobs=-1)
print('Cross Validation Score (RF):', '\n', rf_cvs[0], '\n',
rf_cvs[1], '\n', rf_cvs[2], '\n', rf_cvs[3], '\n',
rf_cvs[4])
# Training and Validation accuracy of our XGBoost Classifier model
# Model appears to be overfit
print('Training Accuracy (XGB):', model_xgb.score(X_train, y_train))
print('Validation Accuracy (XGB):', model_xgb.score(X_val, y_val))
# Cross Validation Score for our XGBoost Classifier model
xgb_cvs = cross_val_score(model_xgb, X_train, y_train, cv=5, n_jobs=-1)
print('Cross Validation Score (XGB):', '\n', xgb_cvs[0], '\n',
xgb_cvs[1], '\n', xgb_cvs[2], '\n', xgb_cvs[3], '\n',
xgb_cvs[4])
# LOGISTIC REGRESSION, performance
# Not very good precision for Y, great recall for Y
print('Logistic Regression')
print(classification_report(y_val, model_lr.predict(X_val)))
# LOGISTIC REGRESSION
# Plot Confusion Matrix
plot_confusion_matrix(model_lr, X_val, y_val, values_format='.0f')
# TN FP
#
# FN TP
# LOGISTIC REGRESSION
# Calculate Precision and Recall
# Precision = TP / (TP + FP)
precision = 82 / (82 + 16)
# Recall = TP / (TP + FN)
recall = 82 / (82 + 3)
print('Logistic Regression model precision', precision)
print('Logistic Regression model recall', recall)
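# A programmatic alternative (sketch): the same metrics via sklearn, which avoids
# hard-coding the confusion-matrix counts (assumes y_val and model_lr from above)
from sklearn.metrics import precision_score, recall_score
print('sklearn precision (LOGR):', precision_score(y_val, model_lr.predict(X_val)))
print('sklearn recall (LOGR):', recall_score(y_val, model_lr.predict(X_val)))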
# RANDOM FOREST CLASSIFIER, performance
# My comments here ************************
print('Random Forest Classifier')
print(classification_report(y_val, model_rf.predict(X_val)))
# RANDOM FOREST CLASSIFIER
# Plot Confusion Matrix
plot_confusion_matrix(model_rf, X_val, y_val, values_format='.0f')
# TN FP
#
# FN TP
# RANDOM FOREST CLASSIFIER
# Calculate Precision and Recall
# Precision = TP / (TP + FP)
precision = 76 / (76 + 17)
# Recall = TP / (TP + FN)
recall = 76 / (76 + 9)
print('Random Forest Classifier precision', precision)
print('Random Forest Classifier recall', recall)
# XGBoost CLASSIFIER, performance
# My comments here**********
print('XGBoost Classifier')
print(classification_report(y_val, model_xgb.predict(X_val)))
# XGBoost CLASSIFIER
# Plot Confusion Matrix
plot_confusion_matrix(model_xgb, X_val, y_val, values_format='.0f')
# TN FP
#
# FN TP
# XGBoost CLASSIFIER
# Calculate Precision and Recall
# Precision = TP / (TP + FP)
precision = 74 / (74 + 15)
# Recall = TP / (TP + FN)
recall = 74 / (74 + 11)
print('XGBoost model precision', precision)
print('XGBoost model recall', recall)
###Output
XGBoost model precision 0.8314606741573034
XGBoost model recall 0.8705882352941177
###Markdown
ROC Curve- To evaluate models for binary classification- To decide what probability threshold you should use when making your predictions
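As a sketch of the threshold-selection use (assuming the fitted `model_lr` and the validation split from above), candidate thresholds can be read off `sklearn.metrics.roc_curve`:
```
from sklearn.metrics import roc_curve

probs = model_lr.predict_proba(X_val)[:, 1]
fpr, tpr, thresholds = roc_curve(y_val, probs)
best_threshold = thresholds[np.argmax(tpr - fpr)]   # e.g. maximize Youden's J = TPR - FPR
print('candidate threshold:', best_threshold)
```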
###Code
# Use VALIDATION DATA
# ROC curve is used with classification problems
# 'How far up can I go without having to go too far to the right?'
# An ROC curve let's you see how your model will perform at various thresholds
# Also allows you to compare different models
lr = plot_roc_curve(model_lr, X_val, y_val, label='Logistic')
rf = plot_roc_curve(model_rf, X_val, y_val, ax=lr.ax_, label='Random Forest')
xgb = plot_roc_curve(model_xgb, X_val, y_val, ax=lr.ax_, label='XGBoost')
plt.plot([0, 1], [0, 1], color='grey', linestyle='--')
plt.legend();
print('Logistic: ROC-AUC Score:', roc_auc_score(y_val, model_lr.predict(X_val)))
print('Random Forest: ROC-AUC Score:', roc_auc_score(y_val, model_rf.predict(X_val)))
print('XGBoost: ROC-AUC Score:', roc_auc_score(y_val, model_xgb.predict(X_val)))
# Logistic Regression coefficients
coefs = model_lr.named_steps['logisticregression'].coef_[0]
features = model_lr.named_steps['onehotencoder'].get_feature_names()
pd.Series(coefs, index=features).sort_values(key=abs).tail(20).plot(kind='barh')
importances = model_xgb.named_steps['xgbclassifier'].feature_importances_
features = X_train.columns
feat_imp = pd.Series(importances, index=features).sort_values()
feat_imp
feat_imp.tail(10).plot(kind='barh')
plt.xlabel('Gini Importance')
plt.ylabel('Feature')
plt.title('Feature Importances for XGBoost model')
# Permutation importance: shuffle each feature on the validation set and measure the resulting drop in score
perm_imp = permutation_importance(
model_xgb,
X_val, # Always use your VALIDATION set
y_val,
n_jobs=-1,
random_state=42
)
perm_imp.keys()
# Put results into a DataFrame
data = {'importances_mean': perm_imp['importances_mean'],
'importances_std': perm_imp['importances_std']}
df = pd.DataFrame(data, index=X_val.columns)
df.sort_values(by='importances_mean', inplace=True)
df
df['importances_mean'].tail(10).plot(kind='barh')
plt.xlabel('Importance (drop in accuracy)')
plt.ylabel('Feature')
plt.title('Permutation importance for model_xgb')
feature = 'applicant_income'
# Build your 'pdp_isolate' object
# Create an instance of the pdp_isolate class
# Always use with test or validation data, NEVER training data
isolate = pdp_isolate(
model=model_xgb,
dataset=X_val, #<-- Always use with VALIDATION or TEST data
model_features=X_val.columns,
feature=feature
)
# Build your plot
pdp_plot(isolate, feature_name=feature);
feature = 'coapplicant_income'
# Build your 'pdp_isolate' object
# Create an instance of the pdp_isolate class
# Always use with test or validation data, NEVER training data
isolate = pdp_isolate(
model=model_xgb,
dataset=X_val, #<-- Always use with VALIDATION or TEST data
model_features=X_val.columns,
feature=feature
)
# Build your plot
pdp_plot(isolate, feature_name=feature);
feature = 'loan_amount'
# Build your 'pdp_isolate' object
# Create an instance of the pdp_isolate class
# Always use with test or validation data, NEVER training data
isolate = pdp_isolate(
model=model_xgb,
dataset=X_val, #<-- Always use with VALIDATION or TEST data
model_features=X_val.columns,
feature=feature
)
# Build your plot
pdp_plot(isolate, feature_name=feature);
features = ['loan_amount', 'coapplicant_income']
interact = pdp_interact(
model=model_xgb,
dataset=X_val,
model_features=X_val.columns,
features=features
)
pdp_interact_plot(interact, plot_type='grid', feature_names=features);
# Tune Logistic Regression Model
# penalty=
penalty_variants = ['l2', 'none']
# C=range(1, 11)
# solver=
solver_variants = ['newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga']
# No effect witnessed with tuning of this hyperparameter
# for _ in penalty_variants:
# model_lr_tune = make_pipeline(
# OneHotEncoder(use_cat_names=True),
# StandardScaler(),
# LogisticRegression(penalty=_)
# )
# model_lr_tune.fit(X_train, y_train);
# print(_, 'Validation Accuracy (LOGR_penalty_tuned):', model_lr_tune.score(X_val, y_val), "\n")
# No effect witnessed with tuning of this hyperparameter
# for _ in range(1, 11):
# model_lr_tune = make_pipeline(
# OneHotEncoder(use_cat_names=True),
# StandardScaler(),
# LogisticRegression(C=_)
# )
# model_lr_tune.fit(X_train, y_train);
# print(_, 'Validation Accuracy (LOGR_penalty_tuned):', model_lr_tune.score(X_val, y_val), "\n")
# No effect witnessed with tuning of this hyperparameter
# for _ in solver_variants:
# model_lr_tune = make_pipeline(
# OneHotEncoder(use_cat_names=True),
# StandardScaler(),
# LogisticRegression(solver=_)
# )
# model_lr_tune.fit(X_train, y_train);
# print(_, 'Validation Accuracy (LOGR_penalty_tuned):', model_lr_tune.score(X_val, y_val), "\n")
params = {'randomforestclassifier__n_estimators': np.arange(20, 100, 5),
'randomforestclassifier__max_depth': np.arange(10, 75, 5),
'randomforestclassifier__max_samples': np.arange(0.1, 0.99, 0.1)}
rf_rs = RandomizedSearchCV(model_rf, param_distributions=params, n_iter=10, cv=5, n_jobs=-1, verbose=1)
rf_rs.fit(X_train, y_train)
rf_rs.best_score_
rf_rs.score(X_val, y_val)
rf_rs.best_params_
# # wrangle_I, clean data and drop NaN values
# def wrangle_I(filepath):
# # Read in the data
# df = pd.read_csv('../data/class-project/LoanApproval/' + filepath)
# print(df.shape)
# # Drop NaN values and drop high-cardinality identifier column, 'Loan_ID'
# df.dropna(inplace=True)
# df.drop(columns='Loan_ID', inplace=True)
# # Cleanup column names
# df.columns = [col.lower() for col in df.columns]
# df = df.rename(columns=
# {'applicantincome': 'applicant_income',
# 'coapplicantincome': 'coapplicant_income',
# 'loanamount': 'loan_amount'})
# # Scale 'applicant_income' and 'coapplicant_income' to thousands
# df['applicant_income'] = df['applicant_income'] / 100
# df['coapplicant_income'] = df['coapplicant_income'] / 100
# # Convert 'credit_history' to binary categorical from float
# df['credit_history'].replace(to_replace={1.0: '1', 0.0: '0'},
# inplace=True)
# # Convert 'loan_amount_term' to categorical variable (object) from float
# df['loan_amount_term'] = df['loan_amount_term'].astype(int).astype(str)
# # Clean 'dependents' feature
# df['dependents'] = df['dependents'].str.strip('+')
# # Convert target, 'LoanStatus' to binary numeric values
# df['loan_status'].replace(to_replace={'Y': 1, 'N':0}, inplace=True)
# return df
# train_path = 'train_data.csv'
# train_I = wrangle_I(train_path)
###Output
_____no_output_____ |
notebooks/basic-cross-validation.ipynb | ###Markdown
Basic: cross-validation This notebook explores the main elements of Optunity's cross-validation facilities, including: standard cross-validation, using strata and clusters while constructing folds, and using different aggregators. We recommend perusing the related documentation for more details. Nested cross-validation is available as a separate notebook.
###Code
import optunity
import optunity.cross_validation
###Output
_____no_output_____
###Markdown
We start by generating some toy data containing 6 instances which we will partition into folds.
###Code
data = list(range(6))
labels = [True] * 3 + [False] * 3
###Output
_____no_output_____
###Markdown
Standard cross-validation Each function to be decorated with cross-validation functionality must accept the following arguments: x_train (training data), x_test (test data), y_train (training labels, required only when y is specified in the cross-validation decorator) and y_test (test labels, required only when y is specified in the cross-validation decorator). These arguments will be set implicitly by the cross-validation decorator to match the right folds. Any remaining arguments to the decorated function remain as free parameters that must be set later on. Let's start with the basics and look at Optunity's cross-validation in action. We use an objective function that simply prints out the train and test data in every split to see what's going on.
###Code
def f(x_train, y_train, x_test, y_test):
print("")
print("train data:\t" + str(x_train) + "\t train labels:\t" + str(y_train))
print("test data:\t" + str(x_test) + "\t test labels:\t" + str(y_test))
return 0.0
###Output
_____no_output_____
###Markdown
We start with 2 folds, which leads to equally sized train and test partitions.
###Code
f_2folds = optunity.cross_validated(x=data, y=labels, num_folds=2)(f)
print("using 2 folds")
f_2folds()
# f_2folds as defined above would typically be written using decorator syntax as follows
# we don't do that in these examples so we can reuse the toy objective function
@optunity.cross_validated(x=data, y=labels, num_folds=2)
def f_2folds(x_train, y_train, x_test, y_test):
print("")
print("train data:\t" + str(x_train) + "\t train labels:\t" + str(y_train))
print("test data:\t" + str(x_test) + "\t test labels:\t" + str(y_test))
return 0.0
###Output
_____no_output_____
###Markdown
If we use three folds instead of 2, we get 3 iterations in which the training set is twice the size of the test set.
###Code
f_3folds = optunity.cross_validated(x=data, y=labels, num_folds=3)(f)
print("using 3 folds")
f_3folds()
###Output
using 3 folds
train data: [2, 1, 3, 0] train labels: [True, True, False, True]
test data: [5, 4] test labels: [False, False]
train data: [5, 4, 3, 0] train labels: [False, False, False, True]
test data: [2, 1] test labels: [True, True]
train data: [5, 4, 2, 1] train labels: [False, False, True, True]
test data: [3, 0] test labels: [False, True]
###Markdown
If we do two iterations of 3-fold cross-validation (denoted by 2x3 fold), two sets of folds are generated and evaluated.
###Code
f_2x3folds = optunity.cross_validated(x=data, y=labels, num_folds=3, num_iter=2)(f)
print("using 2x3 folds")
f_2x3folds()
###Output
using 2x3 folds
train data: [4, 1, 5, 3] train labels: [False, True, False, False]
test data: [0, 2] test labels: [True, True]
train data: [0, 2, 5, 3] train labels: [True, True, False, False]
test data: [4, 1] test labels: [False, True]
train data: [0, 2, 4, 1] train labels: [True, True, False, True]
test data: [5, 3] test labels: [False, False]
train data: [0, 2, 1, 4] train labels: [True, True, True, False]
test data: [5, 3] test labels: [False, False]
train data: [5, 3, 1, 4] train labels: [False, False, True, False]
test data: [0, 2] test labels: [True, True]
train data: [5, 3, 0, 2] train labels: [False, False, True, True]
test data: [1, 4] test labels: [True, False]
###Markdown
Using strata and clusters Strata are defined as sets of instances that should be spread out across folds as much as possible (e.g. stratify patients by age). Clusters are sets of instances that must be put in a single fold (e.g. cluster measurements of the same patient). Optunity allows you to specify strata and/or clusters that must be accounted for while constructing cross-validation folds. Not all instances have to belong to a stratum or cluster. Strata We start by illustrating strata. Strata are specified as a list of lists of instance indices. Each list defines one stratum. We will reuse the toy data and objective function specified above. We will create 2 strata with 2 instances each. These instances will be spread across folds. We create two strata: $\{0, 1\}$ and $\{2, 3\}$.
###Code
strata = [[0, 1], [2, 3]]
f_stratified = optunity.cross_validated(x=data, y=labels, strata=strata, num_folds=3)(f)
f_stratified()
###Output
train data: [0, 4, 2, 5] train labels: [True, False, True, False]
test data: [1, 3] test labels: [True, False]
train data: [1, 3, 2, 5] train labels: [True, False, True, False]
test data: [0, 4] test labels: [True, False]
train data: [1, 3, 0, 4] train labels: [True, False, True, False]
test data: [2, 5] test labels: [True, False]
###Markdown
Clusters Clusters work similarly, except that now instances within a cluster are guaranteed to be placed within a single fold. The way to specify clusters is identical to strata. We create two clusters: $\{0, 1\}$ and $\{2, 3\}$. These pairs will always occur in a single fold.
###Code
clusters = [[0, 1], [2, 3]]
f_clustered = optunity.cross_validated(x=data, y=labels, clusters=clusters, num_folds=3)(f)
f_clustered()
###Output
train data: [0, 1, 2, 3] train labels: [True, True, True, False]
test data: [4, 5] test labels: [False, False]
train data: [4, 5, 2, 3] train labels: [False, False, True, False]
test data: [0, 1] test labels: [True, True]
train data: [4, 5, 0, 1] train labels: [False, False, True, True]
test data: [2, 3] test labels: [True, False]
###Markdown
Strata and clusters Strata and clusters can be used together. Let's say we have the following configuration: 1 stratum: $\{0, 1, 2\}$ and 2 clusters: $\{0, 3\}$, $\{4, 5\}$. In this particular example, instances 1 and 2 will inevitably end up in a single fold, even though they are part of one stratum. This happens because the total data set has size 6, and 4 instances are already in clusters.
###Code
strata = [[0, 1, 2]]
clusters = [[0, 3], [4, 5]]
f_strata_clustered = optunity.cross_validated(x=data, y=labels, clusters=clusters, strata=strata, num_folds=3)(f)
f_strata_clustered()
###Output
train data: [4, 5, 0, 3] train labels: [False, False, True, False]
test data: [1, 2] test labels: [True, True]
train data: [1, 2, 0, 3] train labels: [True, True, True, False]
test data: [4, 5] test labels: [False, False]
train data: [1, 2, 4, 5] train labels: [True, True, False, False]
test data: [0, 3] test labels: [True, False]
###Markdown
Aggregators Aggregators are used to combine the scores per fold into a single result. The default approach used in cross-validation is to take the mean of all scores. In some cases, we might be interested in worst-case or best-case performance, the spread, and so on. Optunity allows passing a custom callable to be used as the aggregator. The default aggregation in Optunity is to compute the mean across folds.
###Code
@optunity.cross_validated(x=data, num_folds=3)
def f(x_train, x_test):
result = x_test[0]
print(result)
return result
f(1)
###Output
4
1
2
###Markdown
This can be replaced by any function, e.g. min or max.
###Code
@optunity.cross_validated(x=data, num_folds=3, aggregator=max)
def fmax(x_train, x_test):
result = x_test[0]
print(result)
return result
fmax(1)
@optunity.cross_validated(x=data, num_folds=3, aggregator=min)
def fmin(x_train, x_test):
result = x_test[0]
print(result)
return result
fmin(1)
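# (Added sketch) Any callable that maps the list of per-fold scores to a single number
# can serve as aggregator; for example the spread (max - min) across folds.
def spread(scores):
    return max(scores) - min(scores)

@optunity.cross_validated(x=data, num_folds=3, aggregator=spread)
def fspread(x_train, x_test):
    result = x_test[0]
    print(result)
    return result
fspread(1)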
###Output
3
4
5
###Markdown
Retaining intermediate results Often, it may be useful to retain all intermediate results, not just the final aggregated score. This is made possible via the `optunity.cross_validation.mean_and_list` aggregator. This aggregator computes the mean for internal use in cross-validation, but also returns a list containing the full per-fold evaluation results.
###Code
@optunity.cross_validated(x=data, num_folds=3,
aggregator=optunity.cross_validation.mean_and_list)
def f_full(x_train, x_test, coeff):
return x_test[0] * coeff
# evaluate f
mean_score, all_scores = f_full(1.0)
print(mean_score)
print(all_scores)
###Output
2.33333333333
[0.0, 2.0, 5.0]
###Markdown
Note that a cross-validation based on the `mean_and_list` aggregator essentially returns a tuple of results. If the result is iterable, all solvers in Optunity use the first element as the objective function value. You can let the cross-validation procedure return other useful statistics too, which you can access from the solver trace.
###Code
opt_coeff, info, _ = optunity.minimize(f_full, coeff=[0, 1], num_evals=10)
print(opt_coeff)
print("call log")
for args, val in zip(info.call_log['args']['coeff'], info.call_log['values']):
print(str(args) + '\t\t' + str(val))
###Output
{'coeff': 0.15771484375}
call log
0.34521484375 (0.8055013020833334, [0.0, 0.6904296875, 1.72607421875])
0.47021484375 (1.09716796875, [0.0, 0.9404296875, 2.35107421875])
0.97021484375 (2.2638346354166665, [0.0, 1.9404296875, 4.85107421875])
0.72021484375 (1.6805013020833333, [0.0, 1.4404296875, 3.60107421875])
0.22021484375 (0.5138346354166666, [0.0, 0.4404296875, 1.10107421875])
0.15771484375 (0.3680013020833333, [0.0, 0.3154296875, 0.78857421875])
0.65771484375 (1.53466796875, [0.0, 1.3154296875, 3.28857421875])
0.90771484375 (2.1180013020833335, [0.0, 1.8154296875, 4.53857421875])
0.40771484375 (0.9513346354166666, [0.0, 0.8154296875, 2.03857421875])
0.28271484375 (0.65966796875, [0.0, 0.5654296875, 1.41357421875])
###Markdown
Cross-validation with scikit-learn In this example we will show how to use cross-validation methods that are provided by scikit-learn in conjunction with Optunity. To do this we provide Optunity with the folds that scikit-learn produces in a specific format. In supervised learning, datasets often have unbalanced labels. When performing cross-validation with unbalanced data, it is good practice to preserve the percentage of samples for each class across folds. To achieve this label balance we will use StratifiedKFold.
###Code
data = list(range(20))
labels = [1 if i%4==0 else 0 for i in range(20)]
@optunity.cross_validated(x=data, y=labels, num_folds=5)
def unbalanced_folds(x_train, y_train, x_test, y_test):
print("")
print("train data:\t" + str(x_train) + "\ntrain labels:\t" + str(y_train)) + '\n'
print("test data:\t" + str(x_test) + "\ntest labels:\t" + str(y_test)) + '\n'
return 0.0
unbalanced_folds()
###Output
train data: [16, 6, 4, 14, 0, 11, 19, 5, 9, 2, 12, 8, 7, 10, 18, 3]
train labels: [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0]
test data: [15, 1, 13, 17]
test labels: [0, 0, 0, 0]
train data: [15, 1, 13, 17, 0, 11, 19, 5, 9, 2, 12, 8, 7, 10, 18, 3]
train labels: [0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0]
test data: [16, 6, 4, 14]
test labels: [1, 0, 1, 0]
train data: [15, 1, 13, 17, 16, 6, 4, 14, 9, 2, 12, 8, 7, 10, 18, 3]
train labels: [0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0]
test data: [0, 11, 19, 5]
test labels: [1, 0, 0, 0]
train data: [15, 1, 13, 17, 16, 6, 4, 14, 0, 11, 19, 5, 7, 10, 18, 3]
train labels: [0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0]
test data: [9, 2, 12, 8]
test labels: [0, 0, 1, 1]
train data: [15, 1, 13, 17, 16, 6, 4, 14, 0, 11, 19, 5, 9, 2, 12, 8]
train labels: [0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1]
test data: [7, 10, 18, 3]
test labels: [0, 0, 0, 0]
###Markdown
Notice above how the test label sets have a varying number of positive samples: some have none, some have one, and some have two.
###Code
# StratifiedKFold lives in sklearn.model_selection; the folds come from its split() generator
from sklearn.model_selection import StratifiedKFold
stratified_5folds = StratifiedKFold(n_splits=5)
folds = [[list(test) for train, test in stratified_5folds.split(data, labels)]]
@optunity.cross_validated(x=data, y=labels, folds=folds, num_folds=5)
def balanced_folds(x_train, y_train, x_test, y_test):
print("")
print("train data:\t" + str(x_train) + "\ntrain labels:\t" + str(y_train)) + '\n'
print("test data:\t" + str(x_test) + "\ntest labels:\t" + str(y_test)) + '\n'
return 0.0
balanced_folds()
###Output
train data: [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
train labels: [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
test data: [0, 1, 2, 3]
test labels: [1, 0, 0, 0]
train data: [0, 1, 2, 3, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
train labels: [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
test data: [4, 5, 6, 7]
test labels: [1, 0, 0, 0]
train data: [0, 1, 2, 3, 4, 5, 6, 7, 12, 13, 14, 15, 16, 17, 18, 19]
train labels: [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
test data: [8, 9, 10, 11]
test labels: [1, 0, 0, 0]
train data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 16, 17, 18, 19]
train labels: [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
test data: [12, 13, 14, 15]
test labels: [1, 0, 0, 0]
train data: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
train labels: [1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
test data: [16, 17, 18, 19]
test labels: [1, 0, 0, 0]
###Markdown
Now all of our train sets have four positive samples and our test sets have one positive sample. To use predetermined folds, place a list of the test sample indices into a list, and then insert that list into another list. Why so many nested lists? Because you can perform multiple cross-validation runs by setting num_iter appropriately and then appending num_iter lists of test samples to the outermost list. Note that the test samples for a given fold are the indices that you provide, and the train samples for that fold are all of the indices from all other test sets joined together. If not done carefully, this may lead to duplicated samples in a train set, and to samples that fall in both the train and test sets of a fold if a data point is in multiple folds' test sets.
###Code
data = list(range(6))
labels = [True] * 3 + [False] * 3
fold1 = [[0, 3], [1, 4], [2, 5]]
fold2 = [[0, 5], [1, 4], [0, 3]] # notice what happens when the indices are not unique
folds = [fold1, fold2]
@optunity.cross_validated(x=data, y=labels, folds=folds, num_folds=3, num_iter=2)
def multiple_iters(x_train, y_train, x_test, y_test):
print("")
print("train data:\t" + str(x_train) + "\t train labels:\t" + str(y_train))
print("test data:\t" + str(x_test) + "\t\t test labels:\t" + str(y_test))
return 0.0
multiple_iters()
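# (Added sketch) Make the overlap problem described above explicit for fold2, whose test
# indices are not unique across folds: any shared index ends up in both train and test.
for test_idx in fold2:
    train_idx = [i for other in fold2 if other is not test_idx for i in other]
    overlap = set(test_idx) & set(train_idx)
    print('test', test_idx, '-> overlap with its train set:', overlap)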
###Output
train data: [1, 4, 2, 5] train labels: [True, False, True, False]
test data: [0, 3] test labels: [True, False]
train data: [0, 3, 2, 5] train labels: [True, False, True, False]
test data: [1, 4] test labels: [True, False]
train data: [0, 3, 1, 4] train labels: [True, False, True, False]
test data: [2, 5] test labels: [True, False]
train data: [1, 4, 0, 3] train labels: [True, False, True, False]
test data: [0, 5] test labels: [True, False]
train data: [0, 5, 0, 3] train labels: [True, False, True, False]
test data: [1, 4] test labels: [True, False]
train data: [0, 5, 1, 4] train labels: [True, False, True, False]
test data: [0, 3] test labels: [True, False]
|
Wavefunctions/ThreeBodyJastrowPolynomial.ipynb | ###Markdown
Three-body polynomial Jastrow Three-body Jastrow factor from "Jastrow correlation factor for atoms, molecules, and solids", N. D. Drummond, M. D. Towler, R. J. Needs, PRB 70, 235119 (2004). See the 'gen_three_body.py' script for code generation
###Code
from sympy import *  # provides Symbol, IndexedBase, Indexed, Sum, Eq, diff, solve, linsolve used below
ri = Symbol('r_i')
rj = Symbol('r_j')
rij = Symbol('r_ij')
(ri, rj, rij)
C = Symbol('C') # C is 2 or 3
L = Symbol('L')
gamma = IndexedBase('gamma')
r = IndexedBase('r')
l = Symbol('l',integer=True)
m = Symbol('m',integer=True)
n = Symbol('n',integer=True)
N = Symbol('N',integer=True)
N_ee = Symbol("N_ee",integer=True)
N_en = Symbol("N_en",integer=True)
# General form of the 3-body Jastrow
f = (ri - L)**C * (rj -L)**C * Sum(Sum(Sum(gamma[l,m,n]*ri**l *rj**m*rij**n,(l,0,N_en)),(n,0,N_en)),(m,0,N_ee))
f
# Concrete example for N_en = 1 and N_ee = 1
f1 = f.subs(N_en,1).subs(N_ee,1).doit()
f1
# Concrete example for N_en = 1 and N_ee = 2
f12 = f.subs(N_en,1).subs(N_ee,2).doit()
f12
###Output
_____no_output_____
###Markdown
Constraints $l$ is the index for the electron_1 - nucleus distance variable, $m$ is the index for the electron_2 - nucleus distance variable, and $n$ is the index for the electron_1 - electron_2 distance variable. The Jastrow factor should be symmetric under electron exchange (swap $l$ and $m$).
###Code
# How do we use this to simplify the expressions above?
Eq(gamma[l,m,n], gamma[m,l,n])
# Brute force - loop over m and l<m and create the substitutions (next cell)
# Concrete values of N_ee and N_en for all the following
NN_ee = 2
NN_en = 2
ftmp = f1
sym_subs = {}
display(ftmp)
for i1 in range(NN_en+1):
for i2 in range(i1):
for i3 in range(NN_ee+1):
print(i1,i2,i3)
display (gamma[i2,i1,i3], gamma[i1,i2,i3])
#ftmp = ftmp.subs(gamma[i2,i1,i3], gamma[i1,i2,i3])
sym_subs[gamma[i2,i1,i3]] = gamma[i1,i2,i3]
ftmp = f.subs(N_en,NN_en).subs(N_ee,NN_ee).doit().subs(sym_subs)
sym_subs
# Three body Jastrow with symmetry constraints
ftmp_sym = simplify(expand(ftmp))
ftmp_sym
# Find the free gamma values
{a for a in ftmp_sym.free_symbols if type(a) is Indexed}
###Output
_____no_output_____
###Markdown
No electron-electron cusp
###Code
# First derivative of electron-electron distance should be zero at r_ij = 0
ftmp_ee = diff(ftmp, rij).subs(rij,0).subs(rj,ri)
ftmp_ee
# Remove the (r_i-L)**C part
ftmp2_ee = simplify(expand(ftmp_ee)).args[1]
ftmp2_ee
# Collect powers of r_i
ft3 = collect(ftmp2_ee,ri)
ft3
# Convert to polynomial to extract coefficients of r_i
pt3 = poly(ft3,ri)
pt3
pt4 = pt3.all_coeffs()
pt4
# To enforce results are zero for all distance, the coefficients must be zero
ee_soln = solve(pt4)
ee_soln
###Output
_____no_output_____
###Markdown
No electron-nuclei cusp
###Code
# First derivative of electron-nuclei distance should be zero at r_i = 0
ftmp_en = diff(ftmp, ri).subs(ri,0).subs(rij,rj)
ftmp_en
simplify(expand(ftmp_en))
# Remove the (-L)**(C-1) * (r_j -L)**C part
simplify(expand(ftmp_en)).args[2]
# Put in constraints from e-e cusp
ftmp_en2 = ftmp_en.subs(ee_soln)
ftmp_en2
simplify(ftmp_en2)
ftmp_en3 = simplify(expand(ftmp_en2))
ftmp_en3
# Remove the (-L)**(C-1) * (r_j -L)**C part
ftmp_en3 = ftmp_en3.subs(sym_subs).args[2]
ftmp_en3
# Powers of r_j
collect(expand(ftmp_en3),rj)
# Convert to polynomial to extract coefficients
pe3 = poly(ftmp_en3,rj)
pe3
pe4 = pe3.all_coeffs()
print(len(pe4))
pe4
# Solve can be very slow as the expansion size increases
#en_soln = solve(pe4)
# Using linsolve is faster
soln_var = {a for a in ftmp_en3.free_symbols if type(a) is Indexed}
en_soln = linsolve(pe4, soln_var)
en_soln
# Don't want the C=0,L=0 solution when using solve
#en_soln2 = en_soln[1]
#en_soln2
# If using linsolve
for tmp_soln in en_soln:
en_soln2 = {g:v for g,v in zip(soln_var, tmp_soln)}
en_soln2
# Final expression with all the constraints inserted
ftmp_out = ftmp.subs(sym_subs).subs(ee_soln).subs(en_soln2)
ftmp_out2 = simplify(expand(ftmp_out))
ftmp_out2
# Find the free gamma values
{a for a in ftmp_out2.free_symbols if type(a) is Indexed}
###Output
_____no_output_____
###Markdown
Formula from Appendix of the paper
###Code
NN_en = 2
NN_ee = 2
Nc_en = 2*NN_en + 1
Nc_ee = NN_en + NN_ee + 1
N_gamma = (NN_en + 1)*(NN_en+2)//2 * (NN_ee + 1)
print('Number of gamma values (after symmetry) : ',N_gamma)
print('Number of e-e constraints : ',Nc_ee)
print('Number of e-n constraints : ',Nc_en)
print('Number of free param = ',N_gamma - Nc_ee - Nc_en)
# Note, for N_en=1 and N_ee=1, this formula doesn't match the above derivation.
# For all higher values it does match.
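# (Added sketch) Cross-check against the symbolic derivation above: for N_en = N_ee = 2 the
# appendix count should match the number of independent gamma symbols left in ftmp_out2.
free_gammas = {a for a in ftmp_out2.free_symbols if type(a) is Indexed}
print('Free gammas in constrained Jastrow:', len(free_gammas))
print('Free parameters from appendix formula:', N_gamma - Nc_ee - Nc_en)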
# For the electron-electron cusp
for k in range(0,2*NN_en+1):
terms = 0
print(k)
for l in range(0,NN_en+1):
for m in range(0,l):
if l+m == k:
#print(' ',l,m,'2*gamma[%d,%d,1]'%(l,m))
terms += 2*gamma[l,m,1]
# sum over l,m such that l+m == k and l>m
# plus
# sum over l such that 2l == k
for l in range(0,NN_en):
if 2*l == k:
#print(' ',l,'gamma[%d,%d,1]'%(l,l))
terms += 2*gamma[l,l,1]
print('k: ',k,' terms: ',terms)
# For the electron-nuclear cusp
for kp in range(0,NN_en + NN_ee+1):
terms = 0
if kp <= NN_ee:
terms = C*gamma[0,0,kp] - L*gamma[1,0,kp]
# sum of l,n such that l + n == kp and l>=1
for l in range(1,NN_en+1):
for n in range(NN_ee+1):
if l + n == kp:
terms += C*gamma[l,0,n] - L*gamma[l,1,n]
print('kp: ',kp,' terms: ',terms)
###Output
kp: 0 terms: C*gamma[0, 0, 0] - L*gamma[1, 0, 0]
kp: 1 terms: C*gamma[0, 0, 1] + C*gamma[1, 0, 0] - L*gamma[1, 0, 1] - L*gamma[1, 1, 0]
kp: 2 terms: C*gamma[0, 0, 2] + C*gamma[1, 0, 1] + C*gamma[2, 0, 0] - L*gamma[1, 0, 2] - L*gamma[1, 1, 1] - L*gamma[2, 1, 0]
kp: 3 terms: C*gamma[1, 0, 2] + C*gamma[2, 0, 1] - L*gamma[1, 1, 2] - L*gamma[2, 1, 1]
kp: 4 terms: C*gamma[2, 0, 2] - L*gamma[2, 1, 2]
|
BC4_crypto_forecasting/scripts_updated/MATIC_notebook.ipynb | ###Markdown
--> Forecasting - MATIC Master Degree Program in Data Science and Advanced Analytics Business Cases with Data Science Project: > Group AA Done by:> - Beatriz Martins Selidรณnio Gomes, m20210545> - Catarina Inรชs Lopes Garcez, m20210547 > - Diogo Andrรฉ Domingues Pires, m20201076 > - Rodrigo Faรญsca Guedes, m20210587 --- Table of Content Import and Data Integration - [Import the needed Libraries](third-bullet) Data Exploration and Understanding - [Initial Analysis (EDA - Exploratory Data Analysis)](fifth-bullet) - [Variables Distribution](seventh-bullet) Data Preparation - [Data Transformation](eighth-bullet) Modelling - [Building LSTM Model](twentysecond-bullet) - [Get Best Parameters for LSTM](twentythird-bullet) - [Run the LSTM Model and Get Predictions](twentyfourth-bullet) - [Recursive Predictions](twentysixth-bullet) --- Import and Data Integration Import the needed Libraries [Back to TOC](toc)
###Code
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Data Exploration and Understanding Initial Analysis (EDA - Exploratory Data Analysis) [Back to TOC](toc)
###Code
df = pd.read_csv('../data/data_aux/df_MATIC.csv')
df
###Output
_____no_output_____
###Markdown
Data Types
###Code
# Get to know the number of instances and Features, the DataTypes and if there are missing values in each Feature
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1826 entries, 0 to 1825
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Date 1826 non-null object
1 MATIC-USD_ADJCLOSE 1094 non-null float64
2 MATIC-USD_CLOSE 1094 non-null float64
3 MATIC-USD_HIGH 1094 non-null float64
4 MATIC-USD_LOW 1094 non-null float64
5 MATIC-USD_OPEN 1094 non-null float64
6 MATIC-USD_VOLUME 1094 non-null float64
dtypes: float64(6), object(1)
memory usage: 100.0+ KB
###Markdown
Missing Values
###Code
# Count the number of missing values for each Feature
df.isna().sum().to_frame().rename(columns={0: 'Count Missing Values'})
###Output
_____no_output_____
###Markdown
Descriptive Statistics
###Code
# Descriptive Statistics Table
df.describe().T
# settings to display all columns
pd.set_option("display.max_columns", None)
# display the dataframe head
df.sample(n=10)
#CHECK ROWS THAT HAVE ANY MISSING VALUE IN ONE OF THE COLUMNS
is_NaN = df.isnull()
row_has_NaN = is_NaN.any(axis=1)
rows_with_NaN = df[row_has_NaN]
rows_with_NaN
#FILTER OUT ROWS THAT ARE MISSING INFORMATION
df = df[~row_has_NaN]
df.reset_index(inplace=True, drop=True)
df
###Output
_____no_output_____
###Markdown
Data Preparation Data Transformation [Back to TOC](toc) __`Duplicates`__
###Code
# Checking if exist duplicated observations
print(f'\033[1m' + "Number of duplicates: " + '\033[0m', df.duplicated().sum())
###Output
[1mNumber of duplicates: [0m 0
###Markdown
__`Convert Date to correct format`__
###Code
df['Date'] = pd.to_datetime(df['Date'], format='%Y-%m-%d')
df
###Output
_____no_output_____
###Markdown
__`Get percentual difference between open and close values and low and high values`__
###Code
df['pctDiff_CloseOpen'] = abs((df[df.columns[2]]-df[df.columns[5]])/df[df.columns[2]])*100
df['pctDiff_HighLow'] = abs((df[df.columns[3]]-df[df.columns[4]])/df[df.columns[4]])*100
df.head()
def plot_coinValue(df):
#Get coin name
coin_name = df.columns[2].split('-')[0]
#Get date and coin value
x = df['Date']
y = df[df.columns[2]] # ADA-USD_CLOSE
#Get the volume of trades
v = df[df.columns[-3]]/1e9
#Get percentual diferences
y2 = df[df.columns[-1]] # pctDiff_HighLow
y1= df[df.columns[-2]] # pctDiff_CloseOpen
fig, axs = plt.subplots(3, 1, figsize=(12,14))
axs[0].plot(x, y)
axs[2].plot(x, v)
# plotting the line 1 points
axs[1].plot(x, y1, label = "Close/Open")
# plotting the line 2 points
axs[1].plot(x, y2, label = "High/Low")
axs[1].legend()
axs[0].title.set_text('Time Evolution of '+ coin_name)
axs[0].set(xlabel="", ylabel="Close Value in USD$")
axs[2].title.set_text('Volume of trades of '+ coin_name)
axs[2].set(xlabel="", ylabel="Total number of trades in billions")
axs[1].title.set_text('Daily Market percentual differences of '+ coin_name)
axs[1].set(xlabel="", ylabel="Percentage (%)")
plt.savefig('../analysis/'+coin_name +'_stats'+'.png')
return coin_name
coin_name = plot_coinValue(df)
#FILTER DATASET
df = df.loc[df['Date']>= '2021-07-01']
df
###Output
_____no_output_____
###Markdown
Modelling Building LSTM Model [Back to TOC](toc) Strategy: Create a DF (windowed_df) where the middle columns will correspond to the close values of X days before the target date and the final column will correspond to the close value of the target date. Use these values for prediction and play with the value of X.
###Code
def get_windowed_df(X, df):
start_Date = df['Date'] + pd.Timedelta(days=X)
perm = np.zeros((1,X+1))
#Get labels for DataFrame
j=1
labels=[]
while j <= X:
label = 'closeValue_' + str(j) + 'daysBefore'
labels.append(label)
j+=1
labels.append('closeValue')
for i in range(X,df.shape[0]):
temp = np.zeros((1,X+1))
#Date for i-th day
#temp[0,0] = df.iloc[i]['Date']
#Close values for k days before
for k in range(X):
temp[0,k] = df.iloc[i-k-1,2]
#Close value for i-th date
temp[0,-1] = df.iloc[i,2]
#Add values to the permanent frame
perm = np.vstack((perm,temp))
#Get the array in dataframe form
windowed_df = pd.DataFrame(perm[1:,:], columns = labels)
return windowed_df
#Get the dataframe and append the dates
windowed_df = get_windowed_df(7, df)
windowed_df['Date'] = df.iloc[7:]['Date'].reset_index(drop=True)
windowed_df
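# (Added sketch) An equivalent windowed frame can be built without the explicit Python loops
# above by shifting the close-price column with pandas; the rows should match get_windowed_df(7, df).
close = df.iloc[:, 2]
alt = pd.DataFrame({f'closeValue_{k}daysBefore': close.shift(k).values for k in range(1, 8)})
alt['closeValue'] = close.values
alt['Date'] = df['Date'].values
alt = alt.dropna().reset_index(drop=True)
alt.head()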
#Get the X,y and dates into a numpy array to apply on a model
def windowed_df_to_date_X_y(windowed_dataframe):
df_as_np = windowed_dataframe.to_numpy()
dates = df_as_np[:, -1]
middle_matrix = df_as_np[:, 0:-2]
X = middle_matrix.reshape((len(dates), middle_matrix.shape[1], 1))
Y = df_as_np[:, -2]
return dates, X.astype(np.float32), Y.astype(np.float32)
dates, X, y = windowed_df_to_date_X_y(windowed_df)
dates.shape, X.shape, y.shape
#Partition for train, validation and test
q_80 = int(len(dates) * .85)
q_90 = int(len(dates) * .95)
dates_train, X_train, y_train = dates[:q_80], X[:q_80], y[:q_80]
dates_val, X_val, y_val = dates[q_80:q_90], X[q_80:q_90], y[q_80:q_90]
dates_test, X_test, y_test = dates[q_90:], X[q_90:], y[q_90:]
fig,axs = plt.subplots(1, 1, figsize=(12,5))
#Plot the partitions
axs.plot(dates_train, y_train)
axs.plot(dates_val, y_val)
axs.plot(dates_test, y_test)
axs.legend(['Train', 'Validation', 'Test'])
fig.savefig('../analysis/'+coin_name +'_partition'+'.png')
###Output
_____no_output_____
###Markdown
Get Best Parameters for LSTM [Back to TOC](toc)
###Code
#!pip install tensorflow
#import os
#os.environ['PYTHONHASHSEED']= '0'
#import numpy as np
#np.random.seed(1)
#import random as rn
#rn.seed(1)
#import tensorflow as tf
#tf.random.set_seed(1)
#
#from tensorflow.keras.models import Sequential
#from tensorflow.keras.optimizers import Adam
#from tensorflow.keras import layers
#from sklearn.metrics import mean_squared_error
#
## Function to create LSTM model and compute the MSE value for the given parameters
#def check_model(X_train, y_train, X_val, y_val, X_test, y_test, learning_rate,epoch,batch):
#
# # create model
# model = Sequential([layers.Input((7, 1)),
# layers.LSTM(64),
# layers.Dense(32, activation='relu'),
# layers.Dense(32, activation='relu'),
# layers.Dense(1)])
# # Compile model
# model.compile(loss='mse', optimizer=Adam(learning_rate=learning_rate), metrics=['mean_absolute_error'])
#
# model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=epoch, shuffle=False, batch_size=batch, verbose=2)
#
# test_predictions = model.predict(X_test).flatten()
#
# LSTM_mse = mean_squared_error(y_test, test_predictions)
#
# return LSTM_mse
#
##Function that iterates the different parameters and gets the ones corresponding to the lowest MSE score.
#def search_parameters(batch_size, epochs, learn_rate, X_train, y_train, X_val, y_val, X_test, y_test):
#
# best_score = float('inf')
#
# for b in batch_size:
# for e in epochs:
# for l in learn_rate:
# print('Batch Size: ' + str(b))
# print('Number of Epochs: ' + str(e))
# print('Value of Learning Rate: ' + str(l))
# try:
# mse = check_model(X_train, y_train, X_val, y_val, X_test, y_test,l,e,b)
# print('MSE=%.3f' % (mse))
# if mse < best_score:
# best_score = mse
# top_params = [b, e, l]
# except:
# continue
#
# print('Best MSE=%.3f' % (best_score))
# print('Optimal Batch Size: ' + str(top_params[0]))
# print('Optimal Number of Epochs: ' + str(top_params[1]))
# print('Optimal Value of Learning Rate: ' + str(top_params[2]))
#
#
## define parameters
#batch_size = [10, 100, 1000]
#epochs = [50, 100]
#learn_rate = np.linspace(0.001,0.1, num=10)
#
#warnings.filterwarnings("ignore")
#search_parameters(batch_size, epochs, learn_rate, X_train, y_train, X_val, y_val, X_test, y_test)
###Output
_____no_output_____
###Markdown
Run the LSTM Model and Get Predictions [Back to TOC](toc)
###Code
#BEST SOLUTION OF THE MODEL
# MSE=0.003
# Batch Size: 100
# Number of Epochs: 100
# Value of Learning Rate: 0.1
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import layers
from sklearn.metrics import mean_squared_error
model = Sequential([layers.Input((7, 1)),
layers.LSTM(64),
layers.Dense(32, activation='relu'),
layers.Dense(32, activation='relu'),
layers.Dense(1)])
model.compile(loss='mse',
optimizer=Adam(learning_rate=0.1),
metrics=['mean_absolute_error'])
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=100, shuffle=False, batch_size=100, verbose=2)
#PREDICT THE VALUES USING THE MODEL
train_predictions = model.predict(X_train).flatten()
val_predictions = model.predict(X_val).flatten()
test_predictions = model.predict(X_test).flatten()
fig,axs = plt.subplots(3, 1, figsize=(14,14))
axs[0].plot(dates_train, train_predictions)
axs[0].plot(dates_train, y_train)
axs[0].legend(['Training Predictions', 'Training Observations'])
axs[1].plot(dates_val, val_predictions)
axs[1].plot(dates_val, y_val)
axs[1].legend(['Validation Predictions', 'Validation Observations'])
axs[2].plot(dates_test, test_predictions)
axs[2].plot(dates_test, y_test)
axs[2].legend(['Testing Predictions', 'Testing Observations'])
plt.savefig('../analysis/LTSM_recursive/'+coin_name +'_modelPredictions'+'.png')
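# (Added sketch) Quantify the test-set error of the fitted model with the metric imported
# above; y_test and test_predictions come from the preceding cells.
lstm_test_mse = mean_squared_error(y_test, test_predictions)
print('Test MSE:', lstm_test_mse)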
###Output
_____no_output_____
###Markdown
Recursive Predictions [Back to TOC](toc)
###Code
from copy import deepcopy
#Get prediction for future dates recursively based on the previous existing information. Then update the window of days upon
#which the predictions are made
recursive_predictions = []
recursive_dates = np.concatenate([dates_test])
extra_dates = np.array(['2022-05-09', '2022-05-10', '2022-05-11'])
recursive_dates = np.append(recursive_dates,extra_dates)
last_window = deepcopy(X_train[-1])
for target_date in recursive_dates:
next_prediction = model.predict(np.array([last_window])).flatten()
recursive_predictions.append(next_prediction)
last_window = np.insert(last_window,0,next_prediction)[:-1]
fig,axs = plt.subplots(2, 1, figsize=(14,10))
axs[0].plot(dates_train, train_predictions)
axs[0].plot(dates_train, y_train)
axs[0].plot(dates_val, val_predictions)
axs[0].plot(dates_val, y_val)
axs[0].plot(dates_test, test_predictions)
axs[0].plot(dates_test, y_test)
axs[0].plot(recursive_dates, recursive_predictions)
axs[0].legend(['Training Predictions',
'Training Observations',
'Validation Predictions',
'Validation Observations',
'Testing Predictions',
'Testing Observations',
'Recursive Predictions'])
axs[1].plot(dates_test, y_test)
axs[1].plot(recursive_dates, recursive_predictions)
axs[1].legend(['Testing Observations',
'Recursive Predictions'])
plt.savefig('../analysis/LTSM_recursive/'+coin_name +'_recursivePredictions'+'.png')
may_10_prediction = coin_name +'-USD',recursive_predictions[-2][0]
may_10_prediction
###Output
_____no_output_____ |
examples/general/datetime_basic.ipynb | ###Markdown
`datetime` module examplesThis module will allow you to get the current date/time and calculate deltas, among many functions.If you want to use the current timestamp as a suffix for a table name or database in SQL, you'll need to do a bit of string manipulation, as shown in the creation of the `clean_timestamp` variable at the bottom of this notebook. Basic usage:
###Code
from datetime import datetime
# Get the timestamp for right now
this_instant = datetime.now()
# Let's inspect this item's type and representation
print(type(this_instant))
this_instant
###Output
<class 'datetime.datetime'>
###Markdown
You can use built-in `datetime` operations to extract the day and time:
###Code
d = this_instant.date()
t = this_instant.time()
d, t
###Output
_____no_output_____
###Markdown
You can also convert `datetime` to a string:
###Code
timestamp_str = str(this_instant)
timestamp_str
###Output
_____no_output_____
###Markdown
Use string-splitting to extract day and time
###Code
d, t = timestamp_str.split(" ")
d, t
# Omit the miliseconds in the time
t_no_miliseconds = t.split(".")[0]
t_no_miliseconds
###Output
_____no_output_____
###Markdown
Convert a `datetime` object into a nicely-formatted string:The format of the resulting string is `YYYY_MM_DD_HH_MM_SS`
###Code
d, t = str(this_instant).split(" ")
t = t.split(".")[0]
# Replace unwanted characters in the time variable with an underscore
t = t.replace(":", "_")
# Do the same for the date's dash
d = d.replace("-", "_")
clean_timestamp = f"{d}_{t}"
clean_timestamp
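# (Added sketch) The same YYYY_MM_DD_HH_MM_SS string can be produced in one call with
# strftime, avoiding the manual splitting and replacing above.
clean_timestamp_strftime = this_instant.strftime("%Y_%m_%d_%H_%M_%S")
clean_timestamp_strftime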
###Output
_____no_output_____ |
src/test.ipynb | ###Markdown
Encoding categorical variables
###Code
import numpy as np
import pandas as pd

# `data` is assumed to have been loaded before this cell
X = data[['Nation', 'Pos', 'Age', 'Min_Playing', 'Gls', 'Ast',
'CrdY', 'CrdR','winner_Bundesliga',
'winner_C3', 'finalist_C3', 'winner_UCL', 'finalist_UCL',
'winner_Club WC', 'finalist_Club WC', 'winner_Copa America',
'finalist_Copa America', 'winner_Euro', 'finalist_Euro', 'winner_Liga',
'winner_Ligue 1', 'winner_PL', 'winner_Serie A', 'winner_WC',
'finalist_WC']]
y1 = data[['%']].astype(float)
y2 = data[['Rang']]
import sklearn.preprocessing
encoder = sklearn.preprocessing.OneHotEncoder(sparse=False)
X = pd.concat([X,pd.DataFrame(encoder.fit_transform(X[['Pos']]),columns=encoder.categories_)],axis=1)
X.drop("Pos",axis=1,inplace=True)
X = pd.concat([X,pd.DataFrame(encoder.fit_transform(X[['Nation']]),columns=encoder.categories_)],axis=1)
X.drop("Nation",axis=1,inplace=True)
###Output
_____no_output_____
###Markdown
Train test split
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y1_train, y1_test = train_test_split(X, y1, test_size=0.3, random_state=0)
X_train, X_test, y2_train, y2_test = train_test_split(X, y2, test_size=0.3, random_state=0)
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
categorical_variables = ['winner_Bundesliga', 'winner_C3',
'finalist_C3', 'winner_UCL',
'finalist_UCL', 'winner_Club WC',
'finalist_Club WC', 'winner_Copa America',
'finalist_Copa America', 'winner_Euro',
'finalist_Euro', 'winner_Liga',
'winner_Ligue 1', 'winner_PL',
'winner_Serie A', 'winner_WC',
'finalist_WC', ('DF',),
('DF,MF',), ('FW',),
('FW,MF',), ('GK',),
('MF',), ('MF,FW',),
('ALG',), ('ARG',),
('BEL',), ('BIH',),
('BRA',), ('BUL',),
('CHI',), ('CIV',),
('CMR',), ('COL',),
('CRO',), ('CZE',),
('DEN',), ('EGY',),
('ENG',), ('ESP',),
('FIN',), ('FRA',),
('GAB',), ('GER',),
('GHA',), ('GRE',),
('IRL',), ('ITA',),
('KOR',), ('LBR',),
('MLI',), ('NED',),
('NGA',), ('POL',),
('POR',), ('ROU',),
('SEN',), ('SRB',),
('SVN',), ('TOG',),
('TRI',), ('URU',),
('WAL',)]
numeric_variables = ['Age', 'Min_Playing',
'Gls', 'Ast',
'CrdY', 'CrdR']
X_train.isna().sum()
X_test.isna().sum()
from sklearn.impute import KNNImputer
imputer = KNNImputer(n_neighbors=4, weights="uniform")
X_train = pd.DataFrame(imputer.fit_transform(X_train),columns=X_train.columns)
X_test = pd.DataFrame(imputer.fit_transform(X_test),columns=X_test.columns)
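# (Added sketch) Confirm the KNN imputation above left no missing values behind.
print('Remaining NaNs (train):', X_train.isna().sum().sum())
print('Remaining NaNs (test):', X_test.isna().sum().sum())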
X_train.dtypes.unique()
###Output
_____no_output_____
###Markdown
Model building
###Code
X_train.shape
y1_train.shape
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import RidgeCV
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
linear_regression = cross_val_score(LinearRegression(), X_train, y1_train, cv=10,scoring='neg_root_mean_squared_error')
ridge = RidgeCV(alphas = np.linspace(10,30),cv=10,scoring='neg_root_mean_squared_error').fit(X_train,y1_train)
lasso = LassoCV(alphas = np.linspace(1,2),cv=10,random_state=0).fit(X_train,y1_train)
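# (Added sketch) Inspect the fitted results: cross_val_score returned negative RMSE for the
# plain linear regression, and the CV estimators expose the regularisation strength they chose.
print('Linear regression mean RMSE:', -linear_regression.mean())
print('Ridge selected alpha:', ridge.alpha_)
print('Lasso selected alpha:', lasso.alpha_)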
###Output
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
  warnings.warn(
(the same FutureWarning was emitted repeatedly during fitting; duplicate output lines omitted)
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\linear_model\_coordinate_descent.py:1572: DataConversionWarning: A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().
y = column_or_1d(y, warn=True)
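###Markdown
The FutureWarning repeated in the output above comes from scikit-learn's input validation: DataFrame feature names must all be strings, and a mix of `str` and `tuple` names will raise an error from scikit-learn 1.2 onwards. A minimal sketch of one way to address it, assuming the tuple-typed names on `X_train` come from a collapsed pandas MultiIndex (an assumption, not shown in this notebook):
###Code
# Hypothetical cleanup: flatten any tuple column names into plain strings so that
# scikit-learn's validation sees a uniform, all-string set of feature names.
X_train.columns = [
    "_".join(map(str, col)) if isinstance(col, tuple) else str(col)
    for col in X_train.columns
]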
###Markdown
Linear Regression
###Code
print("Validation RMSE : "+str(abs(linear_regression.mean()))+ " ("+str(abs(linear_regression.mean()*100/y1_train['%'].mean()))+"% of the mean)")
from sklearn.model_selection import learning_curve
N, train_score, val_score = learning_curve(LinearRegression(), X_train,y1_train, train_sizes=np.linspace(0.1,1,10),cv=10,scoring='neg_root_mean_squared_error')
print(N)
plt.plot(N[1:], abs(train_score.mean(axis=1))[1:],label="train")
plt.plot(N[1:], abs(val_score.mean(axis=1))[1:], label="validation")
plt.xlabel('train_sizes')
plt.legend()
###Output
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
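###Markdown
The `FutureWarning` above is emitted because some of the feature names in `X_train` are tuples rather than plain strings (the warning reports dtypes `['str', 'tuple']`), and scikit-learn 1.2+ rejects non-string feature names outright. A minimal sketch of one way to silence it, assuming the tuple column names can simply be joined into strings, is shown in the next cell.
###Code
# Sketch (assumption): flatten any tuple column names in X_train to plain strings
# so that scikit-learn no longer emits the feature-name FutureWarning.
X_train.columns = ["_".join(map(str, col)) if isinstance(col, tuple) else str(col)
                   for col in X_train.columns]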
###Markdown
Ridge Regression
###Code
from sklearn.linear_model import Ridge
from sklearn.model_selection import learning_curve

# Regularisation strength selected by cross-validation
ridge.alpha_

# Fitted coefficients, one row per feature
pd.DataFrame(ridge.coef_, columns=X_train.columns).T

print("ridge training r² : " + str(ridge.score(X_train, y1_train)))
print("ridge validation rmse : " + str(ridge.best_score_)
      + " (" + str(abs(ridge.best_score_ * 100 / y1_train['%'].astype(float).mean())) + "% of the mean)")

# Learning curve: mean train/validation error as a function of the training-set size
N, train_score, val_score = learning_curve(Ridge(alpha=ridge.alpha_), X_train, y1_train,
                                           train_sizes=np.linspace(0.1, 1, 10), cv=10,
                                           scoring='neg_root_mean_squared_error')
print(N)
plt.plot(N, train_score.mean(axis=1), label="train")
plt.plot(N, val_score.mean(axis=1), label="validation")
plt.xlabel('train_sizes')
plt.legend()
###Output
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
[... the same FutureWarning repeated once per learning-curve fit; duplicate lines omitted ...]
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
C:\Users\charl\miniconda3\envs\BallondOr\lib\site-packages\sklearn\utils\validation.py:1675: FutureWarning: Feature names only support names that are all strings. Got feature names with dtypes: ['str', 'tuple']. An error will be raised in 1.2.
warnings.warn(
###Markdown
Lasso
###Code
lasso.alpha_
pd.DataFrame(lasso.coef_,columns=X_train.columns)
print("lasso training rยฒ : "+str(lasso.score(X_train,y1_train)))
def noise(X, y, n, sigma):
    # Data augmentation: append n copies of the original samples, each
    # perturbed with Gaussian noise of standard deviation sigma.
    _X = X.copy()
    _y = y.copy()
    for _ in range(n):
        X = np.r_[X, _X + np.random.randn(*_X.shape)*sigma]
        y = np.r_[y, _y]
    return X, y
test_X, test_y = noise(X_train,y1_train,2,1)
linear_regression_2 = cross_val_score(LinearRegression(), test_X, test_y, cv=10,scoring='neg_root_mean_squared_error')
ridge_2 = RidgeCV(alphas = np.linspace(800,1500),cv=10,scoring='neg_root_mean_squared_error').fit(test_X,test_y.ravel())
lasso_2 = LassoCV(alphas = np.linspace(1,2),cv=10,random_state=0).fit(test_X,test_y.ravel())
print("Validation RMSE : "+str(abs(linear_regression_2.mean()))+ " ("+str(abs(linear_regression_2.mean()*100/test_y.mean()))+"% of the mean)")
N, train_score, val_score = learning_curve(LinearRegression(), test_X,test_y, train_sizes=np.linspace(0.1,1,10),cv=10,scoring='neg_root_mean_squared_error')
print(N)
plt.plot(N[2:], abs(train_score.mean(axis=1))[2:],label="train")
plt.plot(N[2:], abs(val_score.mean(axis=1))[2:], label="validation")
plt.xlabel('train_sizes')
plt.legend()
ridge_2.alpha_
print("Validation RMSE : "+str(abs(ridge_2.best_score_))+ " ("+str(abs(ridge_2.best_score_*100/test_y.mean()))+"% of the mean)")
N, train_score, val_score = learning_curve(Ridge(ridge_2.alpha_), test_X,test_y, train_sizes=np.linspace(0.1,1,10),cv=10,scoring='neg_root_mean_squared_error')
print(N)
plt.plot(N[1:], abs(train_score.mean(axis=1))[1:],label="train")
plt.plot(N[1:], abs(val_score.mean(axis=1))[1:], label="validation")
plt.xlabel('train_sizes')
plt.legend()
###Output
[ 114 228 343 457 572 686 800 915 1029 1144]
###Markdown
Unsupervised Keyphrase Extraction
###Code
%%time
from kprank import *
from evaluation import *
# Extract acronym
abrv_kp, abrv_corpus = get_abrv(data['title+abs'])
%%time
# ConceptNet
con_final = rank(data, domain_list, pmi_en = None, \
domain_w=0.1, quality_w=0.1, alpha = 1, beta = 0.5, \
rank_method = 'sifrank', embed_method='conceptnet')
pd.DataFrame(evaluate(get_ranked_kplist(con_final), data)).T
%%time
# ELMo
elmo_final = rank(data, domain_list, pmi_en = None, \
domain_w=0.1, quality_w=0.1, alpha = 1, beta = 0.5, \
rank_method = 'sifrank', embed_method='elmo')
pd.DataFrame(evaluate(get_ranked_kplist(elmo_final), data)).T
%%time
# textrank
tr_final = rank(data, domain_list, pmi_en = None, \
domain_w=0.1, quality_w=0.1, alpha = 1, beta = 0.5, \
rank_method = 'textrank', embed_method='conceptnet')
pd.DataFrame(evaluate(get_ranked_kplist(tr_final), data)).T
%%time
# bert
bert_final = rank(data, domain_list, pmi_en = None, \
domain_w=0.1, quality_w=0.1, alpha = 1, beta = 0.5, \
rank_method = 'sifrank', embed_method='bert')
pd.DataFrame(evaluate(get_ranked_kplist(bert_final), data)).T
###Output
_____no_output_____
###Markdown
BASE model
###Code
%%time
# conceptnet base
con_base, _ = SIFRank_score(data['title+abs'], embed_method='conceptnet')
pd.DataFrame(evaluate(get_ranked_kplist(con_base), data)).T
%%time
# textrank base
tr_base = textrank_score(data['title+abs'])
pd.DataFrame(evaluate(get_ranked_kplist(tr_base), data)).T
%%time
# elmo base
elmo_base,_= SIFRank_score(data['title+abs'], embed_method='elmo')
pd.DataFrame(evaluate(get_ranked_kplist(elmo_base), data)).T
###Output
_____no_output_____
###Markdown
Clustering
###Code
import warnings
warnings.filterwarnings('ignore')
from clustering import *
###Output
_____no_output_____
###Markdown
keyphrase selection
###Code
filtered_kpdata = select_kp(data, con_final, abrv_corpus, topn=15, filter_thre = 0.2)
kpterms = ClusData(filtered_kpdata, embed_model=embeddings_index, embed_method='conceptnet')
# kpclus = Clusterer(kpterms, method = 'sp-kmeans')
# kpclus.find_opt_k_sil(100)
%%time
# test for optimal k
nums=range(10,100,2)
sp_silscore=[]
sp_inertia = []
for num in nums:
clst=SphericalKMeans(num, init='k-means++', random_state = 0, n_init=10)
y=clst.fit_predict(kpterms.embed)
sp_inertia.append(clst.inertia_)
sp_silscore.append(silhouette_score(kpterms.embed,y,metric='cosine'))
nums=range(10,100,2)
hac_silscore=[]
for num in nums:
clst=AgglomerativeClustering(n_clusters=num, affinity = 'cosine', linkage = 'average')
y=clst.fit_predict(kpterms.embed)
hac_silscore.append(silhouette_score(kpterms.embed,y,metric='cosine'))
import matplotlib.pyplot as plt
plt.rcdefaults()
fig=plt.figure()
ax=fig.add_subplot(1,1,1)
ax.plot(nums,sp_silscore,marker="+", label='Spherical KMeans')
ax.plot(nums,hac_silscore,marker="*",color='red', linestyle='--', label='Agglomerative Clustering')
ax.set_xlabel("n_clusters")
ax.set_ylabel("silhouette_score")
ax.legend(loc='lower right')
plt.show()
10+sp_silscore.index(max(sp_silscore))*2  # map the best silhouette score back to its k in range(10, 100, 2)
%%time
hacclus = Clusterer(kpterms, n_cluster=74, affinity = 'cosine', linkage = 'average', method = 'agglo')
hacclus.fit()
from sklearn import metrics
print("Calinski Harabaz: %0.4f" % metrics.calinski_harabaz_score(kpterms.embed, hacclus.membership))
%%time
spclus = Clusterer(kpterms, n_cluster=74, method = 'sp-kmeans')
spclus.fit()
from sklearn import metrics
print("Calinski Harabaz: %0.4f" % metrics.calinski_harabaz_score(kpterms.embed, spclus.membership))
center_names = []
clus_centers = spclus.center_ids
for cluster_id, keyword_idx in clus_centers:
keyword = kpterms.id2kp[keyword_idx]
center_names.append(keyword)
clusters = {}
for i in spclus.class2word:
clusters[float(i)] = find_most_similar(center_names[i], kpterms, spclus, n='all')
for i in spclus.class2word:
print(i)
print(center_names[i])
print(spclus.class2word[i])
print('=================================================================')
# import json
# f = open('../../dataset/ieee_xai/output/clusters.json', 'w')
# json.dump(clusters, f)
###Output
_____no_output_____
###Markdown
Additional Experiment - Weak Supervision in keyphrase selection
###Code
with open('../dataset/explainability_words','r', encoding = 'utf-8') as f:
explain = [i.strip().lower() for i in f.readlines()]
%%time
kp_embed = {w: embed_phrase(embeddings_index, w) for w in filtered_kpdata}
xai_sim,_ = domain_relevance_table(kp_embed, explain, embed_method = 'conceptnet', N=int(0.5*len(domain_list)))
xaiterms = [i[0] for i in sorted(xai_sim.items(), key=lambda x:x[1], reverse=True)[:300]]
xaidata = ClusData(xaiterms, embed_model=embeddings_index, embed_method='conceptnet')
xaiclus = Clusterer(xaidata, n_cluster=10, method = 'sp-kmeans')
xaiclus.find_opt_k_sil(50)
%%time
xaiclus = Clusterer(xaidata, n_cluster=10, method = 'sp-kmeans')
xaiclus.fit()
from sklearn import metrics
print("Calinski Harabaz: %0.4f" % metrics.calinski_harabaz_score(xaidata.embed, xaiclus.membership))
center_names = []
clus_centers = xaiclus.center_ids
for cluster_id, keyword_idx in clus_centers:
keyword = xaidata.id2kp[keyword_idx]
center_names.append(keyword)
subclusters = {}
for i in xaiclus.class2word:
subclusters[float(i)] = find_most_similar(center_names[i], xaidata, xaiclus, n='all')
subclusters
pd.DataFrame.from_dict(xaiclus.class2word, orient='index').sort_index().T[:10]
###Output
_____no_output_____
###Markdown
Test
netflex/src/test.ipynb | ###Markdown
by Jens Brage ([email protected])
Copyright 2019 NODA Intelligent Systems AB
This file is part of the Python project Netflex, which implements a version of the alternating direction method of multipliers algorithm tailored towards model predictive control. Netflex was carried out as part of the research project Netflexible Heat Pumps within the research and innovation programme SamspEL, funded by the Swedish Energy Agency. Netflex is made available under the ISC licence, see the file netflex/LICENSE.md.
###Code
import matplotlib
%matplotlib inline
matplotlib.pyplot.rcParams['figure.figsize'] = [21, 13]
import networkx
import time
import cvxpy
import numpy
import pandas
from netflex import *
###Output
_____no_output_____
###Markdown
The data folder contains price and temperature data for the three years below. Select one year, e.g., yyyy = years[2] to work with the corresponding data.
###Code
years = 2010, 2013, 2015
yyyy = years[2]
df = pandas.read_csv('test/%s.csv' % yyyy, index_col=0)
df
###Output
_____no_output_____
###Markdown
The purpose of the optimization is to compute the control signals (u0, u1, ..., u9) that minimize the overall cost subject to constraints. Add the control signals to the dataframe. Here u0 refers to the electrical energy consumed by the central heat pump and u1, ..., u9 refer to temperature offsets applied to the outdoor temperature sensors used to regulate how the different buildings consume heat.
###Code
for i in range(0, 10):
df['u%r' % i] = 0.0e0
###Output
_____no_output_____
###Markdown
This particular simulation is performed over a rolling 3 * 24 hour time window, from 24 hours into the past to 2 * 24 hours into the future. The hours into the past are necessary for computing the rolling integrals used to constrain the temperature offsets.
###Code
start = 0
periods = 24
sp = start, 2 * periods
dx = 1.0e0
dy = 1.0e0
###Output
_____no_output_____
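###Markdown
To make the rolling-integral constraint concrete, here is a tiny self-contained NumPy sketch of the quantity being bounded: for every hour, the sum of an offset signal over the preceding 24 hours must stay within +/- 40. This is an illustration only and assumes nothing about the internals of netflex's rolling_integral.
###Code
# Illustration only: a 24-hour rolling sum, the quantity that
# cvxpy.abs(rolling_integral(u, periods)) <= 4.0e1 constrains for each offset u.
import numpy as np
u_demo = np.random.uniform(-10, 10, size=72)  # a made-up 72-hour offset signal
window = 24
rolling_sum = np.convolve(u_demo, np.ones(window), mode='valid')
print(rolling_sum.min(), rolling_sum.max())  # these would have to stay within [-40, 40]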
###Markdown
The simulated network consists of ten agents (a0, a1, ..., a9), with a0 a heat pump supplying the buildings a1, ..., a9 with heat.
###Code
m1 = Parameter('m1', *sp)
k1 = Parameter('k1', *sp)
x1 = Variable('x1', *sp)
u1 = Variable('u1', *sp)
a1 = Agent(
0,
(dx, dy, x1),
constraints=[
x1 == m1 + cvxpy.multiply(k1, Parameter('C', *sp) + u1),
x1 >= 0.0e0,
u1 >=-1.0e1,
u1 <= 1.0e1,
cvxpy.abs(
rolling_integral(u1, periods),
) <= 4.0e1,
],
)
m2 = Parameter('m2', *sp)
k2 = Parameter('k2', *sp)
x2 = Variable('x2', *sp)
u2 = Variable('u2', *sp)
a2 = Agent(
0,
(dx, dy, x2),
constraints=[
x2 == m2 + cvxpy.multiply(k2, Parameter('C', *sp) + u2),
x2 >= 0.0e0,
u2 >=-1.0e1,
u2 <= 1.0e1,
cvxpy.abs(
rolling_integral(u2, periods),
) <= 4.0e1,
],
)
m3 = Parameter('m3', *sp)
k3 = Parameter('k3', *sp)
x3 = Variable('x3', *sp)
u3 = Variable('u3', *sp)
a3 = Agent(
0,
(dx, dy, x3),
constraints=[
x3 == m3 + cvxpy.multiply(k3, Parameter('C', *sp) + u3),
x3 >= 0.0e0,
u3 >=-1.0e1,
u3 <= 1.0e1,
cvxpy.abs(
rolling_integral(u3, periods),
) <= 4.0e1,
],
)
m4 = Parameter('m4', *sp)
k4 = Parameter('k4', *sp)
x4 = Variable('x4', *sp)
u4 = Variable('u4', *sp)
a4 = Agent(
0,
(dx, dy, x4),
constraints=[
x4 == m4 + cvxpy.multiply(k4, Parameter('C', *sp) + u4),
x4 >= 0.0e0,
u4 >=-1.0e1,
u4 <= 1.0e1,
cvxpy.abs(
rolling_integral(u4, periods),
) <= 4.0e1,
],
)
m5 = Parameter('m5', *sp)
k5 = Parameter('k5', *sp)
x5 = Variable('x5', *sp)
u5 = Variable('u5', *sp)
a5 = Agent(
0,
(dx, dy, x5),
constraints=[
x5 == m5 + cvxpy.multiply(k5, Parameter('C', *sp) + u5),
x5 >= 0.0e0,
u5 >=-1.0e1,
u5 <= 1.0e1,
cvxpy.abs(
rolling_integral(u5, periods),
) <= 4.0e1,
],
)
m6 = Parameter('m6', *sp)
k6 = Parameter('k6', *sp)
x6 = Variable('x6', *sp)
u6 = Variable('u6', *sp)
a6 = Agent(
0,
(dx, dy, x6),
constraints=[
x6 == m6 + cvxpy.multiply(k6, Parameter('C', *sp) + u6),
x6 >= 0.0e0,
u6 >=-1.0e1,
u6 <= 1.0e1,
cvxpy.abs(
rolling_integral(u6, periods),
) <= 4.0e1,
],
)
m7 = Parameter('m7', *sp)
k7 = Parameter('k7', *sp)
x7 = Variable('x7', *sp)
u7 = Variable('u7', *sp)
a7 = Agent(
0,
(dx, dy, x7),
constraints=[
x7 == m7 + cvxpy.multiply(k7, Parameter('C', *sp) + u7),
x7 >= 0.0e0,
u7 >=-1.0e1,
u7 <= 1.0e1,
cvxpy.abs(
rolling_integral(u7, periods),
) <= 4.0e1,
],
)
m8 = Parameter('m8', *sp)
k8 = Parameter('k8', *sp)
x8 = Variable('x8', *sp)
u8 = Variable('u8', *sp)
a8 = Agent(
0,
(dx, dy, x8),
constraints=[
x8 == m8 + cvxpy.multiply(k8, Parameter('C', *sp) + u8),
x8 >= 0.0e0,
u8 >=-1.0e1,
u8 <= 1.0e1,
cvxpy.abs(
rolling_integral(u8, periods),
) <= 4.0e1,
],
)
m9 = Parameter('m9', *sp)
k9 = Parameter('k9', *sp)
x9 = Variable('x9', *sp)
u9 = Variable('u9', *sp)
a9 = Agent(
0,
(dx, dy, x9),
constraints=[
x9 == m9 + cvxpy.multiply(k9, Parameter('C', *sp) + u9),
x9 >= 0.0e0,
u9 >=-1.0e1,
u9 <= 1.0e1,
cvxpy.abs(
rolling_integral(u9, periods),
) <= 4.0e1,
],
)
x1 = Variable('x1', *sp)
x2 = Variable('x2', *sp)
x3 = Variable('x3', *sp)
x4 = Variable('x4', *sp)
x5 = Variable('x5', *sp)
x6 = Variable('x6', *sp)
x7 = Variable('x7', *sp)
x8 = Variable('x8', *sp)
x9 = Variable('x9', *sp)
m0 = Parameter('m0', *sp)
k0 = Parameter('k0', *sp)
x0 = Variable('x0', *sp)
u0 = Variable('u0', *sp)
a0 = Agent(
1,
(dx, dy, x1),
(dx, dy, x2),
(dx, dy, x3),
(dx, dy, x4),
(dx, dy, x5),
(dx, dy, x6),
(dx, dy, x7),
(dx, dy, x8),
(dx, dy, x9),
cost=(
Parameter('SEK/kWh', *sp) * u0 +
cvxpy.norm(u0, 'inf') * periods * 0.10 # power tariff in SEK/kW
),
constraints=[
x0 == cvxpy.multiply(Parameter('COP', *sp), u0),
x0 == x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9,
],
)
###Output
_____no_output_____
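###Markdown
The nine building agents above differ only in their index, so the same market could also be assembled with a helper. The sketch below is an untested refactoring suggestion; it reuses the Parameter, Variable, Agent and rolling_integral calls exactly as they appear above and is not part of the original test.
###Code
# Hypothetical refactoring: build the nine identical building agents in a loop.
def make_building_agent(i, sp, dx, dy, periods):
    m = Parameter('m%d' % i, *sp)
    k = Parameter('k%d' % i, *sp)
    x = Variable('x%d' % i, *sp)
    u = Variable('u%d' % i, *sp)
    return Agent(
        0,
        (dx, dy, x),
        constraints=[
            x == m + cvxpy.multiply(k, Parameter('C', *sp) + u),
            x >= 0.0e0,
            u >= -1.0e1,
            u <= 1.0e1,
            cvxpy.abs(rolling_integral(u, periods)) <= 4.0e1,
        ],
    )
# building_agents = [make_building_agent(i, sp, dx, dy, periods) for i in range(1, 10)]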
###Markdown
The visualization of the network is a work in progress, though still useful for debugging configurations.
###Code
market = Market(a1, a2, a3, a4, a5, a6, a7, a8, a9, a0)
path = 'test/market.dot'
market.dot(path)
G = networkx.nx_pydot.read_dot(path)
networkx.draw_kamada_kawai(G)
###Output
_____no_output_____
###Markdown
Select where to start the simulation and the number of consecutive hours for model predictive control. The graph depicts the relevant time window.
###Code
run_start = 1249 # February 22
run_periods = 24
df[['SEK/kWh', 'C', 'COP']][
run_start - periods :
run_start + 2 * periods + run_periods
].plot()
###Output
_____no_output_____
###Markdown
Run the simulation, and plot how the residuals develop over time. Note that, sometimes, the optimization fails for no good reason. When that happens, re-instantiate the market to keep things manageable.
###Code
for period in range(run_periods):
s = time.time()
market.run(df, run_start + period, 100)
e = time.time()
print(e - s)
pandas.DataFrame(market.log).plot()
###Output
0.47643303871154785
0.36951708793640137
0.5724091529846191
0.8343460559844971
0.4827151298522949
0.448408842086792
0.3962709903717041
0.41631269454956055
0.4457540512084961
0.41068172454833984
0.39630699157714844
0.38885974884033203
0.410200834274292
0.39555811882019043
0.37967586517333984
0.37997007369995117
0.4507930278778076
0.3930058479309082
0.37705230712890625
0.37760400772094727
0.38853979110717773
0.3742818832397461
0.3770768642425537
0.3804922103881836
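###Markdown
Since the optimization occasionally fails as noted above, one way to keep a long run going is to catch the failure, re-instantiate the market and retry the same period. A rough sketch, assuming a failed run surfaces as an exception (the actual failure mode may differ):
###Code
# Sketch only: retry a failed period with a freshly instantiated market.
for period in range(run_periods):
    try:
        market.run(df, run_start + period, 100)
    except Exception as exc:
        print('period %d failed (%s), re-instantiating the market' % (period, exc))
        market = Market(a1, a2, a3, a4, a5, a6, a7, a8, a9, a0)
        market.run(df, run_start + period, 100)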
###Markdown
For sufficiently nice cost functions, higher prices should correlate with positive temperature offsets, and sometimes with agents utilizing negative temperature offsets to heat buildings during periods with lower prices to avoid having to heat them as much during periods with higher prices.
###Code
df[['u%r' % i for i in range(1, 10)]].loc[
run_start - periods :
run_start + 2 * periods + run_periods
].plot()
###Output
_____no_output_____
###Markdown
Finally, the electrical energy consumed by the central heat pump.
###Code
df[['u0']].loc[
run_start - periods :
run_start + 2 * periods + run_periods
].plot()
###Output
_____no_output_____
###Markdown
Data
###Code
import numpy as np
import torch
from mf_functions import forrester_low, forrester_high
from sklearn.preprocessing import StandardScaler, MinMaxScaler
import gpytorch
import random
# Eliminate Randomness in testing
def set_seed(i):
np.random.seed(i)
torch.manual_seed(i)
random.seed(i)
set_seed(1)
r"""
Assume conventional design matrix format - observations are rows
"""
x_h = torch.tensor([0,0.4,0.6,1])
i_h = torch.full((len(x_h),1), 1)
x_l = torch.linspace(0,1,12)
i_l = torch.full((len(x_l),1), 0)
y_l = torch.as_tensor(forrester_low(np.array(x_l).reshape(-1,1)))
y_h = torch.as_tensor(forrester_high(np.array(x_h).reshape(-1,1)))
train_X = torch.cat([x_l, x_h]).unsqueeze(-1)
train_I = torch.cat([i_l, i_h])
train_Y = torch.cat([y_l, y_h]).unsqueeze(-1)
scalerX = MinMaxScaler().fit(train_X)
scalerY = StandardScaler().fit(train_Y)
train_Y = torch.as_tensor(scalerY.transform(train_Y)).squeeze(-1)
train_X = torch.as_tensor(scalerX.transform(train_X))
###Output
_____no_output_____
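###Markdown
For reference, forrester_low and forrester_high are assumed here to follow the standard Forrester et al. (2008) one-dimensional benchmark. The sketch below shows those common textbook definitions; it is an assumption, not code read from mf_functions.
###Code
# Common definitions of the Forrester benchmark on [0, 1] (assumed, not taken from mf_functions).
import numpy as np
def forrester_high_ref(x):
    # high-fidelity function
    return (6.0 * x - 2.0) ** 2 * np.sin(12.0 * x - 4.0)
def forrester_low_ref(x, A=0.5, B=10.0, C=-5.0):
    # cheap low-fidelity approximation: scaled high-fidelity plus a linear trend
    return A * forrester_high_ref(x) + B * (x - 0.5) + C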
###Markdown
Model
###Code
from model import LinearARModel
model = LinearARModel(train_X, train_Y, train_I, epoch=50, lr=1e-2)
model.build()
###Output
100%|โโโโโโโโโโ| 49/49 [00:01<00:00, 46.69it/s, loss=-.0681]
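###Markdown
As a reminder (and an assumption about what LinearARModel implements), the linear autoregressive multi-fidelity model of Kennedy and O'Hagan relates the two fidelities as f_high(x) = rho * f_low(x) + delta(x), where rho is a learned scaling factor and delta(x) is an independent GP that models the discrepancy between the fidelities.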
###Markdown
Predict
###Code
X = scalerX.inverse_transform(train_X)
Y = scalerY.inverse_transform(train_Y.unsqueeze(-1))
pred_x = torch.linspace(0, 1, 100, dtype=torch.double).unsqueeze(-1)
pred_i = torch.full((pred_x.shape[0],1), dtype=torch.long, fill_value=1)
with torch.no_grad(), gpytorch.settings.fast_pred_var():
mean, var = model.predict(pred_x, pred_i)
x = torch.tensor(scalerX.inverse_transform(pred_x).squeeze(-1))
mean = torch.as_tensor(scalerY.inverse_transform(mean.unsqueeze(-1)).squeeze(-1))
upper,lower = mean+2*torch.sqrt(var), mean-2*torch.sqrt(var)
###Output
_____no_output_____
###Markdown
Evaluate
###Code
from matplotlib import pyplot as plt
f,ax = plt.subplots(1, 1, figsize=(6, 6))
plt.title("Forrester Function")
ax.plot(x, mean, 'b--')
ax.fill_between(x, lower, upper, color='b', alpha=0.25)
ax.plot(x, forrester_high(np.array(pred_x).reshape(-1,1)), 'g')
ax.plot(x, forrester_low(np.array(pred_x).reshape(-1,1)), 'r')
ax.plot(X[train_I==1], Y[(train_I==1)], 'g*')
ax.plot(X[train_I==0], Y[(train_I==0)], 'r*')
ax.legend(['$\mu$','$2\sigma$','$y_H$','$y_L$','HF data','LF data'])
###Output
_____no_output_____
###Markdown
0.930864602802 / 0.97011494657 / 1.00036650175 (90 1 0.1)
0.910201784408 / 0.97011494657 / 1.00036650175 (90 1 0.5)
0.931048133417 / 0.97011494657 / 1.00036650175 (95 1 0.5)
###Code
R_barre
###Output
_____no_output_____
###Markdown
DCNN Keras Image Classifier - Author: Felipe Silveira ([email protected]) - A simple and generic image classifier (test) built with Keras using CUDA libraries. Imports
###Code
import os
import time
import itertools
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.metrics import confusion_matrix
from keras.models import load_model
from keras.preprocessing.image import ImageDataGenerator
%matplotlib inline
###Output
_____no_output_____
###Markdown
Adjusting hyperparameters
###Code
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' # Suppress warnings
START_TIME = time.time()
data_dir = "../data/1440/5X/original"
test_data_dir = "../data/1440/5X/test"
results_dir = '../models_checkpoints/'
results_files = os.listdir(results_dir)
results_files
file_name = 'Xception_tf_0_lr_0.001_batch_16_epochs_2'
file_path = results_dir + file_name
class_names = os.listdir(data_dir)
class_names = [x.title() for x in class_names]
class_names = [x.replace('_',' ') for x in class_names]
img_width, img_height = 256, 256
batch_size = 16
results = {
"accuracy":"",
"loss":"",
"precision":"",
"recall":"",
"f1-score":"",
"report":""
}
def plot_confusion_matrix(confusion_matrix_to_print, classes,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
plt.imshow(confusion_matrix_to_print, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
thresh = confusion_matrix_to_print.max() / 2.
for i, j in itertools.product(range(confusion_matrix_to_print.shape[0]),
range(confusion_matrix_to_print.shape[1])):
plt.text(j, i, format(confusion_matrix_to_print[i, j], 'd'),
horizontalalignment="center",
color="white" if confusion_matrix_to_print[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
def make_confusion_matrix_and_plot(validation_generator, file_name, model_final):
"""Predict and plot confusion matrix"""
validation_features = model_final.predict_generator(validation_generator,
validation_generator.samples,
verbose=1)
plt.figure(figsize=(10,10))
plot_confusion_matrix(confusion_matrix(np.argmax(validation_features, axis=1),
validation_generator.classes),
classes=class_names,
title='Test Confusion Matrix Graph')
plt.savefig('../output_images/' + file_name + '_matrix_TEST.png')
print("Total time after generate confusion matrix: %s" %
(time.time() - START_TIME))
def classification_report_csv(report, file_name):
"""
This function turns the sklearn report into an array where each class is a position.
"""
report_data = []
lines = report.split('\n')
for line in lines[2:-5]:
line=" ".join(line.split())
row = {}
row_data = line.split(' ')
#row['class'] = row_data[0]
row['precision'] = row_data[1]
row['recall'] = row_data[2]
row['f1_score'] = row_data[3]
#row['support'] = row_data[4]
report_data.append(row)
dataframe = pd.DataFrame.from_dict(report_data)
print(dataframe)
path = '../results/' + file_name + '_classification_report_TEST.csv'
dataframe.to_csv(path, index = False)
print("Report dataframe saved as " + path)
return dataframe
def main():
# used to rescale the pixel values from [0, 255] to [0, 1] interval
datagen = ImageDataGenerator(rescale=1./255)
test_generator = datagen.flow_from_directory(
test_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
shuffle=False,
class_mode='categorical')
# Loading the saved model
model = load_model(file_path + '_model.h5')
# Loading weights
model.load_weights(file_path + '_weights.h5')
# Printing the model summary
#model.summary()
test_predict = model.predict_generator(test_generator, steps=None)
test_evaluate = model.evaluate_generator(test_generator, test_generator.samples)
# Printing sklearn metrics report for test
y_true = test_generator.classes
y_pred = np.argmax(test_predict, axis=1)
report = metrics.classification_report(y_true, y_pred, digits=6)
csv_report = classification_report_csv(report, file_name)
# making the confusion matrix of train/test
test_generator_matrix = datagen.flow_from_directory(
test_data_dir,
target_size=(img_width, img_height),
batch_size=1,
shuffle=False,
class_mode='categorical')
make_confusion_matrix_and_plot(
test_generator_matrix, file_name, model)
# Saving results
results["accuracy"] = test_evaluate[1]
results["loss"] = test_evaluate[0]
results["precision"] = metrics.precision_score(y_true, y_pred, average='macro')
results["recall"] = metrics.recall_score(y_true, y_pred, average='macro')
results["f1-score"] = metrics.f1_score(y_true, y_pred, average='macro')
results["report"] = csv_report
return results
results = main()
results
###Output
_____no_output_____
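###Markdown
Parsing the plain-text report line by line, as classification_report_csv does above, is brittle across scikit-learn versions. A sketch of an alternative that builds a similar dataframe directly, assuming scikit-learn >= 0.20 (where classification_report accepts output_dict=True):
###Code
# Alternative sketch: build the per-class dataframe from the dict form of the report.
from sklearn import metrics
import pandas as pd
def classification_report_df(y_true, y_pred):
    report_dict = metrics.classification_report(y_true, y_pred, output_dict=True)
    # keep the per-class rows (and the macro/weighted averages); drop the scalar 'accuracy' entry
    rows = {label: scores for label, scores in report_dict.items() if isinstance(scores, dict)}
    return pd.DataFrame(rows).T[['precision', 'recall', 'f1-score', 'support']]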
###Markdown
###Code
2+2 #Executing a command
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
X = np.random.normal(0,1,100)
plt.plot(X);
plt.xlabel('Time')
plt.ylabel('returns')
plt.show()
np.mean(X)
np.std(X)
!pip install quandl #add this line if you run on colab
import quandl
import matplotlib.pyplot as plt
import pandas as pd
t0 = "2017-05-16" #start-date
t1 = "2018-05-16" # end-date
i_ticker = 'WIKI/GOOG' # we import data from wiki table of Quandl
stk = quandl.get(i_ticker, start_date = t0, end_date = t1)
type(stk)
stk.head()
stk.Close.plot()
#stk.Volume.plot()
plt.title(i_ticker + "price")
plt.xlabel("date")
plt.ylabel("price")
plt.legend();
plt.show()
rolling_mean = stk['Close'].rolling(10).mean()
stk.Close.plot()
rolling_mean.plot()
plt.title(i_ticker + 'price with its rolling means')
plt.xlabel("date")
plt.ylabel("price")
plt.legend();
plt.show()
###Output
_____no_output_____
###Markdown
Plot the generated data
###Code
tree_obj.plot_all_mat()
###Output
_____no_output_____
###Markdown
Plot the ground-truth tree
###Code
tree_obj.plot_tree_full(save_dir, title="Ground-truth tree with attached samples")
###Output
Ground-truth tree with attached samples
###Markdown
Getting required data to start MCMC
###Code
gt_E, gt_D, D, CNP, gt_T = tree_obj.get_mcmc_tree_data()
gt_E, gt_D, D = gt_E.T, gt_D.T, D.T,
gensNames = list( str(i) for i in range(M) )
print("GenesNames:\n\t"+'\n\t'.join(gensNames))
C_num = D.shape[1]
G_num = D.shape[0]
_.print_warn( 'There is {} cells and {} mutations at {} genes in this dataset.'.format(C_num, G_num, len(gensNames)) )
###Output
GenesNames:
0
1
2
3
4
5
6
7
8
9
###Markdown
Run MCMC: Fill missing data
###Code
### fill missed data
def tf(m,c):
os = len(np.where(D[:,c]==1.)[0])*1.
zs = len(np.where(D[:,c]==0.)[0])*1.
return 1. if np.random.rand() < os/(os+zs) else 0.
for m in range(G_num):
for c in range(C_num):
if D[m,c] == 3.:
D[m,c] = tf(m,c)
dl = list(d for d in D)
root = [n for n,d in gt_T.in_degree() if d==0][0]
print('ROOT:', root)
T = McmcTree(
gensNames,
D,
data_list=dl,
root=str(root),
alpha=alpha, beta=beta,
save_dir="../tmp"
)
###Output
ROOT: 7
###Markdown
Set GT data to evaluate the inferred tree
###Code
T.set_ground_truth(gt_D, gt_E, gt_T=gt_T)
# T.plot_tree_full('../tmp/', title="Ground-truth tree with attached samples")
T.randomize()
# hkebs
T.plot_best_T('initial T')
# T.plot('T0')
# T.randomize()
# T.plot_best_T('initial T')
# T.plot('T0')
T.set_rho(30)
for i in range(100):
if T.next():
break
T.plot_all_results()
import matplotlib.image as mpimg
img = mpimg.imread('./benchmark.png')
plt.figure(figsize=(30,40))
plt.imshow(img)
plt.title('Benchmark')
plt.axis('off')
run_data = T.run_data
rd = np.array(run_data)
errors = T.get_errors()
rd
plt.plot(errors, 'r', label='Accepted Error') # accepted errors
plt.plot(rd[:, -2], 'k', label='Random Error') # random errors
# plt.plot(self.enrgs) # best errors
plt.legend()
plt.xlabel('Iteration')
plt.ylabel('Energy')
plt.title('Changing energy after {} step'.format(5000))
# if filename:
# plt.savefig(filename)
plt.show()
new_acc_errors = []
new_random_errors = []
for i, t in enumerate(rd):
rnd = np.random.rand()
if t[-1] > rnd/10000:
new_acc_errors.append(errors[i])
new_random_errors.append(t[-2])
plt.figure(figsize=(12, 4))
plt.plot(new_acc_errors[1:], 'r', label='Accepted Error') # accepted errors
plt.plot(new_random_errors, 'k', label='Random Error') # random errors
# plt.plot(self.enrgs) # best errors
plt.legend()
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.title('Changing loss after {} step'.format(len(new_random_errors)))
# if filename:
# plt.savefig(filename)
plt.show()
T.plot_all_results(plot_pm=True)
# T.plot_all_results()
D = T.D
np.unique(D)
# D
M = D.shape[0]
N = D.shape[1]
plt.figure(figsize=(M*0.5,N*0.5))
# plt.imshow(D.T-1, cmap='GnBu', interpolation="nearest")
t=1
cmap = {0:[1,1,0.95,t], 1:[0.3,0.3,0.6,t], 3:[0.5,0.5,0.8,t/3]}
labels = {0:'0', 1:'1', 3:'missed'}
arrayShow = np.array([[cmap[i] for i in j] for j in D.T])
## create patches as legend
import matplotlib.patches as mpatches
patches = [mpatches.Patch(color=cmap[i], label=labels[i]) for i in cmap]
plt.imshow(arrayShow, interpolation="nearest")
plt.legend(handles=patches, loc=2, borderaxespad=-6)
plt.yticks(range(D.shape[1]), ['cell %d'%i for i in range(N)])
plt.xticks(range(D.shape[0]), [ 'mut %d'%i for i in range(M)])
plt.xticks(rotation=60)
plt.title("Noisy Genes-Cells Matrix D (input Data)")
# file_path = '{}D_{}.png'.format('./', str_params)
# plt.savefig(file_path)
plt.show()
D.shape
###Output
_____no_output_____
###Markdown
Load the training data into feature matrix, class labels, and event ids:
###Code
DATA_TRAIN_PATH = '../data/train.csv' # TODO: download train data and supply path here
y, x, ids = loader.load_csv_data(DATA_TRAIN_PATH)
nb_samples = x.shape[0]
nb_features = x.shape[1]
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
# Cleaned input array by replacing errors with most frequent values
x_clean_mf = pp.clean_data(x, error_value, pp.most_frequent)
# Cleaned input array by replacing errors with mean
x_clean_mean = pp.clean_data(x, error_value, np.mean)
# Cleaned input array by replacing errors with median
x_clean_median = pp.clean_data(x, error_value, np.median)
# Chosen cleaned data
x_clean = x_clean_mean
# Normalised version of the data (without the 1's column)
x_normal = pp.normalise(x_clean)
x_normal.shape
# Compute tx : column of ones followed by x
first_col = np.ones((nb_samples, 1))
tx = np.concatenate((first_col, x_normal), axis=1)
tx.shape
w_across_impl = {}
# Test for Gradient Descent Least squares.
# Define the parameters of the algorithm.
max_iters = 0
gamma = 10e-2
# Initialization
w_initial = np.ones((31,))
# Debugger
dbg = debugger.Debugger(['loss', 'w', 'gamma'])
# Start gradient descent.
w, loss = impl.least_squares_GD(y, tx, w_initial, max_iters, gamma, debugger=dbg, dynamic_gamma=True)
dbg.plot('loss')
dbg.print('loss', last_n=0)
print('-------------------------')
dbg.print('gamma', last_n=0)
w_across_impl['GD_LS'] = w
# Test for Stochastic Gradient Descent Least squares.
# clear debugger
dbg.clear()
# Define the parameters of the algorithm.
max_iters = 200
gamma = 10e-3
# Initialization
w_initial = np.ones((31,))
# Start gradient descent.
w, loss = impl.least_squares_SGD(y, tx, w_initial, max_iters, gamma, debugger=dbg, dynamic_gamma=False)
dbg.plot('loss')
dbg.print('loss', last_n=0)
print('-------------------------')
dbg.print('gamma', last_n=0)
w_across_impl['SGD_LS'] = w
# Test for Least squares with normal equations.
w, loss = impl.least_squares(y, tx)
print('loss:', loss)
w_across_impl['NE_LS'] = w
print(np.linalg.norm(w))
eps = 1000
norm_w = np.linalg.norm(w)
n_impl = len(w_across_impl)
for i, impl1 in enumerate(w_across_impl):
for j, impl2 in enumerate(w_across_impl):
if(impl1 < impl2):
error = np.linalg.norm(w_across_impl[impl1] - w_across_impl[impl2])
print('Error between', impl1, 'and', impl2, 'is', error)
assert error < eps
print('\nNorm of w:', norm_w)
###Output
Error between GD_LS and SGD_LS is 1.8227946772554986
Error between GD_LS and NE_LS is 440.64244918920446
Error between NE_LS and SGD_LS is 440.6466806833583
Norm of w: 440.81584456731656
###Markdown
Logistic regression test
###Code
np.random.seed(114)
# Random guess
w = np.random.uniform(0,1,size=nb_features)
z_ = cost.sigmoid(x_normal @ w)
y_ = misc.map_prediction(z_)
print(misc.accuracy(y, y_))
# Test of log reg GD
# Define the parameters of the algorithm.
max_iters = 300
gamma = 1e-7
# Initialization
nb_features = tx.shape[1]
w_initial = np.random.uniform(0,1,size=nb_features)
dbg = debugger.Debugger(['loss', 'w', 'gamma'])
w, loss = impl.logistic_regression(y, tx, w_initial, max_iters, gamma, debugger=dbg, dynamic_gamma=True)
dbg.plot('loss')
dbg.print('loss', last_n=0)
print('------------------')
dbg.print('gamma', last_n=0)
w_across_impl['LR'] = w
y_ = misc.map_prediction(misc.lr_output(tx, w))
print(misc.accuracy(y, y_))
# Test of log reg GD
# Define the parameters of the algorithm.
max_iters = 300
gamma = 1e-7
lambda_ = 1e-7
# Initialization
nb_features = tx.shape[1]
w_initial = np.random.uniform(0,1,size=nb_features)
dbg = debugger.Debugger(['loss', 'w', 'gamma'])
w, loss = impl.reg_logistic_regression(y, tx, lambda_, w_initial, max_iters, gamma, debugger=dbg, dynamic_gamma=True)
dbg.plot('loss')
dbg.print('loss', last_n=0)
print('------------------')
dbg.print('gamma', last_n=0)
w_across_impl['RLR'] = w
eps = 1000
norm_w = np.linalg.norm(w)
n_impl = len(w_across_impl)
for i, impl1 in enumerate(w_across_impl):
for j, impl2 in enumerate(w_across_impl):
if(impl1 < impl2):
error = np.linalg.norm(w_across_impl[impl1] - w_across_impl[impl2])
print('Error between', impl1, 'and', impl2, 'is', error)
assert error < eps
print('\nNorm of w:', norm_w)
###Output
Error between GD_LS and SGD_LS is 1.8227946772554986
Error between GD_LS and NE_LS is 440.64244918920446
Error between GD_LS and LR is 2.472966185412844
Error between GD_LS and RLR is 3.3421665343864886
Error between NE_LS and SGD_LS is 440.6466806833583
Error between NE_LS and RLR is 440.8524456162927
Error between LR and SGD_LS is 2.853413681534433
Error between LR and NE_LS is 440.522273422285
Error between LR and RLR is 1.5387666468477792
Error between RLR and SGD_LS is 3.4851580866207392
Norm of w: 3.770358968600738
###Markdown
Dish Name Segmentation
###Code
img = thresh
kernel = np.ones((6,6), np.uint8)
op1 = cv2.dilate(img, kernel, iterations=1)
plt.imshow( op1,cmap='gray')
def dish_name_segmentation(dilated_img ,img):
num_labels, labels_im = cv2.connectedComponents(dilated_img)
boxes = get_bounding_boxes(num_labels ,labels_im)
segments = []
for box in boxes:
label = box[0]
w = box[1]
h = box[2]
x = box[3]
y = box[4]
cropped = img[y-10:y+h+10 ,x-10:x+w+10]
segments.append( [ 255-cropped , x,y,w,h ] )
return segments
segs = dish_name_segmentation(op1 ,img)
f, plots = plt.subplots(len(segs),1)
counter = 0
for i in segs:
plots[counter].imshow(i[0] ,cmap='gray')
counter+=1
###Output
_____no_output_____
###Markdown
OCR
###Code
from PIL import Image
import pytesseract
final_text = []
for i in segs:
PIL_image = Image.fromarray(i[0])
text = pytesseract.image_to_string(PIL_image)
temp = text.split('\x0c')[0]
line = temp.split('\n')[0]
for j in [line]:
final_text.append([j ,i[1] ,i[2] ,i[3] ,i[4] ])
final_text
###Output
_____no_output_____
###Markdown
Database Creation
###Code
import os
rootdir = '../img/menu_items'
db = []
for subdir, dirs, files in os.walk(rootdir):
for file in files:
temp = file.split('.')[0]
db.append(temp)
db
###Output
_____no_output_____
###Markdown
OCR Correction
###Code
import sys
def edit_distance(s1, s2, max_dist):
l1 = len(s1)
l2 = len(s2)
dp = np.zeros((2 ,l1+1))
for i in range(l1+1):
dp[0][i] = i
for i in range(1,l2+1):
for j in range(0,l1+1):
if j==0:
dp[i%2][j] = i
elif s1[j-1] == s2[i-1]:
dp[i%2][j] = dp[(i-1)%2][j-1]
else:
dp[i%2][j] = 1 + min(dp[(i-1)%2][j], min(dp[i%2][j-1], dp[(i-1)%2][j-1]))
dist = dp[l2%2][l1]
if dist > max_dist:
return max_dist+1
return dist
def db_lookup(test_str , db ,max_dist):
min_dist = sys.maxsize
match = None
for i in db:
dist = edit_distance(test_str ,i ,max_dist)
if dist < min_dist:
min_dist = dist
match = i
if min_dist == 0 :
break
if min_dist < max_dist:
return match
def OCR_Correction( final_text ,max_dist):
corrected_img = []
for i in final_text:
dish = i[0].lower()
op = db_lookup(dish ,db ,max_dist)
i.append(op)
corrected_img.append(i)
return corrected_img
dish_names = OCR_Correction(final_text , 4)
dish_names
###Output
_____no_output_____
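###Markdown
A quick sanity check of the matching helpers on made-up strings (the dish names and the small database below are hypothetical and only illustrate how edit_distance and db_lookup behave):
###Code
# Hypothetical strings, just to exercise the helpers defined above.
print(edit_distance('paneer tikka', 'paner tikka', 4))         # one deletion -> 1.0
print(edit_distance('paneer tikka', 'pizza', 4))               # above the cap -> max_dist + 1 = 5
print(db_lookup('paner tikka', ['paneer tikka', 'pizza'], 4))  # closest entry -> 'paneer tikka'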
###Markdown
Final Output Generation
###Code
img = cv2.imread('../img/crop.jpeg' ,0)
plt.imshow(img ,cmap='gray')
img = 255-img
res = rotate_image(img,final_angle)
res = 255- res
plt.imshow(res ,cmap='gray')
test = dish_names[1]
path = test[5]+'.jpeg'
w = test[3]
h = test[4]
dish_img = cv2.imread('../img/menu_items/' + path )
dish_img = cv2.cvtColor(dish_img, cv2.COLOR_BGR2RGB)
plt.imshow(dish_img ,cmap='gray')
sz = res.shape
width = 800
hi ,wi = int(sz[0]*width/sz[1]) , width
cropped_img = cv2.resize(res ,(wi, hi))
plt.imshow(cropped_img ,cmap='gray')
ratio = width/sz[1]
new_dish_img = cv2.resize(dish_img , (int((sz[1]*h*ratio)/sz[0]) ,int(h*ratio) ) )
plt.imshow(new_dish_img)
new_cropped_img = np.stack((cropped_img,)*3, axis=-1)
x,y,w,h = test[1] ,test[2] ,test[3] ,test[4]
x = int(x*ratio)
y = int(y*ratio)
w = int(w*ratio)
h = int(h*ratio)
sz = new_dish_img.shape
new_cropped_img[ y:y+sz[0] ,x+w:x+w+sz[1],:] = new_dish_img[:,:,:]
plt.figure( figsize=(15,15))
plt.imshow(new_cropped_img)
def get_final_output(menu, dish_names, final_angle):
img = 255-menu
res = rotate_image(img,final_angle)
res = 255- res
siz = res.shape
width = 800
hi ,wi = int(siz[0]*width/siz[1]) , width
cropped_img = cv2.resize(res ,(wi, hi))
new_cropped_img = np.stack((cropped_img,)*3, axis=-1)
for i in dish_names:
test = i
if test[5] != None:
path = test[5]+'.jpeg'
w = test[3]
h = test[4]
dish_img = cv2.imread('../img/menu_items/' + path )
dish_img = cv2.cvtColor(dish_img, cv2.COLOR_BGR2RGB)
ratio = width/siz[1]
new_dish_img = cv2.resize(dish_img , (int((siz[1]*h*ratio)/siz[0]) ,int(h*ratio) ) )
x,y,w,h = test[1] ,test[2] ,test[3] ,test[4]
x = int(x*ratio)
y = int(y*ratio)
w = int(w*ratio)
h = int(h*ratio)
sz = new_dish_img.shape
new_cropped_img[ y:y+sz[0] ,x+w:x+w+sz[1],:] = new_dish_img[:,:,:]
return new_cropped_img
img = cv2.imread('../img/crop.jpeg' ,0)
op = get_final_output(img, dish_names, final_angle)
plt.figure(figsize=(15,15))
plt.imshow(op)
###Output
_____no_output_____
###Markdown
Loading Model Please load the test data set in the following format: a folder named test_data containing two folders, one named No_ball and the other named valid_ball. You can upload it either by directly uploading the zip file and then unzipping it, or through Google Colab. Also load the file named 'No_ballResNet50FineTune_1.h5' provided in the submission file.
###Code
from keras.models import load_model
p = load_model('No_ballResNet50FineTune_1.h5')
#Run this cell only if uploading the zip file (test dataset) directly from the drive link.
#change the fileId below by pasting the id of the drive share link of the test data
fileId = '14FiaZImlYMupUbX19utXgSq-ajy2dfZ3'
import os
from zipfile import ZipFile
from shutil import copy
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
fileName = fileId + '.zip'
downloaded = drive.CreateFile({'id': fileId})
downloaded.GetContentFile(fileName)
ds = ZipFile(fileName)
ds.extractall()
os.remove(fileName)
print('Extracted zip file ' + fileName)
#Run this cell only if not running the above cell, i.e. uploading the zip file directly from the local host
import os
from zipfile import ZipFile
ds = ZipFile("/content/data2.zip", 'r') #change data2 to the name of the uploaded zip folder
ds.extractall()
os.remove("data2.zip")
#print('Extracted zip file ' + fileName)
from keras.preprocessing.image import ImageDataGenerator
PATHtest = '/content/data2' #change data2 with the name of zip folder upload
print(len(os.listdir(PATHtest)))
test_dir = PATHtest
batch_size = 32
target_size=(224, 224)
test_datagen = ImageDataGenerator(rescale=1./255)
test_generator = test_datagen.flow_from_directory(
test_dir,target_size=target_size,batch_size=batch_size)
print(test_generator.class_indices)
test_loss, test_acc = p.evaluate_generator(test_generator, steps= 3561 // batch_size, verbose=1)
print('test acc:', test_acc)
p.evaluate_generator(test_generator, steps= 3561 // batch_size, verbose=1)
print(test_generator.class_indices)
p.predict_generator(test_generator)
###Output
_____no_output_____ |
sample-code/notebooks/4-04.ipynb | ###Markdown
Chapter 4: Drawing Graphs with Matplotlib, 4-4: Scatter Plots
###Code
import matplotlib.pyplot as plt
import numpy as np
# Listing 4.4.1: Draw a scatter plot
plt.style.use("ggplot")
# Generate the input values
np.random.seed(2)
x = np.arange(1, 101)
y = 4 * x * np.random.rand(100)
# Draw the scatter plot
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(x, y)
plt.show()
# Listing 4.4.2: Load the anime_master_csv data
import os
import pandas as pd
base_url = "https://raw.githubusercontent.com/practical-jupyter/sample-data/master/anime/"
anime_master_csv = os.path.join(base_url, "anime_master.csv")
df = pd.read_csv(anime_master_csv)
df.head()
# Listing 4.4.3: Reload the anime_master_csv data
df = pd.read_csv(anime_master_csv, index_col="anime_id")
df.head()
# Listing 4.4.4: Create a scatter plot from the members and rating values
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(df["members"], df["rating"], alpha=0.5)
plt.show()
# Listing 4.4.5: Titles with at least 800,000 members
# Filter the data by the members value
df.loc[df["members"] >= 800000, ["name", "members"]]
# Listing 4.4.6: Data with at least 600,000 members and a rating of at least 8.5
# Filter the data by the members and rating values
df.loc[(df["members"] >= 600000) & (df["rating"] >= 8.5), ["name", "rating"]]
# Listing 4.4.7: List of unique type values
types = df["type"].unique()
types
# Listing 4.4.9: Create a scatter plot of the data grouped by distribution type
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(111)
for t in types:
x = df.loc[df["type"] == t, "members"]
y = df.loc[df["type"] == t, "rating"]
ax.scatter(x, y, alpha=0.5, label=t)
ax.set_title("Scatter plot grouped by distribution type")
ax.set_xlabel("Members")
ax.set_ylabel("Rating")
ax.legend(loc="lower right", fontsize=12)
plt.show()
###Output
_____no_output_____ |
code/InferenceNotebook.ipynb | ###Markdown
Loading the data
###Code
train_features = pd.read_csv('../input/lish-moa/train_features.csv')
test_data = pd.read_csv('../input/lish-moa/test_features.csv')
train_drug_ids = pd.read_csv('../input/lish-moa/train_drug.csv')
train_targets = pd.read_csv('../input/lish-moa/train_targets_scored.csv')
train_data = train_features.merge(train_targets, on='sig_id', how='left')
train_data = train_data.merge(train_drug_ids, on='sig_id', how='left')
target_columns = [c for c in train_targets.columns if c != 'sig_id']
gene_features = [col for col in train_features.columns if col.startswith('g-')]
cell_features = [col for col in train_features.columns if col.startswith('c-')]
feature_columns = gene_features + cell_features
###Output
_____no_output_____
###Markdown
Cross validation strategy
###Code
def create_cross_validation_strategy(data, targets, FOLDS, SEED):
vc = data.drug_id.value_counts()
# vc1 = vc.loc[(vc==6)|(vc==12)|(vc==18)].index.sort_values()
# vc2 = vc.loc[(vc!=6)&(vc!=12)&(vc!=18)].index.sort_values()
vc1 = vc.loc[vc <= 19].index.sort_values()
vc2 = vc.loc[vc > 19].index.sort_values()
dct1 = {}
dct2 = {}
skf = MultilabelStratifiedKFold(n_splits=FOLDS, shuffle=True, random_state=SEED)
tmp = data.groupby('drug_id')[targets].mean().loc[vc1]
for fold,(idxT,idxV) in enumerate( skf.split(tmp,tmp[targets])):
dd = {k:fold for k in tmp.index[idxV].values}
dct1.update(dd)
# STRATIFY DRUGS MORE THAN 18X
skf = MultilabelStratifiedKFold(n_splits=FOLDS, shuffle=True, random_state=SEED)
tmp = data.loc[data.drug_id.isin(vc2)].reset_index(drop=True)
for fold,(idxT,idxV) in enumerate( skf.split(tmp,tmp[targets])):
dd = {k:fold for k in tmp.sig_id[idxV].values}
dct2.update(dd)
# ASSIGN FOLDS
data['fold'] = data.drug_id.map(dct1)
data.loc[data.fold.isna(),'fold'] = data.loc[data.fold.isna(),'sig_id'].map(dct2)
data.fold = data.fold.astype('int8')
return data
###Output
_____no_output_____
###Markdown
Modeling
###Code
class MoaDataset:
def __init__(self, dataset_df, gene_features, cell_features, target_ids):
self.dataset_df = dataset_df
self.target_ids = target_ids
self.gene_features = self.dataset_df[gene_features].values
self.cell_features = self.dataset_df[cell_features].values
self.targets = None
if self.target_ids is not None:
self.targets = self.dataset_df[target_ids].values
def __len__(self):
return len(self.dataset_df)
def number_of_features(self):
return self.gene_features.shape[1], self.cell_features.shape[1]
def __getitem__(self, item):
dataset_sample = {}
dataset_sample['genes'] = torch.tensor(self.gene_features[item, :], dtype=torch.float)
dataset_sample['cells'] = torch.tensor(self.cell_features[item, :], dtype=torch.float)
dataset_sample['sig_id'] = self.dataset_df.loc[item, 'sig_id']
if self.target_ids is not None:
dataset_sample['y'] = torch.tensor(self.targets[item, :], dtype=torch.float)
return dataset_sample
class MoaMetaDataset:
def __init__(self, dataset_df, feature_ids, target_ids):
self.dataset_df = dataset_df
self.feature_ids = feature_ids
self.target_ids = target_ids
self.num_models = len(feature_ids) // 206
# samples x models x targets
self.features = self.dataset_df[feature_ids].values
self.targets = None
if self.target_ids is not None:
self.targets = self.dataset_df[target_ids].values
def __len__(self):
return len(self.dataset_df)
def num_of_features(self):
return len(self.feature_ids)
def num_of_targets(self):
return None if self.target_ids is None else len(self.target_ids)
def get_ids(self):
return self.dataset_df.sig_id.values
def __getitem__(self, item):
return_item = {}
return_item['x'] = torch.tensor(self.features[item, :].reshape(self.num_models, 206), dtype=torch.float)
return_item['sig_id'] = self.dataset_df.loc[item, 'sig_id']
if self.target_ids is not None:
return_item['y'] = torch.tensor(self.targets[item, :], dtype=torch.float)
return return_item
class ModelConfig:
def __init__(self, number_of_features, number_of_genes, number_of_cells, number_of_targets):
self.number_of_features = number_of_features
self.number_of_genes = number_of_genes
self.number_of_cells = number_of_cells
self.number_of_targets = number_of_targets
class MoaModelBlock(nn.Module):
def __init__(self, num_in, num_out, dropout, weight_norm=False, ):
super().__init__()
self.batch_norm = nn.BatchNorm1d(num_in)
self.dropout = nn.Dropout(dropout)
if weight_norm:
self.linear = nn.utils.weight_norm(nn.Linear(num_in, num_out))
else:
self.linear = nn.Linear(num_in, num_out)
self.activation = nn.PReLU(num_out)
def forward(self, x):
x = self.batch_norm(x)
x = self.dropout(x)
x = self.linear(x)
x = self.activation(x)
return x
class MoaEncodeBlock(nn.Module):
def __init__(self, num_in, num_out, dropout, weight_norm=False):
super().__init__()
self.batch_norm = nn.BatchNorm1d(num_in)
self.dropout = nn.Dropout(dropout)
if weight_norm:
self.linear = nn.utils.weight_norm(nn.Linear(num_in, num_out))
else:
self.linear = nn.Linear(num_in, num_out)
def forward(self, x):
x = self.batch_norm(x)
x = self.dropout(x)
x = self.linear(x)
return x
class MoaModel_V1(nn.Module):
def __init__(self, model_config):
super().__init__()
total_features = model_config.number_of_genes + model_config.number_of_cells
dropout = 0.15
hidden_size = 1024
self.block1 = MoaModelBlock(total_features, 2048, dropout)
self.block2 = MoaModelBlock(2048, 1024, dropout)
self.model = nn.Sequential(
nn.BatchNorm1d(1024),
nn.Dropout(dropout),
nn.Linear(1024, model_config.number_of_targets))
def forward(self, data):
x_genes = data['genes']
x_cells = data['cells']
x = torch.cat((x_genes, x_cells), dim=1)
x = x.to(DEVICE)
x = self.block1(x)
x = self.block2(x)
x = self.model(x)
return x
class MoaModel_V2(nn.Module):
def __init__(self, model_config):
super().__init__()
dropout = 0.15
hidden_size = 512
self.genes_encoder = MoaEncodeBlock(model_config.number_of_genes, 128, dropout)
self.cells_encoder = MoaEncodeBlock(model_config.number_of_cells, 32, dropout)
out_encodings = 128 + 32
self.block1 = MoaModelBlock(128, hidden_size, dropout)
self.block2 = MoaModelBlock(32, hidden_size, dropout)
self.block3 = MoaModelBlock(hidden_size, hidden_size, dropout)
self.block4 = MoaModelBlock(hidden_size, hidden_size, dropout)
self.model = nn.Sequential(
nn.BatchNorm1d(hidden_size),
nn.Dropout(dropout),
nn.Linear(hidden_size, model_config.number_of_targets))
def forward(self, data):
x_genes = data['genes'].to(DEVICE)
x_cells = data['cells'].to(DEVICE)
encoded_genes = self.genes_encoder(x_genes)
encoded_cells = self.cells_encoder(x_cells)
x_genes = self.block1(encoded_genes)
x_cells = self.block2(encoded_cells)
x = self.block3(x_genes + x_cells)
x = self.block4(x)
x = self.model(x)
return x
class MoaModel_V3(nn.Module):
def __init__(self, model_config):
super().__init__()
dropout = 0.15
hidden_size = 512
self.genes_encoder = MoaEncodeBlock(model_config.number_of_genes, 128, dropout)
self.cells_encoder = MoaEncodeBlock(model_config.number_of_cells, 32, dropout)
out_encodings = 128 + 32
self.block1 = MoaModelBlock(out_encodings, hidden_size, dropout)
self.block2 = MoaModelBlock(hidden_size, hidden_size, dropout)
self.model = nn.Sequential(
nn.BatchNorm1d(hidden_size),
nn.Dropout(dropout),
nn.Linear(hidden_size, model_config.number_of_targets))
def forward(self, data):
x_genes = data['genes'].to(DEVICE)
x_cells = data['cells'].to(DEVICE)
encoded_genes = self.genes_encoder(x_genes)
encoded_cells = self.cells_encoder(x_cells)
x = torch.cat((encoded_genes, encoded_cells), 1)
x = self.block1(x)
x = self.block2(x)
x = self.model(x)
return x
class ConvFeatureExtractions(nn.Module):
def __init__(self, num_features, hidden_size, channel_1=256, channel_2=512):
super().__init__()
self.channel_1 = channel_1
self.channel_2 = channel_2
self.final_conv_features = int(hidden_size / channel_1) * channel_2
self.batch_norm1 = nn.BatchNorm1d(num_features)
self.dropout1 = nn.Dropout(0.15)
self.dense1 = nn.utils.weight_norm(nn.Linear(num_features, hidden_size))
self.batch_norm_c1 = nn.BatchNorm1d(channel_1)
self.dropout_c1 = nn.Dropout(0.15)
self.conv1 = nn.utils.weight_norm(nn.Conv1d(channel_1,channel_2, kernel_size = 5, stride = 1, padding=2, bias=False),dim=None)
self.batch_norm_c2 = nn.BatchNorm1d(channel_2)
self.dropout_c2 = nn.Dropout(0.2)
self.conv2 = nn.utils.weight_norm(nn.Conv1d(channel_2,channel_2, kernel_size = 3, stride = 1, padding=1, bias=True),dim=None)
self.max_po_c2 = nn.MaxPool1d(kernel_size=4, stride=2, padding=1)
self.final_conv_features = int(self.final_conv_features / 2)
self.flt = nn.Flatten()
def forward(self, x):
x = self.batch_norm1(x)
x = self.dropout1(x)
x = F.celu(self.dense1(x), alpha=0.06)
x = x.reshape(x.shape[0], self.channel_1, -1)
x = self.batch_norm_c1(x)
x = self.dropout_c1(x)
x = F.relu(self.conv1(x))
x = self.batch_norm_c2(x)
x = self.dropout_c2(x)
x = F.relu(self.conv2(x))
x = self.max_po_c2(x)
x = self.flt(x)
return x
class MoaModel_V4(nn.Module):
def __init__(self, model_config):
super(MoaModel_V4, self).__init__()
hidden_size = 512
dropout = 0.15
self.gene_cnn_features = ConvFeatureExtractions(model_config.number_of_genes, 2048, channel_1=128, channel_2=256)
self.cell_cnn_features = ConvFeatureExtractions(model_config.number_of_cells, 1024, channel_1=64, channel_2=128)
encoded_features = self.gene_cnn_features.final_conv_features + self.cell_cnn_features.final_conv_features
self.block1 = MoaModelBlock(encoded_features, hidden_size, dropout, weight_norm=True)
self.model = MoaEncodeBlock(hidden_size, model_config.number_of_targets, dropout, weight_norm=True)
def forward(self, data):
x_genes = data['genes'].to(DEVICE)
x_cells = data['cells'].to(DEVICE)
x_genes = self.gene_cnn_features(x_genes)
x_cells = self.cell_cnn_features(x_cells)
x = torch.cat((x_genes, x_cells), dim=1)
x = self.block1(x)
x = self.model(x)
return x
class MetaModel(nn.Module):
def __init__(self, model_config):
super().__init__()
self.num_models = model_config.number_of_features // model_config.number_of_targets
self.model_config = model_config
dropout = 0.15
hidden_size = 512
self.encoders = nn.ModuleList([MoaEncodeBlock(model_config.number_of_targets, 64, dropout) for i in range(self.num_models)])
self.model = nn.Sequential(nn.Linear(64, hidden_size),
nn.Dropout(dropout),
nn.ReLU(),
nn.Linear(hidden_size, hidden_size),
nn.Dropout(dropout),
nn.ReLU(),
nn.Linear(hidden_size, model_config.number_of_targets))
def forward(self, data): # batch size x models x features
x = data['x'].to(DEVICE)
x_ = self.encoders[0](x[:, 0, :])
for i in range(1, self.num_models):
x_ = x_ + self.encoders[i](x[:, i, :])
return self.model(x_)
###Output
_____no_output_____
###Markdown
Smooth loss function
###Code
class SmoothCrossEntropyLoss(_WeightedLoss):
def __init__(self, weight=None, reduction='mean', smoothing=0.0):
super().__init__(weight=weight, reduction=reduction)
self.smoothing = smoothing
self.weight = weight
self.reduction = reduction
@staticmethod
def _smooth(targets, n_classes, smoothing=0.0):
assert 0 <= smoothing <= 1
with torch.no_grad():
targets = targets * (1 - smoothing) + torch.ones_like(targets).to(DEVICE) * smoothing / n_classes
return targets
def forward(self, inputs, targets):
targets = SmoothCrossEntropyLoss()._smooth(targets, inputs.shape[1], self.smoothing)
if self.weight is not None:
inputs = inputs * self.weight.unsqueeze(0)
loss = F.binary_cross_entropy_with_logits(inputs, targets)
return loss
class SmoothBCEwLogits(_WeightedLoss):
def __init__(self, weight=None, reduction='mean', smoothing=0.0,pos_weight = None):
super().__init__(weight=weight, reduction=reduction)
self.smoothing = smoothing
self.weight = weight
self.reduction = reduction
self.pos_weight = pos_weight
@staticmethod
def _smooth(targets:torch.Tensor, n_labels:int, smoothing=0.0):
assert 0 <= smoothing < 1
with torch.no_grad():
targets = targets * (1.0 - smoothing) + 0.5 * smoothing
return targets
def forward(self, inputs, targets):
targets = SmoothBCEwLogits._smooth(targets, inputs.size(-1),
self.smoothing)
loss = F.binary_cross_entropy_with_logits(inputs, targets,self.weight,
pos_weight = self.pos_weight)
if self.reduction == 'sum':
loss = loss.sum()
elif self.reduction == 'mean':
loss = loss.mean()
return loss
###Output
_____no_output_____
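###Markdown
Note on the smoothing itself: SmoothBCEwLogits replaces each binary target y with y * (1 - s) + 0.5 * s, so with s = 0.001 a positive label becomes 0.9995 and a negative one 0.0005, which keeps the targets away from the extremes. SmoothCrossEntropyLoss instead spreads the smoothing mass over the outputs, using y * (1 - s) + s / n_classes.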
###Markdown
Scaling functions
###Code
def true_rank_gaus_scale(data, columns):
global DEVICE
if DEVICE == 'cuda':
import cupy as cp
from cupyx.scipy.special import erfinv
epsilon = 1e-6
for f in columns:
r_gpu = cp.array(data[f].values)
r_gpu = r_gpu.argsort().argsort()
r_gpu = (r_gpu/r_gpu.max()-0.5)*2
r_gpu = cp.clip(r_gpu,-1+epsilon,1-epsilon)
r_gpu = erfinv(r_gpu)
data[f] = cp.asnumpy( r_gpu * np.sqrt(2) )
return data
from scipy.special import erfinv as sp_erfinv
epsilon = 1e-6
for f in columns:
r_cpu = data[f].values.argsort().argsort()
r_cpu = (r_cpu/r_cpu.max()-0.5)*2
r_cpu = np.clip(r_cpu,-1+epsilon,1-epsilon)
r_cpu = sp_erfinv(r_cpu)
data[f] = r_cpu * np.sqrt(2)
return data
def quantile_dosetime_scaling(train_data, valid_data, test_data, feature_columns):
global RANDOM_SEED
train_arr = []
valid_arr = []
test_arr = []
for cp_dose in ['D1', 'D2']:
for cp_time in [24, 48, 72]:
temp_train = train_data[train_data.cp_dose == cp_dose].reset_index(drop=True)
temp_train = temp_train[temp_train.cp_time == cp_time].reset_index(drop=True)
temp_valid = valid_data[valid_data.cp_dose == cp_dose].reset_index(drop=True)
temp_valid = temp_valid[temp_valid.cp_time == cp_time].reset_index(drop=True)
temp_test = test_data[test_data.cp_dose == cp_dose].reset_index(drop=True)
temp_test = temp_test[temp_test.cp_time == cp_time].reset_index(drop=True)
scaler = QuantileTransformer(n_quantiles=100,random_state=RANDOM_SEED, output_distribution="normal")
temp_train[feature_columns] = scaler.fit_transform(temp_train[feature_columns])
temp_valid[feature_columns] = scaler.transform(temp_valid[feature_columns])
temp_test[feature_columns] = scaler.transform(temp_test[feature_columns])
train_arr.append(temp_train)
valid_arr.append(temp_valid)
test_arr.append(temp_test)
train_data = pd.concat(train_arr).reset_index(drop=True)
valid_data = pd.concat(valid_arr).reset_index(drop=True)
test_data = pd.concat(test_arr).reset_index(drop=True)
return train_data, valid_data, test_data
def true_rankgaus_dosetime(data, columns):
global RANDOM_SEED
arr = []
for cp_dose in ['D1', 'D2']:
for cp_time in [24, 48, 72]:
temp_data = data[data.cp_dose == cp_dose].reset_index(drop=True)
temp_data = temp_data[temp_data.cp_time == cp_time].reset_index(drop=True)
arr.append(true_rank_gaus_scale(temp_data, columns))
return pd.concat(arr).reset_index(drop=True)
def true_rankgaus_dosetime_scaling(train_data, valid_data, test_data, feature_columns):
train_data = true_rankgaus_dosetime(train_data, feature_columns)
valid_data = true_rankgaus_dosetime(valid_data, feature_columns)
test_data = true_rankgaus_dosetime(test_data, feature_columns)
return train_data, valid_data, test_data
def true_rankgaus_scaling(train_data, valid_data, test_data, feature_columns):
train_data = true_rank_gaus_scale(train_data, feature_columns)
valid_data = true_rank_gaus_scale(valid_data, feature_columns)
test_data = true_rank_gaus_scale(test_data, feature_columns)
return train_data, valid_data, test_data
def quantile_scaling(train_data, valid_data, test_data, feature_columns):
global RANDOM_SEED
scaler = QuantileTransformer(n_quantiles=100,random_state=RANDOM_SEED, output_distribution="normal")
train_data[feature_columns] = scaler.fit_transform(train_data[feature_columns])
valid_data[feature_columns] = scaler.transform(valid_data[feature_columns])
test_data[feature_columns] = scaler.transform(test_data[feature_columns])
return train_data, valid_data, test_data
###Output
_____no_output_____
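###Markdown
To see what the rank-Gauss transform in true_rank_gaus_scale does, here is a tiny self-contained toy example on a skewed column (illustration only, independent of the helpers above):
###Code
# Toy illustration of rank-Gauss scaling: ranks -> (-1, 1) -> inverse error function.
import numpy as np
from scipy.special import erfinv
epsilon = 1e-6
toy = np.array([0.1, 0.2, 0.3, 5.0, 100.0])  # heavily skewed values
r = toy.argsort().argsort()                  # integer ranks 0..n-1
r = (r / r.max() - 0.5) * 2                  # map ranks into (-1, 1)
r = np.clip(r, -1 + epsilon, 1 - epsilon)
print(erfinv(r) * np.sqrt(2))                # evenly spread, roughly Gaussian-scaled values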
###Markdown
Data preprocessing
###Code
def create_dataloader(data, batch_size, shuffle, target_columns=None):
gene_features = [c for c in data.columns if 'g-' in c]
cell_features = [c for c in data.columns if 'c-' in c]
dataset = MoaDataset(data, gene_features, cell_features, target_columns)
return torch.utils.data.DataLoader(dataset,
batch_size=batch_size,
shuffle=shuffle)
def create_meta_dataloader(data, batch_size, shuffle, target_columns=None):
global meta_feature_columns
dataset = MoaMetaDataset(data, feature_ids=meta_feature_columns, target_ids=target_columns)
return torch.utils.data.DataLoader(dataset,
batch_size=batch_size,
shuffle=shuffle)
def pca_transform(fitted_pca, data, feature_columns, sig_ids, base_feature_name):
feature_data = fitted_pca.transform(data[feature_columns].values)
df = pd.DataFrame(feature_data, columns =[f'{base_feature_name}-{i}' for i in range(feature_data.shape[1])])
df['sig_id'] = sig_ids
return df
def preprocess_fold_data(train_data, test_data, fold, dataloader_factory_func, scaling_func=None, add_pca=False):
global feature_columns, target_columns, gene_features, cell_features
fold_train_data = train_data[train_data.fold != fold].reset_index(drop=True)
fold_valid_data = train_data[train_data.fold == fold].reset_index(drop=True)
if add_pca:
fold_data = [fold_train_data, fold_valid_data, test_data]
pca_genes = PCA(n_components=150)
pca_cells = PCA(n_components=30)
pca_genes.fit(fold_train_data[gene_features].values)
pca_cells.fit(fold_train_data[cell_features].values)
for fitted_pca, pca_features, colum_name in [(pca_genes, gene_features, 'g-pca'), (pca_cells, cell_features, 'c-pca')]:
for i, pca_data in enumerate(fold_data):
fitted_pca_data = pca_transform(fitted_pca=fitted_pca,
data=pca_data,
feature_columns=pca_features,
sig_ids=pca_data.sig_id.values,
base_feature_name=colum_name)
fold_data[i] = pd.merge(fold_data[i], fitted_pca_data, on='sig_id')
fold_train_data = fold_data[0]
fold_valid_data = fold_data[1]
test_data = fold_data[2]
if scaling_func is not None:
fold_train_data, fold_valid_data, test_data = scaling_func(fold_train_data, fold_valid_data, test_data, feature_columns)
train_dataloader = dataloader_factory_func(data=fold_train_data, batch_size=BATCH_SIZE, shuffle=True, target_columns=target_columns)
valid_dataloader = dataloader_factory_func(data=fold_valid_data, batch_size=BATCH_SIZE, shuffle=False, target_columns=target_columns)
test_dataloader = dataloader_factory_func(data=test_data, batch_size=BATCH_SIZE, shuffle=False, target_columns=None)
return train_dataloader, valid_dataloader, test_dataloader
###Output
_____no_output_____
###Markdown
Blending functions
###Code
def log_loss_numpy(y_pred):
loss = 0
y_pred_clip = np.clip(y_pred, 1e-15, 1 - 1e-15)
for i in range(y_pred.shape[1]):
loss += - np.mean(y_true[:, i] * np.log(y_pred_clip[:, i]) + (1 - y_true[:, i]) * np.log(1 - y_pred_clip[:, i]))
return loss / y_pred.shape[1]
def func_numpy_metric(weights):
oof_blend = np.tensordot(weights, oof, axes = ((0), (0)))
score = log_loss_numpy(oof_blend)
coef = 1e-6
penalty = coef * (np.sum(weights) - 1) ** 2
return score + penalty
def grad_func(weights):
oof_clip = np.clip(oof, 1e-15, 1 - 1e-15)
gradients = np.zeros(oof.shape[0])
for i in range(oof.shape[0]):
a, b, c = y_true, oof_clip[i], 0
for j in range(oof.shape[0]):
if j != i:
c += weights[j] * oof_clip[j]
gradients[i] = -np.mean((-a*b+(b**2)*weights[i]+b*c)/((b**2)*(weights[i]**2)+2*b*c*weights[i]-b*weights[i]+(c**2)-c))
return gradients
oof = []
y_true = []
def find_optimal_blend(predictions, train_data, target_columns):
global oof, y_true
y_true = train_data.sort_values(by='sig_id')[target_columns].values
oof = np.zeros((len(predictions), y_true.shape[0], y_true.shape[1]))
for i, pred in enumerate(predictions):
oof[i] = pred.sort_values(by='sig_id')[target_columns].values
tol = 1e-10
init_guess = [1 / oof.shape[0]] * oof.shape[0]
bnds = [(0, 1) for _ in range(oof.shape[0])]
cons = {'type': 'eq',
'fun': lambda x: np.sum(x) - 1,
'jac': lambda x: [1] * len(x)}
res_scipy = minimize(fun = func_numpy_metric,
x0 = init_guess,
method = 'SLSQP',
jac = grad_func,
bounds = bnds,
constraints = cons,
tol = tol)
return res_scipy.x
###Output
_____no_output_____
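###Markdown
The weight search expects one out-of-fold prediction frame per model, each with a sig_id column plus the same target columns as train_data. A usage sketch, kept commented out because the OOF frames are produced later by the training loop (the names below are hypothetical):
###Code
# Hypothetical usage of the blending helper defined above.
# oof_predictions = [oof_model_a, oof_model_b]   # OOF dataframes with 'sig_id' + target columns
# blend_weights = find_optimal_blend(oof_predictions, train_data, target_columns)
# print(blend_weights, blend_weights.sum())      # the weights are constrained to sum to ~1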
###Markdown
Utils functions
###Code
def inference(model, data_loader, target_columns):
predictions = []
model.eval()
for batch in data_loader:
batch_predictions = model(batch).sigmoid().detach().cpu().numpy()
sig_ids = np.array(batch['sig_id'])
df = pd.DataFrame(batch_predictions, columns=target_columns)
df['sig_id'] = sig_ids
predictions.append(df)
return pd.concat(predictions).reset_index(drop=True)
def calculate_log_loss(predicted_df, train_df, target_columns):
predicted_df = predicted_df.copy()
train_df = train_df.copy()
predicted_df = predicted_df[target_columns + ['sig_id']].reset_index(drop=True)
predicted_df = predicted_df.sort_values(by=['sig_id'])
predicted_df = predicted_df.drop('sig_id', axis=1)
true_df = train_df[target_columns + ['sig_id']].reset_index(drop=True)
true_df = true_df.sort_values(by=['sig_id'])
true_df = true_df.drop('sig_id', axis=1)
predicted_values = predicted_df.values
true_values = true_df.values
score = 0
loss_per_class = []
for i in range(predicted_values.shape[1]):
_score = log_loss(true_values[:, i].astype(float), predicted_values[:, i].astype(float), eps=1e-15, labels=[1,0])
loss_per_class.append(_score)
score += _score / predicted_values.shape[1]
return score, loss_per_class
def scale_predictions(predictions, target_columns, scale_values=None):
predictions = [p.copy() for p in predictions]
predictions = [p.sort_values(by=['sig_id']).reset_index(drop=True) for p in predictions]
final_predictions = np.zeros((predictions[0].shape[0], len(target_columns)))
for i, p in enumerate(predictions):
p_values = p[target_columns].values
if scale_values is None:
final_predictions += p_values / len(predictions)
else:
final_predictions += (p_values * scale_values[i])
predictions_df = predictions[0].copy()
predictions_df.loc[:, target_columns] = final_predictions
return predictions_df
class TrainFactory:
@classmethod
def model_version1(cls, train_loader, epochs):
global model_config, DEVICE
model = MoaModel_V1(model_config).to(DEVICE)
best_model = MoaModel_V1(model_config).to(DEVICE)
optimizer = torch.optim.Adam(model.parameters(), lr=0.04647353847564317, weight_decay=8.087569236449597e-06)
scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=len(train_loader)*epochs//2, num_training_steps=len(train_loader)*epochs)
loss_fn = nn.BCEWithLogitsLoss()
return model, best_model, optimizer, scheduler, loss_fn
@classmethod
def model_version2(cls, train_loader, epochs):
global model_config, DEVICE
model = MoaModel_V2(model_config).to(DEVICE)
best_model = MoaModel_V2(model_config).to(DEVICE)
optimizer = torch.optim.Adam(params=model.parameters(),
lr=1e-3,
weight_decay=1e-5)
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer=optimizer,
max_lr=1e-2,
epochs=epochs,
steps_per_epoch=len(train_loader))
loss_fn = nn.BCEWithLogitsLoss()
return model, best_model, optimizer, scheduler, loss_fn
@classmethod
def model_version3(cls, train_loader, epochs):
global model_config, DEVICE
model = MoaModel_V3(model_config).to(DEVICE)
best_model = MoaModel_V3(model_config).to(DEVICE)
optimizer = torch.optim.Adam(params=model.parameters(),
lr=1e-3,
weight_decay=1e-5)
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer=optimizer,
max_lr=1e-2,
epochs=epochs,
steps_per_epoch=len(train_loader))
loss_fn = nn.BCEWithLogitsLoss()
return model, best_model, optimizer, scheduler, loss_fn
@classmethod
def model_version4(cls, train_loader, epochs):
global model_config, DEVICE
model = MoaModel_V4(model_config).to(DEVICE)
best_model = MoaModel_V4(model_config).to(DEVICE)
optimizer = torch.optim.Adam(params=model.parameters(),
lr=1e-3,
weight_decay=1e-5)
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer=optimizer,
max_lr=5e-3,
epochs=epochs,
steps_per_epoch=len(train_loader))
loss_fn = nn.BCEWithLogitsLoss()
return model, best_model, optimizer, scheduler, loss_fn
@classmethod
def meta_model(cls, train_loader, epochs):
global model_config, DEVICE
model = MetaModel(model_config).to(DEVICE)
best_model = MetaModel(model_config).to(DEVICE)
optimizer = torch.optim.Adam(params=model.parameters(),
lr=1e-3,
weight_decay=1e-5)
scheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer=optimizer,
max_lr=1e-2,
epochs=epochs,
steps_per_epoch=len(train_loader))
loss_fn = nn.BCEWithLogitsLoss()
return model, best_model, optimizer, scheduler, loss_fn
def train_model(model, best_model, optimizer, scheduler, loss_fn, train_loader, valid_loader, test_loader, epochs):
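    # Standard training loop: optimizes on train_loader, keeps the weights with the best
    # validation log loss, and returns that checkpoint's validation and test predictions.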
global gene_features, cell_features, target_columns
train_data = train_loader.dataset.dataset_df
valid_data = valid_loader.dataset.dataset_df
best_loss = np.inf
for epoch in range(epochs):
model.train()
train_loss = 0
for train_batch in train_loader:
optimizer.zero_grad()
y_pred = model(train_batch)
y_true = train_batch['y'].to(DEVICE)
curr_train_loss = loss_fn(y_pred, y_true)
curr_train_loss.backward()
optimizer.step()
scheduler.step()
train_loss += ( curr_train_loss.item() * (len(train_batch['sig_id']) / len(train_data)))
valid_predictions = inference(model, valid_loader, target_columns)
valid_loss, _ = calculate_log_loss(valid_predictions, valid_data, target_columns)
if valid_loss < best_loss:
best_loss = valid_loss
best_model.load_state_dict(model.state_dict())
if (epoch + 1) % 5 == 0:
print(f'Epoch:{epoch} \t train_loss:{train_loss:.10f} \t valid_loss:{valid_loss:.10f}')
valid_predictions = inference(best_model, valid_loader, target_columns)
test_predictions = inference(best_model, test_loader, target_columns)
return best_model, valid_predictions, test_predictions
#Hyperparameters
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
FOLDS = 5
EPOCHS = 1
BATCH_SIZE = 4092
SEEDS = [11, 221, 50]
#Creating the cross validation strategy
train_data = create_cross_validation_strategy(train_data, target_columns, FOLDS, RANDOM_SEED)
train_data = train_data[train_data.cp_type == 'trt_cp'].reset_index(drop=True)
class ModelTrainConfig:
def __init__(self, model_name, factory_func, scaling_func, add_pca):
self.model_name = model_name
self.factory_func = factory_func
self.scaling_func = scaling_func
self.add_pca = add_pca
model_version1 = ModelTrainConfig(model_name='version_1',
factory_func=TrainFactory.model_version1,
scaling_func=true_rankgaus_scaling,
add_pca=False)
model_version2 = ModelTrainConfig(model_name='version_2',
factory_func=TrainFactory.model_version2,
scaling_func=quantile_dosetime_scaling,
add_pca=False)
model_version3 = ModelTrainConfig(model_name='version_3',
factory_func=TrainFactory.model_version3,
scaling_func=quantile_scaling,
add_pca=False)
model_version4 = ModelTrainConfig(model_name='version_4',
factory_func=TrainFactory.model_version4,
scaling_func=quantile_scaling,
add_pca=False)
meta_model = ModelTrainConfig(model_name='meta_model',
factory_func=TrainFactory.meta_model,
scaling_func=None,
add_pca=False)
models_train_configs = [model_version1, model_version2, model_version3, model_version4]
meta_models_train_configs = [meta_model]
models_valid_predictions = []
models_test_predictions = []
seed_losses = []
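# Level-1 models: for each configuration, load the pretrained fold/seed checkpoints,
# run inference only, then average predictions across folds and seeds.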
for model_train_config in models_train_configs:
print(f'Training model:{model_train_config.model_name}')
single_model_valid_predictions = []
single_model_test_predictions = []
for seed in SEEDS:
seed_everything(seed)
model_seed_valid_predictions = []
model_seed_test_predictions = []
for fold in range(FOLDS):
print(f'Training fold: {fold}')
fold_test_data = test_data[test_data.cp_type == 'trt_cp'].reset_index(drop=True)
train_loader, valid_loader, test_loader = preprocess_fold_data(train_data=train_data,
test_data=fold_test_data,
fold=fold,
dataloader_factory_func=create_dataloader,
scaling_func=model_train_config.scaling_func)
number_of_genes, number_of_cells = train_loader.dataset.number_of_features()
model_config = ModelConfig(number_of_features=number_of_genes + number_of_cells,
number_of_genes=number_of_genes,
number_of_cells=number_of_cells,
number_of_targets=len(target_columns))
model, _, optimizer, scheduler, loss_fn = model_train_config.factory_func(train_loader, EPOCHS)
model.load_state_dict(torch.load(f'../input/moablogdataset/model-{model_train_config.model_name}_fold-{fold}_seed-{seed}',
map_location=torch.device(DEVICE)))
valid_predictions = inference(model, valid_loader, target_columns)
test_predictions = inference(model, test_loader, target_columns)
model_seed_valid_predictions.append(valid_predictions)
model_seed_test_predictions.append(test_predictions)
print('-' * 100)
valid_predictions = pd.concat(model_seed_valid_predictions).reset_index(drop=True)
test_predictions = scale_predictions(model_seed_test_predictions, target_columns)
single_model_valid_predictions.append(valid_predictions)
single_model_test_predictions.append(test_predictions)
valid_loss, _ = calculate_log_loss(valid_predictions, train_data, target_columns)
seed_losses.append(valid_loss)
print(f'Model:{model_train_config.model_name} \t Seed:{seed} \t oof_loss:{valid_loss:.10f}')
valid_predictions = scale_predictions(single_model_valid_predictions, target_columns)
test_predictions = scale_predictions(single_model_test_predictions, target_columns)
models_valid_predictions.append(valid_predictions)
models_test_predictions.append(test_predictions)
valid_loss, _ = calculate_log_loss(valid_predictions, train_data, target_columns)
print(f'Model:{model_train_config.model_name} \t valid_loss:{valid_loss:.10f}')
#Finding optimal blend weights
blend_weights = find_optimal_blend(models_valid_predictions, train_data, target_columns)
print(f'Optimal blend weights: {blend_weights}')
level1_valid_predictions = scale_predictions(models_valid_predictions, target_columns, blend_weights)
level1_test_predictions = scale_predictions(models_test_predictions, target_columns, blend_weights)
meta_features = pd.DataFrame(data=train_data.sig_id.values, columns=['sig_id'])
for version, valid_predictions in enumerate(models_valid_predictions):
df = valid_predictions.rename(columns={v:f'meta-f-{i}-model{version + 1}' if v != 'sig_id' else v for i, v in enumerate(valid_predictions.columns)})
meta_features = pd.merge(meta_features, df, on='sig_id')
train_data = pd.merge(train_data, meta_features, on='sig_id')
meta_features = pd.DataFrame(data=test_data.sig_id.values, columns=['sig_id'])
for version, test_predictions in enumerate(models_test_predictions):
df = test_predictions.rename(columns={v:f'meta-f-{i}-model{version + 1}' if v != 'sig_id' else v for i, v in enumerate(test_predictions.columns)})
meta_features = pd.merge(meta_features, df, on='sig_id')
test_data = pd.merge(test_data, meta_features, on='sig_id')
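# Level-2 stacking: the level-1 out-of-fold/test predictions are merged back into the
# train/test tables as 'meta-f-*' meta-features used by the meta model below.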
meta_feature_columns = [c for c in train_data.columns if 'meta-f-' in c]
models_valid_predictions = []
models_test_predictions = []
seed_losses = []
for model_train_config in meta_models_train_configs:
print(f'Training model:{model_train_config.model_name}')
single_model_valid_predictions = []
single_model_test_predictions = []
for seed in SEEDS:
seed_everything(seed)
model_seed_valid_predictions = []
model_seed_test_predictions = []
for fold in range(FOLDS):
print(f'Training fold: {fold}')
fold_test_data = test_data[test_data.cp_type == 'trt_cp'].reset_index(drop=True)
train_loader, valid_loader, test_loader = preprocess_fold_data(train_data=train_data,
test_data=fold_test_data,
fold=fold,
dataloader_factory_func=create_meta_dataloader,
scaling_func=model_train_config.scaling_func)
model_config = ModelConfig(number_of_features=len(feature_columns),
number_of_genes=0,
number_of_cells=0,
number_of_targets=len(target_columns))
model, _, optimizer, scheduler, loss_fn = model_train_config.factory_func(train_loader, EPOCHS)
model.load_state_dict(torch.load(f'../input/moablognotebook-stacking/model-{model_train_config.model_name}_fold-{fold}_seed-{seed}',
map_location=torch.device(DEVICE)))
valid_predictions = inference(model, valid_loader, target_columns)
test_predictions = inference(model, test_loader, target_columns)
model_seed_valid_predictions.append(valid_predictions)
model_seed_test_predictions.append(test_predictions)
print('-' * 100)
valid_predictions = pd.concat(model_seed_valid_predictions).reset_index(drop=True)
test_predictions = scale_predictions(model_seed_test_predictions, target_columns)
single_model_valid_predictions.append(valid_predictions)
single_model_test_predictions.append(test_predictions)
valid_loss, _ = calculate_log_loss(valid_predictions, train_data, target_columns)
seed_losses.append(valid_loss)
print(f'Model:{model_train_config.model_name} \t Seed:{seed} \t oof_loss:{valid_loss:.10f}')
valid_predictions = scale_predictions(single_model_valid_predictions, target_columns)
test_predictions = scale_predictions(single_model_test_predictions, target_columns)
models_valid_predictions.append(valid_predictions)
models_test_predictions.append(test_predictions)
valid_loss, _ = calculate_log_loss(valid_predictions, train_data, target_columns)
print(f'Model:{model_train_config.model_name} \t valid_loss:{valid_loss:.10f}')
blend_weights = find_optimal_blend(models_valid_predictions, train_data, target_columns)
print(f'Optimal blend weights: {blend_weights}')
level2_valid_predictions = scale_predictions(models_valid_predictions, target_columns, blend_weights)
level2_test_predictions = scale_predictions(models_test_predictions, target_columns, blend_weights)
combined_models_valid = [level1_valid_predictions, level2_valid_predictions]
combined_models_test = [level1_test_predictions, level2_test_predictions]
#Finding optimal blend weights
blend_weights = find_optimal_blend(combined_models_valid, train_data, target_columns)
print(f'Optimal blend weights: {blend_weights}')
valid_predictions = scale_predictions(combined_models_valid, target_columns, blend_weights)
test_predictions = scale_predictions(combined_models_test, target_columns, blend_weights)
# for i, model_config in enumerate(models_train_configs):
# models_valid_predictions[i].to_csv(f'{model_config.model_name}.csv', index=False)
validation_loss, _ = calculate_log_loss(valid_predictions, train_data, target_columns)
print(f'Validation loss: {validation_loss}')
test_data = pd.read_csv('../input/lish-moa/test_features.csv')
zero_ids = test_data[test_data.cp_type == 'ctl_vehicle'].sig_id.values
zero_df = pd.DataFrame(np.zeros((len(zero_ids), len(target_columns))), columns=target_columns)
zero_df['sig_id'] = zero_ids
nonzero_df = test_predictions[~test_predictions.sig_id.isin(zero_ids)]
nonzero_df = nonzero_df[target_columns + ['sig_id']].reset_index(drop=True)
submission = pd.concat([nonzero_df, zero_df])
print(test_data.shape)
print(test_predictions.shape)
submission.to_csv('submission.csv', index=False)
submission.head()
###Output
_____no_output_____ |
Feature_Selection_On_Boston_Dataset.ipynb | ###Markdown
Feature Selection with sklearn and Pandas

Feature selection is one of the first and most important steps while performing any machine learning task. A feature, in the case of a dataset, simply means a column. When we get any dataset, not necessarily every column (feature) is going to have an impact on the output variable. If we add these irrelevant features to the model, it will just make the model worse (Garbage In, Garbage Out). This gives rise to the need for feature selection. Feature selection can be done in multiple ways, but there are broadly 3 categories of it:
1. Filter Method
2. Wrapper Method
3. Embedded Method
###Code
#importing libraries
from sklearn.datasets import load_boston
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import RFE
from sklearn.linear_model import RidgeCV, LassoCV, Ridge, Lasso
#Loading the dataset
x = load_boston()
x.data
x.feature_names
x.target
df=pd.DataFrame(x.data, columns=x.feature_names)
df.head(5)
df["MEDV"]=x.target
df.head()
df.shape
x = df.drop("MEDV", axis=1) #Feature Matrix
y = df["MEDV"] #Target Variable
###Output
_____no_output_____
###Markdown
Linear Regression model with all features
###Code
#Splitting the dataset into a training set and a testing set
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.2,random_state=5)
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
# Since the range of values of raw data varies widely, in some machine learning algorithms, objective functions will not work properly without normalization.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(x_train,y_train)
y_pred=model.predict(x_test)
y_pred
print("Accuracy of the model is {:.2f} %" .format(model.score(x_test,y_test)*100))
###Output
Accuracy of the model is 73.30 %
###Markdown
Feature Selection

Feature selection is a technique which selects the most relevant features from the original dataset.

1. Filter Method

As the name suggests, in this method you filter and take only the subset of the relevant features. The model is built after selecting the features. The filtering here is done using a correlation matrix, and it is most commonly done using Pearson correlation.
###Code
#Using Pearson Correlation
plt.figure(figsize=(17,10))
cor = df.corr()
sns.heatmap(cor, annot=True)
plt.show()
cor["MEDV"]
#Correlation with output variable
cor_target = abs(cor["MEDV"])
cor_target
#Selecting highly correlated features
relevant_features = cor_target[cor_target>0.5]
relevant_features
###Output
_____no_output_____
###Markdown
As we can see, only the features RM, PTRATIO and LSTAT are highly correlated with the output variable MEDV. Hence we will drop all other features apart from these. If these variables are correlated with each other, then we need to keep only one of them and drop the rest. So let us check the correlation of selected features with each other. This can be done either by visually checking it from the above correlation matrix or from the code snippet below.
###Code
print(df[["LSTAT","PTRATIO"]].corr())
print()
print(df[["RM","LSTAT"]].corr())
###Output
LSTAT PTRATIO
LSTAT 1.000000 0.374044
PTRATIO 0.374044 1.000000
RM LSTAT
RM 1.000000 -0.613808
LSTAT -0.613808 1.000000
###Markdown
From the above code, it is seen that the variables RM and LSTAT are highly correlated with each other (-0.613808). Hence we keep only one variable and drop the other. We will keep LSTAT since its correlation with MEDV is higher than that of RM. After dropping RM, we are left with two features, LSTAT and PTRATIO. These are the final features given by Pearson correlation.

Linear Regression model with the selected features
###Code
#Splitting the dataset into a training set and a testing set
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x[["LSTAT","PTRATIO"]],y,test_size=0.2,random_state=5)
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
# Since the range of values of raw data varies widely, in some machine learning algorithms, objective functions will not work properly without normalization.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(x_train,y_train)
y_pred=model.predict(x_test)
y_pred
print("Accuracy of the model is {:.2f} %" .format(model.score(x_test,y_test)*100))
###Output
Accuracy of the model is 55.28 %
###Markdown
2. Wrapper Method

A wrapper method needs one machine learning algorithm and uses its performance as the evaluation criterion. This means you feed the features to the selected machine learning algorithm and, based on the model performance, you add/remove features. This is an iterative and computationally expensive process, but it is more accurate than the filter method. There are different wrapper methods such as Backward Elimination, Forward Selection, Bidirectional Elimination and RFE.

i. Backward Elimination

As the name suggests, we first feed all the possible features to the model. We check the performance of the model and then iteratively remove the worst performing features one by one until the overall performance of the model comes into an acceptable range. The performance metric used here to evaluate feature performance is the p-value. If the p-value is above 0.05 then we remove the feature, else we keep it. Here we are using the OLS model, which stands for "Ordinary Least Squares". This model is used for performing linear regression.
###Code
# pvalues for one iteration
#Adding constant column of ones, mandatory for sm.OLS model
x_1 = sm.add_constant(x)
#Fitting sm.OLS model
model = sm.OLS(y,x_1).fit()
model.pvalues
model.pvalues.idxmax()
max(model.pvalues)
###Output
_____no_output_____
###Markdown
As we can see, the variable "AGE" has the highest p-value of 0.9582293, which is greater than 0.05. Hence we will remove this feature and build the model once again. This is an iterative process and can be performed at once with the help of a loop.
###Code
#Backward Elimination
cols = list(x.columns)
while len(cols)>0:
x_1 = sm.add_constant(x[cols])
model = sm.OLS(y,x_1).fit()
pmax=max(model.pvalues)
feature_with_pmax=model.pvalues.idxmax()
if(pmax>0.05):
cols.remove(feature_with_pmax)
else:
break
selected_features_be = cols
print(selected_features_be)
###Output
['CRIM', 'ZN', 'CHAS', 'NOX', 'RM', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT']
###Markdown
This approach gives the final set of variables, which are CRIM, ZN, CHAS, NOX, RM, DIS, RAD, TAX, PTRATIO, B and LSTAT.

Linear Regression model with the selected features
###Code
#Splitting the dataset into a training set and a testing set
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x[selected_features_be],y,test_size=0.2,random_state=5)
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
# Since the range of values of raw data varies widely, in some machine learning algorithms, objective functions will not work properly without normalization.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(x_train,y_train)
y_pred=model.predict(x_test)
y_pred
print("Accuracy of the model is {:.2f} %" .format(model.score(x_test,y_test)*100))
###Output
Accuracy of the model is 73.30 %
###Markdown
ii. RFE (Recursive Feature Elimination)

The Recursive Feature Elimination (RFE) method works by recursively removing attributes and building a model on those attributes that remain. It uses the model's accuracy metric to rank the features according to their importance. The RFE method takes the model to be used and the number of required features as input. It then gives the ranking of all the variables, 1 being the most important. It also gives its support, True being a relevant feature and False an irrelevant feature.
###Code
model = LinearRegression()
#Initializing RFE model
rfe = RFE(model, n_features_to_select=7)
#Transforming data using RFE
x_rfe = rfe.fit_transform(x,y)
temp = pd.Series(rfe.support_, index = x.columns)
selected_features_rfe = temp[temp==True].index
print(rfe.support_)
print()
print(rfe.ranking_)
print()
print(selected_features_rfe)
print()
print(rfe.n_features_)
###Output
[False False False True True True False True True False True False
True]
[2 4 3 1 1 1 7 1 1 5 1 6 1]
Index(['CHAS', 'NOX', 'RM', 'DIS', 'RAD', 'PTRATIO', 'LSTAT'], dtype='object')
7
###Markdown
Here we took the LinearRegression model with 7 features and RFE gave the feature ranking as above, but the selection of the number "7" was arbitrary. Now we need to find the optimum number of features, for which the accuracy is the highest. We do that by using a loop starting with 1 feature and going up to 13. We then take the one for which the accuracy is highest.
###Code
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=5)
model = LinearRegression()
max_score=0
nof=0
for n in range(1, x.shape[1]+1):
#Initializing RFE model
    rfe = RFE(model, n_features_to_select=n)
x_train_rfe = rfe.fit_transform(x_train,y_train)
x_test_rfe = rfe.transform(x_test)
#Fitting the data to model
model.fit(x_train_rfe,y_train)
# computing score of the model
score=model.score(x_test_rfe,y_test)
if(max_score<score):
max_score=score
nof=n
print("Optimum number of features: %d" %nof)
print("Score with %d features: %f" % (nof, max_score))
cols = list(x.columns)
model = LinearRegression()
#Initializing RFE model
rfe = RFE(model, n_features_to_select=nof)
#Transforming data using RFE
X_rfe = rfe.fit_transform(x,y)
#Fitting the data to model
model.fit(X_rfe,y)
temp = pd.Series(rfe.support_, index = cols)
selected_features_rfe = temp[temp==True].index
print(selected_features_rfe)
###Output
Index(['CHAS', 'NOX', 'RM', 'DIS', 'PTRATIO'], dtype='object')
###Markdown
Linear Regression model with the selected features
###Code
#Splitting the dataset into a training set and a testing set
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x[selected_features_rfe],y,test_size=0.2,random_state=5)
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
# Since the range of values of raw data varies widely, in some machine learning algorithms, objective functions will not work properly without normalization.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(x_train,y_train)
y_pred=model.predict(x_test)
y_pred
print("Accuracy of the model is {:.2f} %" .format(model.score(x_test,y_test)*100))
###Output
Accuracy of the model is 77.47 %
###Markdown
3. Embedded Method

Embedded methods are iterative in the sense that they take care of each iteration of the model training process and carefully extract those features which contribute the most to the training for a particular iteration. Regularization methods are the most commonly used embedded methods; they penalize a feature given a coefficient threshold. Here we will do feature selection using Lasso regularization. If a feature is irrelevant, Lasso penalizes its coefficient and makes it 0. Hence the features with coefficient = 0 are removed and the rest are kept.

i. Lasso regression
###Code
reg = LassoCV()
reg.fit(x, y)
print("Best alpha using built-in LassoCV: %f" % reg.alpha_)
print("Best score using built-in LassoCV: %f" %reg.score(x,y))
reg.coef_
coef = pd.Series(reg.coef_, index = x.columns)
coef
print("Lasso picked " + str(coef[coef!=0].count()) + " variables and eliminated the other " + str(coef[coef==0].count()) + " variables")
imp_coef = coef.sort_values()
import matplotlib
plt.figure(figsize=(15, 10))
imp_coef.plot(kind = "barh")
plt.title("Feature importance using Lasso Model", fontsize=20)
###Output
_____no_output_____
###Markdown
Here Lasso model has taken all the features except NOX, CHAS and INDUS.
###Code
selected_feature_LS=coef[coef!=0].index
selected_feature_LS
###Output
_____no_output_____
###Markdown
Linear Regression model with the selected features
###Code
#Splitting the dataset into a training set and a testing set
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x[selected_feature_LS],y,test_size=0.2,random_state=5)
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
# Since the range of values of raw data varies widely, in some machine learning algorithms, objective functions will not work properly without normalization.
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
from sklearn.linear_model import LinearRegression
model=LinearRegression()
model.fit(x_train,y_train)
y_pred=model.predict(x_test)
y_pred
print("Accuracy of the model is {:.2f} %" .format(model.score(x_test,y_test)*100))
###Output
Accuracy of the model is 70.03 %
###Markdown
ii. Ridge regression

Ridge regression is basically a regularization technique and an embedded feature selection technique as well.
###Code
ridge = Ridge()
ridge.fit(x,y)
ridge.coef_
r_coef = pd.Series(ridge.coef_, index = x.columns)
r_coef
###Output
_____no_output_____
###Markdown
We can inspect the coefficient term for each feature variable; this again helps us to choose the most essential features.
###Code
imp_coef = r_coef.sort_values()
plt.figure(figsize=(15, 10))
imp_coef.plot(kind = "barh")
plt.title("Feature importance using Ridge Model", fontsize=20)
###Output
_____no_output_____
###Markdown
iii. Tree-based feature selection

Bagged decision trees like Random Forest and Extra Trees can be used to estimate the importance of features. In the example below we construct an ExtraTreesRegressor regressor (use the ExtraTreesClassifier classifier for classification problems). Tree-based estimators can be used to compute feature importances, which in turn can be used to discard irrelevant features (when coupled with the sklearn.feature_selection.SelectFromModel meta-transformer).
###Code
# Feature Importance with Extra Trees Regressor
from pandas import read_csv
from sklearn.ensemble import ExtraTreesRegressor
# feature extraction
etr = ExtraTreesRegressor()
etr.fit(x, y)
etr_coef=etr.feature_importances_
print(etr_coef)
etr1_coef = pd.Series(etr_coef, index = x.columns)
etr1_coef
###Output
_____no_output_____
###Markdown
You can see that we are given an importance score for each attribute, where the larger the score, the more important the attribute.
###Code
imp_coef = etr1_coef.sort_values()
plt.figure(figsize=(15, 10))
imp_coef.plot(kind = "barh")
plt.title("Feature importance using Ridge Model", fontsize=20)
from sklearn.feature_selection import SelectFromModel
sfm = SelectFromModel(etr, prefit=True)
x_n = sfm.transform(x)
x_n.shape
###Output
_____no_output_____
###Markdown
Linear Regression model with the selected features
###Code
#Splitting the dataset into a training set and a testing set
x_train,x_test,y_train,y_test=train_test_split(x_n,y,test_size=0.2,random_state=5)
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
# Since the range of values of raw data varies widely, in some machine learning algorithms, objective functions will not work properly without normalization.
scaler = StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
model=LinearRegression()
model.fit(x_train,y_train)
y_pred=model.predict(x_test)
print("Accuracy of the model is {:.2f} %" .format(model.score(x_test,y_test)*100))
###Output
(404, 3)
(404,)
(102, 3)
(102,)
Accuracy of the model is 69.16 %
###Markdown
4. Univariate feature selection

Univariate feature selection works by selecting the best features based on univariate statistical tests. Statistical tests can be used to select those features that have the strongest relationship with the output variable. The scikit-learn library provides the SelectKBest class that can be used with a suite of different statistical tests to select a specific number of features.

Statistical tests:
1. For regression: f_regression, mutual_info_regression
2. For classification: chi2, f_classif, mutual_info_classif

The methods based on the F-test estimate the degree of linear dependency between two random variables. On the other hand, mutual information methods can capture any kind of statistical dependency, but being nonparametric, they require more samples for accurate estimation. The example below uses the f_regression statistical test.
###Code
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_regression
# feature extraction
skb = SelectKBest(score_func=f_regression, k=13)
fit=skb.fit(x, y)
x_new=skb.fit_transform(x, y) # or fit.transform(x)
x_new.shape
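# Hedged aside (illustrative only): when k is set below the total number of features,
# the names of the selected columns can be recovered from the fitted selector's mask.
kbest_selected_features = x.columns[skb.get_support()]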
###Output
_____no_output_____
###Markdown
Linear Regression model with the selected features
###Code
#Splitting the dataset into a training set and a testing set
x_train,x_test,y_train,y_test=train_test_split(x_new,y,test_size=0.2,random_state=5)
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
# Since the range of values of raw data varies widely, in some machine learning algorithms, objective functions will not work properly without normalization.
scaler = StandardScaler()
scaler.fit(x_train)
x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
model=LinearRegression()
model.fit(x_train,y_train)
y_pred=model.predict(x_test)
print("Accuracy of the model is {:.2f} %" .format(model.score(x_test,y_test)*100))
###Output
(404, 13)
(404,)
(102, 13)
(102,)
Accuracy of the model is 73.30 %
|
tests/test_tensorflow_utils/Taylor_Expansion.ipynb | ###Markdown
keras functional api
###Code
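# Imports assumed for this cell (the notebook itself does not show them): standard
# TensorFlow/Keras symbols. AutoTaylorExpansion is the layer under test, defined
# elsewhere in this repository; its import path is not shown here.
import tensorflow as tf
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model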
inputs = Input(shape=(32, 32, 3))
outputs = AutoTaylorExpansion(a=1.0, func=tf.math.sin, n_terms=3)(inputs)
model = Model(inputs, outputs)
model.summary()
###Output
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 32, 32, 3)] 0
_________________________________________________________________
auto_taylor_expansion_5 (Aut (None, 32, 32, 3, 3) 0
=================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
_________________________________________________________________
|
nbs/016_data.preprocessing.ipynb | ###Markdown
Data preprocessing
> Functions used to preprocess time series (both X and y).
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.external import *
from tsai.data.core import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
#export
class ToNumpyCategory(Transform):
"Categorize a numpy batch"
order = 90
def __init__(self, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: np.ndarray):
self.type = type(o)
self.cat = Categorize()
self.cat.setup(o)
self.vocab = self.cat.vocab
return np.asarray(stack([self.cat(oi) for oi in o]))
def decodes(self, o: (np.ndarray, torch.Tensor)):
return stack([self.cat.decode(oi) for oi in o])
t = ToNumpyCategory()
y_cat = t(y)
y_cat[:10]
test_eq(t.decode(tensor(y_cat)), y)
test_eq(t.decode(np.array(y_cat)), y)
#export
class OneHot(Transform):
"One-hot encode/ decode a batch"
order = 90
def __init__(self, n_classes=None, **kwargs):
self.n_classes = n_classes
super().__init__(**kwargs)
def encodes(self, o: torch.Tensor):
if not self.n_classes: self.n_classes = len(np.unique(o))
return torch.eye(self.n_classes)[o]
def encodes(self, o: np.ndarray):
o = ToNumpyCategory()(o)
if not self.n_classes: self.n_classes = len(np.unique(o))
return np.eye(self.n_classes)[o]
def decodes(self, o: torch.Tensor): return torch.argmax(o, dim=-1)
def decodes(self, o: np.ndarray): return np.argmax(o, axis=-1)
oh_encoder = OneHot()
y_cat = ToNumpyCategory()(y)
oht = oh_encoder(y_cat)
oht[:10]
n_classes = 10
n_samples = 100
t = torch.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oht = oh_encoder(t)
test_eq(oht.shape, (n_samples, n_classes))
test_eq(torch.argmax(oht, dim=-1), t)
test_eq(oh_encoder.decode(oht), t)
n_classes = 10
n_samples = 100
a = np.random.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oha = oh_encoder(a)
test_eq(oha.shape, (n_samples, n_classes))
test_eq(np.argmax(oha, axis=-1), a)
test_eq(oh_encoder.decode(oha), a)
#export
class Nan2Value(Transform):
"Replaces any nan values by a predefined value or median"
order = 90
def __init__(self, value=0, median=False, by_sample_and_var=True):
store_attr()
def encodes(self, o:TSTensor):
mask = torch.isnan(o)
if mask.any():
if self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[mask] = median[mask]
else:
# o = torch.nan_to_num(o, torch.nanmedian(o)) # Only available in Pytorch 1.8
o = torch_nan_to_num(o, torch.nanmedian(o))
# o = torch.nan_to_num(o, self.value) # Only available in Pytorch 1.8
o = torch_nan_to_num(o, self.value)
return o
o = TSTensor(torch.randn(16, 10, 100))
o[0,0] = float('nan')
o[o > .9] = float('nan')
o[[0,1,5,8,14,15], :, -20:] = float('nan')
nan_vals1 = torch.isnan(o).sum()
o2 = Pipeline(Nan2Value(), split_idx=0)(o.clone())
o3 = Pipeline(Nan2Value(median=True, by_sample_and_var=True), split_idx=0)(o.clone())
o4 = Pipeline(Nan2Value(median=True, by_sample_and_var=False), split_idx=0)(o.clone())
nan_vals2 = torch.isnan(o2).sum()
nan_vals3 = torch.isnan(o3).sum()
nan_vals4 = torch.isnan(o4).sum()
test_ne(nan_vals1, 0)
test_eq(nan_vals2, 0)
test_eq(nan_vals3, 0)
test_eq(nan_vals4, 0)
# export
class TSStandardize(Transform):
"""Standardizes batch of type `TSTensor`
Args:
- mean: you can pass a precalculated mean value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch.
- std: you can pass a precalculated std value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch. If both mean and std values are passed when instantiating TSStandardize, the rest of arguments won't be used.
- by_sample: if True, it will calculate mean and std for each individual sample. Otherwise based on the entire batch.
- by_var:
* False: mean and std will be the same for all variables.
* True: a mean and std will be be different for each variable.
* a list of ints: (like [0,1,3]) a different mean and std will be set for each variable on the list. Variables not included in the list
won't be standardized.
* a list that contains a list/lists: (like[0, [1,3]]) a different mean and std will be set for each element of the list. If multiple elements are
included in a list, the same mean and std will be set for those variable in the sublist/s. (in the example a mean and std is determined for
variable 0, and another one for variables 1 & 3 - the same one). Variables not included in the list won't be standardized.
- by_step: if False, it will standardize values for each time step.
- eps: it avoids dividing by 0
- use_single_batch: if True a single training batch will be used to calculate mean & std. Else the entire training set will be used.
"""
parameters, order = L('mean', 'std'), 90
_setup = True # indicates it requires set up
def __init__(self, mean=None, std=None, by_sample=False, by_var=False, by_step=False, eps=1e-8, use_single_batch=True, verbose=False):
self.mean = tensor(mean) if mean is not None else None
self.std = tensor(std) if std is not None else None
self._setup = (mean is None or std is None) and not by_sample
self.eps = eps
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.mean is not None or self.std is not None:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, mean, std): return cls(mean, std)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
if len(self.mean.shape) == 0:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} mean shape={self.mean.shape}, std shape={self.std.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.mean, self.std = torch.zeros(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
return (o - self.mean) / self.std
def decodes(self, o:TSTensor):
if self.mean is None or self.std is None: return o
return o * self.std + self.mean
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
from tsai.data.validation import TimeSplitter
X_nan = np.random.rand(100, 5, 10)
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 0] = float('nan')
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 1, -10:] = float('nan')
batch_tfms = TSStandardize(by_var=True)
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
test_eq(torch.isnan(dls.after_batch[0].mean).sum(), 0)
test_eq(torch.isnan(dls.after_batch[0].std).sum(), 0)
xb = first(dls.train)[0]
test_ne(torch.isnan(xb).sum(), 0)
test_ne(torch.isnan(xb).sum(), torch.isnan(xb).numel())
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
xb = first(dls.train)[0]
test_eq(torch.isnan(xb).sum(), 0)
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=True)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True, by_var=False, verbose=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=False)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
#export
@patch
def mul_min(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.min(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
min_x = x
for ax in axes: min_x, _ = min_x.min(ax, keepdim)
return retain_type(min_x, x)
@patch
def mul_max(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.max(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
max_x = x
for ax in axes: max_x, _ = max_x.max(ax, keepdim)
return retain_type(max_x, x)
class TSNormalize(Transform):
"Normalizes batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, range=(-1, 1), by_sample=False, by_var=False, by_step=False, clip_values=True,
use_single_batch=True, verbose=False):
self.min = tensor(min) if min is not None else None
self.max = tensor(max) if max is not None else None
self._setup = (self.min is None and self.max is None) and not by_sample
self.range_min, self.range_max = range
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.clip_values = clip_values
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.min is not None or self.max is not None:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
    def from_stats(cls, min, max, range_min=0, range_max=1): return cls(min, max, (range_min, range_max))
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
if len(self.min.shape) == 0:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} min shape={self.min.shape}, max shape={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.min, self.max = -torch.ones(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.ones(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
output = ((o - self.min) / (self.max - self.min)) * (self.range_max - self.range_min) + self.range_min
if self.clip_values:
if self.by_var and is_listy(self.by_var):
for v in self.by_var:
if not is_listy(v): v = [v]
output[:, v] = torch.clamp(output[:, v], self.range_min, self.range_max)
else:
output = torch.clamp(output, self.range_min, self.range_max)
return output
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms = [TSNormalize()]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms=[TSNormalize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms = [TSNormalize(by_var=[0, [1, 2]], use_single_batch=False, clip_values=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb[:, [0, 1, 2]].max() <= 1
assert xb[:, [0, 1, 2]].min() >= -1
#export
class TSClipOutliers(Transform):
"Clip outliers batch of type `TSTensor` based on the IQR"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, by_sample=False, by_var=False, use_single_batch=False, verbose=False):
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
self._setup = (min is None or max is None) and not by_sample
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
if min is not None or max is not None:
pv(f'{self.__class__.__name__} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
min, max = get_outliers_IQR(o, self.axis)
self.min, self.max = tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.axis is None: return torch.clamp(o, self.min, self.max)
elif self.by_sample:
min, max = get_outliers_IQR(o, axis=self.axis)
self.min, self.max = o.new(min), o.new(max)
return torch_clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
batch_tfms=[TSClipOutliers(-1, 1, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
# export
class TSClip(Transform):
"Clip batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=-6, max=6):
self.min = torch.tensor(min)
self.max = torch.tensor(max)
def encodes(self, o:TSTensor):
return torch.clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
t = TSTensor(torch.randn(10, 20, 100)*10)
test_le(TSClip()(t).max().item(), 6)
test_ge(TSClip()(t).min().item(), -6)
#export
class TSRobustScale(Transform):
r"""This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range)"""
parameters, order = L('median', 'min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, median=None, min=None, max=None, by_sample=False, by_var=False, quantile_range=(25.0, 75.0), use_single_batch=True, verbose=False):
self.median = tensor(median) if median is not None else tensor(0)
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self._setup = (median is None or min is None or max is None) and not by_sample
self.by_sample, self.by_var = by_sample, by_var
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
self.quantile_range = quantile_range
if median is not None or min is not None or max is not None:
pv(f'{self.__class__.__name__} median={median} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
median = get_percentile(o, 50, self.axis)
min, max = get_outliers_IQR(o, self.axis, quantile_range=self.quantile_range)
self.median, self.min, self.max = tensor(median), tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} median={self.median} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} median={self.median.shape} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.by_sample:
median = get_percentile(o, 50, self.axis)
min, max = get_outliers_IQR(o, axis=self.axis, quantile_range=self.quantile_range)
self.median, self.min, self.max = o.new(median), o.new(min), o.new(max)
return (o - self.median) / (self.max - self.min)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, num_workers=0)
xb, yb = next(iter(dls.train))
clipped_xb = TSRobustScale(by_sample=True)(xb)
test_ne(clipped_xb, xb)
clipped_xb.min(), clipped_xb.max(), xb.min(), xb.max()
#export
class TSDiff(Transform):
"Differences batch of type `TSTensor`"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(o, lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor(torch.arange(24).reshape(2,3,4))
test_eq(TSDiff()(t)[..., 1:].float().mean(), 1)
test_eq(TSDiff(lag=2, pad=False)(t).float().mean(), 2)
#export
class TSLog(Transform):
"Log transforms batch of type `TSTensor` + 1. Accepts positive and negative numbers"
order = 90
def __init__(self, ex=None, **kwargs):
self.ex = ex
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.log1p(o[o > 0])
output[o < 0] = -torch.log1p(torch.abs(o[o < 0]))
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def decodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.exp(o[o > 0]) - 1
output[o < 0] = -torch.exp(torch.abs(o[o < 0])) + 1
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def __repr__(self): return f'{self.__class__.__name__}()'
t = TSTensor(torch.rand(2,3,4)) * 2 - 1
tfm = TSLog()
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
#export
class TSCyclicalPosition(Transform):
"""Concatenates the position along the sequence as 2 additional variables (sine and cosine)
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
sin, cos = sincos_encoding(seq_len, device=o.device)
output = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSCyclicalPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
#export
class TSLinearPosition(Transform):
"""Concatenates the position along the sequence as 1 additional variable
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
output = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSLinearPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 1
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
#export
class TSLogReturn(Transform):
"Calculates log-return of batch of type `TSTensor`. For positive values only"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(torch.log(o), lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor([1,2,4,8,16,32,64,128,256]).float()
test_eq(TSLogReturn(pad=False)(t).std(), 0)
#export
class TSAdd(Transform):
"Add a defined amount to each batch of type `TSTensor`."
order = 90
def __init__(self, add):
self.add = add
def encodes(self, o:TSTensor):
return torch.add(o, self.add)
    def __repr__(self): return f'{self.__class__.__name__}(add={self.add})'
t = TSTensor([1,2,3]).float()
test_eq(TSAdd(1)(t), TSTensor([2,3,4]).float())
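# A hedged composition sketch (values are illustrative assumptions): several of the
# batch transforms above can be combined in `batch_tfms`, e.g.
# batch_tfms = [TSStandardize(by_var=True), Nan2Value(), TSClip(min=-5, max=5)]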
###Output
_____no_output_____
###Markdown
y transforms
###Code
# export
class Preprocessor():
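    # Thin wrapper around an sklearn-style preprocessor: fit/transform operate on a
    # (n_samples, 1) view of the target, and the original shape (and tensor type)
    # is restored on transform/inverse_transform.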
def __init__(self, preprocessor, **kwargs):
self.preprocessor = preprocessor(**kwargs)
def fit(self, o):
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
self.fit_preprocessor = self.preprocessor.fit(o)
return self.fit_preprocessor
def transform(self, o, copy=True):
if type(o) in [float, int]: o = array([o]).reshape(-1,1)
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
def inverse_transform(self, o, copy=True):
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.inverse_transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
StandardScaler = partial(sklearn.preprocessing.StandardScaler)
setattr(StandardScaler, '__name__', 'StandardScaler')
RobustScaler = partial(sklearn.preprocessing.RobustScaler)
setattr(RobustScaler, '__name__', 'RobustScaler')
Normalizer = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(-1, 1))
setattr(Normalizer, '__name__', 'Normalizer')
BoxCox = partial(sklearn.preprocessing.PowerTransformer, method='box-cox')
setattr(BoxCox, '__name__', 'BoxCox')
YeoJohnshon = partial(sklearn.preprocessing.PowerTransformer, method='yeo-johnson')
setattr(YeoJohnshon, '__name__', 'YeoJohnshon')
Quantile = partial(sklearn.preprocessing.QuantileTransformer, n_quantiles=1_000, output_distribution='normal', random_state=0)
setattr(Quantile, '__name__', 'Quantile')
# Standardize
from tsai.data.validation import TimeSplitter
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(StandardScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
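# The transform also accepts plain Python scalars (illustrative check of the float/int branch in Preprocessor.transform above).
test_close(preprocessor.transform(float(y[0])), y_tfm[0])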
# RobustScaler
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(RobustScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# Normalize
y = random_shuffle(np.random.rand(1000) * 3 + .5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Normalizer)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# BoxCox
y = random_shuffle(np.random.rand(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(BoxCox)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# YeoJohnshon
y = random_shuffle(np.random.randn(1000) * 10 + 5)
y = np.random.beta(.5, .5, size=1000)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(YeoJohnshon)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# QuantileTransformer
y = - np.random.beta(1, .5, 10000) * 10
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Quantile)
preprocessor.fit(y[splits[0]])
plt.hist(y, 50, label='ori',)
y_tfm = preprocessor.transform(y)
plt.legend(loc='best')
plt.show()
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
test_close(preprocessor.inverse_transform(y_tfm), y, 1e-1)
#export
def ReLabeler(cm):
r"""Changes the labels in a dataset based on a dictionary (class mapping)
Args:
cm = class mapping dictionary
"""
def _relabel(y):
obj = len(set([len(listify(v)) for v in cm.values()])) > 1
keys = cm.keys()
if obj:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y], dtype=object).reshape(*y.shape)
else:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y]).reshape(*y.shape)
return _relabel
vals = {0:'a', 1:'b', 2:'c', 3:'d', 4:'e'}
y = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
labeler = ReLabeler(dict(a='x', b='x', c='y', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y, y_new
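# Illustrative sketch (not in the original tests): when the mapping is one-to-many for some classes,
# ReLabeler returns an object array whose elements are lists of labels.
y = np.array(['a', 'b', 'c', 'a'])
multi_labeler = ReLabeler({'a': ['x', 'y'], 'b': 'z'})
y_multi = multi_labeler(y)
test_eq(y.shape, y_multi.shape)
y_multi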
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
Data preprocessing
> Functions used to preprocess time series (both X and y).
###Code
#export
import re
import sklearn
from fastcore.transform import Transform, Pipeline
from fastai.data.transforms import Categorize
from fastai.data.load import DataLoader
from fastai.tabular.core import df_shrink_dtypes, make_date
from tsai.imports import *
from tsai.utils import *
from tsai.data.core import *
from tsai.data.preparation import *
from tsai.data.external import get_UCR_data
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
#export
class ToNumpyCategory(Transform):
"Categorize a numpy batch"
order = 90
def __init__(self, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: np.ndarray):
self.type = type(o)
self.cat = Categorize()
self.cat.setup(o)
self.vocab = self.cat.vocab
return np.asarray(stack([self.cat(oi) for oi in o]))
def decodes(self, o: np.ndarray):
return stack([self.cat.decode(oi) for oi in o])
def decodes(self, o: torch.Tensor):
return stack([self.cat.decode(oi) for oi in o])
t = ToNumpyCategory()
y_cat = t(y)
y_cat[:10]
test_eq(t.decode(tensor(y_cat)), y)
test_eq(t.decode(np.array(y_cat)), y)
#export
class OneHot(Transform):
"One-hot encode/ decode a batch"
order = 90
def __init__(self, n_classes=None, **kwargs):
self.n_classes = n_classes
super().__init__(**kwargs)
def encodes(self, o: torch.Tensor):
if not self.n_classes: self.n_classes = len(np.unique(o))
return torch.eye(self.n_classes)[o]
def encodes(self, o: np.ndarray):
o = ToNumpyCategory()(o)
if not self.n_classes: self.n_classes = len(np.unique(o))
return np.eye(self.n_classes)[o]
def decodes(self, o: torch.Tensor): return torch.argmax(o, dim=-1)
def decodes(self, o: np.ndarray): return np.argmax(o, axis=-1)
oh_encoder = OneHot()
y_cat = ToNumpyCategory()(y)
oht = oh_encoder(y_cat)
oht[:10]
n_classes = 10
n_samples = 100
t = torch.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oht = oh_encoder(t)
test_eq(oht.shape, (n_samples, n_classes))
test_eq(torch.argmax(oht, dim=-1), t)
test_eq(oh_encoder.decode(oht), t)
n_classes = 10
n_samples = 100
a = np.random.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oha = oh_encoder(a)
test_eq(oha.shape, (n_samples, n_classes))
test_eq(np.argmax(oha, axis=-1), a)
test_eq(oh_encoder.decode(oha), a)
#export
class TSNan2Value(Transform):
"Replaces any nan values by a predefined value or median"
order = 90
def __init__(self, value=0, median=False, by_sample_and_var=True, sel_vars=None):
store_attr()
if not ismin_torch("1.8"):
raise ValueError('This function only works with Pytorch>=1.8.')
def encodes(self, o:TSTensor):
if self.sel_vars is not None:
mask = torch.isnan(o[:, self.sel_vars])
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o[:, self.sel_vars], dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[:, self.sel_vars][mask] = median[mask]
else:
o[:, self.sel_vars] = torch.nan_to_num(o[:, self.sel_vars], torch.nanmedian(o[:, self.sel_vars]))
o[:, self.sel_vars] = torch.nan_to_num(o[:, self.sel_vars], self.value)
else:
mask = torch.isnan(o)
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[mask] = median[mask]
else:
o = torch.nan_to_num(o, torch.nanmedian(o))
o = torch.nan_to_num(o, self.value)
return o
Nan2Value = TSNan2Value
o = TSTensor(torch.randn(16, 10, 100))
o[0,0] = float('nan')
o[o > .9] = float('nan')
o[[0,1,5,8,14,15], :, -20:] = float('nan')
nan_vals1 = torch.isnan(o).sum()
o2 = Pipeline(TSNan2Value(), split_idx=0)(o.clone())
o3 = Pipeline(TSNan2Value(median=True, by_sample_and_var=True), split_idx=0)(o.clone())
o4 = Pipeline(TSNan2Value(median=True, by_sample_and_var=False), split_idx=0)(o.clone())
nan_vals2 = torch.isnan(o2).sum()
nan_vals3 = torch.isnan(o3).sum()
nan_vals4 = torch.isnan(o4).sum()
test_ne(nan_vals1, 0)
test_eq(nan_vals2, 0)
test_eq(nan_vals3, 0)
test_eq(nan_vals4, 0)
o = TSTensor(torch.randn(16, 10, 100))
o[o > .9] = float('nan')
o = TSNan2Value(median=True, sel_vars=[0,1,2,3,4])(o)
test_eq(torch.isnan(o[:, [0,1,2,3,4]]).sum().item(), 0)
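# Variables not listed in sel_vars are left untouched, so they may still contain nan values (illustrative check).
test_ne(torch.isnan(o[:, 5:]).sum().item(), 0)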
# export
class TSStandardize(Transform):
"""Standardizes batch of type `TSTensor`
Args:
- mean: you can pass a precalculated mean value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch.
- std: you can pass a precalculated std value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch. If both mean and std values are passed when instantiating TSStandardize, the rest of arguments won't be used.
- by_sample: if True, it will calculate mean and std for each individual sample. Otherwise based on the entire batch.
- by_var:
* False: the same mean and std will be used for all variables.
* True: a different mean and std will be used for each variable.
* a list of ints (like [0,1,3]): a different mean and std will be set for each variable in the list. Variables not included in the list
won't be standardized.
* a list that contains lists (like [0, [1,3]]): a different mean and std will be set for each element of the list. If an element is itself
a list, the same mean and std will be shared by the variables in that sublist (in the example, one mean and std is determined for
variable 0, and another one is shared by variables 1 & 3). Variables not included in the list won't be standardized.
- by_step: if True, a different mean and std will be calculated and applied for each time step. Otherwise the same statistics are used across all steps.
- eps: small value added to the std to avoid division by 0
- use_single_batch: if True a single training batch will be used to calculate mean & std. Else the entire training set will be used.
"""
parameters, order = L('mean', 'std'), 90
_setup = True # indicates it requires set up
def __init__(self, mean=None, std=None, by_sample=False, by_var=False, by_step=False, eps=1e-8, use_single_batch=True, verbose=False, **kwargs):
super().__init__(**kwargs)
self.mean = tensor(mean) if mean is not None else None
self.std = tensor(std) if std is not None else None
self._setup = (mean is None or std is None) and not by_sample
self.eps = eps
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.mean is not None or self.std is not None:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
@classmethod
def from_stats(cls, mean, std): return cls(mean, std)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
if len(self.mean.shape) == 0:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} mean shape={self.mean.shape}, std shape={self.std.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.mean, self.std = torch.zeros(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
return (o - self.mean) / self.std
def decodes(self, o:TSTensor):
if self.mean is None or self.std is None: return o
return o * self.std + self.mean
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
from tsai.data.validation import TimeSplitter
X_nan = np.random.rand(100, 5, 10)
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 0] = float('nan')
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 1, -10:] = float('nan')
batch_tfms = TSStandardize(by_var=True)
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
test_eq(torch.isnan(dls.after_batch[0].mean).sum(), 0)
test_eq(torch.isnan(dls.after_batch[0].std).sum(), 0)
xb = first(dls.train)[0]
test_ne(torch.isnan(xb).sum(), 0)
test_ne(torch.isnan(xb).sum(), torch.isnan(xb).numel())
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
xb = first(dls.train)[0]
test_eq(torch.isnan(xb).sum(), 0)
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=True)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True, by_var=False, verbose=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=False)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
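# Illustrative sketch of the by_var list syntax described in the TSStandardize docstring: variable 0 gets its own
# mean/std, variables 1 & 2 share one, and the remaining variables are left unstandardized.
batch_tfms = [TSStandardize(by_var=[0, [1, 2]], use_single_batch=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
xb[:, :3].mean(), xb[:, :3].std()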
#export
@patch
def mul_min(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.min(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
min_x = x
for ax in axes: min_x, _ = min_x.min(ax, keepdim)
return retain_type(min_x, x)
@patch
def mul_max(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.max(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
max_x = x
for ax in axes: max_x, _ = max_x.max(ax, keepdim)
return retain_type(max_x, x)
class TSNormalize(Transform):
"Normalizes batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, range=(-1, 1), by_sample=False, by_var=False, by_step=False, clip_values=True,
use_single_batch=True, verbose=False, **kwargs):
super().__init__(**kwargs)
self.min = tensor(min) if min is not None else None
self.max = tensor(max) if max is not None else None
self._setup = (self.min is None and self.max is None) and not by_sample
self.range_min, self.range_max = range
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.clip_values = clip_values
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.min is not None or self.max is not None:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, min, max, range_min=0, range_max=1): return cls(min, max, range=(range_min, range_max))
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
if len(self.min.shape) == 0:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} min shape={self.min.shape}, max shape={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.min, self.max = -torch.ones(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.ones(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
output = ((o - self.min) / (self.max - self.min)) * (self.range_max - self.range_min) + self.range_min
if self.clip_values:
if self.by_var and is_listy(self.by_var):
for v in self.by_var:
if not is_listy(v): v = [v]
output[:, v] = torch.clamp(output[:, v], self.range_min, self.range_max)
else:
output = torch.clamp(output, self.range_min, self.range_max)
return output
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
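# Quick illustrative check of the mul_min/mul_max helpers patched above (used by TSNormalize):
# they reduce over multiple axes while keeping the tensor subclass.
t = TSTensor(torch.rand(2, 3, 4))
test_eq(t.mul_min((0, 2), keepdim=True).shape, (1, 3, 1))
test_eq(t.mul_max((0, 2), keepdim=True).shape, (1, 3, 1))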
batch_tfms = [TSNormalize()]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms=[TSNormalize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms = [TSNormalize(by_var=[0, [1, 2]], use_single_batch=False, clip_values=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb[:, [0, 1, 2]].max() <= 1
assert xb[:, [0, 1, 2]].min() >= -1
#export
class TSClipOutliers(Transform):
"Clip outliers batch of type `TSTensor` based on the IQR"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, by_sample=False, by_var=False, use_single_batch=False, verbose=False, **kwargs):
super().__init__(**kwargs)
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
self._setup = (min is None or max is None) and not by_sample
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
if min is not None or max is not None:
pv(f'{self.__class__.__name__} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
min, max = get_outliers_IQR(o, self.axis)
self.min, self.max = tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.axis is None: return torch.clamp(o, self.min, self.max)
elif self.by_sample:
min, max = get_outliers_IQR(o, axis=self.axis)
self.min, self.max = o.new(min), o.new(max)
return torch_clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
batch_tfms=[TSClipOutliers(-1, 1, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
# export
class TSClip(Transform):
"Clip batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=-6, max=6, **kwargs):
super().__init__(**kwargs)
self.min = torch.tensor(min)
self.max = torch.tensor(max)
def encodes(self, o:TSTensor):
return torch.clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
t = TSTensor(torch.randn(10, 20, 100)*10)
test_le(TSClip()(t).max().item(), 6)
test_ge(TSClip()(t).min().item(), -6)
#export
class TSSelfMissingness(Transform):
"Applies missingness from samples in a batch to random samples in the batch for selected variables"
order = 90
def __init__(self, sel_vars=None, **kwargs):
self.sel_vars = sel_vars
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
if self.sel_vars is not None:
mask = rotate_axis0(torch.isnan(o[:, self.sel_vars]))
o[:, self.sel_vars] = o[:, self.sel_vars].masked_fill(mask, np.nan)
else:
mask = rotate_axis0(torch.isnan(o))
o.masked_fill_(mask, np.nan)
return o
t = TSTensor(torch.randn(10, 20, 100))
t[t>.8] = np.nan
t2 = TSSelfMissingness()(t.clone())
t3 = TSSelfMissingness(sel_vars=[0,3,5,7])(t.clone())
assert (torch.isnan(t).sum() < torch.isnan(t2).sum()) and (torch.isnan(t2).sum() > torch.isnan(t3).sum())
#export
class TSRobustScale(Transform):
r"""This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range)"""
parameters, order = L('median', 'iqr'), 90
_setup = True # indicates it requires set up
def __init__(self, median=None, iqr=None, quantile_range=(25.0, 75.0), use_single_batch=True, eps=1e-8, verbose=False, **kwargs):
super().__init__(**kwargs)
self.median = tensor(median) if median is not None else None
self.iqr = tensor(iqr) if iqr is not None else None
self._setup = median is None or iqr is None
self.use_single_batch = use_single_batch
self.eps = eps
self.verbose = verbose
self.quantile_range = quantile_range
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
new_o = o.permute(1,0,2).flatten(1)
median = get_percentile(new_o, 50, axis=1)
iqrmin, iqrmax = get_outliers_IQR(new_o, axis=1, quantile_range=self.quantile_range)
self.median = median.unsqueeze(0)
self.iqr = torch.clamp_min((iqrmax - iqrmin).unsqueeze(0), self.eps)
pv(f'{self.__class__.__name__} median={self.median.shape} iqr={self.iqr.shape}', self.verbose)
self._setup = False
else:
if self.median is None: self.median = torch.zeros(1, device=dl.device)
if self.iqr is None: self.iqr = torch.ones(1, device=dl.device)
def encodes(self, o:TSTensor):
return (o - self.median) / self.iqr
def __repr__(self): return f'{self.__class__.__name__}(quantile_range={self.quantile_range}, use_single_batch={self.use_single_batch})'
batch_tfms = TSRobustScale(verbose=True, use_single_batch=False)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, batch_tfms=batch_tfms, num_workers=0)
xb, yb = next(iter(dls.train))
xb.min()
#export
class TSRandomStandardize(Transform):
r"""Transformation that applies a randomly chosen sample mean and sample standard deviation mean from a given
distribution to the training set in order to improve generalization."""
parameters, order = L('mean_dist', 'std_dist'), 90
def __init__(self, mean_dist, std_dist, sample_size=30, eps=1e-8, split_idx=0, **kwargs):
self.mean_dist, self.std_dist = torch.from_numpy(mean_dist), torch.from_numpy(std_dist)
self.size = len(self.mean_dist)
self.sample_size = sample_size
self.eps = eps
super().__init__(split_idx=split_idx, **kwargs)
def encodes(self, o:TSTensor):
rand_idxs = np.random.choice(self.size, (self.sample_size or 1) * o.shape[0])
mean = torch.stack(torch.split(self.mean_dist[rand_idxs], o.shape[0])).mean(0)
std = torch.clamp(torch.stack(torch.split(self.std_dist [rand_idxs], o.shape[0])).mean(0), self.eps)
return (o - mean) / std
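# Minimal usage sketch (not in the original notebook). Assumption: mean_dist/std_dist hold per-sample, per-variable
# statistics of shape (n_samples, n_vars, 1), so the averaged random draws broadcast against a (batch, n_vars, seq_len)
# batch. Since split_idx=0 restricts the transform to the training set, it is applied through a Pipeline with
# split_idx=0, as done above for TSNan2Value.
X_rand = (np.random.randn(512, 3, 50) * 5 + 2).astype(np.float32)
mean_dist = X_rand.mean(axis=2, keepdims=True)
std_dist = X_rand.std(axis=2, keepdims=True) + 1e-8
t = TSTensor(torch.from_numpy(X_rand[:16]))
t_tfm = Pipeline(TSRandomStandardize(mean_dist, std_dist, sample_size=30), split_idx=0)(t)
test_eq(t_tfm.shape, t.shape)
test_ne(t_tfm, t)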
#export
class TSDiff(Transform):
"Differences batch of type `TSTensor`"
order = 90
def __init__(self, lag=1, pad=True, **kwargs):
super().__init__(**kwargs)
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(o, lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor(torch.arange(24).reshape(2,3,4))
test_eq(TSDiff()(t)[..., 1:].float().mean(), 1)
test_eq(TSDiff(lag=2, pad=False)(t).float().mean(), 2)
#export
class TSLog(Transform):
"Log transforms batch of type `TSTensor` + 1. Accepts positive and negative numbers"
order = 90
def __init__(self, ex=None, **kwargs):
self.ex = ex
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.log1p(o[o > 0])
output[o < 0] = -torch.log1p(torch.abs(o[o < 0]))
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def decodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.exp(o[o > 0]) - 1
output[o < 0] = -torch.exp(torch.abs(o[o < 0])) + 1
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def __repr__(self): return f'{self.__class__.__name__}()'
t = TSTensor(torch.rand(2,3,4)) * 2 - 1
tfm = TSLog()
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
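# Illustrative check of the `ex` argument: variables listed in `ex` are passed through unchanged
# (i.e. excluded from the log transform), as the encodes/decodes above show.
t = TSTensor(torch.rand(2,3,4)) * 2 - 1
enc_t = TSLog(ex=[0])(t)
test_eq(enc_t[:, 0], t[:, 0])
test_ne(enc_t[:, 1:], t[:, 1:])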
#export
class TSCyclicalPosition(Transform):
"""Concatenates the position along the sequence as 2 additional variables (sine and cosine)
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
sin, cos = sincos_encoding(seq_len, device=o.device)
output = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSCyclicalPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
#export
class TSLinearPosition(Transform):
"""Concatenates the position along the sequence as 1 additional variable
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
output = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSLinearPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 1
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
# export
class TSPosition(Transform):
"""Concatenates linear and/or cyclical positions along the sequence as additional variables"""
order = 90
def __init__(self, cyclical=True, linear=True, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
self.cyclical, self.linear = cyclical, linear
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
if self.linear:
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
o = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
if self.cyclical:
sin, cos = sincos_encoding(seq_len, device=o.device)
o = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return o
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSPosition(cyclical=True, linear=True)(t)
test_eq(enc_t.shape[1], 6)
plt.plot(enc_t[0, 3:].T);
#export
class TSMissingness(Transform):
"""Concatenates data missingness for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, **kwargs):
self.feature_idxs = listify(feature_idxs)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
missingness = o[:, self.feature_idxs].isnan()
else:
missingness = o.isnan()
return torch.cat([o, missingness], 1)
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSMissingness(feature_idxs=[0,2])(t)
test_eq(enc_t.shape[1], 5)
test_eq(enc_t[:, 3:], torch.isnan(t[:, [0,2]]).float())
#export
class TSPositionGaps(Transform):
"""Concatenates gaps for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, forward=True, backward=False, nearest=False, normalize=True, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.gap_fn = partial(get_gaps, forward=forward, backward=backward, nearest=nearest, normalize=normalize)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
gaps = self.gap_fn(o[:, self.feature_idxs])
else:
gaps = self.gap_fn(o)
return torch.cat([o, gaps], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSPositionGaps(feature_idxs=[0,2], forward=True, backward=True, nearest=True, normalize=False)(t)
test_eq(enc_t.shape[1], 9)
enc_t.data
#export
class TSRollingMean(Transform):
"""Calculates the rolling mean for all/ selected features alongside the sequence
It replaces the original values or adds additional variables (default)
If nan values are found, they will be filled forward and backward"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, window=2, replace=False, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.rolling_mean_fn = partial(rolling_moving_average, window=window)
self.replace = replace
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
if torch.isnan(o[:, self.feature_idxs]).any():
o[:, self.feature_idxs] = fbfill_sequence(o[:, self.feature_idxs])
rolling_mean = self.rolling_mean_fn(o[:, self.feature_idxs])
if self.replace:
o[:, self.feature_idxs] = rolling_mean
return o
else:
if torch.isnan(o).any():
o = fbfill_sequence(o)
rolling_mean = self.rolling_mean_fn(o)
if self.replace: return rolling_mean
return torch.cat([o, rolling_mean], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t > .6] = np.nan
print(t.data)
enc_t = TSRollingMean(feature_idxs=[0,2], window=3)(t)
test_eq(enc_t.shape[1], 5)
print(enc_t.data)
enc_t = TSRollingMean(window=3, replace=True)(t)
test_eq(enc_t.shape[1], 3)
print(enc_t.data)
#export
class TSLogReturn(Transform):
"Calculates log-return of batch of type `TSTensor`. For positive values only"
order = 90
def __init__(self, lag=1, pad=True, **kwargs):
super().__init__(**kwargs)
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(torch.log(o), lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor([1,2,4,8,16,32,64,128,256]).float()
test_eq(TSLogReturn(pad=False)(t).std(), 0)
#export
class TSAdd(Transform):
"Add a defined amount to each batch of type `TSTensor`."
order = 90
def __init__(self, add, **kwargs):
super().__init__(**kwargs)
self.add = add
def encodes(self, o:TSTensor):
return torch.add(o, self.add)
def __repr__(self): return f'{self.__class__.__name__}(add={self.add})'
t = TSTensor([1,2,3]).float()
test_eq(TSAdd(1)(t), TSTensor([2,3,4]).float())
#export
class TSClipByVar(Transform):
"""Clip batch of type `TSTensor` by variable
Args:
var_min_max: list of tuples containing variable index, min value (or None) and max value (or None)
"""
order = 90
def __init__(self, var_min_max, **kwargs):
super().__init__(**kwargs)
self.var_min_max = var_min_max
def encodes(self, o:TSTensor):
for v,m,M in self.var_min_max:
o[:, v] = torch.clamp(o[:, v], m, M)
return o
t = TSTensor(torch.rand(16, 3, 10) * tensor([1,10,100]).reshape(1,-1,1))
max_values = t.max(0).values.max(-1).values.data
max_values2 = TSClipByVar([(1,None,5), (2,10,50)])(t).max(0).values.max(-1).values.data
test_le(max_values2[1], 5)
test_ge(max_values2[2], 10)
test_le(max_values2[2], 50)
###Output
_____no_output_____
###Markdown
sklearn API transforms
###Code
#export
from sklearn.base import BaseEstimator, TransformerMixin
from fastai.data.transforms import CategoryMap
from joblib import dump, load
class TSShrinkDataFrame(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, skip=[], obj2cat=True, int2uint=False, verbose=True):
self.columns, self.skip, self.obj2cat, self.int2uint, self.verbose = listify(columns), skip, obj2cat, int2uint, verbose
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
self.old_dtypes = X.dtypes
if not self.columns: self.columns = X.columns
self.dt = df_shrink_dtypes(X[self.columns], self.skip, obj2cat=self.obj2cat, int2uint=self.int2uint)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X[self.columns] = X[self.columns].astype(self.dt)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X = X.astype(self.old_dtypes)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
df = pd.DataFrame()
df["ints64"] = np.random.randint(0,3,10)
df['floats64'] = np.random.rand(10)
tfm = TSShrinkDataFrame()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df["ints64"].dtype, "int8")
test_eq(df["floats64"].dtype, "float32")
#export
class TSOneHotEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, drop=True, add_na=True, dtype=np.int64):
self.columns = listify(columns)
self.drop, self.add_na, self.dtype = drop, add_na, dtype
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
handle_unknown = "ignore" if self.add_na else "error"
self.ohe_tfm = sklearn.preprocessing.OneHotEncoder(handle_unknown=handle_unknown)
if len(self.columns) == 1:
self.ohe_tfm.fit(X[self.columns].to_numpy().reshape(-1, 1))
else:
self.ohe_tfm.fit(X[self.columns])
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if len(self.columns) == 1:
output = self.ohe_tfm.transform(X[self.columns].to_numpy().reshape(-1, 1)).toarray().astype(self.dtype)
else:
output = self.ohe_tfm.transform(X[self.columns]).toarray().astype(self.dtype)
new_cols = []
for i,col in enumerate(self.columns):
for cats in self.ohe_tfm.categories_[i]:
new_cols.append(f"{str(col)}_{str(cats)}")
X[new_cols] = output
if self.drop: X = X.drop(self.columns, axis=1)
return X
df = pd.DataFrame()
df["a"] = np.random.randint(0,2,10)
df["b"] = np.random.randint(0,3,10)
unique_cols = len(df["a"].unique()) + len(df["b"].unique())
tfm = TSOneHotEncoder()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df.shape[1], unique_cols)
#export
class TSCategoricalEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, add_na=True):
self.columns = listify(columns)
self.add_na = add_na
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.cat_tfms = []
for column in self.columns:
self.cat_tfms.append(CategoryMap(X[column], add_na=self.add_na))
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_objs(X[column])
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_ids(X[column])
return X
###Output
_____no_output_____
###Markdown
Stateful transforms like TSCategoricalEncoder can easily be serialized.
###Code
import joblib
df = pd.DataFrame()
df["a"] = alphabet[np.random.randint(0,2,100)]
df["b"] = ALPHABET[np.random.randint(0,3,100)]
a_unique = len(df["a"].unique())
b_unique = len(df["b"].unique())
tfm = TSCategoricalEncoder()
tfm.fit(df)
joblib.dump(tfm, "data/TSCategoricalEncoder.joblib")
tfm = joblib.load("data/TSCategoricalEncoder.joblib")
df = tfm.transform(df)
test_eq(df['a'].max(), a_unique)
test_eq(df['b'].max(), b_unique)
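# Round-trip sketch (illustrative): inverse_transform maps the integer codes back to the original string labels.
tfm.inverse_transform(df).head()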
#export
default_date_attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
class TSDateTimeEncoder(BaseEstimator, TransformerMixin):
def __init__(self, datetime_columns=None, prefix=None, drop=True, time=False, attr=default_date_attr):
self.datetime_columns = listify(datetime_columns)
self.prefix, self.drop, self.time, self.attr = prefix, drop, time ,attr
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if self.time: self.attr = self.attr + ['Hour', 'Minute', 'Second']
if not self.datetime_columns:
self.datetime_columns = X.columns
self.prefixes = []
for dt_column in self.datetime_columns:
self.prefixes.append(re.sub('[Dd]ate$', '', dt_column) if self.prefix is None else self.prefix)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for dt_column,prefix in zip(self.datetime_columns,self.prefixes):
make_date(X, dt_column)
field = X[dt_column]
# `Series.dt.week` was deprecated in pandas 1.1 (and later removed), so use `dt.isocalendar().week` when available
week = field.dt.isocalendar().week.astype(field.dt.day.dtype) if hasattr(field.dt, 'isocalendar') else field.dt.week
for n in self.attr: X[prefix + "_" + n] = getattr(field.dt, n.lower()) if n != 'Week' else week
if self.drop: X = X.drop(self.datetime_columns, axis=1)
return X
import datetime
df = pd.DataFrame()
df.loc[0, "date"] = datetime.datetime.now()
df.loc[1, "date"] = datetime.datetime.now() + pd.Timedelta(1, unit="D")
tfm = TSDateTimeEncoder()
joblib.dump(tfm, "data/TSDateTimeEncoder.joblib")
tfm = joblib.load("data/TSDateTimeEncoder.joblib")
tfm.fit_transform(df)
#export
class TSMissingnessEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None):
self.columns = listify(columns)
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.missing_columns = [f"{cn}_missing" for cn in self.columns]
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
X[self.missing_columns] = X[self.columns].isnull().astype(int)
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
X.drop(self.missing_columns, axis=1, inplace=True)
return X
data = np.random.rand(10,3)
data[data > .8] = np.nan
df = pd.DataFrame(data, columns=["a", "b", "c"])
tfm = TSMissingnessEncoder()
tfm.fit(df)
joblib.dump(tfm, "data/TSMissingnessEncoder.joblib")
tfm = joblib.load("data/TSMissingnessEncoder.joblib")
df = tfm.transform(df)
df
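# Hedged sketch (not part of the original tests): chaining several of the sklearn-style encoders defined above
# on one small dataframe. The column names and the order of the encoders are illustrative assumptions.
data = np.random.rand(10, 2)
data[data > .8] = np.nan
df = pd.DataFrame(data, columns=["a", "b"])
df["cat"] = alphabet[np.random.randint(0, 3, 10)]
df["date"] = pd.date_range("2021-01-01", periods=10, freq="D")
df = TSMissingnessEncoder(columns=["a", "b"]).fit_transform(df)
df = TSCategoricalEncoder(columns=["cat"]).fit_transform(df)
df = TSDateTimeEncoder(datetime_columns=["date"], prefix="date").fit_transform(df)
df = TSShrinkDataFrame().fit_transform(df)
df.head()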
###Output
_____no_output_____
###Markdown
y transforms
###Code
# export
class Preprocessor():
def __init__(self, preprocessor, **kwargs):
self.preprocessor = preprocessor(**kwargs)
def fit(self, o):
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
self.fit_preprocessor = self.preprocessor.fit(o)
return self.fit_preprocessor
def transform(self, o, copy=True):
if type(o) in [float, int]: o = array([o]).reshape(-1,1)
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
def inverse_transform(self, o, copy=True):
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.inverse_transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
StandardScaler = partial(sklearn.preprocessing.StandardScaler)
setattr(StandardScaler, '__name__', 'StandardScaler')
RobustScaler = partial(sklearn.preprocessing.RobustScaler)
setattr(RobustScaler, '__name__', 'RobustScaler')
Normalizer = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(-1, 1))
setattr(Normalizer, '__name__', 'Normalizer')
BoxCox = partial(sklearn.preprocessing.PowerTransformer, method='box-cox')
setattr(BoxCox, '__name__', 'BoxCox')
YeoJohnshon = partial(sklearn.preprocessing.PowerTransformer, method='yeo-johnson')
setattr(YeoJohnshon, '__name__', 'YeoJohnshon')
Quantile = partial(sklearn.preprocessing.QuantileTransformer, n_quantiles=1_000, output_distribution='normal', random_state=0)
setattr(Quantile, '__name__', 'Quantile')
# Standardize
from tsai.data.validation import TimeSplitter
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(StandardScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# RobustScaler
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(RobustScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# Normalize
y = random_shuffle(np.random.rand(1000) * 3 + .5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Normalizer)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# BoxCox
y = random_shuffle(np.random.rand(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(BoxCox)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# YeoJohnshon
y = random_shuffle(np.random.randn(1000) * 10 + 5)
y = np.random.beta(.5, .5, size=1000)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(YeoJohnshon)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# QuantileTransformer
y = - np.random.beta(1, .5, 10000) * 10
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Quantile)
preprocessor.fit(y[splits[0]])
plt.hist(y, 50, label='ori',)
y_tfm = preprocessor.transform(y)
plt.legend(loc='best')
plt.show()
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
test_close(preprocessor.inverse_transform(y_tfm), y, 1e-1)
#export
def ReLabeler(cm):
r"""Changes the labels in a dataset based on a dictionary (class mapping)
Args:
cm = class mapping dictionary
"""
def _relabel(y):
obj = len(set([len(listify(v)) for v in cm.values()])) > 1
keys = cm.keys()
if obj:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y], dtype=object).reshape(*y.shape)
else:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y]).reshape(*y.shape)
return _relabel
vals = {0:'a', 1:'b', 2:'c', 3:'d', 4:'e'}
y = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
labeler = ReLabeler(dict(a='x', b='x', c='y', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y, y_new
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
Data preprocessing
> Functions used to preprocess time series (both X and y).
###Code
#export
import re
import sklearn
from fastcore.transform import Transform, Pipeline
from fastai.data.transforms import Categorize
from fastai.data.load import DataLoader
from fastai.tabular.core import df_shrink_dtypes, make_date
from tsai.imports import *
from tsai.utils import *
from tsai.data.core import *
from tsai.data.preparation import *
from tsai.data.external import get_UCR_data
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
#export
class ToNumpyCategory(Transform):
"Categorize a numpy batch"
order = 90
def __init__(self, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: np.ndarray):
self.type = type(o)
self.cat = Categorize()
self.cat.setup(o)
self.vocab = self.cat.vocab
return np.asarray(stack([self.cat(oi) for oi in o]))
def decodes(self, o: np.ndarray):
return stack([self.cat.decode(oi) for oi in o])
def decodes(self, o: torch.Tensor):
return stack([self.cat.decode(oi) for oi in o])
t = ToNumpyCategory()
y_cat = t(y)
y_cat[:10]
test_eq(t.decode(tensor(y_cat)), y)
test_eq(t.decode(np.array(y_cat)), y)
#export
class OneHot(Transform):
"One-hot encode/ decode a batch"
order = 90
def __init__(self, n_classes=None, **kwargs):
self.n_classes = n_classes
super().__init__(**kwargs)
def encodes(self, o: torch.Tensor):
if not self.n_classes: self.n_classes = len(np.unique(o))
return torch.eye(self.n_classes)[o]
def encodes(self, o: np.ndarray):
o = ToNumpyCategory()(o)
if not self.n_classes: self.n_classes = len(np.unique(o))
return np.eye(self.n_classes)[o]
def decodes(self, o: torch.Tensor): return torch.argmax(o, dim=-1)
def decodes(self, o: np.ndarray): return np.argmax(o, axis=-1)
oh_encoder = OneHot()
y_cat = ToNumpyCategory()(y)
oht = oh_encoder(y_cat)
oht[:10]
n_classes = 10
n_samples = 100
t = torch.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oht = oh_encoder(t)
test_eq(oht.shape, (n_samples, n_classes))
test_eq(torch.argmax(oht, dim=-1), t)
test_eq(oh_encoder.decode(oht), t)
n_classes = 10
n_samples = 100
a = np.random.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oha = oh_encoder(a)
test_eq(oha.shape, (n_samples, n_classes))
test_eq(np.argmax(oha, axis=-1), a)
test_eq(oh_encoder.decode(oha), a)
#export
class TSNan2Value(Transform):
"Replaces any nan values by a predefined value or median"
order = 90
def __init__(self, value=0, median=False, by_sample_and_var=True, sel_vars=None):
store_attr()
if not ismin_torch("1.8"):
raise ValueError('This function only works with Pytorch>=1.8.')
def encodes(self, o:TSTensor):
if self.sel_vars is not None:
mask = torch.isnan(o[:, self.sel_vars])
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o[:, self.sel_vars], dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[:, self.sel_vars][mask] = median[mask]
else:
o[:, self.sel_vars] = torch.nan_to_num(o[:, self.sel_vars], torch.nanmedian(o[:, self.sel_vars]))
o[:, self.sel_vars] = torch.nan_to_num(o[:, self.sel_vars], self.value)
else:
mask = torch.isnan(o)
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[mask] = median[mask]
else:
o = torch.nan_to_num(o, torch.nanmedian(o))
o = torch.nan_to_num(o, self.value)
return o
Nan2Value = TSNan2Value
o = TSTensor(torch.randn(16, 10, 100))
o[0,0] = float('nan')
o[o > .9] = float('nan')
o[[0,1,5,8,14,15], :, -20:] = float('nan')
nan_vals1 = torch.isnan(o).sum()
o2 = Pipeline(TSNan2Value(), split_idx=0)(o.clone())
o3 = Pipeline(TSNan2Value(median=True, by_sample_and_var=True), split_idx=0)(o.clone())
o4 = Pipeline(TSNan2Value(median=True, by_sample_and_var=False), split_idx=0)(o.clone())
nan_vals2 = torch.isnan(o2).sum()
nan_vals3 = torch.isnan(o3).sum()
nan_vals4 = torch.isnan(o4).sum()
test_ne(nan_vals1, 0)
test_eq(nan_vals2, 0)
test_eq(nan_vals3, 0)
test_eq(nan_vals4, 0)
o = TSTensor(torch.randn(16, 10, 100))
o[o > .9] = float('nan')
o = TSNan2Value(median=True, sel_vars=[0,1,2,3,4])(o)
test_eq(torch.isnan(o[:, [0,1,2,3,4]]).sum().item(), 0)
# export
class TSStandardize(Transform):
"""Standardizes batch of type `TSTensor`
Args:
- mean: you can pass a precalculated mean value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch.
- std: you can pass a precalculated std value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch. If both mean and std values are passed when instantiating TSStandardize, the rest of arguments won't be used.
- by_sample: if True, it will calculate mean and std for each individual sample. Otherwise based on the entire batch.
- by_var:
* False: the same mean and std will be used for all variables.
* True: a different mean and std will be used for each variable.
* a list of ints (like [0,1,3]): a different mean and std will be set for each variable in the list. Variables not included in the list
won't be standardized.
* a list that contains lists (like [0, [1,3]]): a different mean and std will be set for each element of the list. If an element is itself
a list, the same mean and std will be shared by the variables in that sublist (in the example, one mean and std is determined for
variable 0, and another one is shared by variables 1 & 3). Variables not included in the list won't be standardized.
- by_step: if True, a different mean and std will be calculated and applied for each time step. Otherwise the same statistics are used across all steps.
- eps: small value added to the std to avoid division by 0
- use_single_batch: if True a single training batch will be used to calculate mean & std. Else the entire training set will be used.
"""
parameters, order = L('mean', 'std'), 90
_setup = True # indicates it requires set up
def __init__(self, mean=None, std=None, by_sample=False, by_var=False, by_step=False, eps=1e-8, use_single_batch=True, verbose=False, **kwargs):
super().__init__(**kwargs)
self.mean = tensor(mean) if mean is not None else None
self.std = tensor(std) if std is not None else None
self._setup = (mean is None or std is None) and not by_sample
self.eps = eps
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.mean is not None or self.std is not None:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
@classmethod
def from_stats(cls, mean, std): return cls(mean, std)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
if len(self.mean.shape) == 0:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} mean shape={self.mean.shape}, std shape={self.std.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.mean, self.std = torch.zeros(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
return (o - self.mean) / self.std
def decodes(self, o:TSTensor):
if self.mean is None or self.std is None: return o
return o * self.std + self.mean
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
from tsai.data.validation import TimeSplitter
X_nan = np.random.rand(100, 5, 10)
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 0] = float('nan')
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 1, -10:] = float('nan')
batch_tfms = TSStandardize(by_var=True)
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
test_eq(torch.isnan(dls.after_batch[0].mean).sum(), 0)
test_eq(torch.isnan(dls.after_batch[0].std).sum(), 0)
xb = first(dls.train)[0]
test_ne(torch.isnan(xb).sum(), 0)
test_ne(torch.isnan(xb).sum(), torch.isnan(xb).numel())
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
xb = first(dls.train)[0]
test_eq(torch.isnan(xb).sum(), 0)
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=True)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True, by_var=False, verbose=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=False)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
#export
@patch
def mul_min(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.min(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
min_x = x
for ax in axes: min_x, _ = min_x.min(ax, keepdim)
return retain_type(min_x, x)
@patch
def mul_max(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.max(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
max_x = x
for ax in axes: max_x, _ = max_x.max(ax, keepdim)
return retain_type(max_x, x)
class TSNormalize(Transform):
"Normalizes batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, range=(-1, 1), by_sample=False, by_var=False, by_step=False, clip_values=True,
use_single_batch=True, verbose=False, **kwargs):
super().__init__(**kwargs)
self.min = tensor(min) if min is not None else None
self.max = tensor(max) if max is not None else None
self._setup = (self.min is None and self.max is None) and not by_sample
self.range_min, self.range_max = range
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.clip_values = clip_values
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.min is not None or self.max is not None:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
    def from_stats(cls, min, max, range_min=0, range_max=1): return cls(min, max, range=(range_min, range_max))
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
if len(self.min.shape) == 0:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} min shape={self.min.shape}, max shape={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.min, self.max = -torch.ones(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
                _max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
output = ((o - self.min) / (self.max - self.min)) * (self.range_max - self.range_min) + self.range_min
if self.clip_values:
if self.by_var and is_listy(self.by_var):
for v in self.by_var:
if not is_listy(v): v = [v]
output[:, v] = torch.clamp(output[:, v], self.range_min, self.range_max)
else:
output = torch.clamp(output, self.range_min, self.range_max)
return output
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms = [TSNormalize()]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms=[TSNormalize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms = [TSNormalize(by_var=[0, [1, 2]], use_single_batch=False, clip_values=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb[:, [0, 1, 2]].max() <= 1
assert xb[:, [0, 1, 2]].min() >= -1
#export
class TSClipOutliers(Transform):
"Clip outliers batch of type `TSTensor` based on the IQR"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, by_sample=False, by_var=False, use_single_batch=False, verbose=False, **kwargs):
super().__init__(**kwargs)
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
self._setup = (min is None or max is None) and not by_sample
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
if min is not None or max is not None:
pv(f'{self.__class__.__name__} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
min, max = get_outliers_IQR(o, self.axis)
self.min, self.max = tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.axis is None: return torch.clamp(o, self.min, self.max)
elif self.by_sample:
min, max = get_outliers_IQR(o, axis=self.axis)
self.min, self.max = o.new(min), o.new(max)
return torch_clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
batch_tfms=[TSClipOutliers(-1, 1, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
# export
class TSClip(Transform):
"Clip batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=-6, max=6, **kwargs):
super().__init__(**kwargs)
self.min = torch.tensor(min)
self.max = torch.tensor(max)
def encodes(self, o:TSTensor):
return torch.clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
t = TSTensor(torch.randn(10, 20, 100)*10)
test_le(TSClip()(t).max().item(), 6)
test_ge(TSClip()(t).min().item(), -6)
#export
class TSSelfMissingness(Transform):
"Applies missingness from samples in a batch to random samples in the batch for selected variables"
order = 90
def __init__(self, sel_vars=None, **kwargs):
self.sel_vars = sel_vars
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
if self.sel_vars is not None:
mask = rotate_axis0(torch.isnan(o[:, self.sel_vars]))
o[:, self.sel_vars] = o[:, self.sel_vars].masked_fill(mask, np.nan)
else:
mask = rotate_axis0(torch.isnan(o))
o.masked_fill_(mask, np.nan)
return o
t = TSTensor(torch.randn(10, 20, 100))
t[t>.8] = np.nan
t2 = TSSelfMissingness()(t.clone())
t3 = TSSelfMissingness(sel_vars=[0,3,5,7])(t.clone())
assert (torch.isnan(t).sum() < torch.isnan(t2).sum()) and (torch.isnan(t2).sum() > torch.isnan(t3).sum())
#export
class TSRobustScale(Transform):
r"""This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range)"""
parameters, order = L('median', 'iqr'), 90
_setup = True # indicates it requires set up
def __init__(self, median=None, iqr=None, quantile_range=(25.0, 75.0), use_single_batch=True, eps=1e-8, verbose=False, **kwargs):
super().__init__(**kwargs)
self.median = tensor(median) if median is not None else None
self.iqr = tensor(iqr) if iqr is not None else None
self._setup = median is None or iqr is None
self.use_single_batch = use_single_batch
self.eps = eps
self.verbose = verbose
self.quantile_range = quantile_range
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
new_o = o.permute(1,0,2).flatten(1)
median = get_percentile(new_o, 50, axis=1)
iqrmin, iqrmax = get_outliers_IQR(new_o, axis=1, quantile_range=self.quantile_range)
self.median = median.unsqueeze(0)
self.iqr = torch.clamp_min((iqrmax - iqrmin).unsqueeze(0), self.eps)
pv(f'{self.__class__.__name__} median={self.median.shape} iqr={self.iqr.shape}', self.verbose)
self._setup = False
else:
if self.median is None: self.median = torch.zeros(1, device=dl.device)
if self.iqr is None: self.iqr = torch.ones(1, device=dl.device)
def encodes(self, o:TSTensor):
return (o - self.median) / self.iqr
def __repr__(self): return f'{self.__class__.__name__}(quantile_range={self.quantile_range}, use_single_batch={self.use_single_batch})'
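# Illustrative sketch (not part of the original notebook): when `median` and `iqr` are passed
# at init, TSRobustScale skips the setup pass and simply applies (o - median) / iqr.
# The tensor and the stats below are made up purely for demonstration.
t = TSTensor(torch.randn(8, 3, 50) * 5 + 10)
robust_tfm = TSRobustScale(median=10., iqr=5.)
test_close(robust_tfm(t).data, ((t - 10.) / 5.).data)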
batch_tfms = TSRobustScale(verbose=True, use_single_batch=False)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, batch_tfms=batch_tfms, num_workers=0)
xb, yb = next(iter(dls.train))
xb.min()
#export
def get_stats_with_uncertainty(o, sel_vars=None, bs=64, n_trials=None, axis=(0,2)):
if n_trials is None: n_trials = len(o) // bs
random_idxs = np.random.choice(len(o), n_trials * bs, n_trials * bs > len(o))
oi_mean = []
oi_std = []
start = 0
for i in progress_bar(range(n_trials)):
idxs = random_idxs[start:start + bs]
start += bs
        if hasattr(o, 'oindex'):
            oi = o.oindex[idxs]
        elif hasattr(o, 'compute'):
            oi = o[idxs].compute()
        else:
            oi = o[idxs]
oi_mean.append(np.nanmean(oi, axis=axis, keepdims=True))
oi_std.append(np.nanstd(oi, axis=axis, keepdims=True))
oi_mean = np.concatenate(oi_mean)
oi_std = np.concatenate(oi_std)
E_mean, S_mean = np.mean(oi_mean, axis=0, keepdims=True), np.std(oi_mean, axis=0, keepdims=True)
E_std, S_std = np.mean(oi_std, axis=0, keepdims=True), np.std(oi_std, axis=0, keepdims=True)
if sel_vars is not None:
S_mean[:, sel_vars] = 0 # no uncertainty
S_std[:, sel_vars] = 0 # no uncertainty
return E_mean, S_mean, E_std, S_std
def get_random_stats(E_mean, S_mean, E_std, S_std):
mult = np.random.normal(0, 1, 2)
new_mean = E_mean + S_mean * mult[0]
new_std = E_std + S_std * mult[1]
return new_mean, new_std
class TSGaussianStandardize(Transform):
"Scales each batch using modeled mean and std based on UNCERTAINTY MODELING FOR OUT-OF-DISTRIBUTION GENERALIZATION https://arxiv.org/abs/2202.03958"
parameters, order = L('E_mean', 'S_mean', 'E_std', 'S_std'), 90
def __init__(self,
E_mean : np.ndarray, # Mean expected value
S_mean : np.ndarray, # Uncertainty (standard deviation) of the mean
E_std : np.ndarray, # Standard deviation expected value
S_std : np.ndarray, # Uncertainty (standard deviation) of the standard deviation
        eps=1e-8, # (epsilon) small amount added to the standard deviation to avoid dividing by zero
        split_idx=0, # Flag to indicate to which set this transform is applied. 0: training, 1: validation, None: both
**kwargs,
):
self.E_mean, self.S_mean = torch.from_numpy(E_mean), torch.from_numpy(S_mean)
self.E_std, self.S_std = torch.from_numpy(E_std), torch.from_numpy(S_std)
self.eps = eps
super().__init__(split_idx=split_idx, **kwargs)
def encodes(self, o:TSTensor):
mult = torch.normal(0, 1, (2,), device=o.device)
new_mean = self.E_mean + self.S_mean * mult[0]
new_std = torch.clamp(self.E_std + self.S_std * mult[1], self.eps)
return (o - new_mean) / new_std
TSRandomStandardize = TSGaussianStandardize
arr = np.random.rand(1000, 2, 50)
E_mean, S_mean, E_std, S_std = get_stats_with_uncertainty(arr, sel_vars=None, bs=64, n_trials=None, axis=(0,2))
new_mean, new_std = get_random_stats(E_mean, S_mean, E_std, S_std)
new_mean2, new_std2 = get_random_stats(E_mean, S_mean, E_std, S_std)
test_ne(new_mean, new_mean2)
test_ne(new_std, new_std2)
test_eq(new_mean.shape, (1, 2, 1))
test_eq(new_std.shape, (1, 2, 1))
new_mean, new_std
###Output
_____no_output_____
###Markdown
TSGaussianStandardize can be used jointly with TSStandardize in the following way:

```python
X, y, splits = get_UCR_data('LSST', split_data=False)
tfms = [None, TSClassification()]
E_mean, S_mean, E_std, S_std = get_stats_with_uncertainty(X, sel_vars=None, bs=64, n_trials=None, axis=(0,2))
batch_tfms = [TSGaussianStandardize(E_mean, S_mean, E_std, S_std, split_idx=0), TSStandardize(E_mean, S_mean, split_idx=1)]
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[32, 64])
learn = ts_learner(dls, InceptionTimePlus, metrics=accuracy, cbs=[ShowGraph()])
learn.fit_one_cycle(1, 1e-2)
```

In this way the train batches are scaled based on modeled mean and standard deviation distributions, while the valid batches are scaled with fixed mean and standard deviation values. The intent is to improve out-of-distribution performance. This method is inspired by UNCERTAINTY MODELING FOR OUT-OF-DISTRIBUTION GENERALIZATION (https://arxiv.org/abs/2202.03958).
###Code
#export
class TSDiff(Transform):
"Differences batch of type `TSTensor`"
order = 90
def __init__(self, lag=1, pad=True, **kwargs):
super().__init__(**kwargs)
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(o, lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor(torch.arange(24).reshape(2,3,4))
test_eq(TSDiff()(t)[..., 1:].float().mean(), 1)
test_eq(TSDiff(lag=2, pad=False)(t).float().mean(), 2)
#export
class TSLog(Transform):
"Log transforms batch of type `TSTensor` + 1. Accepts positive and negative numbers"
order = 90
def __init__(self, ex=None, **kwargs):
self.ex = ex
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.log1p(o[o > 0])
output[o < 0] = -torch.log1p(torch.abs(o[o < 0]))
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def decodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.exp(o[o > 0]) - 1
output[o < 0] = -torch.exp(torch.abs(o[o < 0])) + 1
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def __repr__(self): return f'{self.__class__.__name__}()'
t = TSTensor(torch.rand(2,3,4)) * 2 - 1
tfm = TSLog()
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
#export
class TSCyclicalPosition(Transform):
"""Concatenates the position along the sequence as 2 additional variables (sine and cosine)
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
sin, cos = sincos_encoding(seq_len, device=o.device)
output = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSCyclicalPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
#export
class TSLinearPosition(Transform):
"""Concatenates the position along the sequence as 1 additional variable
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
output = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSLinearPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 1
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
# export
class TSPosition(Transform):
"""Concatenates linear and/or cyclical positions along the sequence as additional variables"""
order = 90
def __init__(self, cyclical=True, linear=True, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
self.cyclical, self.linear = cyclical, linear
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
if self.linear:
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
o = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
if self.cyclical:
sin, cos = sincos_encoding(seq_len, device=o.device)
o = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return o
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSPosition(cyclical=True, linear=True)(t)
test_eq(enc_t.shape[1], 6)
plt.plot(enc_t[0, 3:].T);
#export
class TSMissingness(Transform):
"""Concatenates data missingness for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, **kwargs):
self.feature_idxs = listify(feature_idxs)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
missingness = o[:, self.feature_idxs].isnan()
else:
missingness = o.isnan()
return torch.cat([o, missingness], 1)
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSMissingness(feature_idxs=[0,2])(t)
test_eq(enc_t.shape[1], 5)
test_eq(enc_t[:, 3:], torch.isnan(t[:, [0,2]]).float())
#export
class TSPositionGaps(Transform):
"""Concatenates gaps for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, forward=True, backward=False, nearest=False, normalize=True, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.gap_fn = partial(get_gaps, forward=forward, backward=backward, nearest=nearest, normalize=normalize)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
gaps = self.gap_fn(o[:, self.feature_idxs])
else:
gaps = self.gap_fn(o)
return torch.cat([o, gaps], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSPositionGaps(feature_idxs=[0,2], forward=True, backward=True, nearest=True, normalize=False)(t)
test_eq(enc_t.shape[1], 9)
enc_t.data
#export
class TSRollingMean(Transform):
"""Calculates the rolling mean for all/ selected features alongside the sequence
It replaces the original values or adds additional variables (default)
If nan values are found, they will be filled forward and backward"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, window=2, replace=False, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.rolling_mean_fn = partial(rolling_moving_average, window=window)
self.replace = replace
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
if torch.isnan(o[:, self.feature_idxs]).any():
o[:, self.feature_idxs] = fbfill_sequence(o[:, self.feature_idxs])
rolling_mean = self.rolling_mean_fn(o[:, self.feature_idxs])
if self.replace:
o[:, self.feature_idxs] = rolling_mean
return o
else:
if torch.isnan(o).any():
o = fbfill_sequence(o)
rolling_mean = self.rolling_mean_fn(o)
if self.replace: return rolling_mean
return torch.cat([o, rolling_mean], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t > .6] = np.nan
print(t.data)
enc_t = TSRollingMean(feature_idxs=[0,2], window=3)(t)
test_eq(enc_t.shape[1], 5)
print(enc_t.data)
enc_t = TSRollingMean(window=3, replace=True)(t)
test_eq(enc_t.shape[1], 3)
print(enc_t.data)
#export
class TSLogReturn(Transform):
"Calculates log-return of batch of type `TSTensor`. For positive values only"
order = 90
def __init__(self, lag=1, pad=True, **kwargs):
super().__init__(**kwargs)
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(torch.log(o), lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor([1,2,4,8,16,32,64,128,256]).float()
test_eq(TSLogReturn(pad=False)(t).std(), 0)
#export
class TSAdd(Transform):
"Add a defined amount to each batch of type `TSTensor`."
order = 90
def __init__(self, add, **kwargs):
super().__init__(**kwargs)
self.add = add
def encodes(self, o:TSTensor):
return torch.add(o, self.add)
    def __repr__(self): return f'{self.__class__.__name__}(add={self.add})'
t = TSTensor([1,2,3]).float()
test_eq(TSAdd(1)(t), TSTensor([2,3,4]).float())
#export
class TSClipByVar(Transform):
"""Clip batch of type `TSTensor` by variable
Args:
var_min_max: list of tuples containing variable index, min value (or None) and max value (or None)
"""
order = 90
def __init__(self, var_min_max, **kwargs):
super().__init__(**kwargs)
self.var_min_max = var_min_max
def encodes(self, o:TSTensor):
for v,m,M in self.var_min_max:
o[:, v] = torch.clamp(o[:, v], m, M)
return o
t = TSTensor(torch.rand(16, 3, 10) * tensor([1,10,100]).reshape(1,-1,1))
max_values = t.max(0).values.max(-1).values.data
max_values2 = TSClipByVar([(1,None,5), (2,10,50)])(t).max(0).values.max(-1).values.data
test_le(max_values2[1], 5)
test_ge(max_values2[2], 10)
test_le(max_values2[2], 50)
###Output
_____no_output_____
###Markdown
sklearn API transforms
###Code
#export
from sklearn.base import BaseEstimator, TransformerMixin
from fastai.data.transforms import CategoryMap
from joblib import dump, load
class TSShrinkDataFrame(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, skip=[], obj2cat=True, int2uint=False, verbose=True):
self.columns, self.skip, self.obj2cat, self.int2uint, self.verbose = listify(columns), skip, obj2cat, int2uint, verbose
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
self.old_dtypes = X.dtypes
if not self.columns: self.columns = X.columns
self.dt = df_shrink_dtypes(X[self.columns], self.skip, obj2cat=self.obj2cat, int2uint=self.int2uint)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X[self.columns] = X[self.columns].astype(self.dt)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X = X.astype(self.old_dtypes)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
df = pd.DataFrame()
df["ints64"] = np.random.randint(0,3,10)
df['floats64'] = np.random.rand(10)
tfm = TSShrinkDataFrame()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df["ints64"].dtype, "int8")
test_eq(df["floats64"].dtype, "float32")
#export
class TSOneHotEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, drop=True, add_na=True, dtype=np.int64):
self.columns = listify(columns)
self.drop, self.add_na, self.dtype = drop, add_na, dtype
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
handle_unknown = "ignore" if self.add_na else "error"
self.ohe_tfm = sklearn.preprocessing.OneHotEncoder(handle_unknown=handle_unknown)
if len(self.columns) == 1:
self.ohe_tfm.fit(X[self.columns].to_numpy().reshape(-1, 1))
else:
self.ohe_tfm.fit(X[self.columns])
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if len(self.columns) == 1:
output = self.ohe_tfm.transform(X[self.columns].to_numpy().reshape(-1, 1)).toarray().astype(self.dtype)
else:
output = self.ohe_tfm.transform(X[self.columns]).toarray().astype(self.dtype)
new_cols = []
for i,col in enumerate(self.columns):
for cats in self.ohe_tfm.categories_[i]:
new_cols.append(f"{str(col)}_{str(cats)}")
X[new_cols] = output
if self.drop: X = X.drop(self.columns, axis=1)
return X
df = pd.DataFrame()
df["a"] = np.random.randint(0,2,10)
df["b"] = np.random.randint(0,3,10)
unique_cols = len(df["a"].unique()) + len(df["b"].unique())
tfm = TSOneHotEncoder()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df.shape[1], unique_cols)
#export
class TSCategoricalEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, add_na=True):
self.columns = listify(columns)
self.add_na = add_na
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.cat_tfms = []
for column in self.columns:
self.cat_tfms.append(CategoryMap(X[column], add_na=self.add_na))
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_objs(X[column])
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_ids(X[column])
return X
###Output
_____no_output_____
###Markdown
Stateful transforms like TSCategoricalEncoder can easily be serialized.
###Code
import joblib
df = pd.DataFrame()
df["a"] = alphabet[np.random.randint(0,2,100)]
df["b"] = ALPHABET[np.random.randint(0,3,100)]
a_unique = len(df["a"].unique())
b_unique = len(df["b"].unique())
tfm = TSCategoricalEncoder()
tfm.fit(df)
joblib.dump(tfm, "data/TSCategoricalEncoder.joblib")
tfm = joblib.load("data/TSCategoricalEncoder.joblib")
df = tfm.transform(df)
test_eq(df['a'].max(), a_unique)
test_eq(df['b'].max(), b_unique)
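# Illustrative sketch (not part of the original notebook): `inverse_transform` maps the integer
# ids produced by `transform` back to the original category labels.
df_decoded = tfm.inverse_transform(df.copy())
assert df_decoded['a'].isin(alphabet).all()
assert df_decoded['b'].isin(ALPHABET).all()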
#export
default_date_attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
class TSDateTimeEncoder(BaseEstimator, TransformerMixin):
def __init__(self, datetime_columns=None, prefix=None, drop=True, time=False, attr=default_date_attr):
self.datetime_columns = listify(datetime_columns)
self.prefix, self.drop, self.time, self.attr = prefix, drop, time ,attr
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if self.time: self.attr = self.attr + ['Hour', 'Minute', 'Second']
if not self.datetime_columns:
self.datetime_columns = X.columns
self.prefixes = []
for dt_column in self.datetime_columns:
self.prefixes.append(re.sub('[Dd]ate$', '', dt_column) if self.prefix is None else self.prefix)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for dt_column,prefix in zip(self.datetime_columns,self.prefixes):
make_date(X, dt_column)
field = X[dt_column]
            # `dt.week` was deprecated in newer pandas versions; use `dt.isocalendar().week` when available
week = field.dt.isocalendar().week.astype(field.dt.day.dtype) if hasattr(field.dt, 'isocalendar') else field.dt.week
for n in self.attr: X[prefix + "_" + n] = getattr(field.dt, n.lower()) if n != 'Week' else week
if self.drop: X = X.drop(self.datetime_columns, axis=1)
return X
import datetime
df = pd.DataFrame()
df.loc[0, "date"] = datetime.datetime.now()
df.loc[1, "date"] = datetime.datetime.now() + pd.Timedelta(1, unit="D")
tfm = TSDateTimeEncoder()
joblib.dump(tfm, "data/TSDateTimeEncoder.joblib")
tfm = joblib.load("data/TSDateTimeEncoder.joblib")
tfm.fit_transform(df)
#export
class TSMissingnessEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None):
self.columns = listify(columns)
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.missing_columns = [f"{cn}_missing" for cn in self.columns]
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
X[self.missing_columns] = X[self.columns].isnull().astype(int)
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
X.drop(self.missing_columns, axis=1, inplace=True)
return X
data = np.random.rand(10,3)
data[data > .8] = np.nan
df = pd.DataFrame(data, columns=["a", "b", "c"])
tfm = TSMissingnessEncoder()
tfm.fit(df)
joblib.dump(tfm, "data/TSMissingnessEncoder.joblib")
tfm = joblib.load("data/TSMissingnessEncoder.joblib")
df = tfm.transform(df)
df
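# Illustrative sketch (not part of the original notebook): `inverse_transform` drops the
# generated *_missing indicator columns, leaving only the original features.
df_restored = tfm.inverse_transform(df)
test_eq(list(df_restored.columns), ["a", "b", "c"])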
###Output
_____no_output_____
###Markdown
y transforms
###Code
# export
class Preprocessor():
def __init__(self, preprocessor, **kwargs):
self.preprocessor = preprocessor(**kwargs)
def fit(self, o):
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
self.fit_preprocessor = self.preprocessor.fit(o)
return self.fit_preprocessor
def transform(self, o, copy=True):
if type(o) in [float, int]: o = array([o]).reshape(-1,1)
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
def inverse_transform(self, o, copy=True):
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.inverse_transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
StandardScaler = partial(sklearn.preprocessing.StandardScaler)
setattr(StandardScaler, '__name__', 'StandardScaler')
RobustScaler = partial(sklearn.preprocessing.RobustScaler)
setattr(RobustScaler, '__name__', 'RobustScaler')
Normalizer = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(-1, 1))
setattr(Normalizer, '__name__', 'Normalizer')
BoxCox = partial(sklearn.preprocessing.PowerTransformer, method='box-cox')
setattr(BoxCox, '__name__', 'BoxCox')
YeoJohnshon = partial(sklearn.preprocessing.PowerTransformer, method='yeo-johnson')
setattr(YeoJohnshon, '__name__', 'YeoJohnshon')
Quantile = partial(sklearn.preprocessing.QuantileTransformer, n_quantiles=1_000, output_distribution='normal', random_state=0)
setattr(Quantile, '__name__', 'Quantile')
# Standardize
from tsai.data.validation import TimeSplitter
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(StandardScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# RobustScaler
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(RobustScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# Normalize
y = random_shuffle(np.random.rand(1000) * 3 + .5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Normalizer)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# BoxCox
y = random_shuffle(np.random.rand(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(BoxCox)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# YeoJohnshon
y = random_shuffle(np.random.randn(1000) * 10 + 5)
y = np.random.beta(.5, .5, size=1000)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(YeoJohnshon)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# QuantileTransformer
y = - np.random.beta(1, .5, 10000) * 10
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Quantile)
preprocessor.fit(y[splits[0]])
plt.hist(y, 50, label='ori',)
y_tfm = preprocessor.transform(y)
plt.legend(loc='best')
plt.show()
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
test_close(preprocessor.inverse_transform(y_tfm), y, 1e-1)
#export
def ReLabeler(cm):
r"""Changes the labels in a dataset based on a dictionary (class mapping)
Args:
cm = class mapping dictionary
"""
def _relabel(y):
obj = len(set([len(listify(v)) for v in cm.values()])) > 1
keys = cm.keys()
if obj:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y], dtype=object).reshape(*y.shape)
else:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y]).reshape(*y.shape)
return _relabel
vals = {0:'a', 1:'b', 2:'c', 3:'d', 4:'e'}
y = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
labeler = ReLabeler(dict(a='x', b='x', c='y', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y, y_new
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
Data preprocessing
> Functions used to preprocess time series (both X and y).
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.external import *
from tsai.data.core import *
from tsai.data.preparation import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
#export
class ToNumpyCategory(Transform):
"Categorize a numpy batch"
order = 90
def __init__(self, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: np.ndarray):
self.type = type(o)
self.cat = Categorize()
self.cat.setup(o)
self.vocab = self.cat.vocab
return np.asarray(stack([self.cat(oi) for oi in o]))
def decodes(self, o: (np.ndarray, torch.Tensor)):
return stack([self.cat.decode(oi) for oi in o])
t = ToNumpyCategory()
y_cat = t(y)
y_cat[:10]
test_eq(t.decode(tensor(y_cat)), y)
test_eq(t.decode(np.array(y_cat)), y)
#export
class OneHot(Transform):
"One-hot encode/ decode a batch"
order = 90
def __init__(self, n_classes=None, **kwargs):
self.n_classes = n_classes
super().__init__(**kwargs)
def encodes(self, o: torch.Tensor):
if not self.n_classes: self.n_classes = len(np.unique(o))
return torch.eye(self.n_classes)[o]
def encodes(self, o: np.ndarray):
o = ToNumpyCategory()(o)
if not self.n_classes: self.n_classes = len(np.unique(o))
return np.eye(self.n_classes)[o]
def decodes(self, o: torch.Tensor): return torch.argmax(o, dim=-1)
def decodes(self, o: np.ndarray): return np.argmax(o, axis=-1)
oh_encoder = OneHot()
y_cat = ToNumpyCategory()(y)
oht = oh_encoder(y_cat)
oht[:10]
n_classes = 10
n_samples = 100
t = torch.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oht = oh_encoder(t)
test_eq(oht.shape, (n_samples, n_classes))
test_eq(torch.argmax(oht, dim=-1), t)
test_eq(oh_encoder.decode(oht), t)
n_classes = 10
n_samples = 100
a = np.random.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oha = oh_encoder(a)
test_eq(oha.shape, (n_samples, n_classes))
test_eq(np.argmax(oha, axis=-1), a)
test_eq(oh_encoder.decode(oha), a)
#export
class TSNan2Value(Transform):
"Replaces any nan values by a predefined value or median"
order = 90
def __init__(self, value=0, median=False, by_sample_and_var=True):
store_attr()
if not ismin_torch("1.8"):
raise ValueError('This function only works with Pytorch>=1.8.')
def encodes(self, o:TSTensor):
mask = torch.isnan(o)
if mask.any():
if self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[mask] = median[mask]
else:
o = torch_nan_to_num(o, torch.nanmedian(o))
o = torch_nan_to_num(o, self.value)
return o
Nan2Value = TSNan2Value
o = TSTensor(torch.randn(16, 10, 100))
o[0,0] = float('nan')
o[o > .9] = float('nan')
o[[0,1,5,8,14,15], :, -20:] = float('nan')
nan_vals1 = torch.isnan(o).sum()
o2 = Pipeline(TSNan2Value(), split_idx=0)(o.clone())
o3 = Pipeline(TSNan2Value(median=True, by_sample_and_var=True), split_idx=0)(o.clone())
o4 = Pipeline(TSNan2Value(median=True, by_sample_and_var=False), split_idx=0)(o.clone())
nan_vals2 = torch.isnan(o2).sum()
nan_vals3 = torch.isnan(o3).sum()
nan_vals4 = torch.isnan(o4).sum()
test_ne(nan_vals1, 0)
test_eq(nan_vals2, 0)
test_eq(nan_vals3, 0)
test_eq(nan_vals4, 0)
# export
class TSStandardize(Transform):
"""Standardizes batch of type `TSTensor`
Args:
- mean: you can pass a precalculated mean value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch.
- std: you can pass a precalculated std value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch. If both mean and std values are passed when instantiating TSStandardize, the rest of arguments won't be used.
- by_sample: if True, it will calculate mean and std for each individual sample. Otherwise based on the entire batch.
- by_var:
* False: mean and std will be the same for all variables.
        * True: a different mean and std will be used for each variable.
        * a list of ints: (like [0,1,3]) a different mean and std will be set for each variable on the list. Variables not included in the list
          won't be standardized.
        * a list that contains a list/lists: (like [0, [1,3]]) a different mean and std will be set for each element of the list. If multiple elements are
          included in a list, the same mean and std will be set for those variables in the sublist/s. (in the example a mean and std is determined for
          variable 0, and another one for variables 1 & 3 - the same one). Variables not included in the list won't be standardized.
    - by_step: if True, a different mean and std will be calculated for each time step. Otherwise the same statistics are shared across all time steps.
- eps: it avoids dividing by 0
- use_single_batch: if True a single training batch will be used to calculate mean & std. Else the entire training set will be used.
"""
parameters, order = L('mean', 'std'), 90
_setup = True # indicates it requires set up
def __init__(self, mean=None, std=None, by_sample=False, by_var=False, by_step=False, eps=1e-8, use_single_batch=True, verbose=False):
self.mean = tensor(mean) if mean is not None else None
self.std = tensor(std) if std is not None else None
self._setup = (mean is None or std is None) and not by_sample
self.eps = eps
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.mean is not None or self.std is not None:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, mean, std): return cls(mean, std)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
if len(self.mean.shape) == 0:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} mean shape={self.mean.shape}, std shape={self.std.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.mean, self.std = torch.zeros(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
return (o - self.mean) / self.std
def decodes(self, o:TSTensor):
if self.mean is None or self.std is None: return o
return o * self.std + self.mean
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
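# Illustrative sketch (not part of the original notebook): standardize only variable 0 and the
# pair of variables (1, 2) - the pair shares a single mean and std - leaving the rest untouched.
batch_tfms = [TSStandardize(by_var=[0, [1, 2]], use_single_batch=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb[:, 0].mean(), 0, eps=1e-1)
test_close(xb[:, [1, 2]].mean(), 0, eps=1e-1)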
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
from tsai.data.validation import TimeSplitter
X_nan = np.random.rand(100, 5, 10)
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 0] = float('nan')
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 1, -10:] = float('nan')
batch_tfms = TSStandardize(by_var=True)
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
test_eq(torch.isnan(dls.after_batch[0].mean).sum(), 0)
test_eq(torch.isnan(dls.after_batch[0].std).sum(), 0)
xb = first(dls.train)[0]
test_ne(torch.isnan(xb).sum(), 0)
test_ne(torch.isnan(xb).sum(), torch.isnan(xb).numel())
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
xb = first(dls.train)[0]
test_eq(torch.isnan(xb).sum(), 0)
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=True)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True, by_var=False, verbose=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=False)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
#export
@patch
def mul_min(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.min(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
min_x = x
for ax in axes: min_x, _ = min_x.min(ax, keepdim)
return retain_type(min_x, x)
@patch
def mul_max(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.max(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
max_x = x
for ax in axes: max_x, _ = max_x.max(ax, keepdim)
return retain_type(max_x, x)
class TSNormalize(Transform):
"Normalizes batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, range=(-1, 1), by_sample=False, by_var=False, by_step=False, clip_values=True,
use_single_batch=True, verbose=False):
self.min = tensor(min) if min is not None else None
self.max = tensor(max) if max is not None else None
self._setup = (self.min is None and self.max is None) and not by_sample
self.range_min, self.range_max = range
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.clip_values = clip_values
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.min is not None or self.max is not None:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
    def from_stats(cls, min, max, range_min=0, range_max=1): return cls(min, max, range=(range_min, range_max))
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
if len(self.min.shape) == 0:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} min shape={self.min.shape}, max shape={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.min, self.max = -torch.ones(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
                _max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
output = ((o - self.min) / (self.max - self.min)) * (self.range_max - self.range_min) + self.range_min
if self.clip_values:
if self.by_var and is_listy(self.by_var):
for v in self.by_var:
if not is_listy(v): v = [v]
output[:, v] = torch.clamp(output[:, v], self.range_min, self.range_max)
else:
output = torch.clamp(output, self.range_min, self.range_max)
return output
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms = [TSNormalize()]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms=[TSNormalize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms = [TSNormalize(by_var=[0, [1, 2]], use_single_batch=False, clip_values=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb[:, [0, 1, 2]].max() <= 1
assert xb[:, [0, 1, 2]].min() >= -1
#export
class TSClipOutliers(Transform):
"Clip outliers batch of type `TSTensor` based on the IQR"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, by_sample=False, by_var=False, use_single_batch=False, verbose=False):
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
self._setup = (min is None or max is None) and not by_sample
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
if min is not None or max is not None:
pv(f'{self.__class__.__name__} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
min, max = get_outliers_IQR(o, self.axis)
self.min, self.max = tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.axis is None: return torch.clamp(o, self.min, self.max)
elif self.by_sample:
min, max = get_outliers_IQR(o, axis=self.axis)
self.min, self.max = o.new(min), o.new(max)
return torch_clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
batch_tfms=[TSClipOutliers(-1, 1, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
# export
class TSClip(Transform):
"Clip batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=-6, max=6):
self.min = torch.tensor(min)
self.max = torch.tensor(max)
def encodes(self, o:TSTensor):
return torch.clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
t = TSTensor(torch.randn(10, 20, 100)*10)
test_le(TSClip()(t).max().item(), 6)
test_ge(TSClip()(t).min().item(), -6)
#export
class TSRobustScale(Transform):
r"""This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range)"""
parameters, order = L('median', 'iqr'), 90
_setup = True # indicates it requires set up
def __init__(self, median=None, iqr=None, quantile_range=(25.0, 75.0), use_single_batch=True, eps=1e-8, verbose=False):
self.median = tensor(median) if median is not None else None
self.iqr = tensor(iqr) if iqr is not None else None
self._setup = median is None or iqr is None
self.use_single_batch = use_single_batch
self.eps = eps
self.verbose = verbose
self.quantile_range = quantile_range
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
new_o = o.permute(1,0,2).flatten(1)
median = get_percentile(new_o, 50, axis=1)
iqrmin, iqrmax = get_outliers_IQR(new_o, axis=1, quantile_range=self.quantile_range)
self.median = median.unsqueeze(0)
self.iqr = torch.clamp_min((iqrmax - iqrmin).unsqueeze(0), self.eps)
pv(f'{self.__class__.__name__} median={self.median.shape} iqr={self.iqr.shape}', self.verbose)
self._setup = False
else:
if self.median is None: self.median = torch.zeros(1, device=dl.device)
if self.iqr is None: self.iqr = torch.ones(1, device=dl.device)
def encodes(self, o:TSTensor):
return (o - self.median) / self.iqr
def __repr__(self): return f'{self.__class__.__name__}(quantile_range={self.quantile_range}, use_single_batch={self.use_single_batch})'
batch_tfms = TSRobustScale(verbose=True, use_single_batch=False)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, batch_tfms=batch_tfms, num_workers=0)
xb, yb = next(iter(dls.train))
xb.min()
#export
class TSDiff(Transform):
"Differences batch of type `TSTensor`"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(o, lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor(torch.arange(24).reshape(2,3,4))
test_eq(TSDiff()(t)[..., 1:].float().mean(), 1)
test_eq(TSDiff(lag=2, pad=False)(t).float().mean(), 2)
#export
class TSLog(Transform):
"Log transforms batch of type `TSTensor` + 1. Accepts positive and negative numbers"
order = 90
def __init__(self, ex=None, **kwargs):
self.ex = ex
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.log1p(o[o > 0])
output[o < 0] = -torch.log1p(torch.abs(o[o < 0]))
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def decodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.exp(o[o > 0]) - 1
output[o < 0] = -torch.exp(torch.abs(o[o < 0])) + 1
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def __repr__(self): return f'{self.__class__.__name__}()'
t = TSTensor(torch.rand(2,3,4)) * 2 - 1
tfm = TSLog()
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
#export
class TSCyclicalPosition(Transform):
"""Concatenates the position along the sequence as 2 additional variables (sine and cosine)
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
sin, cos = sincos_encoding(seq_len, device=o.device)
output = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSCyclicalPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
#export
class TSLinearPosition(Transform):
"""Concatenates the position along the sequence as 1 additional variable
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
output = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSLinearPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 1
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
# export
class TSPosition(Transform):
"""Concatenates linear and/or cyclical positions along the sequence as additional variables"""
order = 90
def __init__(self, cyclical=True, linear=True, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
self.cyclical, self.linear = cyclical, linear
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
if self.linear:
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
o = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
if self.cyclical:
sin, cos = sincos_encoding(seq_len, device=o.device)
o = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return o
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSPosition(cyclical=True, linear=True)(t)
test_eq(enc_t.shape[1], 6)
plt.plot(enc_t[0, 3:].T);
#export
class TSMissingness(Transform):
"""Concatenates data missingness for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, **kwargs):
self.feature_idxs = listify(feature_idxs)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
missingness = o[:, self.feature_idxs].isnan()
else:
missingness = o.isnan()
return torch.cat([o, missingness], 1)
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSMissingness(feature_idxs=[0,2])(t)
test_eq(enc_t.shape[1], 5)
test_eq(enc_t[:, 3:], torch.isnan(t[:, [0,2]]).float())
#export
class TSPositionGaps(Transform):
"""Concatenates gaps for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, forward=True, backward=False, nearest=False, normalize=True, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.gap_fn = partial(get_gaps, forward=forward, backward=backward, nearest=nearest, normalize=normalize)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
gaps = self.gap_fn(o[:, self.feature_idxs])
else:
gaps = self.gap_fn(o)
return torch.cat([o, gaps], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSPositionGaps(feature_idxs=[0,2], forward=True, backward=True, nearest=True, normalize=False)(t)
test_eq(enc_t.shape[1], 9)
enc_t.data
#export
class TSLogReturn(Transform):
"Calculates log-return of batch of type `TSTensor`. For positive values only"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(torch.log(o), lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor([1,2,4,8,16,32,64,128,256]).float()
test_eq(TSLogReturn(pad=False)(t).std(), 0)
#export
class TSAdd(Transform):
"Add a defined amount to each batch of type `TSTensor`."
order = 90
def __init__(self, add):
self.add = add
def encodes(self, o:TSTensor):
return torch.add(o, self.add)
def __repr__(self): return f'{self.__class__.__name__}(add={self.add})'
t = TSTensor([1,2,3]).float()
test_eq(TSAdd(1)(t), TSTensor([2,3,4]).float())
###Output
_____no_output_____
###Markdown
sklearn API transforms
###Code
#export
from sklearn.base import BaseEstimator, TransformerMixin
from fastai.data.transforms import CategoryMap
from joblib import dump, load
class TSShrinkDataFrame(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, skip=[], obj2cat=True, int2uint=False, verbose=True):
self.columns, self.skip, self.obj2cat, self.int2uint, self.verbose = listify(columns), skip, obj2cat, int2uint, verbose
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
self.old_dtypes = X.dtypes
if not self.columns: self.columns = X.columns
self.dt = df_shrink_dtypes(X[self.columns], self.skip, obj2cat=self.obj2cat, int2uint=self.int2uint)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X[self.columns] = X[self.columns].astype(self.dt)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X = X.astype(self.old_dtypes)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
df = pd.DataFrame()
df["ints64"] = np.random.randint(0,3,10)
df['floats64'] = np.random.rand(10)
tfm = TSShrinkDataFrame()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df["ints64"].dtype, "int8")
test_eq(df["floats64"].dtype, "float32")
#export
class TSOneHotEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, drop=True, add_na=True, dtype=np.int64):
self.columns = listify(columns)
self.drop, self.add_na, self.dtype = drop, add_na, dtype
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
handle_unknown = "ignore" if self.add_na else "error"
self.ohe_tfm = sklearn.preprocessing.OneHotEncoder(handle_unknown=handle_unknown)
if len(self.columns) == 1:
self.ohe_tfm.fit(X[self.columns].to_numpy().reshape(-1, 1))
else:
self.ohe_tfm.fit(X[self.columns])
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if len(self.columns) == 1:
output = self.ohe_tfm.transform(X[self.columns].to_numpy().reshape(-1, 1)).toarray().astype(self.dtype)
else:
output = self.ohe_tfm.transform(X[self.columns]).toarray().astype(self.dtype)
new_cols = []
for i,col in enumerate(self.columns):
for cats in self.ohe_tfm.categories_[i]:
new_cols.append(f"{str(col)}_{str(cats)}")
X[new_cols] = output
if self.drop: X = X.drop(self.columns, axis=1)
return X
df = pd.DataFrame()
df["a"] = np.random.randint(0,2,10)
df["b"] = np.random.randint(0,3,10)
unique_cols = len(df["a"].unique()) + len(df["b"].unique())
tfm = TSOneHotEncoder()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df.shape[1], unique_cols)
#export
class TSCategoricalEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, add_na=True):
self.columns = listify(columns)
self.add_na = add_na
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.cat_tfms = []
for column in self.columns:
self.cat_tfms.append(CategoryMap(X[column], add_na=self.add_na))
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_objs(X[column])
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_ids(X[column])
return X
###Output
_____no_output_____
###Markdown
Stateful transforms like TSCategoricalEncoder can easily be serialized.
###Code
import joblib
df = pd.DataFrame()
df["a"] = alphabet[np.random.randint(0,2,100)]
df["b"] = ALPHABET[np.random.randint(0,3,100)]
a_unique = len(df["a"].unique())
b_unique = len(df["b"].unique())
tfm = TSCategoricalEncoder()
tfm.fit(df)
joblib.dump(tfm, "TSCategoricalEncoder.joblib")
tfm = joblib.load("TSCategoricalEncoder.joblib")
df = tfm.transform(df)
test_eq(df['a'].max(), a_unique)
test_eq(df['b'].max(), b_unique)
#export
default_date_attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
class TSDateTimeEncoder(BaseEstimator, TransformerMixin):
def __init__(self, datetime_columns=None, prefix=None, drop=True, time=False, attr=default_date_attr):
self.datetime_columns = listify(datetime_columns)
self.prefix, self.drop, self.time, self.attr = prefix, drop, time ,attr
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if self.time: self.attr = self.attr + ['Hour', 'Minute', 'Second']
if not self.datetime_columns:
self.datetime_columns = X.columns
self.prefixes = []
for dt_column in self.datetime_columns:
self.prefixes.append(re.sub('[Dd]ate$', '', dt_column) if self.prefix is None else self.prefix)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for dt_column,prefix in zip(self.datetime_columns,self.prefixes):
make_date(X, dt_column)
field = X[dt_column]
# `Series.dt.week` was deprecated in pandas v1.1.0, so use isocalendar().week when available
week = field.dt.isocalendar().week.astype(field.dt.day.dtype) if hasattr(field.dt, 'isocalendar') else field.dt.week
for n in self.attr: X[prefix + "_" + n] = getattr(field.dt, n.lower()) if n != 'Week' else week
if self.drop: X = X.drop(self.datetime_columns, axis=1)
return X
import datetime
df = pd.DataFrame()
df.loc[0, "date"] = datetime.datetime.now()
df.loc[1, "date"] = datetime.datetime.now() + pd.Timedelta(1, unit="D")
tfm = TSDateTimeEncoder()
joblib.dump(tfm, "TSDateTimeEncoder.joblib")
tfm = joblib.load("TSDateTimeEncoder.joblib")
tfm.fit_transform(df)
#export
class TSMissingnessEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None):
self.columns = listify(columns)
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.missing_columns = [f"{cn}_missing" for cn in self.columns]
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
X[self.missing_columns] = X[self.columns].isnull().astype(int)
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
X.drop(self.missing_columns, axis=1, inplace=True)
return X
data = np.random.rand(10,3)
data[data > .8] = np.nan
df = pd.DataFrame(data, columns=["a", "b", "c"])
tfm = TSMissingnessEncoder()
tfm.fit(df)
joblib.dump(tfm, "TSMissingnessEncoder.joblib")
tfm = joblib.load("TSMissingnessEncoder.joblib")
df = tfm.transform(df)
df
###Output
_____no_output_____
###Markdown
y transforms
###Code
# export
class Preprocessor():
def __init__(self, preprocessor, **kwargs):
self.preprocessor = preprocessor(**kwargs)
def fit(self, o):
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
self.fit_preprocessor = self.preprocessor.fit(o)
return self.fit_preprocessor
def transform(self, o, copy=True):
if type(o) in [float, int]: o = array([o]).reshape(-1,1)
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
def inverse_transform(self, o, copy=True):
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.inverse_transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
StandardScaler = partial(sklearn.preprocessing.StandardScaler)
setattr(StandardScaler, '__name__', 'StandardScaler')
RobustScaler = partial(sklearn.preprocessing.RobustScaler)
setattr(RobustScaler, '__name__', 'RobustScaler')
Normalizer = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(-1, 1))
setattr(Normalizer, '__name__', 'Normalizer')
BoxCox = partial(sklearn.preprocessing.PowerTransformer, method='box-cox')
setattr(BoxCox, '__name__', 'BoxCox')
YeoJohnshon = partial(sklearn.preprocessing.PowerTransformer, method='yeo-johnson')
setattr(YeoJohnshon, '__name__', 'YeoJohnshon')
Quantile = partial(sklearn.preprocessing.QuantileTransformer, n_quantiles=1_000, output_distribution='normal', random_state=0)
setattr(Quantile, '__name__', 'Quantile')
# Standardize
from tsai.data.validation import TimeSplitter
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(StandardScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# RobustScaler
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(RobustScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# Normalize
y = random_shuffle(np.random.rand(1000) * 3 + .5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Normalizer)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# BoxCox
y = random_shuffle(np.random.rand(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(BoxCox)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# YeoJohnshon
y = random_shuffle(np.random.randn(1000) * 10 + 5)
y = np.random.beta(.5, .5, size=1000)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(YeoJohnshon)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# QuantileTransformer
y = - np.random.beta(1, .5, 10000) * 10
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Quantile)
preprocessor.fit(y[splits[0]])
plt.hist(y, 50, label='ori',)
y_tfm = preprocessor.transform(y)
plt.legend(loc='best')
plt.show()
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
test_close(preprocessor.inverse_transform(y_tfm), y, 1e-1)
#export
def ReLabeler(cm):
r"""Changes the labels in a dataset based on a dictionary (class mapping)
Args:
cm = class mapping dictionary
"""
def _relabel(y):
obj = len(set([len(listify(v)) for v in cm.values()])) > 1
keys = cm.keys()
if obj:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y], dtype=object).reshape(*y.shape)
else:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y]).reshape(*y.shape)
return _relabel
vals = {0:'a', 1:'b', 2:'c', 3:'d', 4:'e'}
y = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
labeler = ReLabeler(dict(a='x', b='x', c='y', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y, y_new
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
Data preprocessing
> Functions used to preprocess time series (both X and y).
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.external import *
from tsai.data.core import *
from tsai.data.preparation import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
#export
class ToNumpyCategory(Transform):
"Categorize a numpy batch"
order = 90
def __init__(self, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: np.ndarray):
self.type = type(o)
self.cat = Categorize()
self.cat.setup(o)
self.vocab = self.cat.vocab
return np.asarray(stack([self.cat(oi) for oi in o]))
def decodes(self, o: np.ndarray):
return stack([self.cat.decode(oi) for oi in o])
def decodes(self, o: torch.Tensor):
return stack([self.cat.decode(oi) for oi in o])
t = ToNumpyCategory()
y_cat = t(y)
y_cat[:10]
test_eq(t.decode(tensor(y_cat)), y)
test_eq(t.decode(np.array(y_cat)), y)
#export
class OneHot(Transform):
"One-hot encode/ decode a batch"
order = 90
def __init__(self, n_classes=None, **kwargs):
self.n_classes = n_classes
super().__init__(**kwargs)
def encodes(self, o: torch.Tensor):
if not self.n_classes: self.n_classes = len(np.unique(o))
return torch.eye(self.n_classes)[o]
def encodes(self, o: np.ndarray):
o = ToNumpyCategory()(o)
if not self.n_classes: self.n_classes = len(np.unique(o))
return np.eye(self.n_classes)[o]
def decodes(self, o: torch.Tensor): return torch.argmax(o, dim=-1)
def decodes(self, o: np.ndarray): return np.argmax(o, axis=-1)
oh_encoder = OneHot()
y_cat = ToNumpyCategory()(y)
oht = oh_encoder(y_cat)
oht[:10]
n_classes = 10
n_samples = 100
t = torch.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oht = oh_encoder(t)
test_eq(oht.shape, (n_samples, n_classes))
test_eq(torch.argmax(oht, dim=-1), t)
test_eq(oh_encoder.decode(oht), t)
n_classes = 10
n_samples = 100
a = np.random.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oha = oh_encoder(a)
test_eq(oha.shape, (n_samples, n_classes))
test_eq(np.argmax(oha, axis=-1), a)
test_eq(oh_encoder.decode(oha), a)
#export
class TSNan2Value(Transform):
"Replaces any nan values by a predefined value or median"
order = 90
def __init__(self, value=0, median=False, by_sample_and_var=True, sel_vars=None):
store_attr()
if not ismin_torch("1.8"):
raise ValueError('This function only works with Pytorch>=1.8.')
def encodes(self, o:TSTensor):
if self.sel_vars is not None:
mask = torch.isnan(o[:, self.sel_vars])
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o[:, self.sel_vars], dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[:, self.sel_vars][mask] = median[mask]
else:
o[:, self.sel_vars] = torch.nan_to_num(o[:, self.sel_vars], torch.nanmedian(o[:, self.sel_vars]).item())  # nan_to_num expects a number, not a tensor
o[:, self.sel_vars] = torch.nan_to_num(o[:, self.sel_vars], self.value)
else:
mask = torch.isnan(o)
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[mask] = median[mask]
else:
o = torch.nan_to_num(o, torch.nanmedian(o).item())  # nan_to_num expects a number, not a tensor
o = torch.nan_to_num(o, self.value)
return o
Nan2Value = TSNan2Value
o = TSTensor(torch.randn(16, 10, 100))
o[0,0] = float('nan')
o[o > .9] = float('nan')
o[[0,1,5,8,14,15], :, -20:] = float('nan')
nan_vals1 = torch.isnan(o).sum()
o2 = Pipeline(TSNan2Value(), split_idx=0)(o.clone())
o3 = Pipeline(TSNan2Value(median=True, by_sample_and_var=True), split_idx=0)(o.clone())
o4 = Pipeline(TSNan2Value(median=True, by_sample_and_var=False), split_idx=0)(o.clone())
nan_vals2 = torch.isnan(o2).sum()
nan_vals3 = torch.isnan(o3).sum()
nan_vals4 = torch.isnan(o4).sum()
test_ne(nan_vals1, 0)
test_eq(nan_vals2, 0)
test_eq(nan_vals3, 0)
test_eq(nan_vals4, 0)
o = TSTensor(torch.randn(16, 10, 100))
o[o > .9] = float('nan')
o = TSNan2Value(median=True, sel_vars=[0,1,2,3,4])(o)
test_eq(torch.isnan(o[:, [0,1,2,3,4]]).sum().item(), 0)
# export
class TSStandardize(Transform):
"""Standardizes batch of type `TSTensor`
Args:
- mean: you can pass a precalculated mean value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch.
- std: you can pass a precalculated std value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch. If both mean and std values are passed when instantiating TSStandardize, the rest of arguments won't be used.
- by_sample: if True, it will calculate mean and std for each individual sample. Otherwise based on the entire batch.
- by_var:
* False: mean and std will be the same for all variables.
* True: a different mean and std will be used for each variable.
* a list of ints: (like [0,1,3]) a different mean and std will be set for each variable on the list. Variables not included in the list
won't be standardized.
* a list that contains a list/lists: (like [0, [1,3]]) a different mean and std will be set for each element of the list. If multiple elements are
included in a list, the same mean and std will be set for those variables in the sublist(s). (in the example a mean and std is determined for
variable 0, and another one for variables 1 & 3 - the same one). Variables not included in the list won't be standardized.
- by_step: if True, it will standardize values for each time step.
- eps: it avoids dividing by 0
- use_single_batch: if True a single training batch will be used to calculate mean & std. Else the entire training set will be used.
"""
parameters, order = L('mean', 'std'), 90
_setup = True # indicates it requires set up
def __init__(self, mean=None, std=None, by_sample=False, by_var=False, by_step=False, eps=1e-8, use_single_batch=True, verbose=False, **kwargs):
super().__init__(**kwargs)
self.mean = tensor(mean) if mean is not None else None
self.std = tensor(std) if std is not None else None
self._setup = (mean is None or std is None) and not by_sample
self.eps = eps
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.mean is not None or self.std is not None:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
@classmethod
def from_stats(cls, mean, std): return cls(mean, std)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
if len(self.mean.shape) == 0:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} mean shape={self.mean.shape}, std shape={self.std.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.mean, self.std = torch.zeros(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
return (o - self.mean) / self.std
def decodes(self, o:TSTensor):
if self.mean is None or self.std is None: return o
return o * self.std + self.mean
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
from tsai.data.validation import TimeSplitter
X_nan = np.random.rand(100, 5, 10)
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 0] = float('nan')
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 1, -10:] = float('nan')
batch_tfms = TSStandardize(by_var=True)
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
test_eq(torch.isnan(dls.after_batch[0].mean).sum(), 0)
test_eq(torch.isnan(dls.after_batch[0].std).sum(), 0)
xb = first(dls.train)[0]
test_ne(torch.isnan(xb).sum(), 0)
test_ne(torch.isnan(xb).sum(), torch.isnan(xb).numel())
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
xb = first(dls.train)[0]
test_eq(torch.isnan(xb).sum(), 0)
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=True)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True, by_var=False, verbose=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=False)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
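# Added sketch (not in the original notebook): `by_var` can also be a list, as described in the
# TSStandardize docstring above. Here variable 0 gets its own mean/std, variables 1 & 2 share one,
# and the remaining variables are left unstandardized. Shown only as an illustrative usage.
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_var=[0, [1, 2]], use_single_batch=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=64)
xb, yb = dls.train.one_batch()
xb[:, :3].mean(), xb[:, :3].std()  # expected to be close to 0 and 1 for the standardized variables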
#export
@patch
def mul_min(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.min(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
min_x = x
for ax in axes: min_x, _ = min_x.min(ax, keepdim)
return retain_type(min_x, x)
@patch
def mul_max(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.max(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
max_x = x
for ax in axes: max_x, _ = max_x.max(ax, keepdim)
return retain_type(max_x, x)
class TSNormalize(Transform):
"Normalizes batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, range=(-1, 1), by_sample=False, by_var=False, by_step=False, clip_values=True,
use_single_batch=True, verbose=False, **kwargs):
super().__init__(**kwargs)
self.min = tensor(min) if min is not None else None
self.max = tensor(max) if max is not None else None
self._setup = (self.min is None and self.max is None) and not by_sample
self.range_min, self.range_max = range
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.clip_values = clip_values
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.min is not None or self.max is not None:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, min, max, range_min=0, range_max=1): return cls(min, max, range=(range_min, range_max))
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
if len(self.min.shape) == 0:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} min shape={self.min.shape}, max shape={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.min, self.max = -torch.ones(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
output = ((o - self.min) / (self.max - self.min)) * (self.range_max - self.range_min) + self.range_min
if self.clip_values:
if self.by_var and is_listy(self.by_var):
for v in self.by_var:
if not is_listy(v): v = [v]
output[:, v] = torch.clamp(output[:, v], self.range_min, self.range_max)
else:
output = torch.clamp(output, self.range_min, self.range_max)
return output
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms = [TSNormalize()]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms=[TSNormalize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms = [TSNormalize(by_var=[0, [1, 2]], use_single_batch=False, clip_values=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb[:, [0, 1, 2]].max() <= 1
assert xb[:, [0, 1, 2]].min() >= -1
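# Added check (sketch, not in the original notebook): the mul_min/mul_max helpers patched above
# reduce over several axes at once while keeping the tensor type.
t = TSTensor(torch.randn(4, 3, 10))
test_eq(t.mul_min(axes=(0, 2), keepdim=True).shape, (1, 3, 1))
test_eq(t.mul_max(axes=(0, 2), keepdim=True).shape, (1, 3, 1))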
#export
class TSClipOutliers(Transform):
"Clip outliers batch of type `TSTensor` based on the IQR"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, by_sample=False, by_var=False, use_single_batch=False, verbose=False, **kwargs):
super().__init__(**kwargs)
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
self._setup = (min is None or max is None) and not by_sample
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
if min is not None or max is not None:
pv(f'{self.__class__.__name__} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
min, max = get_outliers_IQR(o, self.axis)
self.min, self.max = tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.axis is None: return torch.clamp(o, self.min, self.max)
elif self.by_sample:
min, max = get_outliers_IQR(o, axis=self.axis)
self.min, self.max = o.new(min), o.new(max)
return torch_clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
batch_tfms=[TSClipOutliers(-1, 1, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
# export
class TSClip(Transform):
"Clip batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=-6, max=6, **kwargs):
super().__init__(**kwargs)
self.min = torch.tensor(min)
self.max = torch.tensor(max)
def encodes(self, o:TSTensor):
return torch.clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
t = TSTensor(torch.randn(10, 20, 100)*10)
test_le(TSClip()(t).max().item(), 6)
test_ge(TSClip()(t).min().item(), -6)
#export
class TSSelfMissingness(Transform):
"Applies missingness from samples in a batch to random samples in the batch for selected variables"
order = 90
def __init__(self, sel_vars=None, **kwargs):
self.sel_vars = sel_vars
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
if self.sel_vars is not None:
mask = rotate_axis0(torch.isnan(o[:, self.sel_vars]))
o[:, self.sel_vars] = o[:, self.sel_vars].masked_fill(mask, np.nan)
else:
mask = rotate_axis0(torch.isnan(o))
o.masked_fill_(mask, np.nan)
return o
t = TSTensor(torch.randn(10, 20, 100))
t[t>.8] = np.nan
t2 = TSSelfMissingness()(t.clone())
t3 = TSSelfMissingness(sel_vars=[0,3,5,7])(t.clone())
assert (torch.isnan(t).sum() < torch.isnan(t2).sum()) and (torch.isnan(t2).sum() > torch.isnan(t3).sum())
#export
class TSRobustScale(Transform):
r"""This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range)"""
parameters, order = L('median', 'iqr'), 90
_setup = True # indicates it requires set up
def __init__(self, median=None, iqr=None, quantile_range=(25.0, 75.0), use_single_batch=True, eps=1e-8, verbose=False, **kwargs):
super().__init__(**kwargs)
self.median = tensor(median) if median is not None else None
self.iqr = tensor(iqr) if iqr is not None else None
self._setup = median is None or iqr is None
self.use_single_batch = use_single_batch
self.eps = eps
self.verbose = verbose
self.quantile_range = quantile_range
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
new_o = o.permute(1,0,2).flatten(1)
median = get_percentile(new_o, 50, axis=1)
iqrmin, iqrmax = get_outliers_IQR(new_o, axis=1, quantile_range=self.quantile_range)
self.median = median.unsqueeze(0)
self.iqr = torch.clamp_min((iqrmax - iqrmin).unsqueeze(0), self.eps)
pv(f'{self.__class__.__name__} median={self.median.shape} iqr={self.iqr.shape}', self.verbose)
self._setup = False
else:
if self.median is None: self.median = torch.zeros(1, device=dl.device)
if self.iqr is None: self.iqr = torch.ones(1, device=dl.device)
def encodes(self, o:TSTensor):
return (o - self.median) / self.iqr
def __repr__(self): return f'{self.__class__.__name__}(quantile_range={self.quantile_range}, use_single_batch={self.use_single_batch})'
batch_tfms = TSRobustScale(verbose=True, use_single_batch=False)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, batch_tfms=batch_tfms, num_workers=0)
xb, yb = next(iter(dls.train))
xb.min()
#export
class TSRandomStandardize(Transform):
r"""Transformation that applies a randomly chosen sample mean and sample standard deviation mean from a given
distribution to the training set in order to improve generalization."""
parameters, order = L('mean_dist', 'std_dist'), 90
def __init__(self, mean_dist, std_dist, sample_size=30, eps=1e-8, split_idx=0, **kwargs):
self.mean_dist, self.std_dist = torch.from_numpy(mean_dist), torch.from_numpy(std_dist)
self.size = len(self.mean_dist)
self.sample_size = sample_size
self.eps = eps
super().__init__(split_idx=split_idx, **kwargs)
def encodes(self, o:TSTensor):
rand_idxs = np.random.choice(self.size, (self.sample_size or 1) * o.shape[0])
mean = torch.stack(torch.split(self.mean_dist[rand_idxs], o.shape[0])).mean(0)
std = torch.clamp(torch.stack(torch.split(self.std_dist [rand_idxs], o.shape[0])).mean(0), self.eps)
return (o - mean) / std
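# Added sketch (assumption, not from the original notebook): TSRandomStandardize expects precomputed
# distributions of per-sample statistics as numpy arrays. Here they are built from the training split,
# and the transform resamples them for each training batch (split_idx=0 restricts it to the training set).
mean_dist = X[splits[0]].mean(axis=(1, 2), keepdims=True)  # shape (n_train, 1, 1)
std_dist = X[splits[0]].std(axis=(1, 2), keepdims=True)    # shape (n_train, 1, 1)
random_standardize = TSRandomStandardize(mean_dist, std_dist, sample_size=30)
t = TSTensor(torch.from_numpy(X[:16].copy()).float())
enc_t = random_standardize(t, split_idx=0)
test_eq(enc_t.shape, t.shape)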
#export
class TSDiff(Transform):
"Differences batch of type `TSTensor`"
order = 90
def __init__(self, lag=1, pad=True, **kwargs):
super().__init__(**kwargs)
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(o, lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor(torch.arange(24).reshape(2,3,4))
test_eq(TSDiff()(t)[..., 1:].float().mean(), 1)
test_eq(TSDiff(lag=2, pad=False)(t).float().mean(), 2)
#export
class TSLog(Transform):
"Log transforms batch of type `TSTensor` + 1. Accepts positive and negative numbers"
order = 90
def __init__(self, ex=None, **kwargs):
self.ex = ex
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.log1p(o[o > 0])
output[o < 0] = -torch.log1p(torch.abs(o[o < 0]))
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def decodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.exp(o[o > 0]) - 1
output[o < 0] = -torch.exp(torch.abs(o[o < 0])) + 1
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def __repr__(self): return f'{self.__class__.__name__}()'
t = TSTensor(torch.rand(2,3,4)) * 2 - 1
tfm = TSLog()
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
#export
class TSCyclicalPosition(Transform):
"""Concatenates the position along the sequence as 2 additional variables (sine and cosine)
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
sin, cos = sincos_encoding(seq_len, device=o.device)
output = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSCyclicalPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
#export
class TSLinearPosition(Transform):
"""Concatenates the position along the sequence as 1 additional variable
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
output = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSLinearPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 1
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
# export
class TSPosition(Transform):
"""Concatenates linear and/or cyclical positions along the sequence as additional variables"""
order = 90
def __init__(self, cyclical=True, linear=True, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
self.cyclical, self.linear = cyclical, linear
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
if self.linear:
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
o = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
if self.cyclical:
sin, cos = sincos_encoding(seq_len, device=o.device)
o = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return o
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSPosition(cyclical=True, linear=True)(t)
test_eq(enc_t.shape[1], 6)
plt.plot(enc_t[0, 3:].T);
#export
class TSMissingness(Transform):
"""Concatenates data missingness for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, **kwargs):
self.feature_idxs = listify(feature_idxs)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
missingness = o[:, self.feature_idxs].isnan()
else:
missingness = o.isnan()
return torch.cat([o, missingness], 1)
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSMissingness(feature_idxs=[0,2])(t)
test_eq(enc_t.shape[1], 5)
test_eq(enc_t[:, 3:], torch.isnan(t[:, [0,2]]).float())
#export
class TSPositionGaps(Transform):
"""Concatenates gaps for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, forward=True, backward=False, nearest=False, normalize=True, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.gap_fn = partial(get_gaps, forward=forward, backward=backward, nearest=nearest, normalize=normalize)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
gaps = self.gap_fn(o[:, self.feature_idxs])
else:
gaps = self.gap_fn(o)
return torch.cat([o, gaps], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSPositionGaps(feature_idxs=[0,2], forward=True, backward=True, nearest=True, normalize=False)(t)
test_eq(enc_t.shape[1], 9)
enc_t.data
#export
class TSRollingMean(Transform):
"""Calculates the rolling mean for all/ selected features alongside the sequence
It replaces the original values or adds additional variables (default)
If nan values are found, they will be filled forward and backward"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, window=2, replace=False, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.rolling_mean_fn = partial(rolling_moving_average, window=window)
self.replace = replace
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
if torch.isnan(o[:, self.feature_idxs]).any():
o[:, self.feature_idxs] = fbfill_sequence(o[:, self.feature_idxs])
rolling_mean = self.rolling_mean_fn(o[:, self.feature_idxs])
if self.replace:
o[:, self.feature_idxs] = rolling_mean
return o
else:
if torch.isnan(o).any():
o = fbfill_sequence(o)
rolling_mean = self.rolling_mean_fn(o)
if self.replace: return rolling_mean
return torch.cat([o, rolling_mean], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t > .6] = np.nan
print(t.data)
enc_t = TSRollingMean(feature_idxs=[0,2], window=3)(t)
test_eq(enc_t.shape[1], 5)
print(enc_t.data)
enc_t = TSRollingMean(window=3, replace=True)(t)
test_eq(enc_t.shape[1], 3)
print(enc_t.data)
#export
class TSLogReturn(Transform):
"Calculates log-return of batch of type `TSTensor`. For positive values only"
order = 90
def __init__(self, lag=1, pad=True, **kwargs):
super().__init__(**kwargs)
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(torch.log(o), lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor([1,2,4,8,16,32,64,128,256]).float()
test_eq(TSLogReturn(pad=False)(t).std(), 0)
#export
class TSAdd(Transform):
"Add a defined amount to each batch of type `TSTensor`."
order = 90
def __init__(self, add, **kwargs):
super().__init__(**kwargs)
self.add = add
def encodes(self, o:TSTensor):
return torch.add(o, self.add)
def __repr__(self): return f'{self.__class__.__name__}(add={self.add})'
t = TSTensor([1,2,3]).float()
test_eq(TSAdd(1)(t), TSTensor([2,3,4]).float())
#export
class TSClipByVar(Transform):
"""Clip batch of type `TSTensor` by variable
Args:
var_min_max: list of tuples containing variable index, min value (or None) and max value (or None)
"""
order = 90
def __init__(self, var_min_max, **kwargs):
super().__init__(**kwargs)
self.var_min_max = var_min_max
def encodes(self, o:TSTensor):
for v,m,M in self.var_min_max:
o[:, v] = torch.clamp(o[:, v], m, M)
return o
t = TSTensor(torch.rand(16, 3, 10) * tensor([1,10,100]).reshape(1,-1,1))
max_values = t.max(0).values.max(-1).values.data
max_values2 = TSClipByVar([(1,None,5), (2,10,50)])(t).max(0).values.max(-1).values.data
test_le(max_values2[1], 5)
test_ge(max_values2[2], 10)
test_le(max_values2[2], 50)
###Output
_____no_output_____
###Markdown
sklearn API transforms
###Code
#export
from sklearn.base import BaseEstimator, TransformerMixin
from fastai.data.transforms import CategoryMap
from joblib import dump, load
class TSShrinkDataFrame(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, skip=[], obj2cat=True, int2uint=False, verbose=True):
self.columns, self.skip, self.obj2cat, self.int2uint, self.verbose = listify(columns), skip, obj2cat, int2uint, verbose
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
self.old_dtypes = X.dtypes
if not self.columns: self.columns = X.columns
self.dt = df_shrink_dtypes(X[self.columns], self.skip, obj2cat=self.obj2cat, int2uint=self.int2uint)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X[self.columns] = X[self.columns].astype(self.dt)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X = X.astype(self.old_dtypes)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
df = pd.DataFrame()
df["ints64"] = np.random.randint(0,3,10)
df['floats64'] = np.random.rand(10)
tfm = TSShrinkDataFrame()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df["ints64"].dtype, "int8")
test_eq(df["floats64"].dtype, "float32")
#export
class TSOneHotEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, drop=True, add_na=True, dtype=np.int64):
self.columns = listify(columns)
self.drop, self.add_na, self.dtype = drop, add_na, dtype
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
handle_unknown = "ignore" if self.add_na else "error"
self.ohe_tfm = sklearn.preprocessing.OneHotEncoder(handle_unknown=handle_unknown)
if len(self.columns) == 1:
self.ohe_tfm.fit(X[self.columns].to_numpy().reshape(-1, 1))
else:
self.ohe_tfm.fit(X[self.columns])
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if len(self.columns) == 1:
output = self.ohe_tfm.transform(X[self.columns].to_numpy().reshape(-1, 1)).toarray().astype(self.dtype)
else:
output = self.ohe_tfm.transform(X[self.columns]).toarray().astype(self.dtype)
new_cols = []
for i,col in enumerate(self.columns):
for cats in self.ohe_tfm.categories_[i]:
new_cols.append(f"{str(col)}_{str(cats)}")
X[new_cols] = output
if self.drop: X = X.drop(self.columns, axis=1)
return X
df = pd.DataFrame()
df["a"] = np.random.randint(0,2,10)
df["b"] = np.random.randint(0,3,10)
unique_cols = len(df["a"].unique()) + len(df["b"].unique())
tfm = TSOneHotEncoder()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df.shape[1], unique_cols)
#export
class TSCategoricalEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, add_na=True):
self.columns = listify(columns)
self.add_na = add_na
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.cat_tfms = []
for column in self.columns:
self.cat_tfms.append(CategoryMap(X[column], add_na=self.add_na))
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_objs(X[column])
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_ids(X[column])
return X
###Output
_____no_output_____
###Markdown
Stateful transforms like TSCategoricalEncoder can easily be serialized.
###Code
import joblib
df = pd.DataFrame()
df["a"] = alphabet[np.random.randint(0,2,100)]
df["b"] = ALPHABET[np.random.randint(0,3,100)]
a_unique = len(df["a"].unique())
b_unique = len(df["b"].unique())
tfm = TSCategoricalEncoder()
tfm.fit(df)
joblib.dump(tfm, "data/TSCategoricalEncoder.joblib")
tfm = joblib.load("data/TSCategoricalEncoder.joblib")
df = tfm.transform(df)
test_eq(df['a'].max(), a_unique)
test_eq(df['b'].max(), b_unique)
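# Added sketch (not in the original notebook): the fitted encoder also supports inverse_transform,
# which maps the integer ids back to the original labels via the stored CategoryMap vocabularies.
df_dec = tfm.inverse_transform(df.copy())
df_dec.head()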
#export
default_date_attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
class TSDateTimeEncoder(BaseEstimator, TransformerMixin):
def __init__(self, datetime_columns=None, prefix=None, drop=True, time=False, attr=default_date_attr):
self.datetime_columns = listify(datetime_columns)
self.prefix, self.drop, self.time, self.attr = prefix, drop, time ,attr
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if self.time: self.attr = self.attr + ['Hour', 'Minute', 'Second']
if not self.datetime_columns:
self.datetime_columns = X.columns
self.prefixes = []
for dt_column in self.datetime_columns:
self.prefixes.append(re.sub('[Dd]ate$', '', dt_column) if self.prefix is None else self.prefix)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for dt_column,prefix in zip(self.datetime_columns,self.prefixes):
make_date(X, dt_column)
field = X[dt_column]
# `Series.dt.week` was deprecated in pandas v1.1.0, so use isocalendar().week when available
week = field.dt.isocalendar().week.astype(field.dt.day.dtype) if hasattr(field.dt, 'isocalendar') else field.dt.week
for n in self.attr: X[prefix + "_" + n] = getattr(field.dt, n.lower()) if n != 'Week' else week
if self.drop: X = X.drop(self.datetime_columns, axis=1)
return X
import datetime
df = pd.DataFrame()
df.loc[0, "date"] = datetime.datetime.now()
df.loc[1, "date"] = datetime.datetime.now() + pd.Timedelta(1, unit="D")
tfm = TSDateTimeEncoder()
joblib.dump(tfm, "data/TSDateTimeEncoder.joblib")
tfm = joblib.load("data/TSDateTimeEncoder.joblib")
tfm.fit_transform(df)
#export
class TSMissingnessEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None):
self.columns = listify(columns)
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.missing_columns = [f"{cn}_missing" for cn in self.columns]
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
X[self.missing_columns] = X[self.columns].isnull().astype(int)
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
X.drop(self.missing_columns, axis=1, inplace=True)
return X
data = np.random.rand(10,3)
data[data > .8] = np.nan
df = pd.DataFrame(data, columns=["a", "b", "c"])
tfm = TSMissingnessEncoder()
tfm.fit(df)
joblib.dump(tfm, "data/TSMissingnessEncoder.joblib")
tfm = joblib.load("data/TSMissingnessEncoder.joblib")
df = tfm.transform(df)
df
###Output
_____no_output_____
###Markdown
y transforms
###Code
# export
class Preprocessor():
def __init__(self, preprocessor, **kwargs):
self.preprocessor = preprocessor(**kwargs)
def fit(self, o):
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
self.fit_preprocessor = self.preprocessor.fit(o)
return self.fit_preprocessor
def transform(self, o, copy=True):
if type(o) in [float, int]: o = array([o]).reshape(-1,1)
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
def inverse_transform(self, o, copy=True):
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.inverse_transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
StandardScaler = partial(sklearn.preprocessing.StandardScaler)
setattr(StandardScaler, '__name__', 'StandardScaler')
RobustScaler = partial(sklearn.preprocessing.RobustScaler)
setattr(RobustScaler, '__name__', 'RobustScaler')
Normalizer = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(-1, 1))
setattr(Normalizer, '__name__', 'Normalizer')
BoxCox = partial(sklearn.preprocessing.PowerTransformer, method='box-cox')
setattr(BoxCox, '__name__', 'BoxCox')
YeoJohnshon = partial(sklearn.preprocessing.PowerTransformer, method='yeo-johnson')
setattr(YeoJohnshon, '__name__', 'YeoJohnshon')
Quantile = partial(sklearn.preprocessing.QuantileTransformer, n_quantiles=1_000, output_distribution='normal', random_state=0)
setattr(Quantile, '__name__', 'Quantile')
# Standardize
from tsai.data.validation import TimeSplitter
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(StandardScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# RobustScaler
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(RobustScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# Normalize
y = random_shuffle(np.random.rand(1000) * 3 + .5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Normalizer)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# BoxCox
y = random_shuffle(np.random.rand(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(BoxCox)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# YeoJohnshon
y = np.random.beta(.5, .5, size=1000)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(YeoJohnshon)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# QuantileTransformer
y = - np.random.beta(1, .5, 10000) * 10
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Quantile)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
plt.hist(y, 50, label='ori')
plt.legend(loc='best')
plt.show()
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
test_close(preprocessor.inverse_transform(y_tfm), y, 1e-1)
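# Sketch: a fitted `Preprocessor` only wraps a fitted sklearn transformer, so it can be
# serialized with joblib just like the sklearn-API transforms. The file name below is an
# example, not part of the library.
joblib.dump(preprocessor, "data/Preprocessor.joblib")
preprocessor2 = joblib.load("data/Preprocessor.joblib")
test_close(preprocessor2.inverse_transform(y_tfm), y, 1e-1)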
#export
def ReLabeler(cm):
r"""Changes the labels in a dataset based on a dictionary (class mapping)
Args:
cm = class mapping dictionary
"""
def _relabel(y):
obj = len(set([len(listify(v)) for v in cm.values()])) > 1
keys = cm.keys()
if obj:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y], dtype=object).reshape(*y.shape)
else:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y]).reshape(*y.shape)
return _relabel
vals = {0:'a', 1:'b', 2:'c', 3:'d', 4:'e'}
y = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
labeler = ReLabeler(dict(a='x', b='x', c='y', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y, y_new
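# ReLabeler also handles one-to-many mappings (values that are lists); the result then
# becomes an object array. A minimal sketch:
multi_labeler = ReLabeler({'a': ['x', 'y'], 'b': 'x', 'c': 'y', 'd': 'z', 'e': 'z'})
y_multi = multi_labeler(y)
test_eq(y.shape, y_multi.shape)
y[:5], y_multi[:5]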
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
Data preprocessing> Functions used to preprocess time series (both X and y).
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.external import *
from tsai.data.core import *
from tsai.data.preparation import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
#export
class ToNumpyCategory(Transform):
"Categorize a numpy batch"
order = 90
def __init__(self, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: np.ndarray):
self.type = type(o)
self.cat = Categorize()
self.cat.setup(o)
self.vocab = self.cat.vocab
return np.asarray(stack([self.cat(oi) for oi in o]))
def decodes(self, o: (np.ndarray, torch.Tensor)):
return stack([self.cat.decode(oi) for oi in o])
t = ToNumpyCategory()
y_cat = t(y)
y_cat[:10]
test_eq(t.decode(tensor(y_cat)), y)
test_eq(t.decode(np.array(y_cat)), y)
#export
class OneHot(Transform):
"One-hot encode/ decode a batch"
order = 90
def __init__(self, n_classes=None, **kwargs):
self.n_classes = n_classes
super().__init__(**kwargs)
def encodes(self, o: torch.Tensor):
if not self.n_classes: self.n_classes = len(np.unique(o))
return torch.eye(self.n_classes)[o]
def encodes(self, o: np.ndarray):
o = ToNumpyCategory()(o)
if not self.n_classes: self.n_classes = len(np.unique(o))
return np.eye(self.n_classes)[o]
def decodes(self, o: torch.Tensor): return torch.argmax(o, dim=-1)
def decodes(self, o: np.ndarray): return np.argmax(o, axis=-1)
oh_encoder = OneHot()
y_cat = ToNumpyCategory()(y)
oht = oh_encoder(y_cat)
oht[:10]
n_classes = 10
n_samples = 100
t = torch.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oht = oh_encoder(t)
test_eq(oht.shape, (n_samples, n_classes))
test_eq(torch.argmax(oht, dim=-1), t)
test_eq(oh_encoder.decode(oht), t)
n_classes = 10
n_samples = 100
a = np.random.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oha = oh_encoder(a)
test_eq(oha.shape, (n_samples, n_classes))
test_eq(np.argmax(oha, axis=-1), a)
test_eq(oh_encoder.decode(oha), a)
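# Sketch: when a batch may not contain every class, `n_classes` can be passed explicitly
# so the one-hot width stays fixed.
a = np.array([0, 1, 1, 0])
oh_encoder = OneHot(n_classes=5)
oha = oh_encoder(a)
test_eq(oha.shape, (4, 5))
test_eq(np.argmax(oha, axis=-1), a)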
#export
class TSNan2Value(Transform):
"Replaces any nan values by a predefined value or median"
order = 90
def __init__(self, value=0, median=False, by_sample_and_var=True, sel_vars=None):
store_attr()
if not ismin_torch("1.8"):
raise ValueError('This function only works with Pytorch>=1.8.')
def encodes(self, o:TSTensor):
if self.sel_vars is not None:
mask = torch.isnan(o[:, self.sel_vars])
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o[:, self.sel_vars], dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[:, self.sel_vars][mask] = median[mask]
else:
o[:, self.sel_vars] = torch.nan_to_num(o[:, self.sel_vars], torch.nanmedian(o[:, self.sel_vars]))
o[:, self.sel_vars] = torch.nan_to_num(o[:, self.sel_vars], self.value)
else:
mask = torch.isnan(o)
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[mask] = median[mask]
else:
o = torch.nan_to_num(o, torch.nanmedian(o))
o = torch.nan_to_num(o, self.value)
return o
Nan2Value = TSNan2Value
o = TSTensor(torch.randn(16, 10, 100))
o[0,0] = float('nan')
o[o > .9] = float('nan')
o[[0,1,5,8,14,15], :, -20:] = float('nan')
nan_vals1 = torch.isnan(o).sum()
o2 = Pipeline(TSNan2Value(), split_idx=0)(o.clone())
o3 = Pipeline(TSNan2Value(median=True, by_sample_and_var=True), split_idx=0)(o.clone())
o4 = Pipeline(TSNan2Value(median=True, by_sample_and_var=False), split_idx=0)(o.clone())
nan_vals2 = torch.isnan(o2).sum()
nan_vals3 = torch.isnan(o3).sum()
nan_vals4 = torch.isnan(o4).sum()
test_ne(nan_vals1, 0)
test_eq(nan_vals2, 0)
test_eq(nan_vals3, 0)
test_eq(nan_vals4, 0)
o = TSTensor(torch.randn(16, 10, 100))
o[o > .9] = float('nan')
o = TSNan2Value(median=True, sel_vars=[0,1,2,3,4])(o)
test_eq(torch.isnan(o[:, [0,1,2,3,4]]).sum().item(), 0)
# export
class TSStandardize(Transform):
"""Standardizes batch of type `TSTensor`
Args:
- mean: you can pass a precalculated mean value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch.
- std: you can pass a precalculated std value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch. If both mean and std values are passed when instantiating TSStandardize, the rest of arguments won't be used.
- by_sample: if True, it will calculate mean and std for each individual sample. Otherwise based on the entire batch.
- by_var:
* False: mean and std will be the same for all variables.
            * True: a different mean and std will be used for each variable.
* a list of ints: (like [0,1,3]) a different mean and std will be set for each variable on the list. Variables not included in the list
won't be standardized.
* a list that contains a list/lists: (like[0, [1,3]]) a different mean and std will be set for each element of the list. If multiple elements are
included in a list, the same mean and std will be set for those variable in the sublist/s. (in the example a mean and std is determined for
variable 0, and another one for variables 1 & 3 - the same one). Variables not included in the list won't be standardized.
        - by_step: if True, mean and std will be calculated for each time step. Otherwise they will be shared across all time steps.
- eps: it avoids dividing by 0
- use_single_batch: if True a single training batch will be used to calculate mean & std. Else the entire training set will be used.
"""
parameters, order = L('mean', 'std'), 90
_setup = True # indicates it requires set up
def __init__(self, mean=None, std=None, by_sample=False, by_var=False, by_step=False, eps=1e-8, use_single_batch=True, verbose=False):
self.mean = tensor(mean) if mean is not None else None
self.std = tensor(std) if std is not None else None
self._setup = (mean is None or std is None) and not by_sample
self.eps = eps
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.mean is not None or self.std is not None:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, mean, std): return cls(mean, std)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
if len(self.mean.shape) == 0:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} mean shape={self.mean.shape}, std shape={self.std.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.mean, self.std = torch.zeros(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
return (o - self.mean) / self.std
def decodes(self, o:TSTensor):
if self.mean is None or self.std is None: return o
return o * self.std + self.mean
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
from tsai.data.validation import TimeSplitter
X_nan = np.random.rand(100, 5, 10)
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 0] = float('nan')
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 1, -10:] = float('nan')
batch_tfms = TSStandardize(by_var=True)
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
test_eq(torch.isnan(dls.after_batch[0].mean).sum(), 0)
test_eq(torch.isnan(dls.after_batch[0].std).sum(), 0)
xb = first(dls.train)[0]
test_ne(torch.isnan(xb).sum(), 0)
test_ne(torch.isnan(xb).sum(), torch.isnan(xb).numel())
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
xb = first(dls.train)[0]
test_eq(torch.isnan(xb).sum(), 0)
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=True)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True, by_var=False, verbose=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=False)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
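# Sketch of the list form of `by_var` described in the docstring: variable 0 gets its own
# statistics, variables 1 & 2 share statistics, and the remaining variables are passed
# through unchanged (their mean stays 0 and their std stays 1).
std_tfm = TSStandardize(by_var=[0, [1, 2]], use_single_batch=False, verbose=False)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=[std_tfm])
xb, yb = next(iter(dls.train))
test_eq(std_tfm.mean[:, 3:].abs().max().item(), 0)
test_eq(std_tfm.std[:, 3:].min().item(), 1)
xb[:, :3].mean(), xb[:, :3].std()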
#export
@patch
def mul_min(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.min(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
min_x = x
for ax in axes: min_x, _ = min_x.min(ax, keepdim)
return retain_type(min_x, x)
@patch
def mul_max(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.max(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
max_x = x
for ax in axes: max_x, _ = max_x.max(ax, keepdim)
return retain_type(max_x, x)
class TSNormalize(Transform):
"Normalizes batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, range=(-1, 1), by_sample=False, by_var=False, by_step=False, clip_values=True,
use_single_batch=True, verbose=False):
self.min = tensor(min) if min is not None else None
self.max = tensor(max) if max is not None else None
self._setup = (self.min is None and self.max is None) and not by_sample
self.range_min, self.range_max = range
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.clip_values = clip_values
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.min is not None or self.max is not None:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
    def from_stats(cls, min, max, range_min=0, range_max=1): return cls(min, max, range=(range_min, range_max))
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
if len(self.min.shape) == 0:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} min shape={self.min.shape}, max shape={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.min, self.max = -torch.ones(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
                _max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
output = ((o - self.min) / (self.max - self.min)) * (self.range_max - self.range_min) + self.range_min
if self.clip_values:
if self.by_var and is_listy(self.by_var):
for v in self.by_var:
if not is_listy(v): v = [v]
output[:, v] = torch.clamp(output[:, v], self.range_min, self.range_max)
else:
output = torch.clamp(output, self.range_min, self.range_max)
return output
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms = [TSNormalize()]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms=[TSNormalize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms = [TSNormalize(by_var=[0, [1, 2]], use_single_batch=False, clip_values=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb[:, [0, 1, 2]].max() <= 1
assert xb[:, [0, 1, 2]].min() >= -1
#export
class TSClipOutliers(Transform):
"Clip outliers batch of type `TSTensor` based on the IQR"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, by_sample=False, by_var=False, use_single_batch=False, verbose=False):
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
self._setup = (min is None or max is None) and not by_sample
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
if min is not None or max is not None:
pv(f'{self.__class__.__name__} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
min, max = get_outliers_IQR(o, self.axis)
self.min, self.max = tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.axis is None: return torch.clamp(o, self.min, self.max)
elif self.by_sample:
min, max = get_outliers_IQR(o, axis=self.axis)
self.min, self.max = o.new(min), o.new(max)
return torch_clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
batch_tfms=[TSClipOutliers(-1, 1, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
# export
class TSClip(Transform):
"Clip batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=-6, max=6):
self.min = torch.tensor(min)
self.max = torch.tensor(max)
def encodes(self, o:TSTensor):
return torch.clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
t = TSTensor(torch.randn(10, 20, 100)*10)
test_le(TSClip()(t).max().item(), 6)
test_ge(TSClip()(t).min().item(), -6)
#export
class TSRobustScale(Transform):
r"""This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range)"""
parameters, order = L('median', 'iqr'), 90
_setup = True # indicates it requires set up
def __init__(self, median=None, iqr=None, quantile_range=(25.0, 75.0), use_single_batch=True, eps=1e-8, verbose=False):
self.median = tensor(median) if median is not None else None
self.iqr = tensor(iqr) if iqr is not None else None
self._setup = median is None or iqr is None
self.use_single_batch = use_single_batch
self.eps = eps
self.verbose = verbose
self.quantile_range = quantile_range
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
new_o = o.permute(1,0,2).flatten(1)
median = get_percentile(new_o, 50, axis=1)
iqrmin, iqrmax = get_outliers_IQR(new_o, axis=1, quantile_range=self.quantile_range)
self.median = median.unsqueeze(0)
self.iqr = torch.clamp_min((iqrmax - iqrmin).unsqueeze(0), self.eps)
pv(f'{self.__class__.__name__} median={self.median.shape} iqr={self.iqr.shape}', self.verbose)
self._setup = False
else:
if self.median is None: self.median = torch.zeros(1, device=dl.device)
if self.iqr is None: self.iqr = torch.ones(1, device=dl.device)
def encodes(self, o:TSTensor):
return (o - self.median) / self.iqr
def __repr__(self): return f'{self.__class__.__name__}(quantile_range={self.quantile_range}, use_single_batch={self.use_single_batch})'
batch_tfms = TSRobustScale(verbose=True, use_single_batch=False)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, batch_tfms=batch_tfms, num_workers=0)
xb, yb = next(iter(dls.train))
xb.min()
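# Loose sanity check (sketch): after robust scaling each variable has had its training
# median subtracted, so the pooled median of a large training batch should sit close to 0.
test_close(xb.median().item(), 0, eps=1e-1)
xb.median(), xb.min(), xb.max()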
#export
class TSDiff(Transform):
"Differences batch of type `TSTensor`"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(o, lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor(torch.arange(24).reshape(2,3,4))
test_eq(TSDiff()(t)[..., 1:].float().mean(), 1)
test_eq(TSDiff(lag=2, pad=False)(t).float().mean(), 2)
#export
class TSLog(Transform):
"Log transforms batch of type `TSTensor` + 1. Accepts positive and negative numbers"
order = 90
def __init__(self, ex=None, **kwargs):
self.ex = ex
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.log1p(o[o > 0])
output[o < 0] = -torch.log1p(torch.abs(o[o < 0]))
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def decodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.exp(o[o > 0]) - 1
output[o < 0] = -torch.exp(torch.abs(o[o < 0])) + 1
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def __repr__(self): return f'{self.__class__.__name__}()'
t = TSTensor(torch.rand(2,3,4)) * 2 - 1
tfm = TSLog()
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
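# Sketch: TSLog preserves the sign of each value and maps 0 to 0.
t = TSTensor(torch.tensor([[[-3., -1., 0., 1., 3.]]]))
enc_t = tfm(t)
test_eq((enc_t.sign() == t.sign()).all().item(), True)
test_eq(enc_t[0, 0, 2].item(), 0)
test_close(tfm.decodes(enc_t).data, t.data)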
#export
class TSCyclicalPosition(Transform):
"""Concatenates the position along the sequence as 2 additional variables (sine and cosine)
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
sin, cos = sincos_encoding(seq_len, device=o.device)
output = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSCyclicalPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
#export
class TSLinearPosition(Transform):
"""Concatenates the position along the sequence as 1 additional variable
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
output = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSLinearPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 1
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
# export
class TSPosition(Transform):
"""Concatenates linear and/or cyclical positions along the sequence as additional variables"""
order = 90
def __init__(self, cyclical=True, linear=True, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
self.cyclical, self.linear = cyclical, linear
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
if self.linear:
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
o = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
if self.cyclical:
sin, cos = sincos_encoding(seq_len, device=o.device)
o = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return o
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSPosition(cyclical=True, linear=True)(t)
test_eq(enc_t.shape[1], 6)
plt.plot(enc_t[0, 3:].T);
#export
class TSMissingness(Transform):
"""Concatenates data missingness for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, **kwargs):
self.feature_idxs = listify(feature_idxs)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
missingness = o[:, self.feature_idxs].isnan()
else:
missingness = o.isnan()
return torch.cat([o, missingness], 1)
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSMissingness(feature_idxs=[0,2])(t)
test_eq(enc_t.shape[1], 5)
test_eq(enc_t[:, 3:], torch.isnan(t[:, [0,2]]).float())
#export
class TSPositionGaps(Transform):
"""Concatenates gaps for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, forward=True, backward=False, nearest=False, normalize=True, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.gap_fn = partial(get_gaps, forward=forward, backward=backward, nearest=nearest, normalize=normalize)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
gaps = self.gap_fn(o[:, self.feature_idxs])
else:
gaps = self.gap_fn(o)
return torch.cat([o, gaps], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSPositionGaps(feature_idxs=[0,2], forward=True, backward=True, nearest=True, normalize=False)(t)
test_eq(enc_t.shape[1], 9)
enc_t.data
#export
class TSRollingMean(Transform):
"""Calculates the rolling mean for all/ selected features alongside the sequence
It replaces the original values or adds additional variables (default)
If nan values are found, they will be filled forward and backward"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, window=2, replace=False, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.rolling_mean_fn = partial(rolling_moving_average, window=window)
self.replace = replace
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
if torch.isnan(o[:, self.feature_idxs]).any():
o[:, self.feature_idxs] = fbfill_sequence(o[:, self.feature_idxs])
rolling_mean = self.rolling_mean_fn(o[:, self.feature_idxs])
if self.replace:
o[:, self.feature_idxs] = rolling_mean
return o
else:
if torch.isnan(o).any():
o = fbfill_sequence(o)
rolling_mean = self.rolling_mean_fn(o)
if self.replace: return rolling_mean
return torch.cat([o, rolling_mean], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t > .6] = np.nan
print(t.data)
enc_t = TSRollingMean(feature_idxs=[0,2], window=3)(t)
test_eq(enc_t.shape[1], 5)
print(enc_t.data)
enc_t = TSRollingMean(window=3, replace=True)(t)
test_eq(enc_t.shape[1], 3)
print(enc_t.data)
#export
class TSLogReturn(Transform):
"Calculates log-return of batch of type `TSTensor`. For positive values only"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(torch.log(o), lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor([1,2,4,8,16,32,64,128,256]).float()
test_eq(TSLogReturn(pad=False)(t).std(), 0)
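# Sketch (assuming torch_diff pads the output to keep the original length, as its name
# suggests): with the default pad=True the output shape matches the input shape.
test_eq(TSLogReturn()(t).shape, t.shape)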
#export
class TSAdd(Transform):
"Add a defined amount to each batch of type `TSTensor`."
order = 90
def __init__(self, add):
self.add = add
def encodes(self, o:TSTensor):
return torch.add(o, self.add)
    def __repr__(self): return f'{self.__class__.__name__}(add={self.add})'
t = TSTensor([1,2,3]).float()
test_eq(TSAdd(1)(t), TSTensor([2,3,4]).float())
#export
class TSClipByVar(Transform):
"""Clip batch of type `TSTensor` by variable
Args:
var_min_max: list of tuples containing variable index, min value (or None) and max value (or None)
"""
order = 90
def __init__(self, var_min_max):
self.var_min_max = var_min_max
def encodes(self, o:TSTensor):
for v,m,M in self.var_min_max:
o[:, v] = torch.clamp(o[:, v], m, M)
return o
t = TSTensor(torch.rand(16, 3, 10) * tensor([1,10,100]).reshape(1,-1,1))
max_values = t.max(0).values.max(-1).values.data
max_values2 = TSClipByVar([(1,None,5), (2,10,50)])(t).max(0).values.max(-1).values.data
test_le(max_values2[1], 5)
test_ge(max_values2[2], 10)
test_le(max_values2[2], 50)
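# Sketch: variables not listed in `var_min_max` are left untouched by the clipping.
test_eq(max_values2[0], max_values[0])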
###Output
_____no_output_____
###Markdown
sklearn API transforms
###Code
#export
from sklearn.base import BaseEstimator, TransformerMixin
from fastai.data.transforms import CategoryMap
from joblib import dump, load
class TSShrinkDataFrame(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, skip=[], obj2cat=True, int2uint=False, verbose=True):
self.columns, self.skip, self.obj2cat, self.int2uint, self.verbose = listify(columns), skip, obj2cat, int2uint, verbose
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
self.old_dtypes = X.dtypes
if not self.columns: self.columns = X.columns
self.dt = df_shrink_dtypes(X[self.columns], self.skip, obj2cat=self.obj2cat, int2uint=self.int2uint)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X[self.columns] = X[self.columns].astype(self.dt)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X = X.astype(self.old_dtypes)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
df = pd.DataFrame()
df["ints64"] = np.random.randint(0,3,10)
df['floats64'] = np.random.rand(10)
tfm = TSShrinkDataFrame()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df["ints64"].dtype, "int8")
test_eq(df["floats64"].dtype, "float32")
#export
class TSOneHotEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, drop=True, add_na=True, dtype=np.int64):
self.columns = listify(columns)
self.drop, self.add_na, self.dtype = drop, add_na, dtype
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
handle_unknown = "ignore" if self.add_na else "error"
self.ohe_tfm = sklearn.preprocessing.OneHotEncoder(handle_unknown=handle_unknown)
if len(self.columns) == 1:
self.ohe_tfm.fit(X[self.columns].to_numpy().reshape(-1, 1))
else:
self.ohe_tfm.fit(X[self.columns])
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if len(self.columns) == 1:
output = self.ohe_tfm.transform(X[self.columns].to_numpy().reshape(-1, 1)).toarray().astype(self.dtype)
else:
output = self.ohe_tfm.transform(X[self.columns]).toarray().astype(self.dtype)
new_cols = []
for i,col in enumerate(self.columns):
for cats in self.ohe_tfm.categories_[i]:
new_cols.append(f"{str(col)}_{str(cats)}")
X[new_cols] = output
if self.drop: X = X.drop(self.columns, axis=1)
return X
df = pd.DataFrame()
df["a"] = np.random.randint(0,2,10)
df["b"] = np.random.randint(0,3,10)
unique_cols = len(df["a"].unique()) + len(df["b"].unique())
tfm = TSOneHotEncoder()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df.shape[1], unique_cols)
#export
class TSCategoricalEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, add_na=True):
self.columns = listify(columns)
self.add_na = add_na
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.cat_tfms = []
for column in self.columns:
self.cat_tfms.append(CategoryMap(X[column], add_na=self.add_na))
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_objs(X[column])
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_ids(X[column])
return X
###Output
_____no_output_____
###Markdown
Stateful transforms like TSCategoricalEncoder can easily be serialized.
###Code
import joblib
df = pd.DataFrame()
df["a"] = alphabet[np.random.randint(0,2,100)]
df["b"] = ALPHABET[np.random.randint(0,3,100)]
a_unique = len(df["a"].unique())
b_unique = len(df["b"].unique())
tfm = TSCategoricalEncoder()
tfm.fit(df)
joblib.dump(tfm, "data/TSCategoricalEncoder.joblib")
tfm = joblib.load("data/TSCategoricalEncoder.joblib")
df = tfm.transform(df)
test_eq(df['a'].max(), a_unique)
test_eq(df['b'].max(), b_unique)
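# Sketch: the integer codes can be mapped back to the original categories with
# `inverse_transform`.
df = tfm.inverse_transform(df)
test_eq(df["a"].isin(alphabet).all(), True)
test_eq(df["b"].isin(ALPHABET).all(), True)
df.head()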
#export
default_date_attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
class TSDateTimeEncoder(BaseEstimator, TransformerMixin):
def __init__(self, datetime_columns=None, prefix=None, drop=True, time=False, attr=default_date_attr):
self.datetime_columns = listify(datetime_columns)
        self.prefix, self.drop, self.time, self.attr = prefix, drop, time, attr
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if self.time: self.attr = self.attr + ['Hour', 'Minute', 'Second']
if not self.datetime_columns:
self.datetime_columns = X.columns
self.prefixes = []
for dt_column in self.datetime_columns:
self.prefixes.append(re.sub('[Dd]ate$', '', dt_column) if self.prefix is None else self.prefix)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for dt_column,prefix in zip(self.datetime_columns,self.prefixes):
make_date(X, dt_column)
field = X[dt_column]
# Pandas removed `dt.week` in v1.1.10
week = field.dt.isocalendar().week.astype(field.dt.day.dtype) if hasattr(field.dt, 'isocalendar') else field.dt.week
for n in self.attr: X[prefix + "_" + n] = getattr(field.dt, n.lower()) if n != 'Week' else week
if self.drop: X = X.drop(self.datetime_columns, axis=1)
return X
import datetime
df = pd.DataFrame()
df.loc[0, "date"] = datetime.datetime.now()
df.loc[1, "date"] = datetime.datetime.now() + pd.Timedelta(1, unit="D")
tfm = TSDateTimeEncoder()
joblib.dump(tfm, "data/TSDateTimeEncoder.joblib")
tfm = joblib.load("data/TSDateTimeEncoder.joblib")
tfm.fit_transform(df)
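# Sketch: with time=True, hour/minute/second attributes are added as well. The example
# timestamps are arbitrary; the column names follow the prefix rule in `fit` (a "date"
# column yields an empty prefix, so columns look like "_Hour").
df = pd.DataFrame({"date": pd.to_datetime(["2021-01-01 10:30:00", "2021-01-02 23:45:15"])})
tfm = TSDateTimeEncoder(time=True)
df_enc = tfm.fit_transform(df)
test_eq("_Hour" in df_enc.columns, True)
test_eq(df_enc["_Hour"].tolist(), [10, 23])
df_enc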
#export
class TSMissingnessEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None):
self.columns = listify(columns)
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.missing_columns = [f"{cn}_missing" for cn in self.columns]
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
X[self.missing_columns] = X[self.columns].isnull().astype(int)
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
X.drop(self.missing_columns, axis=1, inplace=True)
return X
data = np.random.rand(10,3)
data[data > .8] = np.nan
df = pd.DataFrame(data, columns=["a", "b", "c"])
tfm = TSMissingnessEncoder()
tfm.fit(df)
joblib.dump(tfm, "data/TSMissingnessEncoder.joblib")
tfm = joblib.load("data/TSMissingnessEncoder.joblib")
df = tfm.transform(df)
df
###Output
_____no_output_____
###Markdown
y transforms
###Code
# export
class Preprocessor():
def __init__(self, preprocessor, **kwargs):
self.preprocessor = preprocessor(**kwargs)
def fit(self, o):
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
self.fit_preprocessor = self.preprocessor.fit(o)
return self.fit_preprocessor
def transform(self, o, copy=True):
if type(o) in [float, int]: o = array([o]).reshape(-1,1)
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
def inverse_transform(self, o, copy=True):
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.inverse_transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
StandardScaler = partial(sklearn.preprocessing.StandardScaler)
setattr(StandardScaler, '__name__', 'StandardScaler')
RobustScaler = partial(sklearn.preprocessing.RobustScaler)
setattr(RobustScaler, '__name__', 'RobustScaler')
Normalizer = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(-1, 1))
setattr(Normalizer, '__name__', 'Normalizer')
BoxCox = partial(sklearn.preprocessing.PowerTransformer, method='box-cox')
setattr(BoxCox, '__name__', 'BoxCox')
YeoJohnshon = partial(sklearn.preprocessing.PowerTransformer, method='yeo-johnson')
setattr(YeoJohnshon, '__name__', 'YeoJohnshon')
Quantile = partial(sklearn.preprocessing.QuantileTransformer, n_quantiles=1_000, output_distribution='normal', random_state=0)
setattr(Quantile, '__name__', 'Quantile')
# Standardize
from tsai.data.validation import TimeSplitter
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(StandardScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# RobustScaler
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(RobustScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# Normalize
y = random_shuffle(np.random.rand(1000) * 3 + .5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Normalizer)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# BoxCox
y = random_shuffle(np.random.rand(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(BoxCox)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# YeoJohnshon
y = np.random.beta(.5, .5, size=1000)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(YeoJohnshon)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# QuantileTransformer
y = - np.random.beta(1, .5, 10000) * 10
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Quantile)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
plt.hist(y, 50, label='ori')
plt.legend(loc='best')
plt.show()
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
test_close(preprocessor.inverse_transform(y_tfm), y, 1e-1)
#export
def ReLabeler(cm):
r"""Changes the labels in a dataset based on a dictionary (class mapping)
Args:
cm = class mapping dictionary
"""
def _relabel(y):
obj = len(set([len(listify(v)) for v in cm.values()])) > 1
keys = cm.keys()
if obj:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y], dtype=object).reshape(*y.shape)
else:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y]).reshape(*y.shape)
return _relabel
vals = {0:'a', 1:'b', 2:'c', 3:'d', 4:'e'}
y = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
labeler = ReLabeler(dict(a='x', b='x', c='y', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y, y_new
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
Data preprocessing> Functions used to preprocess time series (both X and y).
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.external import *
from tsai.data.core import *
from tsai.data.preparation import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
#export
class ToNumpyCategory(Transform):
"Categorize a numpy batch"
order = 90
def __init__(self, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: np.ndarray):
self.type = type(o)
self.cat = Categorize()
self.cat.setup(o)
self.vocab = self.cat.vocab
return np.asarray(stack([self.cat(oi) for oi in o]))
def decodes(self, o: (np.ndarray, torch.Tensor)):
return stack([self.cat.decode(oi) for oi in o])
t = ToNumpyCategory()
y_cat = t(y)
y_cat[:10]
test_eq(t.decode(tensor(y_cat)), y)
test_eq(t.decode(np.array(y_cat)), y)
#export
class OneHot(Transform):
"One-hot encode/ decode a batch"
order = 90
def __init__(self, n_classes=None, **kwargs):
self.n_classes = n_classes
super().__init__(**kwargs)
def encodes(self, o: torch.Tensor):
if not self.n_classes: self.n_classes = len(np.unique(o))
return torch.eye(self.n_classes)[o]
def encodes(self, o: np.ndarray):
o = ToNumpyCategory()(o)
if not self.n_classes: self.n_classes = len(np.unique(o))
return np.eye(self.n_classes)[o]
def decodes(self, o: torch.Tensor): return torch.argmax(o, dim=-1)
def decodes(self, o: np.ndarray): return np.argmax(o, axis=-1)
oh_encoder = OneHot()
y_cat = ToNumpyCategory()(y)
oht = oh_encoder(y_cat)
oht[:10]
n_classes = 10
n_samples = 100
t = torch.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oht = oh_encoder(t)
test_eq(oht.shape, (n_samples, n_classes))
test_eq(torch.argmax(oht, dim=-1), t)
test_eq(oh_encoder.decode(oht), t)
n_classes = 10
n_samples = 100
a = np.random.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oha = oh_encoder(a)
test_eq(oha.shape, (n_samples, n_classes))
test_eq(np.argmax(oha, axis=-1), a)
test_eq(oh_encoder.decode(oha), a)
#export
class TSNan2Value(Transform):
"Replaces any nan values by a predefined value or median"
order = 90
def __init__(self, value=0, median=False, by_sample_and_var=True, sel_vars=None):
store_attr()
if not ismin_torch("1.8"):
raise ValueError('This function only works with Pytorch>=1.8.')
def encodes(self, o:TSTensor):
if self.sel_vars is not None:
mask = torch.isnan(o[:, self.sel_vars])
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o[:, self.sel_vars], dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[:, self.sel_vars][mask] = median[mask]
else:
o[:, self.sel_vars] = torch.nan_to_num(o[:, self.sel_vars], torch.nanmedian(o[:, self.sel_vars]))
o[:, self.sel_vars] = torch.nan_to_num(o[:, self.sel_vars], self.value)
else:
mask = torch.isnan(o)
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[mask] = median[mask]
else:
o = torch.nan_to_num(o, torch.nanmedian(o))
o = torch.nan_to_num(o, self.value)
return o
Nan2Value = TSNan2Value
o = TSTensor(torch.randn(16, 10, 100))
o[0,0] = float('nan')
o[o > .9] = float('nan')
o[[0,1,5,8,14,15], :, -20:] = float('nan')
nan_vals1 = torch.isnan(o).sum()
o2 = Pipeline(TSNan2Value(), split_idx=0)(o.clone())
o3 = Pipeline(TSNan2Value(median=True, by_sample_and_var=True), split_idx=0)(o.clone())
o4 = Pipeline(TSNan2Value(median=True, by_sample_and_var=False), split_idx=0)(o.clone())
nan_vals2 = torch.isnan(o2).sum()
nan_vals3 = torch.isnan(o3).sum()
nan_vals4 = torch.isnan(o4).sum()
test_ne(nan_vals1, 0)
test_eq(nan_vals2, 0)
test_eq(nan_vals3, 0)
test_eq(nan_vals4, 0)
o = TSTensor(torch.randn(16, 10, 100))
o[o > .9] = float('nan')
o = TSNan2Value(median=True, sel_vars=[0,1,2,3,4])(o)
test_eq(torch.isnan(o[:, [0,1,2,3,4]]).sum().item(), 0)
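# Sketch: selected variables can also be filled with a custom constant value.
o = TSTensor(torch.randn(2, 3, 10))
o[:, 1] = float('nan')
o = TSNan2Value(value=-1, sel_vars=[1])(o)
test_eq((o[:, 1] == -1).all().item(), True)
test_eq(torch.isnan(o).sum().item(), 0)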
# export
class TSStandardize(Transform):
"""Standardizes batch of type `TSTensor`
Args:
- mean: you can pass a precalculated mean value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch.
- std: you can pass a precalculated std value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch. If both mean and std values are passed when instantiating TSStandardize, the rest of arguments won't be used.
- by_sample: if True, it will calculate mean and std for each individual sample. Otherwise based on the entire batch.
- by_var:
* False: mean and std will be the same for all variables.
            * True: a different mean and std will be used for each variable.
* a list of ints: (like [0,1,3]) a different mean and std will be set for each variable on the list. Variables not included in the list
won't be standardized.
* a list that contains a list/lists: (like[0, [1,3]]) a different mean and std will be set for each element of the list. If multiple elements are
included in a list, the same mean and std will be set for those variable in the sublist/s. (in the example a mean and std is determined for
variable 0, and another one for variables 1 & 3 - the same one). Variables not included in the list won't be standardized.
        - by_step: if True, mean and std will be calculated for each time step. Otherwise they will be shared across all time steps.
- eps: it avoids dividing by 0
- use_single_batch: if True a single training batch will be used to calculate mean & std. Else the entire training set will be used.
"""
parameters, order = L('mean', 'std'), 90
_setup = True # indicates it requires set up
def __init__(self, mean=None, std=None, by_sample=False, by_var=False, by_step=False, eps=1e-8, use_single_batch=True, verbose=False):
self.mean = tensor(mean) if mean is not None else None
self.std = tensor(std) if std is not None else None
self._setup = (mean is None or std is None) and not by_sample
self.eps = eps
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.mean is not None or self.std is not None:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, mean, std): return cls(mean, std)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
if len(self.mean.shape) == 0:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} mean shape={self.mean.shape}, std shape={self.std.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.mean, self.std = torch.zeros(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
return (o - self.mean) / self.std
def decodes(self, o:TSTensor):
if self.mean is None or self.std is None: return o
return o * self.std + self.mean
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
from tsai.data.validation import TimeSplitter
X_nan = np.random.rand(100, 5, 10)
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 0] = float('nan')
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 1, -10:] = float('nan')
batch_tfms = TSStandardize(by_var=True)
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
test_eq(torch.isnan(dls.after_batch[0].mean).sum(), 0)
test_eq(torch.isnan(dls.after_batch[0].std).sum(), 0)
xb = first(dls.train)[0]
test_ne(torch.isnan(xb).sum(), 0)
test_ne(torch.isnan(xb).sum(), torch.isnan(xb).numel())
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
xb = first(dls.train)[0]
test_eq(torch.isnan(xb).sum(), 0)
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=True)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True, by_var=False, verbose=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=False)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
#export
@patch
def mul_min(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.min(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
min_x = x
for ax in axes: min_x, _ = min_x.min(ax, keepdim)
return retain_type(min_x, x)
@patch
def mul_max(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.max(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
max_x = x
for ax in axes: max_x, _ = max_x.max(ax, keepdim)
return retain_type(max_x, x)
class TSNormalize(Transform):
"Normalizes batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, range=(-1, 1), by_sample=False, by_var=False, by_step=False, clip_values=True,
use_single_batch=True, verbose=False):
self.min = tensor(min) if min is not None else None
self.max = tensor(max) if max is not None else None
self._setup = (self.min is None and self.max is None) and not by_sample
self.range_min, self.range_max = range
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.clip_values = clip_values
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.min is not None or self.max is not None:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
    def from_stats(cls, min, max, range_min=0, range_max=1): return cls(min, max, range=(range_min, range_max))
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
if len(self.min.shape) == 0:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} min shape={self.min.shape}, max shape={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.min, self.max = -torch.ones(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
                _max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
output = ((o - self.min) / (self.max - self.min)) * (self.range_max - self.range_min) + self.range_min
if self.clip_values:
if self.by_var and is_listy(self.by_var):
for v in self.by_var:
if not is_listy(v): v = [v]
output[:, v] = torch.clamp(output[:, v], self.range_min, self.range_max)
else:
output = torch.clamp(output, self.range_min, self.range_max)
return output
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms = [TSNormalize()]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms=[TSNormalize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms = [TSNormalize(by_var=[0, [1, 2]], use_single_batch=False, clip_values=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb[:, [0, 1, 2]].max() <= 1
assert xb[:, [0, 1, 2]].min() >= -1
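# A minimal sketch (values assumed) of building the transform from precomputed stats via TSNormalize.from_stats:
_tfm = TSNormalize.from_stats(min=-10., max=10., range_min=-1, range_max=1)
_t = TSTensor(torch.tensor([[[-10., 0., 10.]]]))
test_close(_tfm(_t).data, torch.tensor([[[-1., 0., 1.]]]))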
#export
class TSClipOutliers(Transform):
"Clip outliers batch of type `TSTensor` based on the IQR"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, by_sample=False, by_var=False, use_single_batch=False, verbose=False):
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
self._setup = (min is None or max is None) and not by_sample
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
if min is not None or max is not None:
pv(f'{self.__class__.__name__} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
min, max = get_outliers_IQR(o, self.axis)
self.min, self.max = tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.axis is None: return torch.clamp(o, self.min, self.max)
elif self.by_sample:
min, max = get_outliers_IQR(o, axis=self.axis)
self.min, self.max = o.new(min), o.new(max)
return torch_clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
batch_tfms=[TSClipOutliers(-1, 1, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
# export
class TSClip(Transform):
"Clip batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=-6, max=6):
self.min = torch.tensor(min)
self.max = torch.tensor(max)
def encodes(self, o:TSTensor):
return torch.clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
t = TSTensor(torch.randn(10, 20, 100)*10)
test_le(TSClip()(t).max().item(), 6)
test_ge(TSClip()(t).min().item(), -6)
#export
class TSRobustScale(Transform):
r"""This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range)"""
parameters, order = L('median', 'iqr'), 90
_setup = True # indicates it requires set up
def __init__(self, median=None, iqr=None, quantile_range=(25.0, 75.0), use_single_batch=True, eps=1e-8, verbose=False):
self.median = tensor(median) if median is not None else None
self.iqr = tensor(iqr) if iqr is not None else None
self._setup = median is None or iqr is None
self.use_single_batch = use_single_batch
self.eps = eps
self.verbose = verbose
self.quantile_range = quantile_range
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
new_o = o.permute(1,0,2).flatten(1)
median = get_percentile(new_o, 50, axis=1)
iqrmin, iqrmax = get_outliers_IQR(new_o, axis=1, quantile_range=self.quantile_range)
self.median = median.unsqueeze(0)
self.iqr = torch.clamp_min((iqrmax - iqrmin).unsqueeze(0), self.eps)
pv(f'{self.__class__.__name__} median={self.median.shape} iqr={self.iqr.shape}', self.verbose)
self._setup = False
else:
if self.median is None: self.median = torch.zeros(1, device=dl.device)
if self.iqr is None: self.iqr = torch.ones(1, device=dl.device)
def encodes(self, o:TSTensor):
return (o - self.median) / self.iqr
def __repr__(self): return f'{self.__class__.__name__}(quantile_range={self.quantile_range}, use_single_batch={self.use_single_batch})'
batch_tfms = TSRobustScale(verbose=True, use_single_batch=False)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, batch_tfms=batch_tfms, num_workers=0)
xb, yb = next(iter(dls.train))
xb.min()
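# Quick sanity check (a sketch): after TSRobustScale each variable's median over the batch should be roughly 0
_flat = xb.permute(1, 0, 2).flatten(1)
print(_flat.median(dim=1)[0])   # per-variable medians of the scaled batch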
#export
class TSDiff(Transform):
"Differences batch of type `TSTensor`"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(o, lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor(torch.arange(24).reshape(2,3,4))
test_eq(TSDiff()(t)[..., 1:].float().mean(), 1)
test_eq(TSDiff(lag=2, pad=False)(t).float().mean(), 2)
#export
class TSLog(Transform):
"Log transforms batch of type `TSTensor` + 1. Accepts positive and negative numbers"
order = 90
def __init__(self, ex=None, **kwargs):
self.ex = ex
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.log1p(o[o > 0])
output[o < 0] = -torch.log1p(torch.abs(o[o < 0]))
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def decodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.exp(o[o > 0]) - 1
output[o < 0] = -torch.exp(torch.abs(o[o < 0])) + 1
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def __repr__(self): return f'{self.__class__.__name__}()'
t = TSTensor(torch.rand(2,3,4)) * 2 - 1
tfm = TSLog()
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
#export
class TSCyclicalPosition(Transform):
"""Concatenates the position along the sequence as 2 additional variables (sine and cosine)
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
sin, cos = sincos_encoding(seq_len, device=o.device)
output = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSCyclicalPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
#export
class TSLinearPosition(Transform):
"""Concatenates the position along the sequence as 1 additional variable
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
output = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSLinearPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 1
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
# export
class TSPosition(Transform):
"""Concatenates linear and/or cyclical positions along the sequence as additional variables"""
order = 90
def __init__(self, cyclical=True, linear=True, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
self.cyclical, self.linear = cyclical, linear
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
if self.linear:
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
o = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
if self.cyclical:
sin, cos = sincos_encoding(seq_len, device=o.device)
o = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return o
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSPosition(cyclical=True, linear=True)(t)
test_eq(enc_t.shape[1], 6)
plt.plot(enc_t[0, 3:].T);
#export
class TSMissingness(Transform):
"""Concatenates data missingness for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, **kwargs):
self.feature_idxs = listify(feature_idxs)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
missingness = o[:, self.feature_idxs].isnan()
else:
missingness = o.isnan()
return torch.cat([o, missingness], 1)
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSMissingness(feature_idxs=[0,2])(t)
test_eq(enc_t.shape[1], 5)
test_eq(enc_t[:, 3:], torch.isnan(t[:, [0,2]]).float())
#export
class TSPositionGaps(Transform):
"""Concatenates gaps for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, forward=True, backward=False, nearest=False, normalize=True, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.gap_fn = partial(get_gaps, forward=forward, backward=backward, nearest=nearest, normalize=normalize)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
gaps = self.gap_fn(o[:, self.feature_idxs])
else:
gaps = self.gap_fn(o)
return torch.cat([o, gaps], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSPositionGaps(feature_idxs=[0,2], forward=True, backward=True, nearest=True, normalize=False)(t)
test_eq(enc_t.shape[1], 9)
enc_t.data
#export
class TSRollingMean(Transform):
"""Calculates the rolling mean for all/ selected features alongside the sequence
It replaces the original values or adds additional variables (default)
If nan values are found, they will be filled forward and backward"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, window=2, replace=False, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.rolling_mean_fn = partial(rolling_moving_average, window=window)
self.replace = replace
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
if torch.isnan(o[:, self.feature_idxs]).any():
o[:, self.feature_idxs] = fbfill_sequence(o[:, self.feature_idxs])
rolling_mean = self.rolling_mean_fn(o[:, self.feature_idxs])
if self.replace:
o[:, self.feature_idxs] = rolling_mean
return o
else:
if torch.isnan(o).any():
o = fbfill_sequence(o)
rolling_mean = self.rolling_mean_fn(o)
if self.replace: return rolling_mean
return torch.cat([o, rolling_mean], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t > .6] = np.nan
print(t.data)
enc_t = TSRollingMean(feature_idxs=[0,2], window=3)(t)
test_eq(enc_t.shape[1], 5)
print(enc_t.data)
enc_t = TSRollingMean(window=3, replace=True)(t)
test_eq(enc_t.shape[1], 3)
print(enc_t.data)
#export
class TSLogReturn(Transform):
"Calculates log-return of batch of type `TSTensor`. For positive values only"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(torch.log(o), lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor([1,2,4,8,16,32,64,128,256]).float()
test_eq(TSLogReturn(pad=False)(t).std(), 0)
#export
class TSAdd(Transform):
"Add a defined amount to each batch of type `TSTensor`."
order = 90
def __init__(self, add):
self.add = add
def encodes(self, o:TSTensor):
return torch.add(o, self.add)
    def __repr__(self): return f'{self.__class__.__name__}(add={self.add})'
t = TSTensor([1,2,3]).float()
test_eq(TSAdd(1)(t), TSTensor([2,3,4]).float())
###Output
_____no_output_____
###Markdown
sklearn API transforms
###Code
#export
from sklearn.base import BaseEstimator, TransformerMixin
from fastai.data.transforms import CategoryMap
from joblib import dump, load
class TSShrinkDataFrame(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, skip=[], obj2cat=True, int2uint=False, verbose=True):
self.columns, self.skip, self.obj2cat, self.int2uint, self.verbose = listify(columns), skip, obj2cat, int2uint, verbose
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
self.old_dtypes = X.dtypes
if not self.columns: self.columns = X.columns
self.dt = df_shrink_dtypes(X[self.columns], self.skip, obj2cat=self.obj2cat, int2uint=self.int2uint)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X[self.columns] = X[self.columns].astype(self.dt)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X = X.astype(self.old_dtypes)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
df = pd.DataFrame()
df["ints64"] = np.random.randint(0,3,10)
df['floats64'] = np.random.rand(10)
tfm = TSShrinkDataFrame()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df["ints64"].dtype, "int8")
test_eq(df["floats64"].dtype, "float32")
#export
class TSOneHotEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, drop=True, add_na=True, dtype=np.int64):
self.columns = listify(columns)
self.drop, self.add_na, self.dtype = drop, add_na, dtype
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
handle_unknown = "ignore" if self.add_na else "error"
self.ohe_tfm = sklearn.preprocessing.OneHotEncoder(handle_unknown=handle_unknown)
if len(self.columns) == 1:
self.ohe_tfm.fit(X[self.columns].to_numpy().reshape(-1, 1))
else:
self.ohe_tfm.fit(X[self.columns])
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if len(self.columns) == 1:
output = self.ohe_tfm.transform(X[self.columns].to_numpy().reshape(-1, 1)).toarray().astype(self.dtype)
else:
output = self.ohe_tfm.transform(X[self.columns]).toarray().astype(self.dtype)
new_cols = []
for i,col in enumerate(self.columns):
for cats in self.ohe_tfm.categories_[i]:
new_cols.append(f"{str(col)}_{str(cats)}")
X[new_cols] = output
if self.drop: X = X.drop(self.columns, axis=1)
return X
df = pd.DataFrame()
df["a"] = np.random.randint(0,2,10)
df["b"] = np.random.randint(0,3,10)
unique_cols = len(df["a"].unique()) + len(df["b"].unique())
tfm = TSOneHotEncoder()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df.shape[1], unique_cols)
#export
class TSCategoricalEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, add_na=True):
self.columns = listify(columns)
self.add_na = add_na
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.cat_tfms = []
for column in self.columns:
self.cat_tfms.append(CategoryMap(X[column], add_na=self.add_na))
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_objs(X[column])
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_ids(X[column])
return X
###Output
_____no_output_____
###Markdown
Stateful transforms like TSCategoricalEncoder can easily be serialized.
###Code
import joblib
df = pd.DataFrame()
df["a"] = alphabet[np.random.randint(0,2,100)]
df["b"] = ALPHABET[np.random.randint(0,3,100)]
a_unique = len(df["a"].unique())
b_unique = len(df["b"].unique())
tfm = TSCategoricalEncoder()
tfm.fit(df)
joblib.dump(tfm, "data/TSCategoricalEncoder.joblib")
tfm = joblib.load("data/TSCategoricalEncoder.joblib")
df = tfm.transform(df)
test_eq(df['a'].max(), a_unique)
test_eq(df['b'].max(), b_unique)
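# Illustrative sketch: inverse_transform maps the integer codes back to the original letters
df_dec = tfm.inverse_transform(df.copy())
assert set(df_dec["a"]) <= set(alphabet)
assert set(df_dec["b"]) <= set(ALPHABET)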
#export
default_date_attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
class TSDateTimeEncoder(BaseEstimator, TransformerMixin):
def __init__(self, datetime_columns=None, prefix=None, drop=True, time=False, attr=default_date_attr):
self.datetime_columns = listify(datetime_columns)
self.prefix, self.drop, self.time, self.attr = prefix, drop, time ,attr
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if self.time: self.attr = self.attr + ['Hour', 'Minute', 'Second']
if not self.datetime_columns:
self.datetime_columns = X.columns
self.prefixes = []
for dt_column in self.datetime_columns:
self.prefixes.append(re.sub('[Dd]ate$', '', dt_column) if self.prefix is None else self.prefix)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for dt_column,prefix in zip(self.datetime_columns,self.prefixes):
make_date(X, dt_column)
field = X[dt_column]
            # pandas deprecated `Series.dt.week` in v1.1.0; use `isocalendar().week` when available
week = field.dt.isocalendar().week.astype(field.dt.day.dtype) if hasattr(field.dt, 'isocalendar') else field.dt.week
for n in self.attr: X[prefix + "_" + n] = getattr(field.dt, n.lower()) if n != 'Week' else week
if self.drop: X = X.drop(self.datetime_columns, axis=1)
return X
import datetime
df = pd.DataFrame()
df.loc[0, "date"] = datetime.datetime.now()
df.loc[1, "date"] = datetime.datetime.now() + pd.Timedelta(1, unit="D")
tfm = TSDateTimeEncoder()
joblib.dump(tfm, "data/TSDateTimeEncoder.joblib")
tfm = joblib.load("data/TSDateTimeEncoder.joblib")
tfm.fit_transform(df)
#export
class TSMissingnessEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None):
self.columns = listify(columns)
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.missing_columns = [f"{cn}_missing" for cn in self.columns]
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
X[self.missing_columns] = X[self.columns].isnull().astype(int)
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
X.drop(self.missing_columns, axis=1, inplace=True)
return X
data = np.random.rand(10,3)
data[data > .8] = np.nan
df = pd.DataFrame(data, columns=["a", "b", "c"])
tfm = TSMissingnessEncoder()
tfm.fit(df)
joblib.dump(tfm, "data/TSMissingnessEncoder.joblib")
tfm = joblib.load("data/TSMissingnessEncoder.joblib")
df = tfm.transform(df)
df
###Output
_____no_output_____
###Markdown
y transforms
###Code
# export
class Preprocessor():
def __init__(self, preprocessor, **kwargs):
self.preprocessor = preprocessor(**kwargs)
def fit(self, o):
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
self.fit_preprocessor = self.preprocessor.fit(o)
return self.fit_preprocessor
def transform(self, o, copy=True):
if type(o) in [float, int]: o = array([o]).reshape(-1,1)
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
def inverse_transform(self, o, copy=True):
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.inverse_transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
StandardScaler = partial(sklearn.preprocessing.StandardScaler)
setattr(StandardScaler, '__name__', 'StandardScaler')
RobustScaler = partial(sklearn.preprocessing.RobustScaler)
setattr(RobustScaler, '__name__', 'RobustScaler')
Normalizer = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(-1, 1))
setattr(Normalizer, '__name__', 'Normalizer')
BoxCox = partial(sklearn.preprocessing.PowerTransformer, method='box-cox')
setattr(BoxCox, '__name__', 'BoxCox')
YeoJohnshon = partial(sklearn.preprocessing.PowerTransformer, method='yeo-johnson')
setattr(YeoJohnshon, '__name__', 'YeoJohnshon')
Quantile = partial(sklearn.preprocessing.QuantileTransformer, n_quantiles=1_000, output_distribution='normal', random_state=0)
setattr(Quantile, '__name__', 'Quantile')
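# A sketch of how an additional sklearn preprocessor could be wrapped with the same pattern
# (MaxAbsScaler is just an assumed example; any sklearn transformer with fit/transform/inverse_transform works):
MaxAbs = partial(sklearn.preprocessing.MaxAbsScaler)
setattr(MaxAbs, '__name__', 'MaxAbs')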
# Standardize
from tsai.data.validation import TimeSplitter
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(StandardScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# RobustScaler
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(RobustScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# Normalize
y = random_shuffle(np.random.rand(1000) * 3 + .5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Normalizer)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# BoxCox
y = random_shuffle(np.random.rand(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(BoxCox)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# YeoJohnshon
y = random_shuffle(np.random.randn(1000) * 10 + 5)
y = np.random.beta(.5, .5, size=1000)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(YeoJohnshon)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# QuantileTransformer
y = - np.random.beta(1, .5, 10000) * 10
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Quantile)
preprocessor.fit(y[splits[0]])
plt.hist(y, 50, label='ori',)
y_tfm = preprocessor.transform(y)
plt.legend(loc='best')
plt.show()
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
test_close(preprocessor.inverse_transform(y_tfm), y, 1e-1)
#export
def ReLabeler(cm):
r"""Changes the labels in a dataset based on a dictionary (class mapping)
Args:
cm = class mapping dictionary
"""
def _relabel(y):
obj = len(set([len(listify(v)) for v in cm.values()])) > 1
keys = cm.keys()
if obj:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y], dtype=object).reshape(*y.shape)
else:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y]).reshape(*y.shape)
return _relabel
vals = {0:'a', 1:'b', 2:'c', 3:'d', 4:'e'}
y = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
labeler = ReLabeler(dict(a='x', b='x', c='y', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y, y_new
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
Data preprocessing
> Functions used to preprocess time series (both X and y).
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.external import *
from tsai.data.core import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
#export
class ToNumpyCategory(Transform):
"Categorize a numpy batch"
order = 90
def __init__(self, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: np.ndarray):
self.type = type(o)
self.cat = Categorize()
self.cat.setup(o)
self.vocab = self.cat.vocab
return np.asarray(stack([self.cat(oi) for oi in o]))
def decodes(self, o: (np.ndarray, torch.Tensor)):
return stack([self.cat.decode(oi) for oi in o])
t = ToNumpyCategory()
y_cat = t(y)
y_cat[:10]
test_eq(t.decode(tensor(y_cat)), y)
test_eq(t.decode(np.array(y_cat)), y)
#export
class OneHot(Transform):
"One-hot encode/ decode a batch"
order = 90
def __init__(self, n_classes=None, **kwargs):
self.n_classes = n_classes
super().__init__(**kwargs)
def encodes(self, o: torch.Tensor):
if not self.n_classes: self.n_classes = len(np.unique(o))
return torch.eye(self.n_classes)[o]
def encodes(self, o: np.ndarray):
o = ToNumpyCategory()(o)
if not self.n_classes: self.n_classes = len(np.unique(o))
return np.eye(self.n_classes)[o]
def decodes(self, o: torch.Tensor): return torch.argmax(o, dim=-1)
def decodes(self, o: np.ndarray): return np.argmax(o, axis=-1)
oh_encoder = OneHot()
y_cat = ToNumpyCategory()(y)
oht = oh_encoder(y_cat)
oht[:10]
n_classes = 10
n_samples = 100
t = torch.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oht = oh_encoder(t)
test_eq(oht.shape, (n_samples, n_classes))
test_eq(torch.argmax(oht, dim=-1), t)
test_eq(oh_encoder.decode(oht), t)
n_classes = 10
n_samples = 100
a = np.random.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oha = oh_encoder(a)
test_eq(oha.shape, (n_samples, n_classes))
test_eq(np.argmax(oha, axis=-1), a)
test_eq(oh_encoder.decode(oha), a)
#export
class Nan2Value(Transform):
"Replaces any nan values by a predefined value or median"
order = 90
def __init__(self, value=0, median=False, by_sample_and_var=True):
store_attr()
def encodes(self, o:TSTensor):
mask = torch.isnan(o)
if mask.any():
if self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[mask] = median[mask]
else:
# o = torch.nan_to_num(o, torch.nanmedian(o)) # Only available in Pytorch 1.8
o = torch_nan_to_num(o, torch.nanmedian(o))
# o = torch.nan_to_num(o, self.value) # Only available in Pytorch 1.8
o = torch_nan_to_num(o, self.value)
return o
o = TSTensor(torch.randn(16, 10, 100))
o[0,0] = float('nan')
o[o > .9] = float('nan')
o[[0,1,5,8,14,15], :, -20:] = float('nan')
nan_vals1 = torch.isnan(o).sum()
o2 = Pipeline(Nan2Value(), split_idx=0)(o.clone())
o3 = Pipeline(Nan2Value(median=True, by_sample_and_var=True), split_idx=0)(o.clone())
o4 = Pipeline(Nan2Value(median=True, by_sample_and_var=False), split_idx=0)(o.clone())
nan_vals2 = torch.isnan(o2).sum()
nan_vals3 = torch.isnan(o3).sum()
nan_vals4 = torch.isnan(o4).sum()
test_ne(nan_vals1, 0)
test_eq(nan_vals2, 0)
test_eq(nan_vals3, 0)
test_eq(nan_vals4, 0)
# export
class TSStandardize(Transform):
"""Standardizes batch of type `TSTensor`
Args:
- mean: you can pass a precalculated mean value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch.
- std: you can pass a precalculated std value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch. If both mean and std values are passed when instantiating TSStandardize, the rest of arguments won't be used.
- by_sample: if True, it will calculate mean and std for each individual sample. Otherwise based on the entire batch.
- by_var:
* False: mean and std will be the same for all variables.
      * True: a mean and std will be different for each variable.
* a list of ints: (like [0,1,3]) a different mean and std will be set for each variable on the list. Variables not included in the list
won't be standardized.
      * a list that contains a list/lists (like [0, [1,3]]): a different mean and std will be set for each element of the list. If multiple elements are
      included in a sublist, the same mean and std will be set for those variables in the sublist(s). (in the example a mean and std is determined for
variable 0, and another one for variables 1 & 3 - the same one). Variables not included in the list won't be standardized.
    - by_step: if True, it will standardize values for each time step.
- eps: it avoids dividing by 0
- use_single_batch: if True a single training batch will be used to calculate mean & std. Else the entire training set will be used.
"""
parameters, order = L('mean', 'std'), 90
def __init__(self, mean=None, std=None, by_sample=False, by_var=False, by_step=False, eps=1e-8, use_single_batch=True, verbose=False):
self.mean = tensor(mean) if mean is not None else None
self.std = tensor(std) if std is not None else None
self.eps = eps
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.mean is not None or self.std is not None:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, mean, std): return cls(mean, std)
def setups(self, dl: DataLoader):
if self.mean is None or self.std is None:
if not self.by_sample:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
if len(self.mean.shape) == 0:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} mean shape={self.mean.shape}, std shape={self.std.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else: self.mean, self.std = torch.zeros(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
return (o - self.mean) / self.std
def decodes(self, o:TSTensor):
if self.mean is None or self.std is None: return o
return o * self.std + self.mean
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
from tsai.data.validation import TimeSplitter
X_nan = np.random.rand(100, 5, 10)
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 0] = float('nan')
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 1, -10:] = float('nan')
batch_tfms = TSStandardize(by_var=True)
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
test_eq(torch.isnan(dls.after_batch[0].mean).sum(), 0)
test_eq(torch.isnan(dls.after_batch[0].std).sum(), 0)
xb = first(dls.train)[0]
test_ne(torch.isnan(xb).sum(), 0)
test_ne(torch.isnan(xb).sum(), torch.isnan(xb).numel())
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
xb = first(dls.train)[0]
test_eq(torch.isnan(xb).sum(), 0)
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=True)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True, by_var=False, verbose=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=False)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
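# A minimal sketch (shapes assumed) of the by_var list behaviour described in the TSStandardize docstring:
# with by_sample=True the stats are computed per sample inside encodes, so no DataLoader setup is needed.
_xb = TSTensor(torch.randn(8, 4, 50) * 5 + 3)
_tfm = TSStandardize(by_sample=True, by_var=[0, [1, 2]])
_enc = _tfm(_xb)
test_close(_enc[:, [0, 1, 2]].mean(), 0, eps=1e-1)
test_close(_enc[:, [0, 1, 2]].std(), 1, eps=1e-1)
test_eq(_enc[:, 3], _xb[:, 3])   # variables not listed in by_var are left unstandardized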
#export
@patch
def mul_min(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.min(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
min_x = x
for ax in axes: min_x, _ = min_x.min(ax, keepdim)
return retain_type(min_x, x)
@patch
def mul_max(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.max(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
max_x = x
for ax in axes: max_x, _ = max_x.max(ax, keepdim)
return retain_type(max_x, x)
class TSNormalize(Transform):
"Normalizes batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=None, max=None, range=(-1, 1), by_sample=False, by_var=False, by_step=False, clip_values=True,
use_single_batch=True, verbose=False):
self.min = tensor(min) if min is not None else None
self.max = tensor(max) if max is not None else None
self.range_min, self.range_max = range
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.clip_values = clip_values
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.min is not None or self.max is not None:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
    def from_stats(cls, min, max, range_min=0, range_max=1): return cls(min, max, range=(range_min, range_max))
def setups(self, dl: DataLoader):
if self.min is None or self.max is None:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
if len(self.min.shape) == 0:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} min shape={self.min.shape}, max shape={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
                _max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
output = ((o - self.min) / (self.max - self.min)) * (self.range_max - self.range_min) + self.range_min
if self.clip_values:
if self.by_var and is_listy(self.by_var):
for v in self.by_var:
if not is_listy(v): v = [v]
output[:, v] = torch.clamp(output[:, v], self.range_min, self.range_max)
else:
output = torch.clamp(output, self.range_min, self.range_max)
return output
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms = [TSNormalize()]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms=[TSNormalize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms = [TSNormalize(by_var=[0, [1, 2]], use_single_batch=False, clip_values=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb[:, [0, 1, 2]].max() <= 1
assert xb[:, [0, 1, 2]].min() >= -1
#export
class TSClipOutliers(Transform):
"Clip outliers batch of type `TSTensor` based on the IQR"
parameters, order = L('min', 'max'), 90
def __init__(self, min=None, max=None, by_sample=False, by_var=False, verbose=False):
self.su = (min is None or max is None) and not by_sample
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.verbose = verbose
if min is not None or max is not None:
pv(f'{self.__class__.__name__} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self.su:
o, *_ = dl.one_batch()
min, max = get_outliers_IQR(o, self.axis)
self.min, self.max = tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self.su = False
def encodes(self, o:TSTensor):
if self.axis is None: return torch.clamp(o, self.min, self.max)
elif self.by_sample:
min, max = get_outliers_IQR(o, axis=self.axis)
self.min, self.max = o.new(min), o.new(max)
return torch_clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
batch_tfms=[TSClipOutliers(-1, 1, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
# export
class TSClip(Transform):
"Clip batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=-6, max=6):
self.min = torch.tensor(min)
self.max = torch.tensor(max)
def encodes(self, o:TSTensor):
return torch.clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
t = TSTensor(torch.randn(10, 20, 100)*10)
test_le(TSClip()(t).max().item(), 6)
test_ge(TSClip()(t).min().item(), -6)
#export
class TSRobustScale(Transform):
r"""This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range)"""
parameters, order = L('median', 'min', 'max'), 90
def __init__(self, median=None, min=None, max=None, by_sample=False, by_var=False, quantile_range=(25.0, 75.0), use_single_batch=True, verbose=False):
self._setup = (median is None or min is None or max is None) and not by_sample
self.median = tensor(median) if median is not None else tensor(0)
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
self.quantile_range = quantile_range
if median is not None or min is not None or max is not None:
pv(f'{self.__class__.__name__} median={median} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
median = get_percentile(o, 50, self.axis)
min, max = get_outliers_IQR(o, self.axis, quantile_range=self.quantile_range)
self.median, self.min, self.max = tensor(median), tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} median={self.median} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} median={self.median.shape} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.by_sample:
median = get_percentile(o, 50, self.axis)
min, max = get_outliers_IQR(o, axis=self.axis, quantile_range=self.quantile_range)
self.median, self.min, self.max = o.new(median), o.new(min), o.new(max)
return (o - self.median) / (self.max - self.min)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, num_workers=0)
xb, yb = next(iter(dls.train))
clipped_xb = TSRobustScale(by_sample=True)(xb)
test_ne(clipped_xb, xb)
clipped_xb.min(), clipped_xb.max(), xb.min(), xb.max()
#export
class TSDiff(Transform):
"Differences batch of type `TSTensor`"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(o, lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor(torch.arange(24).reshape(2,3,4))
test_eq(TSDiff()(t)[..., 1:].float().mean(), 1)
test_eq(TSDiff(lag=2, pad=False)(t).float().mean(), 2)
#export
class TSLog(Transform):
"Log transforms batch of type `TSTensor` + 1. Accepts positive and negative numbers"
order = 90
def __init__(self, ex=None, **kwargs):
self.ex = ex
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.log1p(o[o > 0])
output[o < 0] = -torch.log1p(torch.abs(o[o < 0]))
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def decodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.exp(o[o > 0]) - 1
output[o < 0] = -torch.exp(torch.abs(o[o < 0])) + 1
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def __repr__(self): return f'{self.__class__.__name__}()'
t = TSTensor(torch.rand(2,3,4)) * 2 - 1
tfm = TSLog()
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
#export
class TSCyclicalPosition(Transform):
"""Concatenates the position along the sequence as 2 additional variables (sine and cosine)
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
sin, cos = sincos_encoding(seq_len, device=o.device)
output = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSCyclicalPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
#export
class TSLinearPosition(Transform):
"""Concatenates the position along the sequence as 1 additional variable
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
output = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSLinearPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 1
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
#export
class TSLogReturn(Transform):
"Calculates log-return of batch of type `TSTensor`. For positive values only"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(torch.log(o), lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor([1,2,4,8,16,32,64,128,256]).float()
test_eq(TSLogReturn(pad=False)(t).std(), 0)
#export
class TSAdd(Transform):
"Add a defined amount to each batch of type `TSTensor`."
order = 90
def __init__(self, add):
self.add = add
def encodes(self, o:TSTensor):
return torch.add(o, self.add)
    def __repr__(self): return f'{self.__class__.__name__}(add={self.add})'
t = TSTensor([1,2,3]).float()
test_eq(TSAdd(1)(t), TSTensor([2,3,4]).float())
###Output
_____no_output_____
###Markdown
y transforms
###Code
# export
class Preprocessor():
def __init__(self, preprocessor, **kwargs):
self.preprocessor = preprocessor(**kwargs)
def fit(self, o):
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
self.fit_preprocessor = self.preprocessor.fit(o)
return self.fit_preprocessor
def transform(self, o, copy=True):
if type(o) in [float, int]: o = array([o]).reshape(-1,1)
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
def inverse_transform(self, o, copy=True):
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.inverse_transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
StandardScaler = partial(sklearn.preprocessing.StandardScaler)
setattr(StandardScaler, '__name__', 'StandardScaler')
RobustScaler = partial(sklearn.preprocessing.RobustScaler)
setattr(RobustScaler, '__name__', 'RobustScaler')
Normalizer = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(-1, 1))
setattr(Normalizer, '__name__', 'Normalizer')
BoxCox = partial(sklearn.preprocessing.PowerTransformer, method='box-cox')
setattr(BoxCox, '__name__', 'BoxCox')
YeoJohnshon = partial(sklearn.preprocessing.PowerTransformer, method='yeo-johnson')
setattr(YeoJohnshon, '__name__', 'YeoJohnshon')
Quantile = partial(sklearn.preprocessing.QuantileTransformer, n_quantiles=1_000, output_distribution='normal', random_state=0)
setattr(Quantile, '__name__', 'Quantile')
# Standardize
from tsai.data.validation import TimeSplitter
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(StandardScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# RobustScaler
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(RobustScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# Normalize
y = random_shuffle(np.random.rand(1000) * 3 + .5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Normalizer)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# BoxCox
y = random_shuffle(np.random.rand(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(BoxCox)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# YeoJohnshon
y = random_shuffle(np.random.randn(1000) * 10 + 5)
y = np.random.beta(.5, .5, size=1000)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(YeoJohnshon)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# QuantileTransformer
y = - np.random.beta(1, .5, 10000) * 10
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Quantile)
preprocessor.fit(y[splits[0]])
plt.hist(y, 50, label='ori',)
y_tfm = preprocessor.transform(y)
plt.legend(loc='best')
plt.show()
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
test_close(preprocessor.inverse_transform(y_tfm), y, 1e-1)
#export
def ReLabeler(cm):
r"""Changes the labels in a dataset based on a dictionary (class mapping)
Args:
cm = class mapping dictionary
"""
def _relabel(y):
obj = len(set([len(listify(v)) for v in cm.values()])) > 1
keys = cm.keys()
if obj:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y], dtype=object).reshape(*y.shape)
else:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y]).reshape(*y.shape)
return _relabel
vals = {0:'a', 1:'b', 2:'c', 3:'d', 4:'e'}
y = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
labeler = ReLabeler(dict(a='x', b='x', c='y', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y, y_new
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
Data preprocessing
> Functions used to preprocess time series (both X and y).
###Code
#export
import re
import sklearn
from fastcore.transform import Transform, Pipeline
from fastai.data.transforms import Categorize
from fastai.data.load import DataLoader
from fastai.tabular.core import df_shrink_dtypes, make_date
from tsai.imports import *
from tsai.utils import *
from tsai.data.core import *
from tsai.data.preparation import *
from tsai.data.external import get_UCR_data
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
#export
class ToNumpyCategory(Transform):
"Categorize a numpy batch"
order = 90
def __init__(self, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: np.ndarray):
self.type = type(o)
self.cat = Categorize()
self.cat.setup(o)
self.vocab = self.cat.vocab
return np.asarray(stack([self.cat(oi) for oi in o]))
def decodes(self, o: np.ndarray):
return stack([self.cat.decode(oi) for oi in o])
def decodes(self, o: torch.Tensor):
return stack([self.cat.decode(oi) for oi in o])
t = ToNumpyCategory()
y_cat = t(y)
y_cat[:10]
test_eq(t.decode(tensor(y_cat)), y)
test_eq(t.decode(np.array(y_cat)), y)
#export
class OneHot(Transform):
"One-hot encode/ decode a batch"
order = 90
def __init__(self, n_classes=None, **kwargs):
self.n_classes = n_classes
super().__init__(**kwargs)
def encodes(self, o: torch.Tensor):
if not self.n_classes: self.n_classes = len(np.unique(o))
return torch.eye(self.n_classes)[o]
def encodes(self, o: np.ndarray):
o = ToNumpyCategory()(o)
if not self.n_classes: self.n_classes = len(np.unique(o))
return np.eye(self.n_classes)[o]
def decodes(self, o: torch.Tensor): return torch.argmax(o, dim=-1)
def decodes(self, o: np.ndarray): return np.argmax(o, axis=-1)
oh_encoder = OneHot()
y_cat = ToNumpyCategory()(y)
oht = oh_encoder(y_cat)
oht[:10]
n_classes = 10
n_samples = 100
t = torch.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oht = oh_encoder(t)
test_eq(oht.shape, (n_samples, n_classes))
test_eq(torch.argmax(oht, dim=-1), t)
test_eq(oh_encoder.decode(oht), t)
n_classes = 10
n_samples = 100
a = np.random.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oha = oh_encoder(a)
test_eq(oha.shape, (n_samples, n_classes))
test_eq(np.argmax(oha, axis=-1), a)
test_eq(oh_encoder.decode(oha), a)
#export
class TSNan2Value(Transform):
"Replaces any nan values by a predefined value or median"
order = 90
def __init__(self, value=0, median=False, by_sample_and_var=True, sel_vars=None):
store_attr()
if not ismin_torch("1.8"):
raise ValueError('This function only works with Pytorch>=1.8.')
def encodes(self, o:TSTensor):
        if self.sel_vars is not None:
            # advanced indexing (o[:, self.sel_vars]) returns a copy, so work on it and assign it back at the end
            sel = o[:, self.sel_vars]
            mask = torch.isnan(sel)
            if mask.any() and self.median:
                if self.by_sample_and_var:
                    median = torch.nanmedian(sel, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
                    sel[mask] = median[mask]
                else:
                    sel = torch.nan_to_num(sel, torch.nanmedian(sel))
            o[:, self.sel_vars] = torch.nan_to_num(sel, self.value)
else:
mask = torch.isnan(o)
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[mask] = median[mask]
else:
o = torch.nan_to_num(o, torch.nanmedian(o))
o = torch.nan_to_num(o, self.value)
return o
Nan2Value = TSNan2Value
o = TSTensor(torch.randn(16, 10, 100))
o[0,0] = float('nan')
o[o > .9] = float('nan')
o[[0,1,5,8,14,15], :, -20:] = float('nan')
nan_vals1 = torch.isnan(o).sum()
o2 = Pipeline(TSNan2Value(), split_idx=0)(o.clone())
o3 = Pipeline(TSNan2Value(median=True, by_sample_and_var=True), split_idx=0)(o.clone())
o4 = Pipeline(TSNan2Value(median=True, by_sample_and_var=False), split_idx=0)(o.clone())
nan_vals2 = torch.isnan(o2).sum()
nan_vals3 = torch.isnan(o3).sum()
nan_vals4 = torch.isnan(o4).sum()
test_ne(nan_vals1, 0)
test_eq(nan_vals2, 0)
test_eq(nan_vals3, 0)
test_eq(nan_vals4, 0)
o = TSTensor(torch.randn(16, 10, 100))
o[o > .9] = float('nan')
o = TSNan2Value(median=True, sel_vars=[0,1,2,3,4])(o)
test_eq(torch.isnan(o[:, [0,1,2,3,4]]).sum().item(), 0)
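# Added example (a sketch, not in the original notebook): filling nan values with a constant
# only in the variables listed in `sel_vars`; variables outside `sel_vars` keep their nans.
o = TSTensor(torch.randn(4, 3, 20))
o[o > .9] = float('nan')
o2 = TSNan2Value(value=0, sel_vars=[0])(o.clone())
test_eq(torch.isnan(o2[:, 0]).sum().item(), 0)
test_ne(torch.isnan(o2[:, 1:]).sum().item(), 0)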
# export
class TSStandardize(Transform):
"""Standardizes batch of type `TSTensor`
Args:
- mean: you can pass a precalculated mean value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch.
- std: you can pass a precalculated std value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch. If both mean and std values are passed when instantiating TSStandardize, the rest of arguments won't be used.
- by_sample: if True, it will calculate mean and std for each individual sample. Otherwise based on the entire batch.
- by_var:
* False: mean and std will be the same for all variables.
            * True: a mean and std will be different for each variable.
            * a list of ints: (like [0,1,3]) a different mean and std will be set for each variable on the list. Variables not included in the list
              won't be standardized.
            * a list that contains a list/lists: (like [0, [1,3]]) a different mean and std will be set for each element of the list. If multiple elements are
              included in a list, the same mean and std will be set for those variables in the sublist(s). (In the example a mean and std is determined for
              variable 0, and another one for variables 1 & 3 - the same one.) Variables not included in the list won't be standardized.
        - by_step: if True, it will calculate mean and std for each time step. Otherwise they are calculated across all time steps.
        - eps: lower bound applied to the std to avoid dividing by 0.
- use_single_batch: if True a single training batch will be used to calculate mean & std. Else the entire training set will be used.
"""
parameters, order = L('mean', 'std'), 90
_setup = True # indicates it requires set up
def __init__(self, mean=None, std=None, by_sample=False, by_var=False, by_step=False, eps=1e-8, use_single_batch=True, verbose=False, **kwargs):
super().__init__(**kwargs)
self.mean = tensor(mean) if mean is not None else None
self.std = tensor(std) if std is not None else None
self._setup = (mean is None or std is None) and not by_sample
self.eps = eps
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.mean is not None or self.std is not None:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
@classmethod
def from_stats(cls, mean, std): return cls(mean, std)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
if len(self.mean.shape) == 0:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} mean shape={self.mean.shape}, std shape={self.std.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.mean, self.std = torch.zeros(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
return (o - self.mean) / self.std
def decodes(self, o:TSTensor):
if self.mean is None or self.std is None: return o
return o * self.std + self.mean
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
from tsai.data.validation import TimeSplitter
X_nan = np.random.rand(100, 5, 10)
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 0] = float('nan')
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 1, -10:] = float('nan')
batch_tfms = TSStandardize(by_var=True)
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
test_eq(torch.isnan(dls.after_batch[0].mean).sum(), 0)
test_eq(torch.isnan(dls.after_batch[0].std).sum(), 0)
xb = first(dls.train)[0]
test_ne(torch.isnan(xb).sum(), 0)
test_ne(torch.isnan(xb).sum(), torch.isnan(xb).numel())
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
xb = first(dls.train)[0]
test_eq(torch.isnan(xb).sum(), 0)
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=True)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True, by_var=False, verbose=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=False)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
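# Added example (a sketch, not in the original notebook): the list syntax for `by_var` described
# in the docstring above. Variable 0 gets its own mean/std, variables 1 & 2 share one, and the
# remaining variables are left unstandardized.
batch_tfms = TSStandardize(by_var=[0, [1, 2]], verbose=False)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
xb[:, 0].std().item(), xb[:, [1, 2]].std().item(), xb[:, 3:].std().item()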
#export
@patch
def mul_min(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.min(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
min_x = x
for ax in axes: min_x, _ = min_x.min(ax, keepdim)
return retain_type(min_x, x)
@patch
def mul_max(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.max(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
max_x = x
for ax in axes: max_x, _ = max_x.max(ax, keepdim)
return retain_type(max_x, x)
class TSNormalize(Transform):
"Normalizes batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, range=(-1, 1), by_sample=False, by_var=False, by_step=False, clip_values=True,
use_single_batch=True, verbose=False, **kwargs):
super().__init__(**kwargs)
self.min = tensor(min) if min is not None else None
self.max = tensor(max) if max is not None else None
self._setup = (self.min is None and self.max is None) and not by_sample
self.range_min, self.range_max = range
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.clip_values = clip_values
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.min is not None or self.max is not None:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
    def from_stats(cls, min, max, range_min=0, range_max=1): return cls(min, max, range=(range_min, range_max))
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
if len(self.min.shape) == 0:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} min shape={self.min.shape}, max shape={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.min, self.max = -torch.ones(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
                _max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
output = ((o - self.min) / (self.max - self.min)) * (self.range_max - self.range_min) + self.range_min
if self.clip_values:
if self.by_var and is_listy(self.by_var):
for v in self.by_var:
if not is_listy(v): v = [v]
output[:, v] = torch.clamp(output[:, v], self.range_min, self.range_max)
else:
output = torch.clamp(output, self.range_min, self.range_max)
return output
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms = [TSNormalize()]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms=[TSNormalize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms = [TSNormalize(by_var=[0, [1, 2]], use_single_batch=False, clip_values=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb[:, [0, 1, 2]].max() <= 1
assert xb[:, [0, 1, 2]].min() >= -1
#export
class TSClipOutliers(Transform):
"Clip outliers batch of type `TSTensor` based on the IQR"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, by_sample=False, by_var=False, use_single_batch=False, verbose=False, **kwargs):
super().__init__(**kwargs)
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
self._setup = (min is None or max is None) and not by_sample
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
if min is not None or max is not None:
pv(f'{self.__class__.__name__} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
min, max = get_outliers_IQR(o, self.axis)
self.min, self.max = tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.axis is None: return torch.clamp(o, self.min, self.max)
elif self.by_sample:
min, max = get_outliers_IQR(o, axis=self.axis)
self.min, self.max = o.new(min), o.new(max)
return torch_clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
batch_tfms=[TSClipOutliers(-1, 1, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
# export
class TSClip(Transform):
"Clip batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=-6, max=6, **kwargs):
super().__init__(**kwargs)
self.min = torch.tensor(min)
self.max = torch.tensor(max)
def encodes(self, o:TSTensor):
return torch.clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
t = TSTensor(torch.randn(10, 20, 100)*10)
test_le(TSClip()(t).max().item(), 6)
test_ge(TSClip()(t).min().item(), -6)
#export
class TSSelfMissingness(Transform):
"Applies missingness from samples in a batch to random samples in the batch for selected variables"
order = 90
def __init__(self, sel_vars=None, **kwargs):
self.sel_vars = sel_vars
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
if self.sel_vars is not None:
mask = rotate_axis0(torch.isnan(o[:, self.sel_vars]))
o[:, self.sel_vars] = o[:, self.sel_vars].masked_fill(mask, np.nan)
else:
mask = rotate_axis0(torch.isnan(o))
o.masked_fill_(mask, np.nan)
return o
t = TSTensor(torch.randn(10, 20, 100))
t[t>.8] = np.nan
t2 = TSSelfMissingness()(t.clone())
t3 = TSSelfMissingness(sel_vars=[0,3,5,7])(t.clone())
assert (torch.isnan(t).sum() < torch.isnan(t2).sum()) and (torch.isnan(t2).sum() > torch.isnan(t3).sum())
#export
class TSRobustScale(Transform):
r"""This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range)"""
parameters, order = L('median', 'iqr'), 90
_setup = True # indicates it requires set up
def __init__(self, median=None, iqr=None, quantile_range=(25.0, 75.0), use_single_batch=True, eps=1e-8, verbose=False, **kwargs):
super().__init__(**kwargs)
self.median = tensor(median) if median is not None else None
self.iqr = tensor(iqr) if iqr is not None else None
self._setup = median is None or iqr is None
self.use_single_batch = use_single_batch
self.eps = eps
self.verbose = verbose
self.quantile_range = quantile_range
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
new_o = o.permute(1,0,2).flatten(1)
median = get_percentile(new_o, 50, axis=1)
iqrmin, iqrmax = get_outliers_IQR(new_o, axis=1, quantile_range=self.quantile_range)
self.median = median.unsqueeze(0)
self.iqr = torch.clamp_min((iqrmax - iqrmin).unsqueeze(0), self.eps)
pv(f'{self.__class__.__name__} median={self.median.shape} iqr={self.iqr.shape}', self.verbose)
self._setup = False
else:
if self.median is None: self.median = torch.zeros(1, device=dl.device)
if self.iqr is None: self.iqr = torch.ones(1, device=dl.device)
def encodes(self, o:TSTensor):
return (o - self.median) / self.iqr
def __repr__(self): return f'{self.__class__.__name__}(quantile_range={self.quantile_range}, use_single_batch={self.use_single_batch})'
batch_tfms = TSRobustScale(verbose=True, use_single_batch=False)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, batch_tfms=batch_tfms, num_workers=0)
xb, yb = next(iter(dls.train))
xb.min()
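# Added example (a sketch, not in the original notebook): TSRobustScale can also be instantiated
# with precomputed statistics, in which case no setup pass over a DataLoader is needed.
tfm = TSRobustScale(median=5., iqr=2.)
t = TSTensor(torch.randn(4, 2, 10)) * 2 + 5
test_close(tfm(t).data, ((t - 5.) / 2.).data)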
#export
def get_stats_with_uncertainty(o, sel_vars=None, sel_vars_zero_mean_unit_var=False, bs=64, n_trials=None, axis=(0,2)):
o_dtype = o.dtype
if n_trials is None: n_trials = len(o) // bs
random_idxs = np.random.choice(len(o), n_trials * bs, n_trials * bs > len(o))
oi_mean = []
oi_std = []
start = 0
for i in progress_bar(range(n_trials)):
idxs = random_idxs[start:start + bs]
start += bs
        if hasattr(o, 'oindex'):
            oi = o.oindex[idxs]
        elif hasattr(o, 'compute'):
            oi = o[idxs].compute()
else:
oi = o[idxs]
oi_mean.append(np.nanmean(oi.astype('float32'), axis=axis, keepdims=True))
oi_std.append(np.nanstd(oi.astype('float32'), axis=axis, keepdims=True))
oi_mean = np.concatenate(oi_mean)
oi_std = np.concatenate(oi_std)
E_mean = np.nanmean(oi_mean, axis=0, keepdims=True).astype(o_dtype)
S_mean = np.nanstd(oi_mean, axis=0, keepdims=True).astype(o_dtype)
E_std = np.nanmean(oi_std, axis=0, keepdims=True).astype(o_dtype)
S_std = np.nanstd(oi_std, axis=0, keepdims=True).astype(o_dtype)
if sel_vars is not None:
non_sel_vars = np.isin(np.arange(o.shape[1]), sel_vars, invert=True)
if sel_vars_zero_mean_unit_var:
E_mean[:, non_sel_vars] = 0 # zero mean
E_std[:, non_sel_vars] = 1 # unit var
S_mean[:, non_sel_vars] = 0 # no uncertainty
S_std[:, non_sel_vars] = 0 # no uncertainty
return np.stack([E_mean, S_mean, E_std, S_std])
def get_random_stats(E_mean, S_mean, E_std, S_std):
mult = np.random.normal(0, 1, 2)
new_mean = E_mean + S_mean * mult[0]
new_std = E_std + S_std * mult[1]
return new_mean, new_std
class TSGaussianStandardize(Transform):
"Scales each batch using modeled mean and std based on UNCERTAINTY MODELING FOR OUT-OF-DISTRIBUTION GENERALIZATION https://arxiv.org/abs/2202.03958"
parameters, order = L('E_mean', 'S_mean', 'E_std', 'S_std'), 90
def __init__(self,
E_mean : np.ndarray, # Mean expected value
S_mean : np.ndarray, # Uncertainty (standard deviation) of the mean
E_std : np.ndarray, # Standard deviation expected value
S_std : np.ndarray, # Uncertainty (standard deviation) of the standard deviation
        eps=1e-8, # (epsilon) small value used as a lower bound for the standard deviation to avoid dividing by zero
        split_idx=0, # Flag to indicate to which set this transform is applied. 0: training, 1: validation, None: both
**kwargs,
):
self.E_mean, self.S_mean = torch.from_numpy(E_mean), torch.from_numpy(S_mean)
self.E_std, self.S_std = torch.from_numpy(E_std), torch.from_numpy(S_std)
self.eps = eps
super().__init__(split_idx=split_idx, **kwargs)
def encodes(self, o:TSTensor):
mult = torch.normal(0, 1, (2,), device=o.device)
new_mean = self.E_mean + self.S_mean * mult[0]
new_std = torch.clamp(self.E_std + self.S_std * mult[1], self.eps)
return (o - new_mean) / new_std
TSRandomStandardize = TSGaussianStandardize
arr = np.random.rand(1000, 2, 50)
E_mean, S_mean, E_std, S_std = get_stats_with_uncertainty(arr, sel_vars=None, bs=64, n_trials=None, axis=(0,2))
new_mean, new_std = get_random_stats(E_mean, S_mean, E_std, S_std)
new_mean2, new_std2 = get_random_stats(E_mean, S_mean, E_std, S_std)
test_ne(new_mean, new_mean2)
test_ne(new_std, new_std2)
test_eq(new_mean.shape, (1, 2, 1))
test_eq(new_std.shape, (1, 2, 1))
new_mean, new_std
###Output
_____no_output_____
###Markdown
TSGaussianStandardize can be used jointly with TSStandardize in the following way:

```python
X, y, splits = get_UCR_data('LSST', split_data=False)
tfms = [None, TSClassification()]
E_mean, S_mean, E_std, S_std = get_stats_with_uncertainty(X, sel_vars=None, bs=64, n_trials=None, axis=(0,2))
batch_tfms = [TSGaussianStandardize(E_mean, S_mean, E_std, S_std, split_idx=0), TSStandardize(E_mean, S_mean, split_idx=1)]
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[32, 64])
learn = ts_learner(dls, InceptionTimePlus, metrics=accuracy, cbs=[ShowGraph()])
learn.fit_one_cycle(1, 1e-2)
```

In this way the train batches are scaled based on modeled mean and standard deviation distributions, while the valid batches are scaled with fixed mean and standard deviation values. The intent is to improve out-of-distribution performance. This method is inspired by UNCERTAINTY MODELING FOR OUT-OF-DISTRIBUTION GENERALIZATION (https://arxiv.org/abs/2202.03958).
###Code
#export
class TSDiff(Transform):
"Differences batch of type `TSTensor`"
order = 90
def __init__(self, lag=1, pad=True, **kwargs):
super().__init__(**kwargs)
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(o, lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor(torch.arange(24).reshape(2,3,4))
test_eq(TSDiff()(t)[..., 1:].float().mean(), 1)
test_eq(TSDiff(lag=2, pad=False)(t).float().mean(), 2)
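# Added example (a sketch, not in the original notebook): with the default pad=True the output is
# padded so that it keeps the original sequence length, while pad=False shortens it by `lag` steps.
t = TSTensor(torch.arange(24).reshape(2, 3, 4)).float()
TSDiff()(t).shape, TSDiff(lag=2, pad=False)(t).shape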
#export
class TSLog(Transform):
"Log transforms batch of type `TSTensor` + 1. Accepts positive and negative numbers"
order = 90
def __init__(self, ex=None, **kwargs):
self.ex = ex
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.log1p(o[o > 0])
output[o < 0] = -torch.log1p(torch.abs(o[o < 0]))
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def decodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.exp(o[o > 0]) - 1
output[o < 0] = -torch.exp(torch.abs(o[o < 0])) + 1
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def __repr__(self): return f'{self.__class__.__name__}()'
t = TSTensor(torch.rand(2,3,4)) * 2 - 1
tfm = TSLog()
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
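# Added example (a sketch, not in the original notebook): variables listed in `ex` are excluded
# from the log transform and passed through unchanged.
t = TSTensor(torch.rand(2, 3, 4)) * 2 - 1
enc_t = TSLog(ex=[0])(t)
test_eq(enc_t[:, 0], t[:, 0])
test_ne(enc_t[:, 1:], t[:, 1:])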
#export
class TSCyclicalPosition(Transform):
"Concatenates the position along the sequence as 2 additional variables (sine and cosine)"
order = 90
def __init__(self,
        cyclical_var=None, # Optional variable to indicate the steps within the cycle (ie minute of the day)
magnitude=None, # Added for compatibility. It's not used.
drop_var=False, # Flag to indicate if the cyclical var is removed
**kwargs
):
super().__init__(**kwargs)
self.cyclical_var, self.drop_var = cyclical_var, drop_var
def encodes(self, o: TSTensor):
bs,nvars,seq_len = o.shape
if self.cyclical_var is None:
sin, cos = sincos_encoding(seq_len, device=o.device)
output = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
else:
sin = torch.sin(o[:, [self.cyclical_var]]/seq_len * 2 * np.pi)
cos = torch.cos(o[:, [self.cyclical_var]]/seq_len * 2 * np.pi)
if self.drop_var:
exc_vars = np.isin(np.arange(nvars), self.cyclical_var, invert=True)
output = torch.cat([o[:, exc_vars], sin, cos], 1)
else:
output = torch.cat([o, sin, cos], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSCyclicalPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
bs, c_in, seq_len = 1,3,100
t1 = torch.rand(bs, c_in, seq_len)
t2 = torch.arange(seq_len)
t2 = torch.cat([t2[35:], t2[:35]]).reshape(1, 1, -1)
t = TSTensor(torch.cat([t1, t2], 1))
mask = torch.rand_like(t) > .8
t[mask] = np.nan
enc_t = TSCyclicalPosition(3)(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
#export
class TSLinearPosition(Transform):
"Concatenates the position along the sequence as 1 additional variable"
order = 90
def __init__(self,
        linear_var:int=None, # Optional variable to indicate the steps along the sequence (ie minute of the day)
var_range:tuple=None, # Optional range indicating min and max values of the linear variable
magnitude=None, # Added for compatibility. It's not used.
drop_var:bool=False, # Flag to indicate if the cyclical var is removed
lin_range:tuple=(-1,1),
**kwargs):
self.linear_var, self.var_range, self.drop_var, self.lin_range = linear_var, var_range, drop_var, lin_range
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,nvars,seq_len = o.shape
if self.linear_var is None:
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
output = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
else:
linear_var = o[:, [self.linear_var]]
if self.var_range is None:
lin = (linear_var - linear_var.min()) / (linear_var.max() - linear_var.min())
else:
lin = (linear_var - self.var_range[0]) / (self.var_range[1] - self.var_range[0])
            lin = lin * (self.lin_range[1] - self.lin_range[0]) + self.lin_range[0]
if self.drop_var:
exc_vars = np.isin(np.arange(nvars), self.linear_var, invert=True)
output = torch.cat([o[:, exc_vars], lin], 1)
else:
output = torch.cat([o, lin], 1)
return output
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSLinearPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 1
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
t = torch.arange(100)
t1 = torch.cat([t[30:], t[:30]]).reshape(1, 1, -1)
t2 = torch.cat([t[52:], t[:52]]).reshape(1, 1, -1)
t = torch.cat([t1, t2]).float()
mask = torch.rand_like(t) > .8
t[mask] = np.nan
t = TSTensor(t)
enc_t = TSLinearPosition(linear_var=0, var_range=(0, 100), drop_var=True)(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1]
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
#export
class TSMissingness(Transform):
"""Concatenates data missingness for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, **kwargs):
self.feature_idxs = listify(feature_idxs)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
missingness = o[:, self.feature_idxs].isnan()
else:
missingness = o.isnan()
return torch.cat([o, missingness], 1)
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSMissingness(feature_idxs=[0,2])(t)
test_eq(enc_t.shape[1], 5)
test_eq(enc_t[:, 3:], torch.isnan(t[:, [0,2]]).float())
#export
class TSPositionGaps(Transform):
"""Concatenates gaps for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, forward=True, backward=False, nearest=False, normalize=True, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.gap_fn = partial(get_gaps, forward=forward, backward=backward, nearest=nearest, normalize=normalize)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
gaps = self.gap_fn(o[:, self.feature_idxs])
else:
gaps = self.gap_fn(o)
return torch.cat([o, gaps], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSPositionGaps(feature_idxs=[0,2], forward=True, backward=True, nearest=True, normalize=False)(t)
test_eq(enc_t.shape[1], 9)
enc_t.data
#export
class TSRollingMean(Transform):
"""Calculates the rolling mean for all/ selected features alongside the sequence
It replaces the original values or adds additional variables (default)
If nan values are found, they will be filled forward and backward"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, window=2, replace=False, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.rolling_mean_fn = partial(rolling_moving_average, window=window)
self.replace = replace
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
if torch.isnan(o[:, self.feature_idxs]).any():
o[:, self.feature_idxs] = fbfill_sequence(o[:, self.feature_idxs])
rolling_mean = self.rolling_mean_fn(o[:, self.feature_idxs])
if self.replace:
o[:, self.feature_idxs] = rolling_mean
return o
else:
if torch.isnan(o).any():
o = fbfill_sequence(o)
rolling_mean = self.rolling_mean_fn(o)
if self.replace: return rolling_mean
return torch.cat([o, rolling_mean], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t > .6] = np.nan
print(t.data)
enc_t = TSRollingMean(feature_idxs=[0,2], window=3)(t)
test_eq(enc_t.shape[1], 5)
print(enc_t.data)
enc_t = TSRollingMean(window=3, replace=True)(t)
test_eq(enc_t.shape[1], 3)
print(enc_t.data)
#export
class TSLogReturn(Transform):
"Calculates log-return of batch of type `TSTensor`. For positive values only"
order = 90
def __init__(self, lag=1, pad=True, **kwargs):
super().__init__(**kwargs)
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(torch.log(o), lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor([1,2,4,8,16,32,64,128,256]).float()
test_eq(TSLogReturn(pad=False)(t).std(), 0)
#export
class TSAdd(Transform):
"Add a defined amount to each batch of type `TSTensor`."
order = 90
def __init__(self, add, **kwargs):
super().__init__(**kwargs)
self.add = add
def encodes(self, o:TSTensor):
return torch.add(o, self.add)
    def __repr__(self): return f'{self.__class__.__name__}(add={self.add})'
t = TSTensor([1,2,3]).float()
test_eq(TSAdd(1)(t), TSTensor([2,3,4]).float())
#export
class TSClipByVar(Transform):
"""Clip batch of type `TSTensor` by variable
Args:
var_min_max: list of tuples containing variable index, min value (or None) and max value (or None)
"""
order = 90
def __init__(self, var_min_max, **kwargs):
super().__init__(**kwargs)
self.var_min_max = var_min_max
def encodes(self, o:TSTensor):
for v,m,M in self.var_min_max:
o[:, v] = torch.clamp(o[:, v], m, M)
return o
t = TSTensor(torch.rand(16, 3, 10) * tensor([1,10,100]).reshape(1,-1,1))
max_values = t.max(0).values.max(-1).values.data
max_values2 = TSClipByVar([(1,None,5), (2,10,50)])(t).max(0).values.max(-1).values.data
test_le(max_values2[1], 5)
test_ge(max_values2[2], 10)
test_le(max_values2[2], 50)
#export
class TSDropVars(Transform):
"Drops selected variable from the input"
order = 90
def __init__(self, drop_vars, **kwargs):
super().__init__(**kwargs)
self.drop_vars = drop_vars
def encodes(self, o:TSTensor):
exc_vars = np.isin(np.arange(o.shape[1]), self.drop_vars, invert=True)
return o[:, exc_vars]
t = TSTensor(torch.arange(24).reshape(2, 3, 4))
enc_t = TSDropVars(2)(t)
test_ne(t, enc_t)
enc_t.data
#export
class TSOneHotEncode(Transform):
order = 90
def __init__(self,
sel_var:int, # Variable that is one-hot encoded
unique_labels:list, # List containing all labels (excluding nan values)
add_na:bool=False, # Flag to indicate if values not included in vocab should be set as 0
drop_var:bool=True, # Flag to indicate if the selected var is removed
magnitude=None, # Added for compatibility. It's not used.
**kwargs
):
unique_labels = listify(unique_labels)
self.sel_var = sel_var
self.unique_labels = unique_labels
self.n_classes = len(unique_labels) + add_na
self.add_na = add_na
self.drop_var = drop_var
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs, n_vars, seq_len = o.shape
o_var = o[:, [self.sel_var]]
ohe_var = torch.zeros(bs, self.n_classes, seq_len, device=o.device)
if self.add_na:
            is_na = torch.isin(o_var, o_var.new(list(self.unique_labels)), invert=True) # values not in unique_labels (including nan)
ohe_var[:, [0]] = is_na.to(ohe_var.dtype)
for i,l in enumerate(self.unique_labels):
ohe_var[:, [i + self.add_na]] = (o_var == l).to(ohe_var.dtype)
if self.drop_var:
exc_vars = torch.isin(torch.arange(o.shape[1], device=o.device), self.sel_var, invert=True)
output = torch.cat([o[:, exc_vars], ohe_var], 1)
else:
output = torch.cat([o, ohe_var], 1)
return output
bs = 2
seq_len = 5
t_cont = torch.rand(bs, 1, seq_len)
t_cat = torch.randint(0, 3, t_cont.shape)
t = TSTensor(torch.cat([t_cat, t_cont], 1))
t_cat
tfm = TSOneHotEncode(0, [0, 1, 2])
output = tfm(t)[:, -3:].data
test_eq(t_cat, torch.argmax(tfm(t)[:, -3:], 1)[:, None])
tfm(t)[:, -3:].data
bs = 2
seq_len = 5
t_cont = torch.rand(bs, 1, seq_len)
t_cat = torch.tensor([[10., 5., 11., np.nan, 12.], [ 5., 12., 10., np.nan, 11.]])[:, None]
t = TSTensor(torch.cat([t_cat, t_cont], 1))
t_cat
tfm = TSOneHotEncode(0, [10, 11, 12], drop_var=False)
mask = ~torch.isnan(t[:, 0])
test_eq(tfm(t)[:, 0][mask], t[:, 0][mask])
tfm(t)[:, -3:].data
t1 = torch.randint(3, 7, (2, 1, 10))
t2 = torch.rand(2, 1, 10)
t = TSTensor(torch.cat([t1, t2], 1))
output = TSOneHotEncode(0, [3, 4, 5], add_na=True, drop_var=True)(t)
test_eq((t1 > 5).float(), output.data[:, [1]])
test_eq((t1 == 3).float(), output.data[:, [2]])
test_eq((t1 == 4).float(), output.data[:, [3]])
test_eq((t1 == 5).float(), output.data[:, [4]])
test_eq(output.shape, (t.shape[0], 5, t.shape[-1]))
#export
class TSPosition(Transform):
order = 90
def __init__(self,
        steps:list, # List containing the steps passed as an additional variable. They should be normalized.
magnitude=None, # Added for compatibility. It's not used.
**kwargs
):
self.steps = torch.from_numpy(np.asarray(steps)).reshape(1, 1, -1)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs = o.shape[0]
steps = self.steps.expand(bs, -1, -1).to(device=o.device, dtype=o.dtype)
return torch.cat([o, steps], 1)
t = TSTensor(torch.rand(2, 1, 10)).float()
a = np.linspace(-1, 1, 10).astype('float64')
TSPosition(a)(t).data.dtype, t.dtype
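# Added example (a sketch, not in the original notebook): TSPosition appends the provided steps
# as one additional variable, so the number of variables grows by 1.
t = TSTensor(torch.rand(2, 1, 10)).float()
steps = np.linspace(-1, 1, 10)
enc_t = TSPosition(steps)(t)
test_eq(enc_t.shape, (2, 2, 10))
enc_t[0, -1].data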
###Output
_____no_output_____
###Markdown
sklearn API transforms
###Code
#export
from sklearn.base import BaseEstimator, TransformerMixin
from fastai.data.transforms import CategoryMap
from joblib import dump, load
class TSShrinkDataFrame(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, skip=[], obj2cat=True, int2uint=False, verbose=True):
self.columns, self.skip, self.obj2cat, self.int2uint, self.verbose = listify(columns), skip, obj2cat, int2uint, verbose
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
self.old_dtypes = X.dtypes
if not self.columns: self.columns = X.columns
self.dt = df_shrink_dtypes(X[self.columns], self.skip, obj2cat=self.obj2cat, int2uint=self.int2uint)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X[self.columns] = X[self.columns].astype(self.dt)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X = X.astype(self.old_dtypes)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
df = pd.DataFrame()
df["ints64"] = np.random.randint(0,3,10)
df['floats64'] = np.random.rand(10)
tfm = TSShrinkDataFrame()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df["ints64"].dtype, "int8")
test_eq(df["floats64"].dtype, "float32")
#export
class TSOneHotEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, drop=True, add_na=True, dtype=np.int64):
self.columns = listify(columns)
self.drop, self.add_na, self.dtype = drop, add_na, dtype
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
handle_unknown = "ignore" if self.add_na else "error"
self.ohe_tfm = sklearn.preprocessing.OneHotEncoder(handle_unknown=handle_unknown)
if len(self.columns) == 1:
self.ohe_tfm.fit(X[self.columns].to_numpy().reshape(-1, 1))
else:
self.ohe_tfm.fit(X[self.columns])
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if len(self.columns) == 1:
output = self.ohe_tfm.transform(X[self.columns].to_numpy().reshape(-1, 1)).toarray().astype(self.dtype)
else:
output = self.ohe_tfm.transform(X[self.columns]).toarray().astype(self.dtype)
new_cols = []
for i,col in enumerate(self.columns):
for cats in self.ohe_tfm.categories_[i]:
new_cols.append(f"{str(col)}_{str(cats)}")
X[new_cols] = output
if self.drop: X = X.drop(self.columns, axis=1)
return X
df = pd.DataFrame()
df["a"] = np.random.randint(0,2,10)
df["b"] = np.random.randint(0,3,10)
unique_cols = len(df["a"].unique()) + len(df["b"].unique())
tfm = TSOneHotEncoder()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df.shape[1], unique_cols)
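# Added example (a sketch, not in the original notebook): with add_na=True (the default) the fitted
# encoder uses handle_unknown='ignore', so categories unseen during fit are encoded as all zeros
# instead of raising an error.
df_new = pd.DataFrame({"a": [0, 1, 5], "b": [0, 1, 2]})
tfm.transform(df_new)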
#export
class TSCategoricalEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, add_na=True):
self.columns = listify(columns)
self.add_na = add_na
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.cat_tfms = []
for column in self.columns:
self.cat_tfms.append(CategoryMap(X[column], add_na=self.add_na))
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_objs(X[column])
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_ids(X[column])
return X
###Output
_____no_output_____
###Markdown
Stateful transforms like TSCategoricalEncoder can easily be serialized.
###Code
import joblib
df = pd.DataFrame()
df["a"] = alphabet[np.random.randint(0,2,100)]
df["b"] = ALPHABET[np.random.randint(0,3,100)]
a_unique = len(df["a"].unique())
b_unique = len(df["b"].unique())
tfm = TSCategoricalEncoder()
tfm.fit(df)
joblib.dump(tfm, "data/TSCategoricalEncoder.joblib")
tfm = joblib.load("data/TSCategoricalEncoder.joblib")
df = tfm.transform(df)
test_eq(df['a'].max(), a_unique)
test_eq(df['b'].max(), b_unique)
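# Added example (a sketch, not in the original notebook): inverse_transform maps the integer ids
# back to the original labels.
df_dec = tfm.inverse_transform(df.copy())
test_eq(len(df_dec["a"].unique()), a_unique)
df_dec.head()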
#export
default_date_attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
class TSDateTimeEncoder(BaseEstimator, TransformerMixin):
def __init__(self, datetime_columns=None, prefix=None, drop=True, time=False, attr=default_date_attr):
self.datetime_columns = listify(datetime_columns)
self.prefix, self.drop, self.time, self.attr = prefix, drop, time ,attr
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if self.time: self.attr = self.attr + ['Hour', 'Minute', 'Second']
if not self.datetime_columns:
self.datetime_columns = X.columns
self.prefixes = []
for dt_column in self.datetime_columns:
self.prefixes.append(re.sub('[Dd]ate$', '', dt_column) if self.prefix is None else self.prefix)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for dt_column,prefix in zip(self.datetime_columns,self.prefixes):
make_date(X, dt_column)
field = X[dt_column]
# Pandas removed `dt.week` in v1.1.10
week = field.dt.isocalendar().week.astype(field.dt.day.dtype) if hasattr(field.dt, 'isocalendar') else field.dt.week
for n in self.attr: X[prefix + "_" + n] = getattr(field.dt, n.lower()) if n != 'Week' else week
if self.drop: X = X.drop(self.datetime_columns, axis=1)
return X
import datetime
df = pd.DataFrame()
df.loc[0, "date"] = datetime.datetime.now()
df.loc[1, "date"] = datetime.datetime.now() + pd.Timedelta(1, unit="D")
tfm = TSDateTimeEncoder()
joblib.dump(tfm, "data/TSDateTimeEncoder.joblib")
tfm = joblib.load("data/TSDateTimeEncoder.joblib")
tfm.fit_transform(df)
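# Added example (a sketch, not in the original notebook): with time=True, Hour, Minute and Second
# attributes are added in addition to the default date attributes.
df_t = pd.DataFrame({"date": [datetime.datetime.now(), datetime.datetime.now() + pd.Timedelta(1, unit="h")]})
TSDateTimeEncoder(time=True).fit_transform(df_t)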
#export
class TSMissingnessEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None):
self.columns = listify(columns)
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.missing_columns = [f"{cn}_missing" for cn in self.columns]
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
X[self.missing_columns] = X[self.columns].isnull().astype(int)
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
X.drop(self.missing_columns, axis=1, inplace=True)
return X
data = np.random.rand(10,3)
data[data > .8] = np.nan
df = pd.DataFrame(data, columns=["a", "b", "c"])
tfm = TSMissingnessEncoder()
tfm.fit(df)
joblib.dump(tfm, "data/TSMissingnessEncoder.joblib")
tfm = joblib.load("data/TSMissingnessEncoder.joblib")
df = tfm.transform(df)
df
###Output
_____no_output_____
###Markdown
y transforms
###Code
# export
class Preprocessor():
def __init__(self, preprocessor, **kwargs):
self.preprocessor = preprocessor(**kwargs)
def fit(self, o):
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
self.fit_preprocessor = self.preprocessor.fit(o)
return self.fit_preprocessor
def transform(self, o, copy=True):
if type(o) in [float, int]: o = array([o]).reshape(-1,1)
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
def inverse_transform(self, o, copy=True):
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.inverse_transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
StandardScaler = partial(sklearn.preprocessing.StandardScaler)
setattr(StandardScaler, '__name__', 'StandardScaler')
RobustScaler = partial(sklearn.preprocessing.RobustScaler)
setattr(RobustScaler, '__name__', 'RobustScaler')
Normalizer = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(-1, 1))
setattr(Normalizer, '__name__', 'Normalizer')
BoxCox = partial(sklearn.preprocessing.PowerTransformer, method='box-cox')
setattr(BoxCox, '__name__', 'BoxCox')
YeoJohnshon = partial(sklearn.preprocessing.PowerTransformer, method='yeo-johnson')
setattr(YeoJohnshon, '__name__', 'YeoJohnshon')
Quantile = partial(sklearn.preprocessing.QuantileTransformer, n_quantiles=1_000, output_distribution='normal', random_state=0)
setattr(Quantile, '__name__', 'Quantile')
# Standardize
from tsai.data.validation import TimeSplitter
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(StandardScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# RobustScaler
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(RobustScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# Normalize
y = random_shuffle(np.random.rand(1000) * 3 + .5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Normalizer)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# BoxCox
y = random_shuffle(np.random.rand(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(BoxCox)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# YeoJohnshon
y = random_shuffle(np.random.randn(1000) * 10 + 5)
y = np.random.beta(.5, .5, size=1000)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(YeoJohnshon)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# QuantileTransformer
y = - np.random.beta(1, .5, 10000) * 10
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Quantile)
preprocessor.fit(y[splits[0]])
plt.hist(y, 50, label='ori',)
y_tfm = preprocessor.transform(y)
plt.legend(loc='best')
plt.show()
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
test_close(preprocessor.inverse_transform(y_tfm), y, 1e-1)
#export
def ReLabeler(cm):
r"""Changes the labels in a dataset based on a dictionary (class mapping)
Args:
cm = class mapping dictionary
"""
def _relabel(y):
obj = len(set([len(listify(v)) for v in cm.values()])) > 1
keys = cm.keys()
if obj:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y], dtype=object).reshape(*y.shape)
else:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y]).reshape(*y.shape)
return _relabel
vals = {0:'a', 1:'b', 2:'c', 3:'d', 4:'e'}
y = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
labeler = ReLabeler(dict(a='x', b='x', c='y', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y, y_new
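# Added example (a sketch, not in the original notebook): ReLabeler also accepts one-to-many
# mappings (list values); in that case the output holds lists (dtype=object).
labeler = ReLabeler(dict(a=['x', 'y'], b='z', c='z', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y_new[:5]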
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
Data preprocessing > Functions used to preprocess time series (both X and y).
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.external import *
from tsai.data.core import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
#export
class ToNumpyCategory(Transform):
"Categorize a numpy batch"
order = 90
def __init__(self, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: np.ndarray):
self.type = type(o)
self.cat = Categorize()
self.cat.setup(o)
self.vocab = self.cat.vocab
return np.asarray(stack([self.cat(oi) for oi in o]))
def decodes(self, o: (np.ndarray, torch.Tensor)):
return stack([self.cat.decode(oi) for oi in o])
t = ToNumpyCategory()
y_cat = t(y)
y_cat[:10]
test_eq(t.decode(tensor(y_cat)), y)
test_eq(t.decode(np.array(y_cat)), y)
#export
class OneHot(Transform):
"One-hot encode/ decode a batch"
order = 90
def __init__(self, n_classes=None, **kwargs):
self.n_classes = n_classes
super().__init__(**kwargs)
def encodes(self, o: torch.Tensor):
if not self.n_classes: self.n_classes = len(np.unique(o))
return torch.eye(self.n_classes)[o]
def encodes(self, o: np.ndarray):
o = ToNumpyCategory()(o)
if not self.n_classes: self.n_classes = len(np.unique(o))
return np.eye(self.n_classes)[o]
def decodes(self, o: torch.Tensor): return torch.argmax(o, dim=-1)
def decodes(self, o: np.ndarray): return np.argmax(o, axis=-1)
oh_encoder = OneHot()
y_cat = ToNumpyCategory()(y)
oht = oh_encoder(y_cat)
oht[:10]
n_classes = 10
n_samples = 100
t = torch.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oht = oh_encoder(t)
test_eq(oht.shape, (n_samples, n_classes))
test_eq(torch.argmax(oht, dim=-1), t)
test_eq(oh_encoder.decode(oht), t)
n_classes = 10
n_samples = 100
a = np.random.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oha = oh_encoder(a)
test_eq(oha.shape, (n_samples, n_classes))
test_eq(np.argmax(oha, axis=-1), a)
test_eq(oh_encoder.decode(oha), a)
#export
class Nan2Value(Transform):
"Replaces any nan values by a predefined value or median"
order = 90
def __init__(self, value=0, median=False, by_sample_and_var=True):
store_attr()
def encodes(self, o:TSTensor):
mask = torch.isnan(o)
if mask.any():
if self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[mask] = median[mask]
else:
o = torch.nan_to_num(o, torch.nanmedian(o))
o = torch.nan_to_num(o, self.value)
return o
o = TSTensor(torch.randn(16, 10, 100))
o[0,0] = float('nan')
o[o > .9] = float('nan')
o[[0,1,5,8,14,15], :, -20:] = float('nan')
nan_vals1 = torch.isnan(o).sum()
o2 = Pipeline(Nan2Value(), split_idx=0)(o.clone())
o3 = Pipeline(Nan2Value(median=True, by_sample_and_var=True), split_idx=0)(o.clone())
o4 = Pipeline(Nan2Value(median=True, by_sample_and_var=False), split_idx=0)(o.clone())
nan_vals2 = torch.isnan(o2).sum()
nan_vals3 = torch.isnan(o3).sum()
nan_vals4 = torch.isnan(o4).sum()
test_ne(nan_vals1, 0)
test_eq(nan_vals2, 0)
test_eq(nan_vals3, 0)
test_eq(nan_vals4, 0)
# export
class TSStandardize(Transform):
"""Standardizes batch of type `TSTensor`
Args:
- mean: you can pass a precalculated mean value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch.
- std: you can pass a precalculated std value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch. If both mean and std values are passed when instantiating TSStandardize, the rest of arguments won't be used.
- by_sample: if True, it will calculate mean and std for each individual sample. Otherwise based on the entire batch.
- by_var:
* False: mean and std will be the same for all variables.
            * True: a mean and std will be different for each variable.
            * a list of ints: (like [0,1,3]) a different mean and std will be set for each variable on the list. Variables not included in the list
              won't be standardized.
            * a list that contains a list/lists: (like [0, [1,3]]) a different mean and std will be set for each element of the list. If multiple elements are
              included in a list, the same mean and std will be set for those variables in the sublist(s). (In the example a mean and std is determined for
              variable 0, and another one for variables 1 & 3 - the same one.) Variables not included in the list won't be standardized.
        - by_step: if True, it will calculate mean and std for each time step. Otherwise they are calculated across all time steps.
        - eps: lower bound applied to the std to avoid dividing by 0.
- use_single_batch: if True a single training batch will be used to calculate mean & std. Else the entire training set will be used.
"""
parameters, order = L('mean', 'std'), 90
def __init__(self, mean=None, std=None, by_sample=False, by_var=False, by_step=False, eps=1e-8, use_single_batch=True, verbose=False):
self.mean = tensor(mean) if mean is not None else None
self.std = tensor(std) if std is not None else None
self.eps = eps
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.mean is not None or self.std is not None:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, mean, std): return cls(mean, std)
def setups(self, dl: DataLoader):
if self.mean is None or self.std is None:
if not self.by_sample:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
if len(self.mean.shape) == 0:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} mean shape={self.mean.shape}, std shape={self.std.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else: self.mean, self.std = torch.zeros(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
return (o - self.mean) / self.std
def decodes(self, o:TSTensor):
if self.mean is None or self.std is None: return o
return o * self.std + self.mean
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
from tsai.data.validation import TimeSplitter
X_nan = np.random.rand(100, 5, 10)
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 0] = float('nan')
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 1, -10:] = float('nan')
batch_tfms = TSStandardize(by_var=True)
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
test_eq(torch.isnan(dls.after_batch[0].mean).sum(), 0)
test_eq(torch.isnan(dls.after_batch[0].std).sum(), 0)
xb = first(dls.train)[0]
test_ne(torch.isnan(xb).sum(), 0)
test_ne(torch.isnan(xb).sum(), torch.isnan(xb).numel())
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
xb = first(dls.train)[0]
test_eq(torch.isnan(xb).sum(), 0)
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=True)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True, by_var=False, verbose=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=False)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
#export
@patch
def mul_min(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.min(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
min_x = x
for ax in axes: min_x, _ = min_x.min(ax, keepdim)
return retain_type(min_x, x)
@patch
def mul_max(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.max(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
max_x = x
for ax in axes: max_x, _ = max_x.max(ax, keepdim)
return retain_type(max_x, x)
class TSNormalize(Transform):
"Normalizes batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=None, max=None, range=(-1, 1), by_sample=False, by_var=False, by_step=False, clip_values=True,
use_single_batch=True, verbose=False):
self.min = tensor(min) if min is not None else None
self.max = tensor(max) if max is not None else None
self.range_min, self.range_max = range
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.clip_values = clip_values
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.min is not None or self.max is not None:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, min, max, range_min=0, range_max=1): return cls(min, max, range=(range_min, range_max))
def setups(self, dl: DataLoader):
if self.min is None or self.max is None:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
if len(self.min.shape) == 0:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} min shape={self.min.shape}, max shape={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
output = ((o - self.min) / (self.max - self.min)) * (self.range_max - self.range_min) + self.range_min
if self.clip_values:
if self.by_var and is_listy(self.by_var):
for v in self.by_var:
if not is_listy(v): v = [v]
output[:, v] = torch.clamp(output[:, v], self.range_min, self.range_max)
else:
output = torch.clamp(output, self.range_min, self.range_max)
return output
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms = [TSNormalize()]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms=[TSNormalize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms = [TSNormalize(by_var=[0, [1, 2]], use_single_batch=False, clip_values=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb[:, [0, 1, 2]].max() <= 1
assert xb[:, [0, 1, 2]].min() >= -1
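# The per-variable normalization above relies on the `mul_min` / `mul_max` reducers patched earlier.
# A minimal sketch of their behaviour on their own (relies only on TSTensor and the patches above):
# reducing over axes (0, 2) with keepdim=True leaves one value per variable.
t_red = TSTensor(torch.randn(4, 3, 10))
test_eq(t_red.mul_min((0, 2), keepdim=True).shape, (1, 3, 1))
test_eq(t_red.mul_max((0, 2), keepdim=True).shape, (1, 3, 1))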
#export
class TSClipOutliers(Transform):
"Clip outliers batch of type `TSTensor` based on the IQR"
parameters, order = L('min', 'max'), 90
def __init__(self, min=None, max=None, by_sample=False, by_var=False, verbose=False):
self.su = (min is None or max is None) and not by_sample
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.verbose = verbose
if min is not None or max is not None:
pv(f'{self.__class__.__name__} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self.su:
o, *_ = dl.one_batch()
min, max = get_outliers_IQR(o, self.axis)
self.min, self.max = tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self.su = False
def encodes(self, o:TSTensor):
if self.axis is None: return torch.clamp(o, self.min, self.max)
elif self.by_sample:
min, max = get_outliers_IQR(o, axis=self.axis)
self.min, self.max = o.new(min), o.new(max)
return torch_clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
batch_tfms=[TSClipOutliers(-1, 1, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
# export
class TSClip(Transform):
"Clip batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=-6, max=6):
self.min = torch.tensor(min)
self.max = torch.tensor(max)
def encodes(self, o:TSTensor):
return torch.clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
t = TSTensor(torch.randn(10, 20, 100)*10)
test_le(TSClip()(t).max().item(), 6)
test_ge(TSClip()(t).min().item(), -6)
#export
class TSRobustScale(Transform):
r"""This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range)"""
parameters, order = L('median', 'min', 'max'), 90
def __init__(self, median=None, min=None, max=None, by_sample=False, by_var=False, verbose=False):
self.su = (median is None or min is None or max is None) and not by_sample
self.median = tensor(median) if median is not None else tensor(0)
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.verbose = verbose
if median is not None or min is not None or max is not None:
pv(f'{self.__class__.__name__} median={median} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self.su:
o, *_ = dl.one_batch()
median = get_percentile(o, 50, self.axis)
min, max = get_outliers_IQR(o, self.axis)
self.median, self.min, self.max = tensor(median), tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} median={self.median} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} median={self.median.shape} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self.su = False
def encodes(self, o:TSTensor):
if self.by_sample:
median = get_percentile(o, 50, self.axis)
min, max = get_outliers_IQR(o, axis=self.axis)
self.median, self.min, self.max = o.new(median), o.new(min), o.new(max)
return (o - self.median) / (self.max - self.min)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, num_workers=0)
xb, yb = next(iter(dls.train))
clipped_xb = TSRobustScale(by_sample=True)(xb)
test_ne(clipped_xb, xb)
clipped_xb.min(), clipped_xb.max(), xb.min(), xb.max()
#export
class TSDiff(Transform):
"Differences batch of type `TSTensor`"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(o, lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor(torch.arange(24).reshape(2,3,4))
test_eq(TSDiff()(t)[..., 1:].float().mean(), 1)
test_eq(TSDiff(lag=2, pad=False)(t).float().mean(), 2)
#export
class TSLog(Transform):
"Log transforms batch of type `TSTensor`."
order = 90
def __init__(self, ex=None, add=0, **kwargs):
self.ex, self.add = ex, add
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
output = torch.log(o + self.add)
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def decodes(self, o:TSTensor):
output = torch.exp(o) - self.add
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def __repr__(self): return f'{self.__class__.__name__}()'
t = TSTensor(torch.rand(2,3,4))
tfm = TSLog()
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
t = TSTensor(torch.rand(2,3,4))
tfm = TSLog(add=1)
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
#export
class TSCyclicalPosition(Transform):
"""Concatenates the position along the sequence as 2 additional variables (sine and cosine)
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
sin, cos = sincos_encoding(seq_len, device=o.device)
output = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSCyclicalPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
#export
class TSLinearPosition(Transform):
"""Concatenates the position along the sequence as 1 additional variable
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
output = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSLinearPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 1
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
#export
class TSLogReturn(Transform):
"Calculates log-return of batch of type `TSTensor`. For positive values only"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(torch.log(o), lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor([1,2,4,8,16,32,64,128,256]).float()
test_eq(TSLogReturn(pad=False)(t).std(), 0)
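# Hedged check (not part of the original tests): a series that doubles at every step has a constant
# lag-1 log-return of log(2) ~= 0.693.
test_close(TSLogReturn(pad=False)(t).data, np.log(2))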
#export
class TSAdd(Transform):
"Add a defined amount to each batch of type `TSTensor`."
order = 90
def __init__(self, add):
self.add = add
def encodes(self, o:TSTensor):
return torch.add(o, self.add)
def __repr__(self): return f'{self.__class__.__name__}(add={self.add})'
t = TSTensor([1,2,3]).float()
test_eq(TSAdd(1)(t), TSTensor([2,3,4]).float())
###Output
_____no_output_____
###Markdown
y transforms
###Code
# export
class Preprocessor():
def __init__(self, preprocessor, **kwargs):
self.preprocessor = preprocessor(**kwargs)
def fit(self, o):
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
self.fit_preprocessor = self.preprocessor.fit(o)
return self.fit_preprocessor
def transform(self, o, copy=True):
if type(o) in [float, int]: o = array([o]).reshape(-1,1)
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
def inverse_transform(self, o, copy=True):
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.inverse_transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
StandardScaler = partial(sklearn.preprocessing.StandardScaler)
setattr(StandardScaler, '__name__', 'StandardScaler')
RobustScaler = partial(sklearn.preprocessing.RobustScaler)
setattr(RobustScaler, '__name__', 'RobustScaler')
Normalizer = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(-1, 1))
setattr(Normalizer, '__name__', 'Normalizer')
BoxCox = partial(sklearn.preprocessing.PowerTransformer, method='box-cox')
setattr(BoxCox, '__name__', 'BoxCox')
YeoJohnshon = partial(sklearn.preprocessing.PowerTransformer, method='yeo-johnson')
setattr(YeoJohnshon, '__name__', 'YeoJohnshon')
Quantile = partial(sklearn.preprocessing.QuantileTransformer, n_quantiles=1_000, output_distribution='normal', random_state=0)
setattr(Quantile, '__name__', 'Quantile')
# Standardize
from tsai.data.validation import TimeSplitter
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(StandardScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# RobustScaler
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(RobustScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# Normalize
y = random_shuffle(np.random.rand(1000) * 3 + .5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Normalizer)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# BoxCox
y = random_shuffle(np.random.rand(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(BoxCox)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# YeoJohnshon
y = random_shuffle(np.random.randn(1000) * 10 + 5)
y = np.random.beta(.5, .5, size=1000)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(YeoJohnshon)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# QuantileTransformer
y = - np.random.beta(1, .5, 10000) * 10
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Quantile)
preprocessor.fit(y[splits[0]])
plt.hist(y, 50, label='ori',)
y_tfm = preprocessor.transform(y)
plt.legend(loc='best')
plt.show()
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
test_close(preprocessor.inverse_transform(y_tfm), y, 1e-1)
#export
def ReLabeler(cm):
r"""Changes the labels in a dataset based on a dictionary (class mapping)
Args:
cm = class mapping dictionary
"""
def _relabel(y):
obj = len(set([len(listify(v)) for v in cm.values()])) > 1
keys = cm.keys()
if obj:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y], dtype=object).reshape(*y.shape)
else:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y]).reshape(*y.shape)
return _relabel
vals = {0:'a', 1:'b', 2:'c', 3:'d', 4:'e'}
y = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
labeler = ReLabeler(dict(a='x', b='x', c='y', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y, y_new
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
Data preprocessing
> Functions used to preprocess time series (both X and y).
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.external import *
from tsai.data.core import *
from tsai.data.preparation import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
#export
class ToNumpyCategory(Transform):
"Categorize a numpy batch"
order = 90
def __init__(self, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: np.ndarray):
self.type = type(o)
self.cat = Categorize()
self.cat.setup(o)
self.vocab = self.cat.vocab
return np.asarray(stack([self.cat(oi) for oi in o]))
def decodes(self, o: (np.ndarray, torch.Tensor)):
return stack([self.cat.decode(oi) for oi in o])
t = ToNumpyCategory()
y_cat = t(y)
y_cat[:10]
test_eq(t.decode(tensor(y_cat)), y)
test_eq(t.decode(np.array(y_cat)), y)
#export
class OneHot(Transform):
"One-hot encode/ decode a batch"
order = 90
def __init__(self, n_classes=None, **kwargs):
self.n_classes = n_classes
super().__init__(**kwargs)
def encodes(self, o: torch.Tensor):
if not self.n_classes: self.n_classes = len(np.unique(o))
return torch.eye(self.n_classes)[o]
def encodes(self, o: np.ndarray):
o = ToNumpyCategory()(o)
if not self.n_classes: self.n_classes = len(np.unique(o))
return np.eye(self.n_classes)[o]
def decodes(self, o: torch.Tensor): return torch.argmax(o, dim=-1)
def decodes(self, o: np.ndarray): return np.argmax(o, axis=-1)
oh_encoder = OneHot()
y_cat = ToNumpyCategory()(y)
oht = oh_encoder(y_cat)
oht[:10]
n_classes = 10
n_samples = 100
t = torch.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oht = oh_encoder(t)
test_eq(oht.shape, (n_samples, n_classes))
test_eq(torch.argmax(oht, dim=-1), t)
test_eq(oh_encoder.decode(oht), t)
n_classes = 10
n_samples = 100
a = np.random.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oha = oh_encoder(a)
test_eq(oha.shape, (n_samples, n_classes))
test_eq(np.argmax(oha, axis=-1), a)
test_eq(oh_encoder.decode(oha), a)
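# Hedged sketch: n_classes can be set explicitly when a batch might not contain every class
# (relies only on the OneHot transform defined above; the class count of 5 is an arbitrary choice).
partial_batch = np.array([0, 2, 2, 1])
oh_encoder = OneHot(n_classes=5)
test_eq(oh_encoder(partial_batch).shape, (4, 5))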
#export
class TSNan2Value(Transform):
"Replaces any nan values by a predefined value or median"
order = 90
def __init__(self, value=0, median=False, by_sample_and_var=True, sel_vars=None):
store_attr()
def encodes(self, o:TSTensor):
if self.sel_vars is not None:
mask = torch.isnan(o[:, self.sel_vars])
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o[:, self.sel_vars], dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[:, self.sel_vars][mask] = median[mask]
else:
o[:, self.sel_vars] = torch_nan_to_num(o[:, self.sel_vars], torch.nanmedian(o[:, self.sel_vars]))
o[:, self.sel_vars] = torch_nan_to_num(o[:, self.sel_vars], self.value)
else:
mask = torch.isnan(o)
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[mask] = median[mask]
else:
o = torch_nan_to_num(o, torch.nanmedian(o))
o = torch_nan_to_num(o, self.value)
return o
Nan2Value = TSNan2Value
o = TSTensor(torch.randn(16, 10, 100))
o[0,0] = float('nan')
o[o > .9] = float('nan')
o[[0,1,5,8,14,15], :, -20:] = float('nan')
nan_vals1 = torch.isnan(o).sum()
o2 = Pipeline(TSNan2Value(), split_idx=0)(o.clone())
o3 = Pipeline(TSNan2Value(median=True, by_sample_and_var=True), split_idx=0)(o.clone())
o4 = Pipeline(TSNan2Value(median=True, by_sample_and_var=False), split_idx=0)(o.clone())
nan_vals2 = torch.isnan(o2).sum()
nan_vals3 = torch.isnan(o3).sum()
nan_vals4 = torch.isnan(o4).sum()
test_ne(nan_vals1, 0)
test_eq(nan_vals2, 0)
test_eq(nan_vals3, 0)
test_eq(nan_vals4, 0)
o = TSTensor(torch.randn(16, 10, 100))
o[o > .9] = float('nan')
o = TSNan2Value(median=True, sel_vars=[0,1,2,3,4])(o)
test_eq(torch.isnan(o[:, [0,1,2,3,4]]).sum().item(), 0)
# export
class TSStandardize(Transform):
"""Standardizes batch of type `TSTensor`
Args:
- mean: you can pass a precalculated mean value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch.
- std: you can pass a precalculated std value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch. If both mean and std values are passed when instantiating TSStandardize, the rest of arguments won't be used.
- by_sample: if True, it will calculate mean and std for each individual sample. Otherwise based on the entire batch.
- by_var:
* False: mean and std will be the same for all variables.
* True: a different mean and std will be calculated for each variable.
* a list of ints: (like [0,1,3]) a different mean and std will be set for each variable on the list. Variables not included in the list
won't be standardized.
* a list that contains a list/lists: (like [0, [1,3]]) a different mean and std will be set for each element of the list. If multiple elements are
included in a list, the same mean and std will be set for those variables in the sublist/s. (in the example a mean and std is determined for
variable 0, and another one for variables 1 & 3 - the same one). Variables not included in the list won't be standardized.
- by_step: if True, a different mean and std will be calculated for each time step. Otherwise the same statistics are applied to every step.
- eps: it avoids dividing by 0
- use_single_batch: if True a single training batch will be used to calculate mean & std. Else the entire training set will be used.
"""
parameters, order = L('mean', 'std'), 90
_setup = True # indicates it requires set up
def __init__(self, mean=None, std=None, by_sample=False, by_var=False, by_step=False, eps=1e-8, use_single_batch=True, verbose=False):
self.mean = tensor(mean) if mean is not None else None
self.std = tensor(std) if std is not None else None
self._setup = (mean is None or std is None) and not by_sample
self.eps = eps
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.mean is not None or self.std is not None:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, mean, std): return cls(mean, std)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
if len(self.mean.shape) == 0:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} mean shape={self.mean.shape}, std shape={self.std.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.mean, self.std = torch.zeros(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
return (o - self.mean) / self.std
def decodes(self, o:TSTensor):
if self.mean is None or self.std is None: return o
return o * self.std + self.mean
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
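# A sketch of the `by_var` group syntax described in the docstring above (uses by_sample=True so the
# statistics are computed on the fly in `encodes`): variable 0 gets its own stats, variables 1 & 2
# share stats, and variable 3 is left untouched.
t_grp = TSTensor(torch.randn(8, 4, 50) * 5 + 10)
enc_grp = TSStandardize(by_sample=True, by_var=[0, [1, 2]])(t_grp)
test_close(enc_grp[:, :3].mean(), 0, eps=1e-1)
test_eq(enc_grp[:, 3], t_grp[:, 3])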
from tsai.data.validation import TimeSplitter
X_nan = np.random.rand(100, 5, 10)
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 0] = float('nan')
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 1, -10:] = float('nan')
batch_tfms = TSStandardize(by_var=True)
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
test_eq(torch.isnan(dls.after_batch[0].mean).sum(), 0)
test_eq(torch.isnan(dls.after_batch[0].std).sum(), 0)
xb = first(dls.train)[0]
test_ne(torch.isnan(xb).sum(), 0)
test_ne(torch.isnan(xb).sum(), torch.isnan(xb).numel())
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
xb = first(dls.train)[0]
test_eq(torch.isnan(xb).sum(), 0)
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=True)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True, by_var=False, verbose=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=False)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
#export
@patch
def mul_min(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.min(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
min_x = x
for ax in axes: min_x, _ = min_x.min(ax, keepdim)
return retain_type(min_x, x)
@patch
def mul_max(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.max(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
max_x = x
for ax in axes: max_x, _ = max_x.max(ax, keepdim)
return retain_type(max_x, x)
class TSNormalize(Transform):
"Normalizes batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, range=(-1, 1), by_sample=False, by_var=False, by_step=False, clip_values=True,
use_single_batch=True, verbose=False):
self.min = tensor(min) if min is not None else None
self.max = tensor(max) if max is not None else None
self._setup = (self.min is None and self.max is None) and not by_sample
self.range_min, self.range_max = range
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.clip_values = clip_values
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.min is not None or self.max is not None:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, min, max, range_min=0, range_max=1): return cls(min, max, range=(range_min, range_max))
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
if len(self.min.shape) == 0:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} min shape={self.min.shape}, max shape={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.min, self.max = -torch.ones(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
output = ((o - self.min) / (self.max - self.min)) * (self.range_max - self.range_min) + self.range_min
if self.clip_values:
if self.by_var and is_listy(self.by_var):
for v in self.by_var:
if not is_listy(v): v = [v]
output[:, v] = torch.clamp(output[:, v], self.range_min, self.range_max)
else:
output = torch.clamp(output, self.range_min, self.range_max)
return output
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms = [TSNormalize()]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms=[TSNormalize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms = [TSNormalize(by_var=[0, [1, 2]], use_single_batch=False, clip_values=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb[:, [0, 1, 2]].max() <= 1
assert xb[:, [0, 1, 2]].min() >= -1
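# Sketch (relies only on the mul_min / mul_max patches above): without keepdim the reduced axes are dropped,
# so reducing a (4, 3, 10) batch over axes (0, 2) leaves one value per variable.
t_red = TSTensor(torch.randn(4, 3, 10))
test_eq(t_red.mul_max((0, 2)).shape, (3,))
test_eq(t_red.mul_min((0, 2)).shape, (3,))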
#export
class TSClipOutliers(Transform):
"Clip outliers batch of type `TSTensor` based on the IQR"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, by_sample=False, by_var=False, use_single_batch=False, verbose=False):
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
self._setup = (min is None or max is None) and not by_sample
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
if min is not None or max is not None:
pv(f'{self.__class__.__name__} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
min, max = get_outliers_IQR(o, self.axis)
self.min, self.max = tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.axis is None: return torch.clamp(o, self.min, self.max)
elif self.by_sample:
min, max = get_outliers_IQR(o, axis=self.axis)
self.min, self.max = o.new(min), o.new(max)
return torch_clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
batch_tfms=[TSClipOutliers(-1, 1, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
# export
class TSClip(Transform):
"Clip batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=-6, max=6):
self.min = torch.tensor(min)
self.max = torch.tensor(max)
def encodes(self, o:TSTensor):
return torch.clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
t = TSTensor(torch.randn(10, 20, 100)*10)
test_le(TSClip()(t).max().item(), 6)
test_ge(TSClip()(t).min().item(), -6)
#export
class TSRobustScale(Transform):
r"""This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range)"""
parameters, order = L('median', 'iqr'), 90
_setup = True # indicates it requires set up
def __init__(self, median=None, iqr=None, quantile_range=(25.0, 75.0), use_single_batch=True, eps=1e-8, verbose=False):
self.median = tensor(median) if median is not None else None
self.iqr = tensor(iqr) if iqr is not None else None
self._setup = median is None or iqr is None
self.use_single_batch = use_single_batch
self.eps = eps
self.verbose = verbose
self.quantile_range = quantile_range
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
new_o = o.permute(1,0,2).flatten(1)
median = get_percentile(new_o, 50, axis=1)
iqrmin, iqrmax = get_outliers_IQR(new_o, axis=1, quantile_range=self.quantile_range)
self.median = median.unsqueeze(0)
self.iqr = torch.clamp_min((iqrmax - iqrmin).unsqueeze(0), self.eps)
pv(f'{self.__class__.__name__} median={self.median.shape} iqr={self.iqr.shape}', self.verbose)
self._setup = False
else:
if self.median is None: self.median = torch.zeros(1, device=dl.device)
if self.iqr is None: self.iqr = torch.ones(1, device=dl.device)
def encodes(self, o:TSTensor):
return (o - self.median) / self.iqr
def __repr__(self): return f'{self.__class__.__name__}(quantile_range={self.quantile_range}, use_single_batch={self.use_single_batch})'
batch_tfms = TSRobustScale(verbose=True, use_single_batch=False)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, batch_tfms=batch_tfms, num_workers=0)
xb, yb = next(iter(dls.train))
xb.min()
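# Hedged sanity check (a sketch, not part of the original tests): each variable was centred on its
# training-set median and scaled by its IQR, so the batch median should sit close to 0.
test_close(xb.median(), 0, eps=1e-1)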
#export
class TSDiff(Transform):
"Differences batch of type `TSTensor`"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(o, lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor(torch.arange(24).reshape(2,3,4))
test_eq(TSDiff()(t)[..., 1:].float().mean(), 1)
test_eq(TSDiff(lag=2, pad=False)(t).float().mean(), 2)
#export
class TSLog(Transform):
"Log transforms batch of type `TSTensor` + 1. Accepts positive and negative numbers"
order = 90
def __init__(self, ex=None, **kwargs):
self.ex = ex
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.log1p(o[o > 0])
output[o < 0] = -torch.log1p(torch.abs(o[o < 0]))
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def decodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.exp(o[o > 0]) - 1
output[o < 0] = -torch.exp(torch.abs(o[o < 0])) + 1
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def __repr__(self): return f'{self.__class__.__name__}()'
t = TSTensor(torch.rand(2,3,4)) * 2 - 1
tfm = TSLog()
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
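# Hedged check of the signed log1p mapping above: +/-(e - 1) should map to +/-1, and 0 stays 0.
t_sign = TSTensor([[[np.e - 1, -(np.e - 1), 0.]]])
test_close(TSLog()(t_sign).data, tensor([[[1., -1., 0.]]]))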
#export
class TSCyclicalPosition(Transform):
"""Concatenates the position along the sequence as 2 additional variables (sine and cosine)
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
sin, cos = sincos_encoding(seq_len, device=o.device)
output = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSCyclicalPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
#export
class TSLinearPosition(Transform):
"""Concatenates the position along the sequence as 1 additional variable
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
output = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSLinearPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 1
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
# export
class TSPosition(Transform):
"""Concatenates linear and/or cyclical positions along the sequence as additional variables"""
order = 90
def __init__(self, cyclical=True, linear=True, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
self.cyclical, self.linear = cyclical, linear
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
if self.linear:
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
o = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
if self.cyclical:
sin, cos = sincos_encoding(seq_len, device=o.device)
o = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return o
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSPosition(cyclical=True, linear=True)(t)
test_eq(enc_t.shape[1], 6)
plt.plot(enc_t[0, 3:].T);
#export
class TSMissingness(Transform):
"""Concatenates data missingness for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, **kwargs):
self.feature_idxs = listify(feature_idxs)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
missingness = o[:, self.feature_idxs].isnan()
else:
missingness = o.isnan()
return torch.cat([o, missingness], 1)
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSMissingness(feature_idxs=[0,2])(t)
test_eq(enc_t.shape[1], 5)
test_eq(enc_t[:, 3:], torch.isnan(t[:, [0,2]]).float())
#export
class TSPositionGaps(Transform):
"""Concatenates gaps for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, forward=True, backward=False, nearest=False, normalize=True, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.gap_fn = partial(get_gaps, forward=forward, backward=backward, nearest=nearest, normalize=normalize)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
gaps = self.gap_fn(o[:, self.feature_idxs])
else:
gaps = self.gap_fn(o)
return torch.cat([o, gaps], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSPositionGaps(feature_idxs=[0,2], forward=True, backward=True, nearest=True, normalize=False)(t)
test_eq(enc_t.shape[1], 9)
enc_t.data
#export
class TSRollingMean(Transform):
"""Calculates the rolling mean for all/ selected features alongside the sequence
It replaces the original values or adds additional variables (default)
If nan values are found, they will be filled forward and backward"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, window=2, replace=False, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.rolling_mean_fn = partial(rolling_moving_average, window=window)
self.replace = replace
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
if torch.isnan(o[:, self.feature_idxs]).any():
o[:, self.feature_idxs] = fbfill_sequence(o[:, self.feature_idxs])
rolling_mean = self.rolling_mean_fn(o[:, self.feature_idxs])
if self.replace:
o[:, self.feature_idxs] = rolling_mean
return o
else:
if torch.isnan(o).any():
o = fbfill_sequence(o)
rolling_mean = self.rolling_mean_fn(o)
if self.replace: return rolling_mean
return torch.cat([o, rolling_mean], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t > .6] = np.nan
print(t.data)
enc_t = TSRollingMean(feature_idxs=[0,2], window=3)(t)
test_eq(enc_t.shape[1], 5)
print(enc_t.data)
enc_t = TSRollingMean(window=3, replace=True)(t)
test_eq(enc_t.shape[1], 3)
print(enc_t.data)
#export
class TSLogReturn(Transform):
"Calculates log-return of batch of type `TSTensor`. For positive values only"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(torch.log(o), lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor([1,2,4,8,16,32,64,128,256]).float()
test_eq(TSLogReturn(pad=False)(t).std(), 0)
#export
class TSAdd(Transform):
"Add a defined amount to each batch of type `TSTensor`."
order = 90
def __init__(self, add):
self.add = add
def encodes(self, o:TSTensor):
return torch.add(o, self.add)
def __repr__(self): return f'{self.__class__.__name__}(add={self.add})'
t = TSTensor([1,2,3]).float()
test_eq(TSAdd(1)(t), TSTensor([2,3,4]).float())
###Output
_____no_output_____
###Markdown
sklearn API transforms
###Code
#export
from sklearn.base import BaseEstimator, TransformerMixin
from fastai.data.transforms import CategoryMap
from joblib import dump, load
class TSShrinkDataFrame(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, skip=[], obj2cat=True, int2uint=False, verbose=True):
self.columns, self.skip, self.obj2cat, self.int2uint, self.verbose = listify(columns), skip, obj2cat, int2uint, verbose
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
self.old_dtypes = X.dtypes
if not self.columns: self.columns = X.columns
self.dt = df_shrink_dtypes(X[self.columns], self.skip, obj2cat=self.obj2cat, int2uint=self.int2uint)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X[self.columns] = X[self.columns].astype(self.dt)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X = X.astype(self.old_dtypes)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
df = pd.DataFrame()
df["ints64"] = np.random.randint(0,3,10)
df['floats64'] = np.random.rand(10)
tfm = TSShrinkDataFrame()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df["ints64"].dtype, "int8")
test_eq(df["floats64"].dtype, "float32")
#export
class TSOneHotEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, drop=True, add_na=True, dtype=np.int64):
self.columns = listify(columns)
self.drop, self.add_na, self.dtype = drop, add_na, dtype
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
handle_unknown = "ignore" if self.add_na else "error"
self.ohe_tfm = sklearn.preprocessing.OneHotEncoder(handle_unknown=handle_unknown)
if len(self.columns) == 1:
self.ohe_tfm.fit(X[self.columns].to_numpy().reshape(-1, 1))
else:
self.ohe_tfm.fit(X[self.columns])
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if len(self.columns) == 1:
output = self.ohe_tfm.transform(X[self.columns].to_numpy().reshape(-1, 1)).toarray().astype(self.dtype)
else:
output = self.ohe_tfm.transform(X[self.columns]).toarray().astype(self.dtype)
new_cols = []
for i,col in enumerate(self.columns):
for cats in self.ohe_tfm.categories_[i]:
new_cols.append(f"{str(col)}_{str(cats)}")
X[new_cols] = output
if self.drop: X = X.drop(self.columns, axis=1)
return X
df = pd.DataFrame()
df["a"] = np.random.randint(0,2,10)
df["b"] = np.random.randint(0,3,10)
unique_cols = len(df["a"].unique()) + len(df["b"].unique())
tfm = TSOneHotEncoder()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df.shape[1], unique_cols)
#export
class TSCategoricalEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, add_na=True):
self.columns = listify(columns)
self.add_na = add_na
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.cat_tfms = []
for column in self.columns:
self.cat_tfms.append(CategoryMap(X[column], add_na=self.add_na))
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_objs(X[column])
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_ids(X[column])
return X
###Output
_____no_output_____
###Markdown
Stateful transforms like TSCategoricalEncoder can easily be serialized.
###Code
import joblib
df = pd.DataFrame()
df["a"] = alphabet[np.random.randint(0,2,100)]
df["b"] = ALPHABET[np.random.randint(0,3,100)]
a_unique = len(df["a"].unique())
b_unique = len(df["b"].unique())
tfm = TSCategoricalEncoder()
tfm.fit(df)
joblib.dump(tfm, "TSCategoricalEncoder.joblib")
tfm = joblib.load("TSCategoricalEncoder.joblib")
df = tfm.transform(df)
test_eq(df['a'].max(), a_unique)
test_eq(df['b'].max(), b_unique)
#export
default_date_attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
class TSDateTimeEncoder(BaseEstimator, TransformerMixin):
def __init__(self, datetime_columns=None, prefix=None, drop=True, time=False, attr=default_date_attr):
self.datetime_columns = listify(datetime_columns)
self.prefix, self.drop, self.time, self.attr = prefix, drop, time ,attr
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if self.time: self.attr = self.attr + ['Hour', 'Minute', 'Second']
if not self.datetime_columns:
self.datetime_columns = X.columns
self.prefixes = []
for dt_column in self.datetime_columns:
self.prefixes.append(re.sub('[Dd]ate$', '', dt_column) if self.prefix is None else self.prefix)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for dt_column,prefix in zip(self.datetime_columns,self.prefixes):
make_date(X, dt_column)
field = X[dt_column]
# Pandas deprecated `dt.week`; use `isocalendar().week` when available
week = field.dt.isocalendar().week.astype(field.dt.day.dtype) if hasattr(field.dt, 'isocalendar') else field.dt.week
for n in self.attr: X[prefix + "_" + n] = getattr(field.dt, n.lower()) if n != 'Week' else week
if self.drop: X = X.drop(self.datetime_columns, axis=1)
return X
import datetime
df = pd.DataFrame()
df.loc[0, "date"] = datetime.datetime.now()
df.loc[1, "date"] = datetime.datetime.now() + pd.Timedelta(1, unit="D")
tfm = TSDateTimeEncoder()
joblib.dump(tfm, "TSDateTimeEncoder.joblib")
tfm = joblib.load("TSDateTimeEncoder.joblib")
tfm.fit_transform(df)
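# Hedged sketch: with time=True the encoder also extracts Hour / Minute / Second
# (relies only on the TSDateTimeEncoder defined above).
df_t = pd.DataFrame({"date": [datetime.datetime.now(), datetime.datetime.now() + pd.Timedelta(1, unit="D")]})
TSDateTimeEncoder(time=True).fit_transform(df_t)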
#export
class TSMissingnessEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None):
self.columns = listify(columns)
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.missing_columns = [f"{cn}_missing" for cn in self.columns]
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
X[self.missing_columns] = X[self.columns].isnull().astype(int)
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
X.drop(self.missing_columns, axis=1, inplace=True)
return X
data = np.random.rand(10,3)
data[data > .8] = np.nan
df = pd.DataFrame(data, columns=["a", "b", "c"])
tfm = TSMissingnessEncoder()
tfm.fit(df)
joblib.dump(tfm, "TSMissingnessEncoder.joblib")
tfm = joblib.load("TSMissingnessEncoder.joblib")
df = tfm.transform(df)
df
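# Hedged sketch: inverse_transform drops the generated *_missing indicator columns again.
df = tfm.inverse_transform(df)
test_eq(list(df.columns), ["a", "b", "c"])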
###Output
_____no_output_____
###Markdown
y transforms
###Code
# export
class Preprocessor():
def __init__(self, preprocessor, **kwargs):
self.preprocessor = preprocessor(**kwargs)
def fit(self, o):
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
self.fit_preprocessor = self.preprocessor.fit(o)
return self.fit_preprocessor
def transform(self, o, copy=True):
if type(o) in [float, int]: o = array([o]).reshape(-1,1)
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
def inverse_transform(self, o, copy=True):
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.inverse_transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
StandardScaler = partial(sklearn.preprocessing.StandardScaler)
setattr(StandardScaler, '__name__', 'StandardScaler')
RobustScaler = partial(sklearn.preprocessing.RobustScaler)
setattr(RobustScaler, '__name__', 'RobustScaler')
Normalizer = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(-1, 1))
setattr(Normalizer, '__name__', 'Normalizer')
BoxCox = partial(sklearn.preprocessing.PowerTransformer, method='box-cox')
setattr(BoxCox, '__name__', 'BoxCox')
YeoJohnshon = partial(sklearn.preprocessing.PowerTransformer, method='yeo-johnson')
setattr(YeoJohnshon, '__name__', 'YeoJohnshon')
Quantile = partial(sklearn.preprocessing.QuantileTransformer, n_quantiles=1_000, output_distribution='normal', random_state=0)
setattr(Quantile, '__name__', 'Quantile')
# Standardize
from tsai.data.validation import TimeSplitter
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(StandardScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# RobustScaler
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(RobustScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# Normalize
y = random_shuffle(np.random.rand(1000) * 3 + .5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Normalizer)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# BoxCox
y = random_shuffle(np.random.rand(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(BoxCox)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# YeoJohnshon
y = random_shuffle(np.random.randn(1000) * 10 + 5)
y = np.random.beta(.5, .5, size=1000)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(YeoJohnshon)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# QuantileTransformer
y = - np.random.beta(1, .5, 10000) * 10
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Quantile)
preprocessor.fit(y[splits[0]])
plt.hist(y, 50, label='ori',)
y_tfm = preprocessor.transform(y)
plt.legend(loc='best')
plt.show()
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
test_close(preprocessor.inverse_transform(y_tfm), y, 1e-1)
#export
def ReLabeler(cm):
r"""Changes the labels in a dataset based on a dictionary (class mapping)
Args:
cm = class mapping dictionary
"""
def _relabel(y):
obj = len(set([len(listify(v)) for v in cm.values()])) > 1
keys = cm.keys()
if obj:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y], dtype=object).reshape(*y.shape)
else:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y]).reshape(*y.shape)
return _relabel
vals = {0:'a', 1:'b', 2:'c', 3:'d', 4:'e'}
y = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
labeler = ReLabeler(dict(a='x', b='x', c='y', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y, y_new
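# Illustrative sketch (added, not part of the original tests): ReLabeler also supports
# one-to-many mappings; when the mapped values have different lengths, the output becomes
# an object array of lists with the same shape as the input.
y_obj = np.array(['a', 'b', 'c', 'd', 'e', 'a'])
multi_labeler = ReLabeler({'a': ['x', 'y'], 'b': 'x', 'c': 'y', 'd': 'z', 'e': ['z', 'w']})
y_multi = multi_labeler(y_obj)
test_eq(y_obj.shape, y_multi.shape)
y_obj, y_multi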
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
Data preprocessing
> Functions used to preprocess time series (both X and y).
###Code
#export
import re
import sklearn
from fastcore.transform import Transform, Pipeline
from fastai.data.transforms import Categorize
from fastai.data.load import DataLoader
from fastai.tabular.core import df_shrink_dtypes, make_date
from tsai.imports import *
from tsai.utils import *
from tsai.data.core import *
from tsai.data.preparation import *
from tsai.data.external import get_UCR_data
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
#export
class ToNumpyCategory(Transform):
"Categorize a numpy batch"
order = 90
def __init__(self, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: np.ndarray):
self.type = type(o)
self.cat = Categorize()
self.cat.setup(o)
self.vocab = self.cat.vocab
return np.asarray(stack([self.cat(oi) for oi in o]))
def decodes(self, o: np.ndarray):
return stack([self.cat.decode(oi) for oi in o])
def decodes(self, o: torch.Tensor):
return stack([self.cat.decode(oi) for oi in o])
t = ToNumpyCategory()
y_cat = t(y)
y_cat[:10]
test_eq(t.decode(tensor(y_cat)), y)
test_eq(t.decode(np.array(y_cat)), y)
#export
class OneHot(Transform):
"One-hot encode/ decode a batch"
order = 90
def __init__(self, n_classes=None, **kwargs):
self.n_classes = n_classes
super().__init__(**kwargs)
def encodes(self, o: torch.Tensor):
if not self.n_classes: self.n_classes = len(np.unique(o))
return torch.eye(self.n_classes)[o]
def encodes(self, o: np.ndarray):
o = ToNumpyCategory()(o)
if not self.n_classes: self.n_classes = len(np.unique(o))
return np.eye(self.n_classes)[o]
def decodes(self, o: torch.Tensor): return torch.argmax(o, dim=-1)
def decodes(self, o: np.ndarray): return np.argmax(o, axis=-1)
oh_encoder = OneHot()
y_cat = ToNumpyCategory()(y)
oht = oh_encoder(y_cat)
oht[:10]
n_classes = 10
n_samples = 100
t = torch.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oht = oh_encoder(t)
test_eq(oht.shape, (n_samples, n_classes))
test_eq(torch.argmax(oht, dim=-1), t)
test_eq(oh_encoder.decode(oht), t)
n_classes = 10
n_samples = 100
a = np.random.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oha = oh_encoder(a)
test_eq(oha.shape, (n_samples, n_classes))
test_eq(np.argmax(oha, axis=-1), a)
test_eq(oh_encoder.decode(oha), a)
#export
class TSNan2Value(Transform):
"Replaces any nan values by a predefined value or median"
order = 90
def __init__(self, value=0, median=False, by_sample_and_var=True, sel_vars=None):
store_attr()
if not ismin_torch("1.8"):
raise ValueError('This function only works with Pytorch>=1.8.')
def encodes(self, o:TSTensor):
if self.sel_vars is not None:
mask = torch.isnan(o[:, self.sel_vars])
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o[:, self.sel_vars], dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[:, self.sel_vars][mask] = median[mask]
else:
o[:, self.sel_vars] = torch.nan_to_num(o[:, self.sel_vars], torch.nanmedian(o[:, self.sel_vars]))
o[:, self.sel_vars] = torch.nan_to_num(o[:, self.sel_vars], self.value)
else:
mask = torch.isnan(o)
if mask.any() and self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[mask] = median[mask]
else:
o = torch.nan_to_num(o, torch.nanmedian(o))
o = torch.nan_to_num(o, self.value)
return o
Nan2Value = TSNan2Value
o = TSTensor(torch.randn(16, 10, 100))
o[0,0] = float('nan')
o[o > .9] = float('nan')
o[[0,1,5,8,14,15], :, -20:] = float('nan')
nan_vals1 = torch.isnan(o).sum()
o2 = Pipeline(TSNan2Value(), split_idx=0)(o.clone())
o3 = Pipeline(TSNan2Value(median=True, by_sample_and_var=True), split_idx=0)(o.clone())
o4 = Pipeline(TSNan2Value(median=True, by_sample_and_var=False), split_idx=0)(o.clone())
nan_vals2 = torch.isnan(o2).sum()
nan_vals3 = torch.isnan(o3).sum()
nan_vals4 = torch.isnan(o4).sum()
test_ne(nan_vals1, 0)
test_eq(nan_vals2, 0)
test_eq(nan_vals3, 0)
test_eq(nan_vals4, 0)
o = TSTensor(torch.randn(16, 10, 100))
o[o > .9] = float('nan')
o = TSNan2Value(median=True, sel_vars=[0,1,2,3,4])(o)
test_eq(torch.isnan(o[:, [0,1,2,3,4]]).sum().item(), 0)
# export
class TSStandardize(Transform):
"""Standardizes batch of type `TSTensor`
Args:
- mean: you can pass a precalculated mean value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch.
- std: you can pass a precalculated std value as a torch tensor which is the one that will be used, or leave as None, in which case
        it will be estimated using a batch. If both mean and std values are passed when instantiating TSStandardize, the rest of the arguments won't be used.
- by_sample: if True, it will calculate mean and std for each individual sample. Otherwise based on the entire batch.
- by_var:
* False: mean and std will be the same for all variables.
        * True: a mean and std will be different for each variable.
        * a list of ints: (like [0,1,3]) a different mean and std will be set for each variable on the list. Variables not included in the list
          won't be standardized.
        * a list that contains a list/lists: (like [0, [1,3]]) a different mean and std will be set for each element of the list. If multiple elements are
          included in a list, the same mean and std will be set for those variables in the sublist/s. (in the example a mean and std is determined for
          variable 0, and another one for variables 1 & 3 - the same one). Variables not included in the list won't be standardized.
        - by_step: if True, mean and std will be calculated for each time step. Otherwise they will be shared across all time steps.
- eps: it avoids dividing by 0
- use_single_batch: if True a single training batch will be used to calculate mean & std. Else the entire training set will be used.
"""
parameters, order = L('mean', 'std'), 90
_setup = True # indicates it requires set up
def __init__(self, mean=None, std=None, by_sample=False, by_var=False, by_step=False, eps=1e-8, use_single_batch=True, verbose=False, **kwargs):
super().__init__(**kwargs)
self.mean = tensor(mean) if mean is not None else None
self.std = tensor(std) if std is not None else None
self._setup = (mean is None or std is None) and not by_sample
self.eps = eps
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.mean is not None or self.std is not None:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
@classmethod
def from_stats(cls, mean, std): return cls(mean, std)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
if len(self.mean.shape) == 0:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} mean shape={self.mean.shape}, std shape={self.std.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.mean, self.std = torch.zeros(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
return (o - self.mean) / self.std
def decodes(self, o:TSTensor):
if self.mean is None or self.std is None: return o
return o * self.std + self.mean
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
from tsai.data.validation import TimeSplitter
X_nan = np.random.rand(100, 5, 10)
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 0] = float('nan')
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 1, -10:] = float('nan')
batch_tfms = TSStandardize(by_var=True)
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
test_eq(torch.isnan(dls.after_batch[0].mean).sum(), 0)
test_eq(torch.isnan(dls.after_batch[0].std).sum(), 0)
xb = first(dls.train)[0]
test_ne(torch.isnan(xb).sum(), 0)
test_ne(torch.isnan(xb).sum(), torch.isnan(xb).numel())
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
xb = first(dls.train)[0]
test_eq(torch.isnan(xb).sum(), 0)
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=True)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True, by_var=False, verbose=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=False)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
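# Illustrative sketch (added): by_var can also be a list of ints and/or lists, as described in the
# docstring above. Here one mean/std is estimated for variable 0 and a shared mean/std for
# variables 1 & 2; the remaining variables are left unstandardized.
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_var=[0, [1, 2]], verbose=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128])
xb, yb = dls.train.one_batch()
test_close(xb[:, [0, 1, 2]].mean(), 0, eps=1e-1)
test_close(xb[:, [0, 1, 2]].std(), 1, eps=1e-1)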
#export
@patch
def mul_min(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.min(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
min_x = x
for ax in axes: min_x, _ = min_x.min(ax, keepdim)
return retain_type(min_x, x)
@patch
def mul_max(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.max(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
max_x = x
for ax in axes: max_x, _ = max_x.max(ax, keepdim)
return retain_type(max_x, x)
class TSNormalize(Transform):
"Normalizes batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, range=(-1, 1), by_sample=False, by_var=False, by_step=False, clip_values=True,
use_single_batch=True, verbose=False, **kwargs):
super().__init__(**kwargs)
self.min = tensor(min) if min is not None else None
self.max = tensor(max) if max is not None else None
self._setup = (self.min is None and self.max is None) and not by_sample
self.range_min, self.range_max = range
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.clip_values = clip_values
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.min is not None or self.max is not None:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, min, max, range_min=0, range_max=1): return cls(min, max, range_min, range_max)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
if len(self.min.shape) == 0:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} min shape={self.min.shape}, max shape={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.min, self.max = -torch.ones(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
                _max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
output = ((o - self.min) / (self.max - self.min)) * (self.range_max - self.range_min) + self.range_min
if self.clip_values:
if self.by_var and is_listy(self.by_var):
for v in self.by_var:
if not is_listy(v): v = [v]
output[:, v] = torch.clamp(output[:, v], self.range_min, self.range_max)
else:
output = torch.clamp(output, self.range_min, self.range_max)
return output
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
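# Quick sanity check (added for illustration): mul_min / mul_max reduce over multiple axes at once
# and are the reducers used by TSNormalize above.
t = TSTensor(torch.arange(24).reshape(2, 3, 4).float())
test_eq(t.mul_min(), t.min())
test_eq(t.mul_max((0, 2), keepdim=True).shape, (1, 3, 1))
test_eq(t.mul_min((0, 2)).data, tensor([0., 4., 8.]))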
batch_tfms = [TSNormalize()]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms=[TSNormalize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms = [TSNormalize(by_var=[0, [1, 2]], use_single_batch=False, clip_values=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb[:, [0, 1, 2]].max() <= 1
assert xb[:, [0, 1, 2]].min() >= -1
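# Illustrative sketch (added): TSNormalize with a custom target range, reusing the dsets above.
batch_tfms = [TSNormalize(range=(0, 1), by_var=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= 0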
#export
class TSClipOutliers(Transform):
"Clip outliers batch of type `TSTensor` based on the IQR"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, by_sample=False, by_var=False, use_single_batch=False, verbose=False, **kwargs):
super().__init__(**kwargs)
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
self._setup = (min is None or max is None) and not by_sample
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
if min is not None or max is not None:
pv(f'{self.__class__.__name__} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
min, max = get_outliers_IQR(o, self.axis)
self.min, self.max = tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.axis is None: return torch.clamp(o, self.min, self.max)
elif self.by_sample:
min, max = get_outliers_IQR(o, axis=self.axis)
self.min, self.max = o.new(min), o.new(max)
return torch_clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
batch_tfms=[TSClipOutliers(-1, 1, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
# export
class TSClip(Transform):
"Clip batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=-6, max=6, **kwargs):
super().__init__(**kwargs)
self.min = torch.tensor(min)
self.max = torch.tensor(max)
def encodes(self, o:TSTensor):
return torch.clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
t = TSTensor(torch.randn(10, 20, 100)*10)
test_le(TSClip()(t).max().item(), 6)
test_ge(TSClip()(t).min().item(), -6)
#export
class TSSelfMissingness(Transform):
"Applies missingness from samples in a batch to random samples in the batch for selected variables"
order = 90
def __init__(self, sel_vars=None, **kwargs):
self.sel_vars = sel_vars
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
if self.sel_vars is not None:
mask = rotate_axis0(torch.isnan(o[:, self.sel_vars]))
o[:, self.sel_vars] = o[:, self.sel_vars].masked_fill(mask, np.nan)
else:
mask = rotate_axis0(torch.isnan(o))
o.masked_fill_(mask, np.nan)
return o
t = TSTensor(torch.randn(10, 20, 100))
t[t>.8] = np.nan
t2 = TSSelfMissingness()(t.clone())
t3 = TSSelfMissingness(sel_vars=[0,3,5,7])(t.clone())
assert (torch.isnan(t).sum() < torch.isnan(t2).sum()) and (torch.isnan(t2).sum() > torch.isnan(t3).sum())
#export
class TSRobustScale(Transform):
r"""This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range)"""
parameters, order = L('median', 'iqr'), 90
_setup = True # indicates it requires set up
def __init__(self, median=None, iqr=None, quantile_range=(25.0, 75.0), use_single_batch=True, eps=1e-8, verbose=False, **kwargs):
super().__init__(**kwargs)
self.median = tensor(median) if median is not None else None
self.iqr = tensor(iqr) if iqr is not None else None
self._setup = median is None or iqr is None
self.use_single_batch = use_single_batch
self.eps = eps
self.verbose = verbose
self.quantile_range = quantile_range
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
new_o = o.permute(1,0,2).flatten(1)
median = get_percentile(new_o, 50, axis=1)
iqrmin, iqrmax = get_outliers_IQR(new_o, axis=1, quantile_range=self.quantile_range)
self.median = median.unsqueeze(0)
self.iqr = torch.clamp_min((iqrmax - iqrmin).unsqueeze(0), self.eps)
pv(f'{self.__class__.__name__} median={self.median.shape} iqr={self.iqr.shape}', self.verbose)
self._setup = False
else:
if self.median is None: self.median = torch.zeros(1, device=dl.device)
if self.iqr is None: self.iqr = torch.ones(1, device=dl.device)
def encodes(self, o:TSTensor):
return (o - self.median) / self.iqr
def __repr__(self): return f'{self.__class__.__name__}(quantile_range={self.quantile_range}, use_single_batch={self.use_single_batch})'
batch_tfms = TSRobustScale(verbose=True, use_single_batch=False)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, batch_tfms=batch_tfms, num_workers=0)
xb, yb = next(iter(dls.train))
xb.min()
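# Added check (illustration): after robust scaling the batch median should be close to 0,
# since the statistics were computed on the full training set above.
test_close(xb.median(), 0, eps=1e-1)
xb.median(), xb.min(), xb.max()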
#export
def get_stats_with_uncertainty(o, sel_vars=None, sel_vars_zero_mean_unit_var=False, bs=64, n_trials=None, axis=(0,2)):
o_dtype = o.dtype
if n_trials is None: n_trials = len(o) // bs
random_idxs = np.random.choice(len(o), n_trials * bs, n_trials * bs > len(o))
oi_mean = []
oi_std = []
start = 0
for i in progress_bar(range(n_trials)):
idxs = random_idxs[start:start + bs]
start += bs
if hasattr(o, 'oindex'):
oi = o.index[idxs]
if hasattr(o, 'compute'):
oi = o[idxs].compute()
else:
oi = o[idxs]
oi_mean.append(np.nanmean(oi.astype('float32'), axis=axis, keepdims=True))
oi_std.append(np.nanstd(oi.astype('float32'), axis=axis, keepdims=True))
oi_mean = np.concatenate(oi_mean)
oi_std = np.concatenate(oi_std)
E_mean = np.nanmean(oi_mean, axis=0, keepdims=True).astype(o_dtype)
S_mean = np.nanstd(oi_mean, axis=0, keepdims=True).astype(o_dtype)
E_std = np.nanmean(oi_std, axis=0, keepdims=True).astype(o_dtype)
S_std = np.nanstd(oi_std, axis=0, keepdims=True).astype(o_dtype)
if sel_vars is not None:
non_sel_vars = np.isin(np.arange(o.shape[1]), sel_vars, invert=True)
if sel_vars_zero_mean_unit_var:
E_mean[:, non_sel_vars] = 0 # zero mean
E_std[:, non_sel_vars] = 1 # unit var
S_mean[:, non_sel_vars] = 0 # no uncertainty
S_std[:, non_sel_vars] = 0 # no uncertainty
return np.stack([E_mean, S_mean, E_std, S_std])
def get_random_stats(E_mean, S_mean, E_std, S_std):
mult = np.random.normal(0, 1, 2)
new_mean = E_mean + S_mean * mult[0]
new_std = E_std + S_std * mult[1]
return new_mean, new_std
class TSGaussianStandardize(Transform):
"Scales each batch using modeled mean and std based on UNCERTAINTY MODELING FOR OUT-OF-DISTRIBUTION GENERALIZATION https://arxiv.org/abs/2202.03958"
parameters, order = L('E_mean', 'S_mean', 'E_std', 'S_std'), 90
def __init__(self,
E_mean : np.ndarray, # Mean expected value
S_mean : np.ndarray, # Uncertainty (standard deviation) of the mean
E_std : np.ndarray, # Standard deviation expected value
S_std : np.ndarray, # Uncertainty (standard deviation) of the standard deviation
        eps=1e-8, # (epsilon) small amount added to the standard deviation to avoid dividing by zero
        split_idx=0, # Flag to indicate to which set this transform is applied. 0: training, 1: validation, None: both
**kwargs,
):
self.E_mean, self.S_mean = torch.from_numpy(E_mean), torch.from_numpy(S_mean)
self.E_std, self.S_std = torch.from_numpy(E_std), torch.from_numpy(S_std)
self.eps = eps
super().__init__(split_idx=split_idx, **kwargs)
def encodes(self, o:TSTensor):
mult = torch.normal(0, 1, (2,), device=o.device)
new_mean = self.E_mean + self.S_mean * mult[0]
new_std = torch.clamp(self.E_std + self.S_std * mult[1], self.eps)
return (o - new_mean) / new_std
TSRandomStandardize = TSGaussianStandardize
arr = np.random.rand(1000, 2, 50)
E_mean, S_mean, E_std, S_std = get_stats_with_uncertainty(arr, sel_vars=None, bs=64, n_trials=None, axis=(0,2))
new_mean, new_std = get_random_stats(E_mean, S_mean, E_std, S_std)
new_mean2, new_std2 = get_random_stats(E_mean, S_mean, E_std, S_std)
test_ne(new_mean, new_mean2)
test_ne(new_std, new_std2)
test_eq(new_mean.shape, (1, 2, 1))
test_eq(new_std.shape, (1, 2, 1))
new_mean, new_std
###Output
_____no_output_____
###Markdown
TSGaussianStandardize can be used jointly with TSStandardize in the following way:

```python
X, y, splits = get_UCR_data('LSST', split_data=False)
tfms = [None, TSClassification()]
E_mean, S_mean, E_std, S_std = get_stats_with_uncertainty(X, sel_vars=None, bs=64, n_trials=None, axis=(0,2))
batch_tfms = [TSGaussianStandardize(E_mean, S_mean, E_std, S_std, split_idx=0), TSStandardize(E_mean, S_mean, split_idx=1)]
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[32, 64])
learn = ts_learner(dls, InceptionTimePlus, metrics=accuracy, cbs=[ShowGraph()])
learn.fit_one_cycle(1, 1e-2)
```

In this way the train batches are scaled based on mean and standard deviation distributions, while the valid batches are scaled with fixed mean and standard deviation values. The intent is to improve out-of-distribution performance. This method is inspired by "Uncertainty Modeling for Out-of-Distribution Generalization" (https://arxiv.org/abs/2202.03958).
###Code
#export
class TSDiff(Transform):
"Differences batch of type `TSTensor`"
order = 90
def __init__(self, lag=1, pad=True, **kwargs):
super().__init__(**kwargs)
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(o, lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor(torch.arange(24).reshape(2,3,4))
test_eq(TSDiff()(t)[..., 1:].float().mean(), 1)
test_eq(TSDiff(lag=2, pad=False)(t).float().mean(), 2)
#export
class TSLog(Transform):
"Log transforms batch of type `TSTensor` + 1. Accepts positive and negative numbers"
order = 90
def __init__(self, ex=None, **kwargs):
self.ex = ex
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.log1p(o[o > 0])
output[o < 0] = -torch.log1p(torch.abs(o[o < 0]))
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def decodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.exp(o[o > 0]) - 1
output[o < 0] = -torch.exp(torch.abs(o[o < 0])) + 1
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def __repr__(self): return f'{self.__class__.__name__}()'
t = TSTensor(torch.rand(2,3,4)) * 2 - 1
tfm = TSLog()
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
#export
class TSCyclicalPosition(Transform):
"Concatenates the position along the sequence as 2 additional variables (sine and cosine)"
order = 90
def __init__(self,
        cyclical_var=None, # Optional variable to indicate the steps within the cycle (i.e. minute of the day)
magnitude=None, # Added for compatibility. It's not used.
drop_var=False, # Flag to indicate if the cyclical var is removed
**kwargs
):
super().__init__(**kwargs)
self.cyclical_var, self.drop_var = cyclical_var, drop_var
def encodes(self, o: TSTensor):
bs,nvars,seq_len = o.shape
if self.cyclical_var is None:
sin, cos = sincos_encoding(seq_len, device=o.device)
output = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
else:
sin = torch.sin(o[:, [self.cyclical_var]]/seq_len * 2 * np.pi)
cos = torch.cos(o[:, [self.cyclical_var]]/seq_len * 2 * np.pi)
if self.drop_var:
exc_vars = np.isin(np.arange(nvars), self.cyclical_var, invert=True)
output = torch.cat([o[:, exc_vars], sin, cos], 1)
else:
output = torch.cat([o, sin, cos], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSCyclicalPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
bs, c_in, seq_len = 1,3,100
t1 = torch.rand(bs, c_in, seq_len)
t2 = torch.arange(seq_len)
t2 = torch.cat([t2[35:], t2[:35]]).reshape(1, 1, -1)
t = TSTensor(torch.cat([t1, t2], 1))
mask = torch.rand_like(t) > .8
t[mask] = np.nan
enc_t = TSCyclicalPosition(3)(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
#export
class TSLinearPosition(Transform):
"Concatenates the position along the sequence as 1 additional variable"
order = 90
def __init__(self,
        linear_var:int=None, # Optional variable to indicate the steps within the cycle (i.e. minute of the day)
var_range:tuple=None, # Optional range indicating min and max values of the linear variable
magnitude=None, # Added for compatibility. It's not used.
drop_var:bool=False, # Flag to indicate if the cyclical var is removed
lin_range:tuple=(-1,1),
**kwargs):
self.linear_var, self.var_range, self.drop_var, self.lin_range = linear_var, var_range, drop_var, lin_range
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,nvars,seq_len = o.shape
if self.linear_var is None:
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
output = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
else:
linear_var = o[:, [self.linear_var]]
if self.var_range is None:
lin = (linear_var - linear_var.min()) / (linear_var.max() - linear_var.min())
else:
lin = (linear_var - self.var_range[0]) / (self.var_range[1] - self.var_range[0])
            lin = lin * (self.lin_range[1] - self.lin_range[0]) + self.lin_range[0]  # rescale [0,1] to lin_range
if self.drop_var:
exc_vars = np.isin(np.arange(nvars), self.linear_var, invert=True)
output = torch.cat([o[:, exc_vars], lin], 1)
else:
output = torch.cat([o, lin], 1)
return output
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSLinearPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 1
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
t = torch.arange(100)
t1 = torch.cat([t[30:], t[:30]]).reshape(1, 1, -1)
t2 = torch.cat([t[52:], t[:52]]).reshape(1, 1, -1)
t = torch.cat([t1, t2]).float()
mask = torch.rand_like(t) > .8
t[mask] = np.nan
t = TSTensor(t)
enc_t = TSLinearPosition(linear_var=0, var_range=(0, 100), drop_var=True)(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1]
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
#export
class TSMissingness(Transform):
"""Concatenates data missingness for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, **kwargs):
self.feature_idxs = listify(feature_idxs)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
missingness = o[:, self.feature_idxs].isnan()
else:
missingness = o.isnan()
return torch.cat([o, missingness], 1)
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSMissingness(feature_idxs=[0,2])(t)
test_eq(enc_t.shape[1], 5)
test_eq(enc_t[:, 3:], torch.isnan(t[:, [0,2]]).float())
#export
class TSPositionGaps(Transform):
"""Concatenates gaps for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, forward=True, backward=False, nearest=False, normalize=True, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.gap_fn = partial(get_gaps, forward=forward, backward=backward, nearest=nearest, normalize=normalize)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
gaps = self.gap_fn(o[:, self.feature_idxs])
else:
gaps = self.gap_fn(o)
return torch.cat([o, gaps], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSPositionGaps(feature_idxs=[0,2], forward=True, backward=True, nearest=True, normalize=False)(t)
test_eq(enc_t.shape[1], 9)
enc_t.data
#export
class TSRollingMean(Transform):
"""Calculates the rolling mean for all/ selected features alongside the sequence
It replaces the original values or adds additional variables (default)
If nan values are found, they will be filled forward and backward"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, window=2, replace=False, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.rolling_mean_fn = partial(rolling_moving_average, window=window)
self.replace = replace
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
if torch.isnan(o[:, self.feature_idxs]).any():
o[:, self.feature_idxs] = fbfill_sequence(o[:, self.feature_idxs])
rolling_mean = self.rolling_mean_fn(o[:, self.feature_idxs])
if self.replace:
o[:, self.feature_idxs] = rolling_mean
return o
else:
if torch.isnan(o).any():
o = fbfill_sequence(o)
rolling_mean = self.rolling_mean_fn(o)
if self.replace: return rolling_mean
return torch.cat([o, rolling_mean], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t > .6] = np.nan
print(t.data)
enc_t = TSRollingMean(feature_idxs=[0,2], window=3)(t)
test_eq(enc_t.shape[1], 5)
print(enc_t.data)
enc_t = TSRollingMean(window=3, replace=True)(t)
test_eq(enc_t.shape[1], 3)
print(enc_t.data)
#export
class TSLogReturn(Transform):
"Calculates log-return of batch of type `TSTensor`. For positive values only"
order = 90
def __init__(self, lag=1, pad=True, **kwargs):
super().__init__(**kwargs)
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(torch.log(o), lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor([1,2,4,8,16,32,64,128,256]).float()
test_eq(TSLogReturn(pad=False)(t).std(), 0)
#export
class TSAdd(Transform):
"Add a defined amount to each batch of type `TSTensor`."
order = 90
def __init__(self, add, **kwargs):
super().__init__(**kwargs)
self.add = add
def encodes(self, o:TSTensor):
return torch.add(o, self.add)
    def __repr__(self): return f'{self.__class__.__name__}(add={self.add})'
t = TSTensor([1,2,3]).float()
test_eq(TSAdd(1)(t), TSTensor([2,3,4]).float())
#export
class TSClipByVar(Transform):
"""Clip batch of type `TSTensor` by variable
Args:
var_min_max: list of tuples containing variable index, min value (or None) and max value (or None)
"""
order = 90
def __init__(self, var_min_max, **kwargs):
super().__init__(**kwargs)
self.var_min_max = var_min_max
def encodes(self, o:TSTensor):
for v,m,M in self.var_min_max:
o[:, v] = torch.clamp(o[:, v], m, M)
return o
t = TSTensor(torch.rand(16, 3, 10) * tensor([1,10,100]).reshape(1,-1,1))
max_values = t.max(0).values.max(-1).values.data
max_values2 = TSClipByVar([(1,None,5), (2,10,50)])(t).max(0).values.max(-1).values.data
test_le(max_values2[1], 5)
test_ge(max_values2[2], 10)
test_le(max_values2[2], 50)
#export
class TSDropVars(Transform):
"Drops selected variable from the input"
order = 90
def __init__(self, drop_vars, **kwargs):
super().__init__(**kwargs)
self.drop_vars = drop_vars
def encodes(self, o:TSTensor):
exc_vars = np.isin(np.arange(o.shape[1]), self.drop_vars, invert=True)
return o[:, exc_vars]
t = TSTensor(torch.arange(24).reshape(2, 3, 4))
enc_t = TSDropVars(2)(t)
test_ne(t, enc_t)
enc_t.data
#export
class TSOneHotEncode(Transform):
order = 90
def __init__(self,
sel_var:int, # Variable that is one-hot encoded
unique_labels:list, # List containing all labels (excluding nan values)
        add_na:bool=False, # Flag to add an extra channel (at index 0) that flags values not included in unique_labels
drop_var:bool=True, # Flag to indicate if the selected var is removed
magnitude=None, # Added for compatibility. It's not used.
**kwargs
):
unique_labels = listify(unique_labels)
self.sel_var = sel_var
self.unique_labels = unique_labels
self.n_classes = len(unique_labels) + add_na
self.add_na = add_na
self.drop_var = drop_var
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs, n_vars, seq_len = o.shape
o_var = o[:, [self.sel_var]]
ohe_var = torch.zeros(bs, self.n_classes, seq_len, device=o.device)
if self.add_na:
is_na = torch.isin(o_var, o_var.new(list(self.unique_labels)), invert=True) # not available in dict
ohe_var[:, [0]] = is_na.to(ohe_var.dtype)
for i,l in enumerate(self.unique_labels):
ohe_var[:, [i + self.add_na]] = (o_var == l).to(ohe_var.dtype)
if self.drop_var:
exc_vars = torch.isin(torch.arange(o.shape[1], device=o.device), self.sel_var, invert=True)
output = torch.cat([o[:, exc_vars], ohe_var], 1)
else:
output = torch.cat([o, ohe_var], 1)
return output
bs = 2
seq_len = 5
t_cont = torch.rand(bs, 1, seq_len)
t_cat = torch.randint(0, 3, t_cont.shape)
t = TSTensor(torch.cat([t_cat, t_cont], 1))
t_cat
tfm = TSOneHotEncode(0, [0, 1, 2])
output = tfm(t)[:, -3:].data
test_eq(t_cat, torch.argmax(tfm(t)[:, -3:], 1)[:, None])
tfm(t)[:, -3:].data
bs = 2
seq_len = 5
t_cont = torch.rand(bs, 1, seq_len)
t_cat = torch.tensor([[10., 5., 11., np.nan, 12.], [ 5., 12., 10., np.nan, 11.]])[:, None]
t = TSTensor(torch.cat([t_cat, t_cont], 1))
t_cat
tfm = TSOneHotEncode(0, [10, 11, 12], drop_var=False)
mask = ~torch.isnan(t[:, 0])
test_eq(tfm(t)[:, 0][mask], t[:, 0][mask])
tfm(t)[:, -3:].data
t1 = torch.randint(3, 7, (2, 1, 10))
t2 = torch.rand(2, 1, 10)
t = TSTensor(torch.cat([t1, t2], 1))
output = TSOneHotEncode(0, [3, 4, 5], add_na=True, drop_var=True)(t)
test_eq((t1 > 5).float(), output.data[:, [1]])
test_eq((t1 == 3).float(), output.data[:, [2]])
test_eq((t1 == 4).float(), output.data[:, [3]])
test_eq((t1 == 5).float(), output.data[:, [4]])
test_eq(output.shape, (t.shape[0], 5, t.shape[-1]))
#export
class TSPosition(Transform):
order = 90
def __init__(self,
        steps:list, # List of position values (one per time step) that will be concatenated as an additional variable
magnitude=None, # Added for compatibility. It's not used.
**kwargs
):
self.steps = torch.from_numpy(np.asarray(steps)).reshape(1, 1, -1)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs = o.shape[0]
steps = self.steps.expand(bs, -1, -1).to(device=o.device, dtype=o.dtype)
return torch.cat([o, steps], 1)
t = TSTensor(torch.rand(2, 1, 10)).float()
a = np.linspace(-1, 1, 10).astype('float64')
TSPosition(a)(t).data.dtype, t.dtype
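# Added check (illustration): TSPosition appends the supplied steps as one extra channel,
# cast to the dtype and device of the input tensor.
enc_t = TSPosition(a)(t)
test_eq(enc_t.shape, (t.shape[0], t.shape[1] + 1, t.shape[2]))
test_close(enc_t[0, -1].data.cpu().numpy(), a)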
###Output
_____no_output_____
###Markdown
sklearn API transforms
###Code
#export
from sklearn.base import BaseEstimator, TransformerMixin
from fastai.data.transforms import CategoryMap
from joblib import dump, load
class TSShrinkDataFrame(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, skip=[], obj2cat=True, int2uint=False, verbose=True):
self.columns, self.skip, self.obj2cat, self.int2uint, self.verbose = listify(columns), skip, obj2cat, int2uint, verbose
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
self.old_dtypes = X.dtypes
if not self.columns: self.columns = X.columns
self.dt = df_shrink_dtypes(X[self.columns], self.skip, obj2cat=self.obj2cat, int2uint=self.int2uint)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X[self.columns] = X[self.columns].astype(self.dt)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X = X.astype(self.old_dtypes)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
df = pd.DataFrame()
df["ints64"] = np.random.randint(0,3,10)
df['floats64'] = np.random.rand(10)
tfm = TSShrinkDataFrame()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df["ints64"].dtype, "int8")
test_eq(df["floats64"].dtype, "float32")
#export
class TSOneHotEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, drop=True, add_na=True, dtype=np.int64):
self.columns = listify(columns)
self.drop, self.add_na, self.dtype = drop, add_na, dtype
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
handle_unknown = "ignore" if self.add_na else "error"
self.ohe_tfm = sklearn.preprocessing.OneHotEncoder(handle_unknown=handle_unknown)
if len(self.columns) == 1:
self.ohe_tfm.fit(X[self.columns].to_numpy().reshape(-1, 1))
else:
self.ohe_tfm.fit(X[self.columns])
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if len(self.columns) == 1:
output = self.ohe_tfm.transform(X[self.columns].to_numpy().reshape(-1, 1)).toarray().astype(self.dtype)
else:
output = self.ohe_tfm.transform(X[self.columns]).toarray().astype(self.dtype)
new_cols = []
for i,col in enumerate(self.columns):
for cats in self.ohe_tfm.categories_[i]:
new_cols.append(f"{str(col)}_{str(cats)}")
X[new_cols] = output
if self.drop: X = X.drop(self.columns, axis=1)
return X
df = pd.DataFrame()
df["a"] = np.random.randint(0,2,10)
df["b"] = np.random.randint(0,3,10)
unique_cols = len(df["a"].unique()) + len(df["b"].unique())
tfm = TSOneHotEncoder()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df.shape[1], unique_cols)
#export
class TSCategoricalEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, add_na=True):
self.columns = listify(columns)
self.add_na = add_na
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.cat_tfms = []
for column in self.columns:
self.cat_tfms.append(CategoryMap(X[column], add_na=self.add_na))
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_objs(X[column])
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_ids(X[column])
return X
###Output
_____no_output_____
###Markdown
Stateful transforms like TSCategoricalEncoder can easily be serialized.
###Code
import joblib
df = pd.DataFrame()
df["a"] = alphabet[np.random.randint(0,2,100)]
df["b"] = ALPHABET[np.random.randint(0,3,100)]
a_unique = len(df["a"].unique())
b_unique = len(df["b"].unique())
tfm = TSCategoricalEncoder()
tfm.fit(df)
joblib.dump(tfm, "data/TSCategoricalEncoder.joblib")
tfm = joblib.load("data/TSCategoricalEncoder.joblib")
df = tfm.transform(df)
test_eq(df['a'].max(), a_unique)
test_eq(df['b'].max(), b_unique)
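# Illustrative sketch (added): the integer codes can be mapped back to the original labels.
df = tfm.inverse_transform(df)
test_eq(df["a"].unique().size, a_unique)
test_eq(df["b"].unique().size, b_unique)
df.head()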
#export
default_date_attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
class TSDateTimeEncoder(BaseEstimator, TransformerMixin):
def __init__(self, datetime_columns=None, prefix=None, drop=True, time=False, attr=default_date_attr):
self.datetime_columns = listify(datetime_columns)
self.prefix, self.drop, self.time, self.attr = prefix, drop, time ,attr
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if self.time: self.attr = self.attr + ['Hour', 'Minute', 'Second']
if not self.datetime_columns:
self.datetime_columns = X.columns
self.prefixes = []
for dt_column in self.datetime_columns:
self.prefixes.append(re.sub('[Dd]ate$', '', dt_column) if self.prefix is None else self.prefix)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for dt_column,prefix in zip(self.datetime_columns,self.prefixes):
make_date(X, dt_column)
field = X[dt_column]
            # `Series.dt.week` was deprecated in pandas 1.1.0 (and later removed); use isocalendar().week when available
week = field.dt.isocalendar().week.astype(field.dt.day.dtype) if hasattr(field.dt, 'isocalendar') else field.dt.week
for n in self.attr: X[prefix + "_" + n] = getattr(field.dt, n.lower()) if n != 'Week' else week
if self.drop: X = X.drop(self.datetime_columns, axis=1)
return X
import datetime
df = pd.DataFrame()
df.loc[0, "date"] = datetime.datetime.now()
df.loc[1, "date"] = datetime.datetime.now() + pd.Timedelta(1, unit="D")
tfm = TSDateTimeEncoder()
joblib.dump(tfm, "data/TSDateTimeEncoder.joblib")
tfm = joblib.load("data/TSDateTimeEncoder.joblib")
tfm.fit_transform(df)
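# Illustrative sketch (added): with time=True, hour/minute/second attributes are also extracted.
# Column names assume the default prefix derived from the 'date' column name.
df = pd.DataFrame({"date": [datetime.datetime(2021, 1, 1, 12, 30, 5), datetime.datetime(2021, 6, 15, 23, 59, 59)]})
tfm = TSDateTimeEncoder(time=True)
new_df = tfm.fit_transform(df)
test_eq(new_df["_Hour"].tolist(), [12, 23])
new_df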
#export
class TSMissingnessEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None):
self.columns = listify(columns)
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.missing_columns = [f"{cn}_missing" for cn in self.columns]
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
X[self.missing_columns] = X[self.columns].isnull().astype(int)
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
X.drop(self.missing_columns, axis=1, inplace=True)
return X
data = np.random.rand(10,3)
data[data > .8] = np.nan
df = pd.DataFrame(data, columns=["a", "b", "c"])
tfm = TSMissingnessEncoder()
tfm.fit(df)
joblib.dump(tfm, "data/TSMissingnessEncoder.joblib")
tfm = joblib.load("data/TSMissingnessEncoder.joblib")
df = tfm.transform(df)
df
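# Illustrative sketch (added): inverse_transform drops the *_missing indicator columns again.
df = tfm.inverse_transform(df)
test_eq(df.shape[1], 3)
df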
###Output
_____no_output_____
###Markdown
y transforms
###Code
# export
class Preprocessor():
def __init__(self, preprocessor, **kwargs):
self.preprocessor = preprocessor(**kwargs)
def fit(self, o):
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
self.fit_preprocessor = self.preprocessor.fit(o)
return self.fit_preprocessor
def transform(self, o, copy=True):
if type(o) in [float, int]: o = array([o]).reshape(-1,1)
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
def inverse_transform(self, o, copy=True):
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.inverse_transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
StandardScaler = partial(sklearn.preprocessing.StandardScaler)
setattr(StandardScaler, '__name__', 'StandardScaler')
RobustScaler = partial(sklearn.preprocessing.RobustScaler)
setattr(RobustScaler, '__name__', 'RobustScaler')
Normalizer = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(-1, 1))
setattr(Normalizer, '__name__', 'Normalizer')
BoxCox = partial(sklearn.preprocessing.PowerTransformer, method='box-cox')
setattr(BoxCox, '__name__', 'BoxCox')
YeoJohnshon = partial(sklearn.preprocessing.PowerTransformer, method='yeo-johnson')
setattr(YeoJohnshon, '__name__', 'YeoJohnshon')
Quantile = partial(sklearn.preprocessing.QuantileTransformer, n_quantiles=1_000, output_distribution='normal', random_state=0)
setattr(Quantile, '__name__', 'Quantile')
# Standardize
from tsai.data.validation import TimeSplitter
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(StandardScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# RobustScaler
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(RobustScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# Normalize
y = random_shuffle(np.random.rand(1000) * 3 + .5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Normalizer)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# BoxCox
y = random_shuffle(np.random.rand(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(BoxCox)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# YeoJohnshon
y = random_shuffle(np.random.randn(1000) * 10 + 5)
y = np.random.beta(.5, .5, size=1000)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(YeoJohnshon)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# QuantileTransformer
y = - np.random.beta(1, .5, 10000) * 10
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Quantile)
preprocessor.fit(y[splits[0]])
plt.hist(y, 50, label='ori',)
y_tfm = preprocessor.transform(y)
plt.legend(loc='best')
plt.show()
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
test_close(preprocessor.inverse_transform(y_tfm), y, 1e-1)
#export
def ReLabeler(cm):
r"""Changes the labels in a dataset based on a dictionary (class mapping)
Args:
cm = class mapping dictionary
"""
def _relabel(y):
obj = len(set([len(listify(v)) for v in cm.values()])) > 1
keys = cm.keys()
if obj:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y], dtype=object).reshape(*y.shape)
else:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y]).reshape(*y.shape)
return _relabel
vals = {0:'a', 1:'b', 2:'c', 3:'d', 4:'e'}
y = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
labeler = ReLabeler(dict(a='x', b='x', c='y', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y, y_new
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
Data preprocessing
> Functions used to preprocess time series (both X and y).
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.external import *
from tsai.data.core import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
#export
class ToNumpyCategory(Transform):
"Categorize a numpy batch"
order = 90
def __init__(self, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: np.ndarray):
self.type = type(o)
self.cat = Categorize()
self.cat.setup(o)
self.vocab = self.cat.vocab
return np.asarray(stack([self.cat(oi) for oi in o]))
def decodes(self, o: (np.ndarray, torch.Tensor)):
return stack([self.cat.decode(oi) for oi in o])
t = ToNumpyCategory()
y_cat = t(y)
y_cat[:10]
test_eq(t.decode(tensor(y_cat)), y)
test_eq(t.decode(np.array(y_cat)), y)
#export
class OneHot(Transform):
"One-hot encode/ decode a batch"
order = 90
def __init__(self, n_classes=None, **kwargs):
self.n_classes = n_classes
super().__init__(**kwargs)
def encodes(self, o: torch.Tensor):
if not self.n_classes: self.n_classes = len(np.unique(o))
return torch.eye(self.n_classes)[o]
def encodes(self, o: np.ndarray):
o = ToNumpyCategory()(o)
if not self.n_classes: self.n_classes = len(np.unique(o))
return np.eye(self.n_classes)[o]
def decodes(self, o: torch.Tensor): return torch.argmax(o, dim=-1)
def decodes(self, o: np.ndarray): return np.argmax(o, axis=-1)
oh_encoder = OneHot()
y_cat = ToNumpyCategory()(y)
oht = oh_encoder(y_cat)
oht[:10]
n_classes = 10
n_samples = 100
t = torch.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oht = oh_encoder(t)
test_eq(oht.shape, (n_samples, n_classes))
test_eq(torch.argmax(oht, dim=-1), t)
test_eq(oh_encoder.decode(oht), t)
n_classes = 10
n_samples = 100
a = np.random.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oha = oh_encoder(a)
test_eq(oha.shape, (n_samples, n_classes))
test_eq(np.argmax(oha, axis=-1), a)
test_eq(oh_encoder.decode(oha), a)
#export
class Nan2Value(Transform):
"Replaces any nan values by a predefined value or median"
order = 90
def __init__(self, value=0, median=False, by_sample_and_var=True):
store_attr()
def encodes(self, o:TSTensor):
mask = torch.isnan(o)
if mask.any():
if self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[mask] = median[mask]
else:
# o = torch.nan_to_num(o, torch.nanmedian(o)) # Only available in Pytorch 1.8
o = torch_nan_to_num(o, torch.nanmedian(o))
# o = torch.nan_to_num(o, self.value) # Only available in Pytorch 1.8
o = torch_nan_to_num(o, self.value)
return o
o = TSTensor(torch.randn(16, 10, 100))
o[0,0] = float('nan')
o[o > .9] = float('nan')
o[[0,1,5,8,14,15], :, -20:] = float('nan')
nan_vals1 = torch.isnan(o).sum()
o2 = Pipeline(Nan2Value(), split_idx=0)(o.clone())
o3 = Pipeline(Nan2Value(median=True, by_sample_and_var=True), split_idx=0)(o.clone())
o4 = Pipeline(Nan2Value(median=True, by_sample_and_var=False), split_idx=0)(o.clone())
nan_vals2 = torch.isnan(o2).sum()
nan_vals3 = torch.isnan(o3).sum()
nan_vals4 = torch.isnan(o4).sum()
test_ne(nan_vals1, 0)
test_eq(nan_vals2, 0)
test_eq(nan_vals3, 0)
test_eq(nan_vals4, 0)
# export
class TSStandardize(Transform):
"""Standardizes batch of type `TSTensor`
Args:
- mean: you can pass a precalculated mean value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch.
- std: you can pass a precalculated std value as a torch tensor which is the one that will be used, or leave as None, in which case
        it will be estimated using a batch. If both mean and std values are passed when instantiating TSStandardize, the rest of the arguments won't be used.
- by_sample: if True, it will calculate mean and std for each individual sample. Otherwise based on the entire batch.
- by_var:
* False: mean and std will be the same for all variables.
        * True: a mean and std will be different for each variable.
        * a list of ints: (like [0,1,3]) a different mean and std will be set for each variable on the list. Variables not included in the list
          won't be standardized.
        * a list that contains a list/lists: (like [0, [1,3]]) a different mean and std will be set for each element of the list. If multiple elements are
          included in a list, the same mean and std will be set for those variables in the sublist/s. (in the example a mean and std is determined for
          variable 0, and another one for variables 1 & 3 - the same one). Variables not included in the list won't be standardized.
- by_step: if False, it will standardize values for each time step.
- eps: it avoids dividing by 0
- use_single_batch: if True a single training batch will be used to calculate mean & std. Else the entire training set will be used.
"""
parameters, order = L('mean', 'std'), 90
_setup = True # indicates it requires set up
def __init__(self, mean=None, std=None, by_sample=False, by_var=False, by_step=False, eps=1e-8, use_single_batch=True, verbose=False):
self.mean = tensor(mean) if mean is not None else None
self.std = tensor(std) if std is not None else None
self._setup = (mean is None or std is None) and not by_sample
self.eps = eps
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.mean is not None or self.std is not None:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, mean, std): return cls(mean, std)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
if len(self.mean.shape) == 0:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} mean shape={self.mean.shape}, std shape={self.std.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.mean, self.std = torch.zeros(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
return (o - self.mean) / self.std
def decodes(self, o:TSTensor):
if self.mean is None or self.std is None: return o
return o * self.std + self.mean
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
from tsai.data.validation import TimeSplitter
X_nan = np.random.rand(100, 5, 10)
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 0] = float('nan')
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 1, -10:] = float('nan')
batch_tfms = TSStandardize(by_var=True)
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
test_eq(torch.isnan(dls.after_batch[0].mean).sum(), 0)
test_eq(torch.isnan(dls.after_batch[0].std).sum(), 0)
xb = first(dls.train)[0]
test_ne(torch.isnan(xb).sum(), 0)
test_ne(torch.isnan(xb).sum(), torch.isnan(xb).numel())
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
xb = first(dls.train)[0]
test_eq(torch.isnan(xb).sum(), 0)
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=True)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True, by_var=False, verbose=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=False)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
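# Added sketch (not part of the original notebook): by_var can also be a list, so only the selected variables get their
# own mean/std (variables grouped in a sublist share theirs), mirroring the TSNormalize example further below.
batch_tfms = [TSStandardize(by_var=[0, [1, 2]], use_single_batch=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb[:, [0, 1, 2]].mean(), 0, eps=1e-1)
test_close(xb[:, [0, 1, 2]].std(), 1, eps=1e-1)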
#export
@patch
def mul_min(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.min(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
min_x = x
for ax in axes: min_x, _ = min_x.min(ax, keepdim)
return retain_type(min_x, x)
@patch
def mul_max(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.max(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
max_x = x
for ax in axes: max_x, _ = max_x.max(ax, keepdim)
return retain_type(max_x, x)
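# Quick sanity check added for illustration (not in the original notebook): mul_min/mul_max reduce over several axes at once.
t = TSTensor(torch.randn(4, 3, 10))
test_eq(t.mul_max((0, 2), keepdim=True).shape, (1, 3, 1))
test_eq(t.mul_min((0, 2)), t.min(0)[0].min(-1)[0])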
class TSNormalize(Transform):
"Normalizes batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, range=(-1, 1), by_sample=False, by_var=False, by_step=False, clip_values=True,
use_single_batch=True, verbose=False):
self.min = tensor(min) if min is not None else None
self.max = tensor(max) if max is not None else None
self._setup = (self.min is None and self.max is None) and not by_sample
self.range_min, self.range_max = range
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.clip_values = clip_values
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.min is not None or self.max is not None:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, min, max, range_min=0, range_max=1): return cls(min, max, range=(range_min, range_max))
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
if len(self.min.shape) == 0:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} min shape={self.min.shape}, max shape={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.min, self.max = -torch.ones(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
output = ((o - self.min) / (self.max - self.min)) * (self.range_max - self.range_min) + self.range_min
if self.clip_values:
if self.by_var and is_listy(self.by_var):
for v in self.by_var:
if not is_listy(v): v = [v]
output[:, v] = torch.clamp(output[:, v], self.range_min, self.range_max)
else:
output = torch.clamp(output, self.range_min, self.range_max)
return output
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms = [TSNormalize()]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms=[TSNormalize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms = [TSNormalize(by_var=[0, [1, 2]], use_single_batch=False, clip_values=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb[:, [0, 1, 2]].max() <= 1
assert xb[:, [0, 1, 2]].min() >= -1
#export
class TSClipOutliers(Transform):
"Clip outliers batch of type `TSTensor` based on the IQR"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, by_sample=False, by_var=False, use_single_batch=False, verbose=False):
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
self._setup = (min is None or max is None) and not by_sample
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
if min is not None or max is not None:
pv(f'{self.__class__.__name__} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
min, max = get_outliers_IQR(o, self.axis)
self.min, self.max = tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.axis is None: return torch.clamp(o, self.min, self.max)
elif self.by_sample:
min, max = get_outliers_IQR(o, axis=self.axis)
self.min, self.max = o.new(min), o.new(max)
return torch_clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
batch_tfms=[TSClipOutliers(-1, 1, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
# export
class TSClip(Transform):
"Clip batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=-6, max=6):
self.min = torch.tensor(min)
self.max = torch.tensor(max)
def encodes(self, o:TSTensor):
return torch.clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
t = TSTensor(torch.randn(10, 20, 100)*10)
test_le(TSClip()(t).max().item(), 6)
test_ge(TSClip()(t).min().item(), -6)
#export
class TSRobustScale(Transform):
r"""This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range)"""
parameters, order = L('median', 'min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, median=None, min=None, max=None, by_sample=False, by_var=False, quantile_range=(25.0, 75.0), use_single_batch=True, verbose=False):
self.median = tensor(median) if median is not None else tensor(0)
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self._setup = (median is None or min is None or max is None) and not by_sample
self.by_sample, self.by_var = by_sample, by_var
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
self.quantile_range = quantile_range
if median is not None or min is not None or max is not None:
pv(f'{self.__class__.__name__} median={median} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
median = get_percentile(o, 50, self.axis)
min, max = get_outliers_IQR(o, self.axis, quantile_range=self.quantile_range)
self.median, self.min, self.max = tensor(median), tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} median={self.median} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} median={self.median.shape} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.by_sample:
median = get_percentile(o, 50, self.axis)
min, max = get_outliers_IQR(o, axis=self.axis, quantile_range=self.quantile_range)
self.median, self.min, self.max = o.new(median), o.new(min), o.new(max)
return (o - self.median) / (self.max - self.min)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, num_workers=0)
xb, yb = next(iter(dls.train))
clipped_xb = TSRobustScale(by_sample=True)(xb)
test_ne(clipped_xb, xb)
clipped_xb.min(), clipped_xb.max(), xb.min(), xb.max()
#export
class TSDiff(Transform):
"Differences batch of type `TSTensor`"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(o, lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor(torch.arange(24).reshape(2,3,4))
test_eq(TSDiff()(t)[..., 1:].float().mean(), 1)
test_eq(TSDiff(lag=2, pad=False)(t).float().mean(), 2)
#export
class TSLog(Transform):
"Log transforms batch of type `TSTensor` + 1. Accepts positive and negative numbers"
order = 90
def __init__(self, ex=None, **kwargs):
self.ex = ex
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.log1p(o[o > 0])
output[o < 0] = -torch.log1p(torch.abs(o[o < 0]))
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def decodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.exp(o[o > 0]) - 1
output[o < 0] = -torch.exp(torch.abs(o[o < 0])) + 1
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def __repr__(self): return f'{self.__class__.__name__}()'
t = TSTensor(torch.rand(2,3,4)) * 2 - 1
tfm = TSLog()
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
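# Added sketch (not in the original notebook): the `ex` argument excludes the listed variable indices from the transform.
t = TSTensor(torch.rand(2, 3, 4)) * 2 - 1
enc_t = TSLog(ex=[1])(t)
test_eq(enc_t[:, 1], t[:, 1])   # excluded variable is passed through unchanged
test_ne(enc_t[:, 0], t[:, 0])   # the remaining variables are log-transformed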
#export
class TSCyclicalPosition(Transform):
"""Concatenates the position along the sequence as 2 additional variables (sine and cosine)
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
sin, cos = sincos_encoding(seq_len, device=o.device)
output = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSCyclicalPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
#export
class TSLinearPosition(Transform):
"""Concatenates the position along the sequence as 1 additional variable
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
output = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSLinearPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 1
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
#export
class TSLogReturn(Transform):
"Calculates log-return of batch of type `TSTensor`. For positive values only"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(torch.log(o), lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor([1,2,4,8,16,32,64,128,256]).float()
test_eq(TSLogReturn(pad=False)(t).std(), 0)
#export
class TSAdd(Transform):
"Add a defined amount to each batch of type `TSTensor`."
order = 90
def __init__(self, add):
self.add = add
def encodes(self, o:TSTensor):
return torch.add(o, self.add)
def __repr__(self): return f'{self.__class__.__name__}(add={self.add})'
t = TSTensor([1,2,3]).float()
test_eq(TSAdd(1)(t), TSTensor([2,3,4]).float())
###Output
_____no_output_____
###Markdown
sklearn API transforms
###Code
#export
from sklearn.base import BaseEstimator, TransformerMixin
from fastai.data.transforms import CategoryMap
from joblib import dump, load
class TSShrinkDataFrame(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, skip=[], obj2cat=True, int2uint=False, verbose=True):
self.columns, self.skip, self.obj2cat, self.int2uint, self.verbose = listify(columns), skip, obj2cat, int2uint, verbose
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
self.old_dtypes = X.dtypes
if not self.columns: self.columns = X.columns
self.dt = df_shrink_dtypes(X[self.columns], self.skip, obj2cat=self.obj2cat, int2uint=self.int2uint)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X[self.columns] = X[self.columns].astype(self.dt)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X = X.astype(self.old_dtypes)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
df = pd.DataFrame()
df["ints64"] = np.random.randint(0,3,10)
df['floats64'] = np.random.rand(10)
tfm = TSShrinkDataFrame()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df["ints64"].dtype, "int8")
test_eq(df["floats64"].dtype, "float32")
#export
class TSOneHotEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, drop=True, add_na=True, dtype=np.int64):
self.columns = listify(columns)
self.drop, self.add_na, self.dtype = drop, add_na, dtype
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
handle_unknown = "ignore" if self.add_na else "error"
self.ohe_tfm = sklearn.preprocessing.OneHotEncoder(handle_unknown=handle_unknown)
if len(self.columns) == 1:
self.ohe_tfm.fit(X[self.columns].to_numpy().reshape(-1, 1))
else:
self.ohe_tfm.fit(X[self.columns])
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if len(self.columns) == 1:
output = self.ohe_tfm.transform(X[self.columns].to_numpy().reshape(-1, 1)).toarray().astype(self.dtype)
else:
output = self.ohe_tfm.transform(X[self.columns]).toarray().astype(self.dtype)
new_cols = []
for i,col in enumerate(self.columns):
for cats in self.ohe_tfm.categories_[i]:
new_cols.append(f"{str(col)}_{str(cats)}")
X[new_cols] = output
if self.drop: X = X.drop(self.columns, axis=1)
return X
df = pd.DataFrame()
df["a"] = np.random.randint(0,2,10)
df["b"] = np.random.randint(0,3,10)
unique_cols = len(df["a"].unique()) + len(df["b"].unique())
tfm = TSOneHotEncoder()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df.shape[1], unique_cols)
#export
class TSCategoricalEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, add_na=True):
self.columns = listify(columns)
self.add_na = add_na
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.cat_tfms = []
for column in self.columns:
self.cat_tfms.append(CategoryMap(X[column], add_na=self.add_na))
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_objs(X[column])
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_ids(X[column])
return X
###Output
_____no_output_____
###Markdown
Stateful transforms like TSCategoricalEncoder can easily be serialized.
###Code
import joblib
df = pd.DataFrame()
df["a"] = alphabet[np.random.randint(0,2,100)]
df["b"] = ALPHABET[np.random.randint(0,3,100)]
a_unique = len(df["a"].unique())
b_unique = len(df["b"].unique())
tfm = TSCategoricalEncoder()
tfm.fit(df)
joblib.dump(tfm, "TSCategoricalEncoder.joblib")
tfm = joblib.load("TSCategoricalEncoder.joblib")
df = tfm.transform(df)
test_eq(df['a'].max(), a_unique)
test_eq(df['b'].max(), b_unique)
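# Added sketch (not in the original notebook): inverse_transform maps the integer ids back to the original labels.
df_dec = tfm.inverse_transform(df.copy())
test_eq(df_dec["a"].isin(alphabet).all(), True)
test_eq(df_dec["b"].isin(ALPHABET).all(), True)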
#export
default_date_attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
class TSDateTimeEncoder(BaseEstimator, TransformerMixin):
def __init__(self, datetime_columns=None, prefix=None, drop=True, time=False, attr=default_date_attr):
self.datetime_columns = listify(datetime_columns)
self.prefix, self.drop, self.time, self.attr = prefix, drop, time, attr
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if self.time: self.attr = self.attr + ['Hour', 'Minute', 'Second']
if not self.datetime_columns:
self.datetime_columns = X.columns
self.prefixes = []
for dt_column in self.datetime_columns:
self.prefixes.append(re.sub('[Dd]ate$', '', dt_column) if self.prefix is None else self.prefix)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for dt_column,prefix in zip(self.datetime_columns,self.prefixes):
make_date(X, dt_column)
field = X[dt_column]
# `Series.dt.week` was deprecated in pandas 1.1.0 (and later removed), so use isocalendar() when available
week = field.dt.isocalendar().week.astype(field.dt.day.dtype) if hasattr(field.dt, 'isocalendar') else field.dt.week
for n in self.attr: X[prefix + "_" + n] = getattr(field.dt, n.lower()) if n != 'Week' else week
if self.drop: X = X.drop(self.datetime_columns, axis=1)
return X
import datetime
df = pd.DataFrame()
df.loc[0, "date"] = datetime.datetime.now()
df.loc[1, "date"] = datetime.datetime.now() + pd.Timedelta(1, unit="D")
tfm = TSDateTimeEncoder()
joblib.dump(tfm, "TSDateTimeEncoder.joblib")
tfm = joblib.load("TSDateTimeEncoder.joblib")
tfm.fit_transform(df)
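# Added sketch (not in the original notebook): with time=True, hour/minute/second components are extracted as well.
# df_time and tfm_time are hypothetical names used only for this example.
df_time = pd.DataFrame({"date": [datetime.datetime.now(), datetime.datetime.now() + pd.Timedelta(1, unit="D")]})
tfm_time = TSDateTimeEncoder(time=True)
out = tfm_time.fit_transform(df_time)
assert {"_Hour", "_Minute", "_Second"}.issubset(out.columns)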
#export
class TSMissingnessEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None):
self.columns = listify(columns)
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.missing_columns = [f"{cn}_missing" for cn in self.columns]
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
X[self.missing_columns] = X[self.columns].isnull().astype(int)
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
X.drop(self.missing_columns, axis=1, inplace=True)
return X
data = np.random.rand(10,3)
data[data > .8] = np.nan
df = pd.DataFrame(data, columns=["a", "b", "c"])
tfm = TSMissingnessEncoder()
tfm.fit(df)
joblib.dump(tfm, "TSMissingnessEncoder.joblib")
tfm = joblib.load("TSMissingnessEncoder.joblib")
df = tfm.transform(df)
df
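# Added sketch (not in the original notebook): inverse_transform simply drops the *_missing indicator columns again.
df = tfm.inverse_transform(df)
test_eq(list(df.columns), ["a", "b", "c"])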
###Output
_____no_output_____
###Markdown
y transforms
###Code
# export
class Preprocessor():
def __init__(self, preprocessor, **kwargs):
self.preprocessor = preprocessor(**kwargs)
def fit(self, o):
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
self.fit_preprocessor = self.preprocessor.fit(o)
return self.fit_preprocessor
def transform(self, o, copy=True):
if type(o) in [float, int]: o = array([o]).reshape(-1,1)
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
def inverse_transform(self, o, copy=True):
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.inverse_transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
StandardScaler = partial(sklearn.preprocessing.StandardScaler)
setattr(StandardScaler, '__name__', 'StandardScaler')
RobustScaler = partial(sklearn.preprocessing.RobustScaler)
setattr(RobustScaler, '__name__', 'RobustScaler')
Normalizer = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(-1, 1))
setattr(Normalizer, '__name__', 'Normalizer')
BoxCox = partial(sklearn.preprocessing.PowerTransformer, method='box-cox')
setattr(BoxCox, '__name__', 'BoxCox')
YeoJohnshon = partial(sklearn.preprocessing.PowerTransformer, method='yeo-johnson')
setattr(YeoJohnshon, '__name__', 'YeoJohnshon')
Quantile = partial(sklearn.preprocessing.QuantileTransformer, n_quantiles=1_000, output_distribution='normal', random_state=0)
setattr(Quantile, '__name__', 'Quantile')
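# Added sketch (hypothetical, not part of the library): the same partial + __name__ pattern wraps any other sklearn
# preprocessor, e.g. a 0-1 MinMax scaler.
MinMax01 = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(0, 1))
setattr(MinMax01, '__name__', 'MinMax01')
y_demo = np.random.rand(1000) * 3 + .5
preprocessor = Preprocessor(MinMax01)
preprocessor.fit(y_demo)
y_demo_tfm = preprocessor.transform(y_demo)
test_close(y_demo_tfm.min(), 0, eps=1e-3)
test_close(y_demo_tfm.max(), 1, eps=1e-3)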
# Standardize
from tsai.data.validation import TimeSplitter
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(StandardScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# RobustScaler
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(RobustScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# Normalize
y = random_shuffle(np.random.rand(1000) * 3 + .5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Normalizer)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# BoxCox
y = random_shuffle(np.random.rand(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(BoxCox)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# YeoJohnshon
y = random_shuffle(np.random.randn(1000) * 10 + 5)
y = np.random.beta(.5, .5, size=1000)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(YeoJohnshon)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# QuantileTransformer
y = - np.random.beta(1, .5, 10000) * 10
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Quantile)
preprocessor.fit(y[splits[0]])
plt.hist(y, 50, label='ori',)
y_tfm = preprocessor.transform(y)
plt.legend(loc='best')
plt.show()
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
test_close(preprocessor.inverse_transform(y_tfm), y, 1e-1)
#export
def ReLabeler(cm):
r"""Changes the labels in a dataset based on a dictionary (class mapping)
Args:
cm = class mapping dictionary
"""
def _relabel(y):
obj = len(set([len(listify(v)) for v in cm.values()])) > 1
keys = cm.keys()
if obj:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y], dtype=object).reshape(*y.shape)
else:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y]).reshape(*y.shape)
return _relabel
vals = {0:'a', 1:'b', 2:'c', 3:'d', 4:'e'}
y = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
labeler = ReLabeler(dict(a='x', b='x', c='y', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y, y_new
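# Added sketch (not in the original notebook): when the mapping values have different lengths, ReLabeler returns an
# object array, so a single class can be relabeled to several labels at once.
y2 = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
multi_labeler = ReLabeler({'a': ['x', 'y'], 'b': 'x', 'c': 'y', 'd': 'z', 'e': 'z'})
y2_new = multi_labeler(y2)
test_eq(y2.shape, y2_new.shape)
y2[:5], y2_new[:5]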
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____
###Markdown
Data preprocessing
> Functions used to preprocess time series (both X and y).
###Code
#export
from tsai.imports import *
from tsai.utils import *
from tsai.data.external import *
from tsai.data.core import *
from tsai.data.preparation import *
dsid = 'NATOPS'
X, y, splits = get_UCR_data(dsid, return_split=False)
tfms = [None, Categorize()]
dsets = TSDatasets(X, y, tfms=tfms, splits=splits)
#export
class ToNumpyCategory(Transform):
"Categorize a numpy batch"
order = 90
def __init__(self, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: np.ndarray):
self.type = type(o)
self.cat = Categorize()
self.cat.setup(o)
self.vocab = self.cat.vocab
return np.asarray(stack([self.cat(oi) for oi in o]))
def decodes(self, o: (np.ndarray, torch.Tensor)):
return stack([self.cat.decode(oi) for oi in o])
t = ToNumpyCategory()
y_cat = t(y)
y_cat[:10]
test_eq(t.decode(tensor(y_cat)), y)
test_eq(t.decode(np.array(y_cat)), y)
#export
class OneHot(Transform):
"One-hot encode/ decode a batch"
order = 90
def __init__(self, n_classes=None, **kwargs):
self.n_classes = n_classes
super().__init__(**kwargs)
def encodes(self, o: torch.Tensor):
if not self.n_classes: self.n_classes = len(np.unique(o))
return torch.eye(self.n_classes)[o]
def encodes(self, o: np.ndarray):
o = ToNumpyCategory()(o)
if not self.n_classes: self.n_classes = len(np.unique(o))
return np.eye(self.n_classes)[o]
def decodes(self, o: torch.Tensor): return torch.argmax(o, dim=-1)
def decodes(self, o: np.ndarray): return np.argmax(o, axis=-1)
oh_encoder = OneHot()
y_cat = ToNumpyCategory()(y)
oht = oh_encoder(y_cat)
oht[:10]
n_classes = 10
n_samples = 100
t = torch.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oht = oh_encoder(t)
test_eq(oht.shape, (n_samples, n_classes))
test_eq(torch.argmax(oht, dim=-1), t)
test_eq(oh_encoder.decode(oht), t)
n_classes = 10
n_samples = 100
a = np.random.randint(0, n_classes, (n_samples,))
oh_encoder = OneHot()
oha = oh_encoder(a)
test_eq(oha.shape, (n_samples, n_classes))
test_eq(np.argmax(oha, axis=-1), a)
test_eq(oh_encoder.decode(oha), a)
#export
class TSNan2Value(Transform):
"Replaces any nan values by a predefined value or median"
order = 90
def __init__(self, value=0, median=False, by_sample_and_var=True):
store_attr()
if not ismin_torch("1.8"):
raise ValueError('This function only works with Pytorch>=1.8.')
def encodes(self, o:TSTensor):
mask = torch.isnan(o)
if mask.any():
if self.median:
if self.by_sample_and_var:
median = torch.nanmedian(o, dim=2, keepdim=True)[0].repeat(1, 1, o.shape[-1])
o[mask] = median[mask]
else:
o = torch_nan_to_num(o, torch.nanmedian(o))
o = torch_nan_to_num(o, self.value)
return o
Nan2Value = TSNan2Value
o = TSTensor(torch.randn(16, 10, 100))
o[0,0] = float('nan')
o[o > .9] = float('nan')
o[[0,1,5,8,14,15], :, -20:] = float('nan')
nan_vals1 = torch.isnan(o).sum()
o2 = Pipeline(TSNan2Value(), split_idx=0)(o.clone())
o3 = Pipeline(TSNan2Value(median=True, by_sample_and_var=True), split_idx=0)(o.clone())
o4 = Pipeline(TSNan2Value(median=True, by_sample_and_var=False), split_idx=0)(o.clone())
nan_vals2 = torch.isnan(o2).sum()
nan_vals3 = torch.isnan(o3).sum()
nan_vals4 = torch.isnan(o4).sum()
test_ne(nan_vals1, 0)
test_eq(nan_vals2, 0)
test_eq(nan_vals3, 0)
test_eq(nan_vals4, 0)
# export
class TSStandardize(Transform):
"""Standardizes batch of type `TSTensor`
Args:
- mean: you can pass a precalculated mean value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch.
- std: you can pass a precalculated std value as a torch tensor which is the one that will be used, or leave as None, in which case
it will be estimated using a batch. If both mean and std values are passed when instantiating TSStandardize, the rest of arguments won't be used.
- by_sample: if True, it will calculate mean and std for each individual sample. Otherwise based on the entire batch.
- by_var:
* False: mean and std will be the same for all variables.
* True: a different mean and std will be used for each variable.
* a list of ints (like [0,1,3]): a different mean and std will be set for each variable on the list. Variables not included in the list
won't be standardized.
* a list that contains a list/lists (like [0, [1,3]]): a different mean and std will be set for each element of the list. If multiple elements are
included in a sublist, the same mean and std will be used for all the variables in that sublist (in the example, one mean and std is determined for
variable 0, and another shared one for variables 1 & 3). Variables not included in the list won't be standardized.
- by_step: if True, mean and std will be calculated for each time step. Otherwise, the same values will be used across all time steps.
- eps: minimum value allowed for std, to avoid dividing by 0
- use_single_batch: if True a single training batch will be used to calculate mean & std. Else the entire training set will be used.
"""
parameters, order = L('mean', 'std'), 90
_setup = True # indicates it requires set up
def __init__(self, mean=None, std=None, by_sample=False, by_var=False, by_step=False, eps=1e-8, use_single_batch=True, verbose=False):
self.mean = tensor(mean) if mean is not None else None
self.std = tensor(std) if std is not None else None
self._setup = (mean is None or std is None) and not by_sample
self.eps = eps
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.mean is not None or self.std is not None:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, mean, std): return cls(mean, std)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
if len(self.mean.shape) == 0:
pv(f'{self.__class__.__name__} mean={self.mean}, std={self.std}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} mean shape={self.mean.shape}, std shape={self.std.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.mean, self.std = torch.zeros(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
mean = torch.zeros(*shape, device=o.device)
std = torch.ones(*shape, device=o.device)
for v in self.by_var:
if not is_listy(v): v = [v]
mean[:, v] = torch_nanmean(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True)
std[:, v] = torch.clamp_min(torch_nanstd(o[:, v], dim=self.axes if len(v) == 1 else self.list_axes, keepdim=True), self.eps)
else:
mean = torch_nanmean(o, dim=self.axes, keepdim=self.axes!=())
std = torch.clamp_min(torch_nanstd(o, dim=self.axes, keepdim=self.axes!=()), self.eps)
self.mean, self.std = mean, std
return (o - self.mean) / self.std
def decodes(self, o:TSTensor):
if self.mean is None or self.std is None: return o
return o * self.std + self.mean
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, batch_tfms=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
from tsai.data.validation import TimeSplitter
X_nan = np.random.rand(100, 5, 10)
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 0] = float('nan')
idxs = np.random.choice(len(X_nan), int(len(X_nan)*.5), False)
X_nan[idxs, 1, -10:] = float('nan')
batch_tfms = TSStandardize(by_var=True)
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
test_eq(torch.isnan(dls.after_batch[0].mean).sum(), 0)
test_eq(torch.isnan(dls.after_batch[0].std).sum(), 0)
xb = first(dls.train)[0]
test_ne(torch.isnan(xb).sum(), 0)
test_ne(torch.isnan(xb).sum(), torch.isnan(xb).numel())
batch_tfms = [TSStandardize(by_var=True), Nan2Value()]
dls = get_ts_dls(X_nan, batch_tfms=batch_tfms, splits=TimeSplitter(show_plot=False)(range_of(X_nan)))
xb = first(dls.train)[0]
test_eq(torch.isnan(xb).sum(), 0)
batch_tfms=[TSStandardize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=True)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
tfms = [None, TSClassification()]
batch_tfms = TSStandardize(by_sample=True, by_var=False, verbose=False)
dls = get_ts_dls(X, y, splits=splits, tfms=tfms, batch_tfms=batch_tfms, bs=[64, 128], inplace=False)
xb, yb = dls.train.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
xb, yb = dls.valid.one_batch()
test_close(xb.mean(), 0, eps=1e-1)
test_close(xb.std(), 1, eps=1e-1)
#export
@patch
def mul_min(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.min(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
min_x = x
for ax in axes: min_x, _ = min_x.min(ax, keepdim)
return retain_type(min_x, x)
@patch
def mul_max(x:(torch.Tensor, TSTensor, NumpyTensor), axes=(), keepdim=False):
if axes == (): return retain_type(x.max(), x)
axes = reversed(sorted(axes if is_listy(axes) else [axes]))
max_x = x
for ax in axes: max_x, _ = max_x.max(ax, keepdim)
return retain_type(max_x, x)
class TSNormalize(Transform):
"Normalizes batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, range=(-1, 1), by_sample=False, by_var=False, by_step=False, clip_values=True,
use_single_batch=True, verbose=False):
self.min = tensor(min) if min is not None else None
self.max = tensor(max) if max is not None else None
self._setup = (self.min is None and self.max is None) and not by_sample
self.range_min, self.range_max = range
self.by_sample, self.by_var, self.by_step = by_sample, by_var, by_step
drop_axes = []
if by_sample: drop_axes.append(0)
if by_var: drop_axes.append(1)
if by_step: drop_axes.append(2)
self.axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes])
if by_var and is_listy(by_var):
self.list_axes = tuple([ax for ax in (0, 1, 2) if ax not in drop_axes]) + (1,)
self.clip_values = clip_values
self.use_single_batch = use_single_batch
self.verbose = verbose
if self.min is not None or self.max is not None:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n', self.verbose)
@classmethod
def from_stats(cls, min, max, range_min=0, range_max=1): return cls(min, max, range=(range_min, range_max))
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes if len(v) == 1 else self.list_axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
if len(self.min.shape) == 0:
pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
else:
pv(f'{self.__class__.__name__} min shape={self.min.shape}, max shape={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step}\n',
self.verbose)
self._setup = False
elif self.by_sample: self.min, self.max = -torch.ones(1), torch.ones(1)
def encodes(self, o:TSTensor):
if self.by_sample:
if self.by_var and is_listy(self.by_var):
shape = torch.mean(o, dim=self.axes, keepdim=self.axes!=()).shape
_min = torch.zeros(*shape, device=o.device) + self.range_min
_max = torch.zeros(*shape, device=o.device) + self.range_max
for v in self.by_var:
if not is_listy(v): v = [v]
_min[:, v] = o[:, v].mul_min(self.axes, keepdim=self.axes!=())
_max[:, v] = o[:, v].mul_max(self.axes, keepdim=self.axes!=())
else:
_min, _max = o.mul_min(self.axes, keepdim=self.axes!=()), o.mul_max(self.axes, keepdim=self.axes!=())
self.min, self.max = _min, _max
output = ((o - self.min) / (self.max - self.min)) * (self.range_max - self.range_min) + self.range_min
if self.clip_values:
if self.by_var and is_listy(self.by_var):
for v in self.by_var:
if not is_listy(v): v = [v]
output[:, v] = torch.clamp(output[:, v], self.range_min, self.range_max)
else:
output = torch.clamp(output, self.range_min, self.range_max)
return output
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var}, by_step={self.by_step})'
batch_tfms = [TSNormalize()]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms=[TSNormalize(by_sample=True, by_var=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
batch_tfms = [TSNormalize(by_var=[0, [1, 2]], use_single_batch=False, clip_values=False, verbose=False)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb[:, [0, 1, 2]].max() <= 1
assert xb[:, [0, 1, 2]].min() >= -1
#export
class TSClipOutliers(Transform):
"Clip outliers batch of type `TSTensor` based on the IQR"
parameters, order = L('min', 'max'), 90
_setup = True # indicates it requires set up
def __init__(self, min=None, max=None, by_sample=False, by_var=False, use_single_batch=False, verbose=False):
self.min = tensor(min) if min is not None else tensor(-np.inf)
self.max = tensor(max) if max is not None else tensor(np.inf)
self.by_sample, self.by_var = by_sample, by_var
self._setup = (min is None or max is None) and not by_sample
if by_sample and by_var: self.axis = (2)
elif by_sample: self.axis = (1, 2)
elif by_var: self.axis = (0, 2)
else: self.axis = None
self.use_single_batch = use_single_batch
self.verbose = verbose
if min is not None or max is not None:
pv(f'{self.__class__.__name__} min={min}, max={max}\n', self.verbose)
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
min, max = get_outliers_IQR(o, self.axis)
self.min, self.max = tensor(min), tensor(max)
if self.axis is None: pv(f'{self.__class__.__name__} min={self.min}, max={self.max}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
else: pv(f'{self.__class__.__name__} min={self.min.shape}, max={self.max.shape}, by_sample={self.by_sample}, by_var={self.by_var}\n',
self.verbose)
self._setup = False
def encodes(self, o:TSTensor):
if self.axis is None: return torch.clamp(o, self.min, self.max)
elif self.by_sample:
min, max = get_outliers_IQR(o, axis=self.axis)
self.min, self.max = o.new(min), o.new(max)
return torch_clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(by_sample={self.by_sample}, by_var={self.by_var})'
batch_tfms=[TSClipOutliers(-1, 1, verbose=True)]
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, bs=128, num_workers=0, after_batch=batch_tfms)
xb, yb = next(iter(dls.train))
assert xb.max() <= 1
assert xb.min() >= -1
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
xb, yb = next(iter(dls.valid))
test_close(xb.min(), -1, eps=1e-1)
test_close(xb.max(), 1, eps=1e-1)
# export
class TSClip(Transform):
"Clip batch of type `TSTensor`"
parameters, order = L('min', 'max'), 90
def __init__(self, min=-6, max=6):
self.min = torch.tensor(min)
self.max = torch.tensor(max)
def encodes(self, o:TSTensor):
return torch.clamp(o, self.min, self.max)
def __repr__(self): return f'{self.__class__.__name__}(min={self.min}, max={self.max})'
t = TSTensor(torch.randn(10, 20, 100)*10)
test_le(TSClip()(t).max().item(), 6)
test_ge(TSClip()(t).min().item(), -6)
#export
class TSRobustScale(Transform):
r"""This Scaler removes the median and scales the data according to the quantile range (defaults to IQR: Interquartile Range)"""
parameters, order = L('median', 'iqr'), 90
_setup = True # indicates it requires set up
def __init__(self, median=None, iqr=None, quantile_range=(25.0, 75.0), use_single_batch=True, eps=1e-8, verbose=False):
self.median = tensor(median) if median is not None else None
self.iqr = tensor(iqr) if iqr is not None else None
self._setup = median is None or iqr is None
self.use_single_batch = use_single_batch
self.eps = eps
self.verbose = verbose
self.quantile_range = quantile_range
def setups(self, dl: DataLoader):
if self._setup:
if not self.use_single_batch:
o = dl.dataset.__getitem__([slice(None)])[0]
else:
o, *_ = dl.one_batch()
new_o = o.permute(1,0,2).flatten(1)
median = get_percentile(new_o, 50, axis=1)
iqrmin, iqrmax = get_outliers_IQR(new_o, axis=1, quantile_range=self.quantile_range)
self.median = median.unsqueeze(0)
self.iqr = torch.clamp_min((iqrmax - iqrmin).unsqueeze(0), self.eps)
pv(f'{self.__class__.__name__} median={self.median.shape} iqr={self.iqr.shape}', self.verbose)
self._setup = False
else:
if self.median is None: self.median = torch.zeros(1, device=dl.device)
if self.iqr is None: self.iqr = torch.ones(1, device=dl.device)
def encodes(self, o:TSTensor):
return (o - self.median) / self.iqr
def __repr__(self): return f'{self.__class__.__name__}(quantile_range={self.quantile_range}, use_single_batch={self.use_single_batch})'
batch_tfms = TSRobustScale(verbose=True, use_single_batch=False)
dls = TSDataLoaders.from_dsets(dsets.train, dsets.valid, batch_tfms=batch_tfms, num_workers=0)
xb, yb = next(iter(dls.train))
xb.min()
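# Added sketch (not in the original notebook): median and iqr can also be passed precomputed, in which case no setup
# pass over the data is needed.
tfm = TSRobustScale(median=0., iqr=2.)
t = TSTensor(torch.randn(8, 3, 100) * 2)
test_close(tfm(t).std(), 1, eps=1e-1)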
#export
class TSDiff(Transform):
"Differences batch of type `TSTensor`"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(o, lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor(torch.arange(24).reshape(2,3,4))
test_eq(TSDiff()(t)[..., 1:].float().mean(), 1)
test_eq(TSDiff(lag=2, pad=False)(t).float().mean(), 2)
#export
class TSLog(Transform):
"Log transforms batch of type `TSTensor` + 1. Accepts positive and negative numbers"
order = 90
def __init__(self, ex=None, **kwargs):
self.ex = ex
super().__init__(**kwargs)
def encodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.log1p(o[o > 0])
output[o < 0] = -torch.log1p(torch.abs(o[o < 0]))
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def decodes(self, o:TSTensor):
output = torch.zeros_like(o)
output[o > 0] = torch.exp(o[o > 0]) - 1
output[o < 0] = -torch.exp(torch.abs(o[o < 0])) + 1
if self.ex is not None: output[...,self.ex,:] = o[...,self.ex,:]
return output
def __repr__(self): return f'{self.__class__.__name__}()'
t = TSTensor(torch.rand(2,3,4)) * 2 - 1
tfm = TSLog()
enc_t = tfm(t)
test_ne(enc_t, t)
test_close(tfm.decodes(enc_t).data, t.data)
#export
class TSCyclicalPosition(Transform):
"""Concatenates the position along the sequence as 2 additional variables (sine and cosine)
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, **kwargs):
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
sin, cos = sincos_encoding(seq_len, device=o.device)
output = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSCyclicalPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 2
plt.plot(enc_t[0, -2:].cpu().numpy().T)
plt.show()
#export
class TSLinearPosition(Transform):
"""Concatenates the position along the sequence as 1 additional variable
Args:
magnitude: added for compatibility. It's not used.
"""
order = 90
def __init__(self, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
output = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
return output
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSLinearPosition()(t)
test_ne(enc_t, t)
assert t.shape[1] == enc_t.shape[1] - 1
plt.plot(enc_t[0, -1].cpu().numpy().T)
plt.show()
# export
class TSPosition(Transform):
"""Concatenates linear and/or cyclical positions along the sequence as additional variables"""
order = 90
def __init__(self, cyclical=True, linear=True, magnitude=None, lin_range=(-1,1), **kwargs):
self.lin_range = lin_range
self.cyclical, self.linear = cyclical, linear
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
bs,_,seq_len = o.shape
if self.linear:
lin = linear_encoding(seq_len, device=o.device, lin_range=self.lin_range)
o = torch.cat([o, lin.reshape(1,1,-1).repeat(bs,1,1)], 1)
if self.cyclical:
sin, cos = sincos_encoding(seq_len, device=o.device)
o = torch.cat([o, sin.reshape(1,1,-1).repeat(bs,1,1), cos.reshape(1,1,-1).repeat(bs,1,1)], 1)
return o
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
enc_t = TSPosition(cyclical=True, linear=True)(t)
test_eq(enc_t.shape[1], 6)
plt.plot(enc_t[0, 3:].T);
#export
class TSMissingness(Transform):
"""Concatenates data missingness for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, **kwargs):
self.feature_idxs = listify(feature_idxs)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
missingness = o[:, self.feature_idxs].isnan()
else:
missingness = o.isnan()
return torch.cat([o, missingness], 1)
bs, c_in, seq_len = 1,3,100
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSMissingness(feature_idxs=[0,2])(t)
test_eq(enc_t.shape[1], 5)
test_eq(enc_t[:, 3:], torch.isnan(t[:, [0,2]]).float())
#export
class TSPositionGaps(Transform):
"""Concatenates gaps for selected features along the sequence as additional variables"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, forward=True, backward=False, nearest=False, normalize=True, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.gap_fn = partial(get_gaps, forward=forward, backward=backward, nearest=nearest, normalize=normalize)
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
gaps = self.gap_fn(o[:, self.feature_idxs])
else:
gaps = self.gap_fn(o)
return torch.cat([o, gaps], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t>.5] = np.nan
enc_t = TSPositionGaps(feature_idxs=[0,2], forward=True, backward=True, nearest=True, normalize=False)(t)
test_eq(enc_t.shape[1], 9)
enc_t.data
#export
class TSRollingMean(Transform):
"""Calculates the rolling mean for all/ selected features alongside the sequence
It replaces the original values or adds additional variables (default)
If nan values are found, they will be filled forward and backward"""
order = 90
def __init__(self, feature_idxs=None, magnitude=None, window=2, replace=False, **kwargs):
self.feature_idxs = listify(feature_idxs)
self.rolling_mean_fn = partial(rolling_moving_average, window=window)
self.replace = replace
super().__init__(**kwargs)
def encodes(self, o: TSTensor):
if self.feature_idxs:
if torch.isnan(o[:, self.feature_idxs]).any():
o[:, self.feature_idxs] = fbfill_sequence(o[:, self.feature_idxs])
rolling_mean = self.rolling_mean_fn(o[:, self.feature_idxs])
if self.replace:
o[:, self.feature_idxs] = rolling_mean
return o
else:
if torch.isnan(o).any():
o = fbfill_sequence(o)
rolling_mean = self.rolling_mean_fn(o)
if self.replace: return rolling_mean
return torch.cat([o, rolling_mean], 1)
bs, c_in, seq_len = 1,3,8
t = TSTensor(torch.rand(bs, c_in, seq_len))
t[t > .6] = np.nan
print(t.data)
enc_t = TSRollingMean(feature_idxs=[0,2], window=3)(t)
test_eq(enc_t.shape[1], 5)
print(enc_t.data)
enc_t = TSRollingMean(window=3, replace=True)(t)
test_eq(enc_t.shape[1], 3)
print(enc_t.data)
#export
class TSLogReturn(Transform):
"Calculates log-return of batch of type `TSTensor`. For positive values only"
order = 90
def __init__(self, lag=1, pad=True):
self.lag, self.pad = lag, pad
def encodes(self, o:TSTensor):
return torch_diff(torch.log(o), lag=self.lag, pad=self.pad)
def __repr__(self): return f'{self.__class__.__name__}(lag={self.lag}, pad={self.pad})'
t = TSTensor([1,2,4,8,16,32,64,128,256]).float()
test_eq(TSLogReturn(pad=False)(t).std(), 0)
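# every step doubles the previous value, so each log-return equals log(2) and the std is 0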
#export
class TSAdd(Transform):
"Add a defined amount to each batch of type `TSTensor`."
order = 90
def __init__(self, add):
self.add = add
def encodes(self, o:TSTensor):
return torch.add(o, self.add)
    def __repr__(self): return f'{self.__class__.__name__}(add={self.add})'
t = TSTensor([1,2,3]).float()
test_eq(TSAdd(1)(t), TSTensor([2,3,4]).float())
###Output
_____no_output_____
###Markdown
sklearn API transforms
###Code
#export
from sklearn.base import BaseEstimator, TransformerMixin
from fastai.data.transforms import CategoryMap
from joblib import dump, load
class TSShrinkDataFrame(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, skip=[], obj2cat=True, int2uint=False, verbose=True):
self.columns, self.skip, self.obj2cat, self.int2uint, self.verbose = listify(columns), skip, obj2cat, int2uint, verbose
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
self.old_dtypes = X.dtypes
if not self.columns: self.columns = X.columns
self.dt = df_shrink_dtypes(X[self.columns], self.skip, obj2cat=self.obj2cat, int2uint=self.int2uint)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X[self.columns] = X[self.columns].astype(self.dt)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
if self.verbose:
start_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe is {start_memory} MB")
X = X.astype(self.old_dtypes)
if self.verbose:
end_memory = X.memory_usage().sum() / 1024**2
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by {100 * (start_memory - end_memory) / start_memory} % ")
return X
df = pd.DataFrame()
df["ints64"] = np.random.randint(0,3,10)
df['floats64'] = np.random.rand(10)
tfm = TSShrinkDataFrame()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df["ints64"].dtype, "int8")
test_eq(df["floats64"].dtype, "float32")
#export
class TSOneHotEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, drop=True, add_na=True, dtype=np.int64):
self.columns = listify(columns)
self.drop, self.add_na, self.dtype = drop, add_na, dtype
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
handle_unknown = "ignore" if self.add_na else "error"
self.ohe_tfm = sklearn.preprocessing.OneHotEncoder(handle_unknown=handle_unknown)
if len(self.columns) == 1:
self.ohe_tfm.fit(X[self.columns].to_numpy().reshape(-1, 1))
else:
self.ohe_tfm.fit(X[self.columns])
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
if len(self.columns) == 1:
output = self.ohe_tfm.transform(X[self.columns].to_numpy().reshape(-1, 1)).toarray().astype(self.dtype)
else:
output = self.ohe_tfm.transform(X[self.columns]).toarray().astype(self.dtype)
new_cols = []
for i,col in enumerate(self.columns):
for cats in self.ohe_tfm.categories_[i]:
new_cols.append(f"{str(col)}_{str(cats)}")
X[new_cols] = output
if self.drop: X = X.drop(self.columns, axis=1)
return X
df = pd.DataFrame()
df["a"] = np.random.randint(0,2,10)
df["b"] = np.random.randint(0,3,10)
unique_cols = len(df["a"].unique()) + len(df["b"].unique())
tfm = TSOneHotEncoder()
tfm.fit(df)
df = tfm.transform(df)
test_eq(df.shape[1], unique_cols)
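# A small additional sketch (added for illustration): add_na=True maps to handle_unknown="ignore",
# so categories unseen during fit should one-hot encode as all zeros. df_new is a hypothetical frame.
df_new = pd.DataFrame({"a": [0, 5], "b": [1, 7]})  # 5 and 7 were never seen during fit
df_new = tfm.transform(df_new)
test_eq(df_new.iloc[1, -unique_cols:].sum(), 0)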
#export
class TSCategoricalEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None, add_na=True):
self.columns = listify(columns)
self.add_na = add_na
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.cat_tfms = []
for column in self.columns:
self.cat_tfms.append(CategoryMap(X[column], add_na=self.add_na))
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_objs(X[column])
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
for cat_tfm, column in zip(self.cat_tfms, self.columns):
X[column] = cat_tfm.map_ids(X[column])
return X
###Output
_____no_output_____
###Markdown
Stateful transforms like TSCategoricalEncoder can easily be serialized.
###Code
import joblib
df = pd.DataFrame()
df["a"] = alphabet[np.random.randint(0,2,100)]
df["b"] = ALPHABET[np.random.randint(0,3,100)]
a_unique = len(df["a"].unique())
b_unique = len(df["b"].unique())
tfm = TSCategoricalEncoder()
tfm.fit(df)
joblib.dump(tfm, "TSCategoricalEncoder.joblib")
tfm = joblib.load("TSCategoricalEncoder.joblib")
df = tfm.transform(df)
test_eq(df['a'].max(), a_unique)
test_eq(df['b'].max(), b_unique)
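# A minimal round-trip sketch (added for illustration): inverse_transform should map the integer
# codes back to the original letters, assuming every value was seen during fit.
df = tfm.inverse_transform(df)
assert df["a"].isin(alphabet).all() and df["b"].isin(ALPHABET).all()
df.head()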
#export
default_date_attr = ['Year', 'Month', 'Week', 'Day', 'Dayofweek', 'Dayofyear', 'Is_month_end', 'Is_month_start',
'Is_quarter_end', 'Is_quarter_start', 'Is_year_end', 'Is_year_start']
class TSDateTimeEncoder(BaseEstimator, TransformerMixin):
def __init__(self, datetime_columns=None, prefix=None, drop=True, time=False, attr=default_date_attr):
self.datetime_columns = listify(datetime_columns)
self.prefix, self.drop, self.time, self.attr = prefix, drop, time ,attr
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if self.time: self.attr = self.attr + ['Hour', 'Minute', 'Second']
if not self.datetime_columns:
self.datetime_columns = X.columns
self.prefixes = []
for dt_column in self.datetime_columns:
self.prefixes.append(re.sub('[Dd]ate$', '', dt_column) if self.prefix is None else self.prefix)
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
for dt_column,prefix in zip(self.datetime_columns,self.prefixes):
make_date(X, dt_column)
field = X[dt_column]
# Pandas removed `dt.week` in v1.1.10
week = field.dt.isocalendar().week.astype(field.dt.day.dtype) if hasattr(field.dt, 'isocalendar') else field.dt.week
for n in self.attr: X[prefix + "_" + n] = getattr(field.dt, n.lower()) if n != 'Week' else week
if self.drop: X = X.drop(self.datetime_columns, axis=1)
return X
import datetime
df = pd.DataFrame()
df.loc[0, "date"] = datetime.datetime.now()
df.loc[1, "date"] = datetime.datetime.now() + pd.Timedelta(1, unit="D")
tfm = TSDateTimeEncoder()
joblib.dump(tfm, "TSDateTimeEncoder.joblib")
tfm = joblib.load("TSDateTimeEncoder.joblib")
tfm.fit_transform(df)
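# Another quick sketch (added for illustration, df here is a new throwaway frame): with time=True
# the encoder also derives Hour, Minute and Second columns from the timestamp.
df = pd.DataFrame({"date": [datetime.datetime.now(), datetime.datetime.now() + pd.Timedelta(1, unit="D")]})
tfm = TSDateTimeEncoder(time=True)
out = tfm.fit_transform(df)
assert {"_Hour", "_Minute", "_Second"}.issubset(out.columns)
out.head()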
#export
class TSMissingnessEncoder(BaseEstimator, TransformerMixin):
def __init__(self, columns=None):
self.columns = listify(columns)
def fit(self, X:pd.DataFrame, y=None, **fit_params):
assert isinstance(X, pd.DataFrame)
if not self.columns: self.columns = X.columns
self.missing_columns = [f"{cn}_missing" for cn in self.columns]
return self
def transform(self, X:pd.DataFrame, y=None, **transform_params):
assert isinstance(X, pd.DataFrame)
X[self.missing_columns] = X[self.columns].isnull().astype(int)
return X
def inverse_transform(self, X):
assert isinstance(X, pd.DataFrame)
X.drop(self.missing_columns, axis=1, inplace=True)
return X
data = np.random.rand(10,3)
data[data > .8] = np.nan
df = pd.DataFrame(data, columns=["a", "b", "c"])
tfm = TSMissingnessEncoder()
tfm.fit(df)
joblib.dump(tfm, "TSMissingnessEncoder.joblib")
tfm = joblib.load("TSMissingnessEncoder.joblib")
df = tfm.transform(df)
df
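# Round-trip sketch (added for illustration): inverse_transform simply drops the generated
# *_missing columns again, leaving the original columns untouched.
df = tfm.inverse_transform(df)
test_eq(list(df.columns), ["a", "b", "c"])
df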
###Output
_____no_output_____
###Markdown
y transforms
###Code
# export
class Preprocessor():
def __init__(self, preprocessor, **kwargs):
self.preprocessor = preprocessor(**kwargs)
def fit(self, o):
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
self.fit_preprocessor = self.preprocessor.fit(o)
return self.fit_preprocessor
def transform(self, o, copy=True):
if type(o) in [float, int]: o = array([o]).reshape(-1,1)
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
def inverse_transform(self, o, copy=True):
o_shape = o.shape
if isinstance(o, pd.Series): o = o.values.reshape(-1,1)
else: o = o.reshape(-1,1)
output = self.fit_preprocessor.inverse_transform(o).reshape(*o_shape)
if isinstance(o, torch.Tensor): return o.new(output)
return output
StandardScaler = partial(sklearn.preprocessing.StandardScaler)
setattr(StandardScaler, '__name__', 'StandardScaler')
RobustScaler = partial(sklearn.preprocessing.RobustScaler)
setattr(RobustScaler, '__name__', 'RobustScaler')
Normalizer = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(-1, 1))
setattr(Normalizer, '__name__', 'Normalizer')
BoxCox = partial(sklearn.preprocessing.PowerTransformer, method='box-cox')
setattr(BoxCox, '__name__', 'BoxCox')
YeoJohnshon = partial(sklearn.preprocessing.PowerTransformer, method='yeo-johnson')
setattr(YeoJohnshon, '__name__', 'YeoJohnshon')
Quantile = partial(sklearn.preprocessing.QuantileTransformer, n_quantiles=1_000, output_distribution='normal', random_state=0)
setattr(Quantile, '__name__', 'Quantile')
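# The partials above are just thin wrappers around sklearn preprocessors, so new ones can be added
# the same way. A minimal sketch (the name MinMax01 is ours, not part of the library):
MinMax01 = partial(sklearn.preprocessing.MinMaxScaler, feature_range=(0, 1))
setattr(MinMax01, '__name__', 'MinMax01')
y_demo = np.random.rand(100)
preprocessor = Preprocessor(MinMax01)
preprocessor.fit(y_demo)
test_close(preprocessor.inverse_transform(preprocessor.transform(y_demo)), y_demo)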
# Standardize
from tsai.data.validation import TimeSplitter
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(StandardScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# RobustScaler
y = random_shuffle(np.random.randn(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(RobustScaler)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# Normalize
y = random_shuffle(np.random.rand(1000) * 3 + .5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Normalizer)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# BoxCox
y = random_shuffle(np.random.rand(1000) * 10 + 5)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(BoxCox)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# YeoJohnshon
y = random_shuffle(np.random.randn(1000) * 10 + 5)
y = np.random.beta(.5, .5, size=1000)
splits = TimeSplitter()(y)
preprocessor = Preprocessor(YeoJohnshon)
preprocessor.fit(y[splits[0]])
y_tfm = preprocessor.transform(y)
test_close(preprocessor.inverse_transform(y_tfm), y)
plt.hist(y, 50, label='ori',)
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
# QuantileTransformer
y = - np.random.beta(1, .5, 10000) * 10
splits = TimeSplitter()(y)
preprocessor = Preprocessor(Quantile)
preprocessor.fit(y[splits[0]])
plt.hist(y, 50, label='ori',)
y_tfm = preprocessor.transform(y)
plt.legend(loc='best')
plt.show()
plt.hist(y_tfm, 50, label='tfm')
plt.legend(loc='best')
plt.show()
test_close(preprocessor.inverse_transform(y_tfm), y, 1e-1)
#export
def ReLabeler(cm):
r"""Changes the labels in a dataset based on a dictionary (class mapping)
Args:
cm = class mapping dictionary
"""
def _relabel(y):
obj = len(set([len(listify(v)) for v in cm.values()])) > 1
keys = cm.keys()
if obj:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y], dtype=object).reshape(*y.shape)
else:
new_cm = {k:v for k,v in zip(keys, [listify(v) for v in cm.values()])}
return np.array([new_cm[yi] if yi in keys else listify(yi) for yi in y]).reshape(*y.shape)
return _relabel
vals = {0:'a', 1:'b', 2:'c', 3:'d', 4:'e'}
y = np.array([vals[i] for i in np.random.randint(0, 5, 20)])
labeler = ReLabeler(dict(a='x', b='x', c='y', d='z', e='z'))
y_new = labeler(y)
test_eq(y.shape, y_new.shape)
y, y_new
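# One extra property worth noting (an illustrative check, not from the original notebook):
# labels missing from the class-mapping dict are passed through unchanged.
labeler = ReLabeler(dict(a='x', b='x'))
assert np.array_equal(labeler(np.array(['a', 'b', 'c'])), np.array(['x', 'x', 'c']))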
#hide
from tsai.imports import create_scripts
from tsai.export import get_nb_name
nb_name = get_nb_name()
create_scripts(nb_name);
###Output
_____no_output_____ |
angiogram_blockage_detection.ipynb | ###Markdown
Object Detection Framework
###Code
# If you forked the repository, you can replace the link.
repo_url = 'https://github.com/shaheer1995/angiogram_detection'
# Number of training steps.
num_steps = 1000 # 200000
# Number of evaluation steps.
num_eval_steps = 50
MODELS_CONFIG = {
'ssd_mobilenet_v2': {
'model_name': 'ssd_mobilenet_v2_coco_2018_03_29',
'pipeline_file': 'ssd_mobilenet_v2_coco.config',
'batch_size': 12
},
'faster_rcnn_inception_v2': {
'model_name': 'faster_rcnn_inception_v2_coco_2018_01_28',
'pipeline_file': 'faster_rcnn_inception_v2_pets.config',
'batch_size': 12
},
'rfcn_resnet101': {
'model_name': 'rfcn_resnet101_coco_2018_01_28',
'pipeline_file': 'rfcn_resnet101_pets.config',
'batch_size': 8
}
}
# Pick the model you want to use
# Select a model in `MODELS_CONFIG`.
selected_model = 'faster_rcnn_inception_v2'
# Name of the object detection model to use.
MODEL = MODELS_CONFIG[selected_model]['model_name']
# Name of the pipeline file in tensorflow object detection API.
pipeline_file = MODELS_CONFIG[selected_model]['pipeline_file']
# Training batch size fits in Colab's Tesla K80 GPU memory for selected model.
batch_size = MODELS_CONFIG[selected_model]['batch_size']
###Output
_____no_output_____
###Markdown
Clone the `object_detection_demo` repository or your fork.
###Code
import os
%cd /content
repo_dir_path = os.path.abspath(os.path.join('.', os.path.basename(repo_url)))
!git clone {repo_url}
%cd {repo_dir_path}
!git pull
%tensorflow_version 1.x
import tensorflow as tf
###Output
TensorFlow 1.x selected.
###Markdown
Install required packages
###Code
%cd /content
!git clone https://github.com/tensorflow/models.git
!apt-get install -qq protobuf-compiler python-pil python-lxml python-tk
!pip install -q Cython contextlib2 pillow lxml matplotlib
!pip install -q pycocotools
%cd /content/models/research
!protoc object_detection/protos/*.proto --python_out=.
import os
os.environ['PYTHONPATH'] = '/content/models/research:/content/models/research/slim:' + os.environ['PYTHONPATH']
!python object_detection/builders/model_builder_test.py
!pip install tf-slim
###Output
Requirement already satisfied: tf-slim in /usr/local/lib/python3.7/dist-packages (1.1.0)
Requirement already satisfied: absl-py>=0.2.2 in /usr/local/lib/python3.7/dist-packages (from tf-slim) (0.12.0)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from absl-py>=0.2.2->tf-slim) (1.15.0)
###Markdown
Prepare `tfrecord` files

Use the following scripts to generate the `tfrecord` files.
```bash
# Convert train folder annotation xml files to a single csv file,
# generate the `label_map.pbtxt` file to `data/` directory as well.
python xml_to_csv.py -i data/images/train -o data/annotations/train_labels.csv -l data/annotations

# Convert test folder annotation xml files to a single csv.
python xml_to_csv.py -i data/images/test -o data/annotations/test_labels.csv

# Generate `train.record`
python generate_tfrecord.py --csv_input=data/annotations/train_labels.csv --output_path=data/annotations/train.record --img_path=data/images/train --label_map data/annotations/label_map.pbtxt

# Generate `test.record`
python generate_tfrecord.py --csv_input=data/annotations/test_labels.csv --output_path=data/annotations/test.record --img_path=data/images/test --label_map data/annotations/label_map.pbtxt
```
###Code
%cd {repo_dir_path}
# Convert train folder annotation xml files to a single csv file,
# generate the `label_map.pbtxt` file to `data/` directory as well.
!python xml_to_csv.py -i data/images/train -o data/annotations/train_labels.csv -l data/annotations
# Convert test folder annotation xml files to a single csv.
!python xml_to_csv.py -i data/images/test -o data/annotations/test_labels.csv
# Generate `train.record`
!python generate_tfrecord.py --csv_input=data/annotations/train_labels.csv --output_path=data/annotations/train.record --img_path=data/images/train --label_map data/annotations/label_map.pbtxt
# Generate `test.record`
!python generate_tfrecord.py --csv_input=data/annotations/test_labels.csv --output_path=data/annotations/test.record --img_path=data/images/test --label_map data/annotations/label_map.pbtxt
test_record_fname = '/content/angiogram_detection/data/annotations/test.record'
train_record_fname = '/content/angiogram_detection/data/annotations/train.record'
label_map_pbtxt_fname = '/content/angiogram_detection/data/annotations/label_map.pbtxt'
###Output
_____no_output_____
###Markdown
Download base model
###Code
%cd /content/models/research
import os
import shutil
import glob
import urllib.request
import tarfile
MODEL_FILE = MODEL + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
DEST_DIR = '/content/models/research/pretrained_model'
if not (os.path.exists(MODEL_FILE)):
urllib.request.urlretrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar = tarfile.open(MODEL_FILE)
tar.extractall()
tar.close()
os.remove(MODEL_FILE)
if (os.path.exists(DEST_DIR)):
shutil.rmtree(DEST_DIR)
os.rename(MODEL, DEST_DIR)
!echo {DEST_DIR}
!ls -alh {DEST_DIR}
fine_tune_checkpoint = os.path.join(DEST_DIR, "model.ckpt")
fine_tune_checkpoint
###Output
_____no_output_____
###Markdown
Configuring a Training Pipeline
###Code
import os
pipeline_fname = os.path.join('/content/models/research/object_detection/samples/configs/', pipeline_file)
assert os.path.isfile(pipeline_fname), '`{}` not exist'.format(pipeline_fname)
def get_num_classes(pbtxt_fname):
from object_detection.utils import label_map_util
label_map = label_map_util.load_labelmap(pbtxt_fname)
categories = label_map_util.convert_label_map_to_categories(
label_map, max_num_classes=90, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
return len(category_index.keys())
import re
num_classes = get_num_classes(label_map_pbtxt_fname)
with open(pipeline_fname) as f:
s = f.read()
with open(pipeline_fname, 'w') as f:
# fine_tune_checkpoint
s = re.sub('fine_tune_checkpoint: ".*?"',
'fine_tune_checkpoint: "{}"'.format(fine_tune_checkpoint), s)
# tfrecord files train and test.
s = re.sub(
'(input_path: ".*?)(train.record)(.*?")', 'input_path: "{}"'.format(train_record_fname), s)
s = re.sub(
'(input_path: ".*?)(val.record)(.*?")', 'input_path: "{}"'.format(test_record_fname), s)
# label_map_path
s = re.sub(
'label_map_path: ".*?"', 'label_map_path: "{}"'.format(label_map_pbtxt_fname), s)
# Set training batch_size.
s = re.sub('batch_size: [0-9]+',
'batch_size: {}'.format(batch_size), s)
# Set training steps, num_steps
s = re.sub('num_steps: [0-9]+',
'num_steps: {}'.format(num_steps), s)
# Set number of classes num_classes.
s = re.sub('num_classes: [0-9]+',
'num_classes: {}'.format(num_classes), s)
f.write(s)
!cat {pipeline_fname}
!pwd
model_dir = 'training/'
# Optionally remove content in output model directory to fresh start.
!rm -rf {model_dir}
os.makedirs(model_dir, exist_ok=True)
###Output
_____no_output_____
###Markdown
Run Tensorboard(Optional)
###Code
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip -o ngrok-stable-linux-amd64.zip
LOG_DIR = model_dir
get_ipython().system_raw(
'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
.format(LOG_DIR)
)
get_ipython().system_raw('./ngrok http 6006 &')
###Output
_____no_output_____
###Markdown
Get Tensorboard link
###Code
! curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
!pip install lvis
###Output
Collecting lvis
Downloading lvis-0.5.3-py3-none-any.whl (14 kB)
Requirement already satisfied: six>=1.12.0 in /usr/local/lib/python3.7/dist-packages (from lvis) (1.15.0)
Requirement already satisfied: opencv-python>=4.1.0.25 in /usr/local/lib/python3.7/dist-packages (from lvis) (4.1.2.30)
Requirement already satisfied: python-dateutil>=2.8.0 in /usr/local/lib/python3.7/dist-packages (from lvis) (2.8.1)
Requirement already satisfied: pyparsing>=2.4.0 in /usr/local/lib/python3.7/dist-packages (from lvis) (2.4.7)
Requirement already satisfied: numpy>=1.18.2 in /usr/local/lib/python3.7/dist-packages (from lvis) (1.19.5)
Requirement already satisfied: matplotlib>=3.1.1 in /usr/local/lib/python3.7/dist-packages (from lvis) (3.2.2)
Requirement already satisfied: kiwisolver>=1.1.0 in /usr/local/lib/python3.7/dist-packages (from lvis) (1.3.1)
Requirement already satisfied: Cython>=0.29.12 in /usr/local/lib/python3.7/dist-packages (from lvis) (0.29.23)
Requirement already satisfied: cycler>=0.10.0 in /usr/local/lib/python3.7/dist-packages (from lvis) (0.10.0)
Installing collected packages: lvis
Successfully installed lvis-0.5.3
###Markdown
Train the model
###Code
!python /content/models/research/object_detection/model_main.py \
--pipeline_config_path={pipeline_fname} \
--model_dir={model_dir} \
--alsologtostderr \
--num_train_steps={num_steps} \
--num_eval_steps={num_eval_steps}
!ls {model_dir}
###Output
checkpoint model.ckpt-1000.meta
eval_0 model.ckpt-283.data-00000-of-00001
events.out.tfevents.1627472117.ba3bee89f1cf model.ckpt-283.index
export model.ckpt-283.meta
graph.pbtxt model.ckpt-576.data-00000-of-00001
model.ckpt-0.data-00000-of-00001 model.ckpt-576.index
model.ckpt-0.index model.ckpt-576.meta
model.ckpt-0.meta model.ckpt-871.data-00000-of-00001
model.ckpt-1000.data-00000-of-00001 model.ckpt-871.index
model.ckpt-1000.index model.ckpt-871.meta
###Markdown
Exporting a Trained Inference Graph

Once your training job is complete, you need to extract the newly trained inference graph, which will be later used to perform the object detection. This can be done as follows:
###Code
import re
import numpy as np
output_directory = './fine_tuned_final_model'
lst = os.listdir(model_dir)
lst = [l for l in lst if 'model.ckpt-' in l and '.meta' in l]
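# pull the step number out of each "model.ckpt-<step>.meta" filename and keep the newest checkpoint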
steps=np.array([int(re.findall('\d+', l)[0]) for l in lst])
last_model = lst[steps.argmax()].replace('.meta', '')
last_model_path = os.path.join(model_dir, last_model)
print(last_model_path)
!python /content/models/research/object_detection/export_inference_graph.py \
--input_type=image_tensor \
--pipeline_config_path={pipeline_fname} \
--output_directory={output_directory} \
--trained_checkpoint_prefix={last_model_path}
!ls {output_directory}
###Output
checkpoint model.ckpt.index saved_model
frozen_inference_graph.pb model.ckpt.meta
model.ckpt.data-00000-of-00001 pipeline.config
###Markdown
Download the model `.pb` file
###Code
import os
pb_fname = os.path.join(os.path.abspath(output_directory), "frozen_inference_graph.pb")
assert os.path.isfile(pb_fname), '`{}` not exist'.format(pb_fname)
!ls -alh {pb_fname}
###Output
-rw-r--r-- 1 root root 50M Jul 28 12:23 /content/models/research/fine_tuned_final_model/frozen_inference_graph.pb
###Markdown
Option 1: upload the `.pb` file to your Google Drive

Then download it from your Google Drive to your local file system. During this step, you will be prompted to enter the token.
###Code
# Install the PyDrive wrapper & import libraries.
# This only needs to be done once in a notebook.
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once in a notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
fname = os.path.basename(pb_fname)
# Create & upload a text file.
uploaded = drive.CreateFile({'title': fname})
uploaded.SetContentFile(pb_fname)
uploaded.Upload()
print('Uploaded file with ID {}'.format(uploaded.get('id')))
###Output
Uploaded file with ID 16IbGG2xcfrNbp5wlXWGJXfmIev_dBUXS
###Markdown
Option 2: Download the `.pb` file directly to your local file system

This method may not be stable when downloading large files like the model `.pb` file. Try **option 1** instead if this does not work.
###Code
from google.colab import files
files.download(pb_fname)
###Output
_____no_output_____
###Markdown
Download the `label_map.pbtxt` file
###Code
from google.colab import files
files.download(label_map_pbtxt_fname)
###Output
_____no_output_____
###Markdown
Download the modified pipeline file

This is useful if you plan to use the OpenVINO toolkit to convert the `.pb` file and run inference faster on Intel hardware (CPU/GPU, Movidius, etc.).
###Code
files.download(pipeline_fname)
###Output
_____no_output_____
###Markdown
Run inference test

Test with images in the repository's `object_detection_demo/test` directory.
###Code
import os
import glob
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = pb_fname
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = label_map_pbtxt_fname
# If you want to test the code with your images, just add images files to the PATH_TO_TEST_IMAGES_DIR.
PATH_TO_TEST_IMAGES_DIR = os.path.join(repo_dir_path, "test")
assert os.path.isfile(pb_fname)
assert os.path.isfile(PATH_TO_LABELS)
TEST_IMAGE_PATHS = glob.glob(os.path.join(PATH_TO_TEST_IMAGES_DIR, "*.*"))
assert len(TEST_IMAGE_PATHS) > 0, 'No image found in `{}`.'.format(PATH_TO_TEST_IMAGES_DIR)
print(TEST_IMAGE_PATHS)
pip install opencv-python
%cd /content/models/research/object_detection
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import cv2
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops
# This is needed to display the images.
%matplotlib inline
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(
label_map, max_num_classes=num_classes, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
channel_dict = {'L':1, 'RGB':3} # 'L' for Grayscale, 'RGB' : for 3 channel images
return np.array(image.getdata()).reshape(
(im_height, im_width, channel_dict[image.mode])).astype(np.uint8)
# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)
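# Note: drawBoundingBoxes below draws on the globals image_np, im_width and im_height,
# which are assigned inside the inference loop further down.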
def drawBoundingBoxes(xmin,ymin,xmax,ymax,r,g,b,t):
x1,y1,x2,y2 = np.int64(xmin * im_width), np.int64(ymin * im_height), np.int64(xmax * im_width), np.int64(ymax * im_height)
cv2.rectangle(image_np, (x1, y1), (x2, y2), (r, g, b), t)
def run_inference_for_single_image(image, graph):
with graph.as_default():
with tf.Session() as sess:
# Get handles to input and output tensors
ops = tf.get_default_graph().get_operations()
all_tensor_names = {
output.name for op in ops for output in op.outputs}
tensor_dict = {}
for key in [
'num_detections', 'detection_boxes', 'detection_scores',
'detection_classes', 'detection_masks'
]:
tensor_name = key + ':0'
if tensor_name in all_tensor_names:
tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
tensor_name)
if 'detection_masks' in tensor_dict:
# The following processing is only for single image
detection_boxes = tf.squeeze(
tensor_dict['detection_boxes'], [0])
detection_masks = tf.squeeze(
tensor_dict['detection_masks'], [0])
# Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
real_num_detection = tf.cast(
tensor_dict['num_detections'][0], tf.int32)
detection_boxes = tf.slice(detection_boxes, [0, 0], [
real_num_detection, -1])
detection_masks = tf.slice(detection_masks, [0, 0, 0], [
real_num_detection, -1, -1])
detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
detection_masks, detection_boxes, image.shape[0], image.shape[1])
detection_masks_reframed = tf.cast(
tf.greater(detection_masks_reframed, 0.5), tf.uint8)
# Follow the convention by adding back the batch dimension
tensor_dict['detection_masks'] = tf.expand_dims(
detection_masks_reframed, 0)
image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')
# Run inference
output_dict = sess.run(tensor_dict,
feed_dict={image_tensor: np.expand_dims(image,0)})
# all outputs are float32 numpy arrays, so convert types as appropriate
output_dict['num_detections'] = int(
output_dict['num_detections'][0])
output_dict['detection_classes'] = output_dict[
'detection_classes'][0].astype(np.uint8)
output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
output_dict['detection_scores'] = output_dict['detection_scores'][0]
if 'detection_masks' in output_dict:
output_dict['detection_masks'] = output_dict['detection_masks'][0]
return output_dict
#
for image_path in TEST_IMAGE_PATHS:
image = Image.open(image_path)
# the array based representation of the image will be used later in order to prepare the
# result image with boxes and labels on it.
image_np = load_image_into_numpy_array(image)
image_to_crop = load_image_into_numpy_array(image)
if image_np.shape[2] != 3:
image_np = np.broadcast_to(image_np, (image_np.shape[0], image_np.shape[1], 3)).copy() # Duplicating the Content
## adding Zeros to other Channels
## This adds Red Color stuff in background -- not recommended
# z = np.zeros(image_np.shape[:-1] + (2,), dtype=image_np.dtype)
# image_np = np.concatenate((image_np, z), axis=-1)
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
output_dict = run_inference_for_single_image(image_np, detection_graph)
    # Visualization of the results of a detection.
#Obtaining detection boxes, classes and detection scores
boxes = np.squeeze(output_dict['detection_boxes'])
scores = np.squeeze(output_dict['detection_scores'])
classes = np.squeeze(output_dict['detection_classes'])
#set a min thresh score
########
min_score_thresh = 0.60
########
#Filtering the bounding boxes
bboxes = boxes[scores > min_score_thresh]
d_classes = classes[scores > min_score_thresh]
block_boxes = bboxes[d_classes == 1]
#get image size
im_width, im_height = image.size
final_box = []
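    # each detection box comes in normalized [ymin, xmin, ymax, xmax] order, so it is scaled by the image size below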
for box in bboxes:
ymin, xmin, ymax, xmax = box
final_box.append([xmin * im_width, xmax * im_width, ymin * im_height, ymax * im_height])
#print(final_box)
b_box = []
for box in block_boxes:
ymin, xmin, ymax, xmax = box
b_box.append([xmin * im_width, xmax * im_width, ymin * im_height, ymax * im_height])
drawBoundingBoxes(xmin,ymin,xmax,ymax,255,100,25,2)
for box in b_box:
ymin, xmin, ymax, xmax = box
y,h,x,w = np.int64(ymin), np.int64(ymax),np.int64(xmin), np.int64(xmax)
print(y,h,w,x)
crop_img = image_to_crop[h:w,y:x]
plt.figure(figsize=(3,3))
plt.imshow(crop_img)
#print(category_index)
#print(d_classes)
#print(m_box)
plt.figure(figsize=IMAGE_SIZE)
plt.imshow(image_np)
plt.xticks([])
plt.yticks([])
# dict = {'type': s_type, 'id':s_id, 'milepost':milepost}
# df = pd.DataFrame(dict)
# print(df)
###Output
_____no_output_____ |
tutorials/W1D2_ModelingPractice/hyo_W1D2_Tutorial2.ipynb | ###Markdown
Neuromatch Academy: Week 1, Day 2, Tutorial 2

Tutorial objectives

We are investigating a simple phenomenon, working through the 10 steps of modeling ([Blohm et al., 2019](https://doi.org/10.1523/ENEURO.0352-19.2019)) in two notebooks:

**Framing the question**
1. finding a phenomenon and a question to ask about it
2. understanding the state of the art
3. determining the basic ingredients
4. formulating specific, mathematically defined hypotheses

**Implementing the model**
5. selecting the toolkit
6. planning the model
7. implementing the model

**Model testing**
8. completing the model
9. testing and evaluating the model

**Publishing**
10. publishing models

We did steps 1-5 in Tutorial 1 and will cover steps 6-10 in Tutorial 2 (this notebook).

Utilities Setup and Convenience Functions

Please run the following **3** chunks to have functions and data available.
###Code
#@title Utilities and setup
# set up the environment for this tutorial
import time # import time
import numpy as np # import numpy
import scipy as sp # import scipy
from scipy.stats import gamma # import gamma distribution
import math # import basic math functions
import random # import basic random number generator functions
import matplotlib.pyplot as plt # import matplotlib
from IPython import display
fig_w, fig_h = (12, 8)
plt.rcParams.update({'figure.figsize': (fig_w, fig_h)})
plt.style.use('ggplot')
%matplotlib inline
#%config InlineBackend.figure_format = 'retina'
from scipy.signal import medfilt
# make
#@title Convenience functions: Plotting and Filtering
# define some convenience functions to be used later
def my_moving_window(x, window=3, FUN=np.mean):
'''
Calculates a moving estimate for a signal
Args:
x (numpy.ndarray): a vector array of size N
window (int): size of the window, must be a positive integer
FUN (function): the function to apply to the samples in the window
Returns:
(numpy.ndarray): a vector array of size N, containing the moving average
of x, calculated with a window of size window
There are smarter and faster solutions (e.g. using convolution) but this
function shows what the output really means. This function skips NaNs, and
should not be susceptible to edge effects: it will simply use
all the available samples, which means that close to the edges of the
signal or close to NaNs, the output will just be based on fewer samples. By
default, this function will apply a mean to the samples in the window, but
this can be changed to be a max/min/median or other function that returns a
single numeric value based on a sequence of values.
'''
# if data is a matrix, apply filter to each row:
if len(x.shape) == 2:
output = np.zeros(x.shape)
for rown in range(x.shape[0]):
output[rown,:] = my_moving_window(x[rown,:],window=window,FUN=FUN)
return output
# make output array of the same size as x:
output = np.zeros(x.size)
# loop through the signal in x
for samp_i in range(x.size):
values = []
# loop through the window:
for wind_i in range(int(-window), 1):
if ((samp_i+wind_i) < 0) or (samp_i+wind_i) > (x.size - 1):
# out of range
continue
# sample is in range and not nan, use it:
if not(np.isnan(x[samp_i+wind_i])):
values += [x[samp_i+wind_i]]
# calculate the mean in the window for this point in the output:
output[samp_i] = FUN(values)
return output
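# A rough convolution-based alternative (added for comparison, not used elsewhere): it averages the
# same trailing window of window+1 samples for 1-D signals, but unlike my_moving_window it does not
# skip NaNs and it implicitly zero-pads at the start, so treat it only as a sketch of the faster approach.
def my_moving_window_conv(x, window=3):
    kernel = np.ones(window + 1) / (window + 1)           # trailing window of window+1 samples
    return np.convolve(x, kernel, mode='full')[:x.size]   # keep only the causal part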
def my_plot_percepts(datasets=None, plotconditions=False):
if isinstance(datasets,dict):
# try to plot the datasets
# they should be named...
# 'expectations', 'judgments', 'predictions'
fig = plt.figure(figsize=(8, 8)) # set aspect ratio = 1? not really
plt.ylabel('perceived self motion [m/s]')
plt.xlabel('perceived world motion [m/s]')
plt.title('perceived velocities')
# loop through the entries in datasets
# plot them in the appropriate way
for k in datasets.keys():
if k == 'expectations':
expect = datasets[k]
plt.scatter(expect['world'],expect['self'],marker='*',color='xkcd:green',label='my expectations')
elif k == 'judgments':
judgments = datasets[k]
for condition in np.unique(judgments[:,0]):
c_idx = np.where(judgments[:,0] == condition)[0]
cond_self_motion = judgments[c_idx[0],1]
cond_world_motion = judgments[c_idx[0],2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'world-motion condition judgments'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'self-motion condition judgments'
else:
c_label = 'condition [%d] judgments'%condition
plt.scatter(judgments[c_idx,3],judgments[c_idx,4], label=c_label, alpha=0.2)
elif k == 'predictions':
predictions = datasets[k]
for condition in np.unique(predictions[:,0]):
c_idx = np.where(predictions[:,0] == condition)[0]
cond_self_motion = predictions[c_idx[0],1]
cond_world_motion = predictions[c_idx[0],2]
if cond_world_motion == -1 and cond_self_motion == 0:
c_label = 'predicted world-motion condition'
elif cond_world_motion == 0 and cond_self_motion == 1:
c_label = 'predicted self-motion condition'
else:
c_label = 'condition [%d] prediction'%condition
plt.scatter(predictions[c_idx,4],predictions[c_idx,3], marker='x', label=c_label)
else:
print("datasets keys should be 'hypothesis', 'judgments' and 'predictions'")
if plotconditions:
# this code is simplified but only works for the dataset we have:
plt.scatter([1],[0],marker='<',facecolor='none',edgecolor='xkcd:black',linewidths=2,label='world-motion stimulus',s=80)
plt.scatter([0],[1],marker='>',facecolor='none',edgecolor='xkcd:black',linewidths=2,label='self-motion stimulus',s=80)
plt.legend(facecolor='xkcd:white')
plt.show()
else:
if datasets is not None:
print('datasets argument should be a dict')
raise TypeError
def my_plot_motion_signals():
dt = 1/10
a = gamma.pdf( np.arange(0,10,dt), 2.5, 0 )
t = np.arange(0,10,dt)
v = np.cumsum(a*dt)
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharex='col', sharey='row', figsize=(14,6))
fig.suptitle('Sensory ground truth')
ax1.set_title('world-motion condition')
ax1.plot(t,-v,label='visual [$m/s$]')
ax1.plot(t,np.zeros(a.size),label='vestibular [$m/s^2$]')
ax1.set_xlabel('time [s]')
ax1.set_ylabel('motion')
ax1.legend(facecolor='xkcd:white')
ax2.set_title('self-motion condition')
ax2.plot(t,-v,label='visual [$m/s$]')
ax2.plot(t,a,label='vestibular [$m/s^2$]')
ax2.set_xlabel('time [s]')
ax2.set_ylabel('motion')
ax2.legend(facecolor='xkcd:white')
plt.show()
def my_plot_sensorysignals(judgments, opticflow, vestibular, returnaxes=False, addaverages=False):
wm_idx = np.where(judgments[:,0] == 0)
sm_idx = np.where(judgments[:,0] == 1)
opticflow = opticflow.transpose()
wm_opticflow = np.squeeze(opticflow[:,wm_idx])
sm_opticflow = np.squeeze(opticflow[:,sm_idx])
vestibular = vestibular.transpose()
wm_vestibular = np.squeeze(vestibular[:,wm_idx])
sm_vestibular = np.squeeze(vestibular[:,sm_idx])
X = np.arange(0,10,.1)
fig, my_axes = plt.subplots(nrows=2, ncols=2, sharex='col', sharey='row', figsize=(15,10))
fig.suptitle('Sensory signals')
my_axes[0][0].plot(X,wm_opticflow, color='xkcd:light red', alpha=0.1)
my_axes[0][0].plot([0,10], [0,0], ':', color='xkcd:black')
if addaverages:
my_axes[0][0].plot(X,np.average(wm_opticflow, axis=1), color='xkcd:red', alpha=1)
my_axes[0][0].set_title('world-motion optic flow')
my_axes[0][0].set_ylabel('[motion]')
my_axes[0][1].plot(X,sm_opticflow, color='xkcd:azure', alpha=0.1)
my_axes[0][1].plot([0,10], [0,0], ':', color='xkcd:black')
if addaverages:
my_axes[0][1].plot(X,np.average(sm_opticflow, axis=1), color='xkcd:blue', alpha=1)
my_axes[0][1].set_title('self-motion optic flow')
my_axes[1][0].plot(X,wm_vestibular, color='xkcd:light red', alpha=0.1)
my_axes[1][0].plot([0,10], [0,0], ':', color='xkcd:black')
if addaverages:
my_axes[1][0].plot(X,np.average(wm_vestibular, axis=1), color='xkcd:red', alpha=1)
my_axes[1][0].set_title('world-motion vestibular signal')
my_axes[1][0].set_xlabel('time [s]')
my_axes[1][0].set_ylabel('[motion]')
my_axes[1][1].plot(X,sm_vestibular, color='xkcd:azure', alpha=0.1)
my_axes[1][1].plot([0,10], [0,0], ':', color='xkcd:black')
if addaverages:
my_axes[1][1].plot(X,np.average(sm_vestibular, axis=1), color='xkcd:blue', alpha=1)
my_axes[1][1].set_title('self-motion vestibular signal')
my_axes[1][1].set_xlabel('time [s]')
if returnaxes:
return my_axes
else:
plt.show()
def my_plot_thresholds(thresholds, world_prop, self_prop, prop_correct):
plt.figure(figsize=(12,8))
plt.title('threshold effects')
plt.plot([min(thresholds),max(thresholds)],[0,0],':',color='xkcd:black')
plt.plot([min(thresholds),max(thresholds)],[0.5,0.5],':',color='xkcd:black')
plt.plot([min(thresholds),max(thresholds)],[1,1],':',color='xkcd:black')
plt.plot(thresholds, world_prop, label='world motion')
plt.plot(thresholds, self_prop, label='self motion')
plt.plot(thresholds, prop_correct, color='xkcd:purple', label='correct classification')
plt.xlabel('threshold')
plt.ylabel('proportion correct or classified as self motion')
plt.legend(facecolor='xkcd:white')
plt.show()
def my_plot_predictions_data(judgments, predictions):
conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2])))
veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4]))
velpredict = np.concatenate((predictions[:,3],predictions[:,4]))
# self:
conditions_self = np.abs(judgments[:,1])
veljudgmnt_self = judgments[:,3]
velpredict_self = predictions[:,3]
# world:
conditions_world = np.abs(judgments[:,2])
veljudgmnt_world = judgments[:,4]
velpredict_world = predictions[:,4]
fig, [ax1, ax2] = plt.subplots(nrows=1, ncols=2, sharey='row', figsize=(12,5))
ax1.scatter(veljudgmnt_self,velpredict_self, alpha=0.2)
ax1.plot([0,1],[0,1],':',color='xkcd:black')
ax1.set_title('self-motion judgments')
ax1.set_xlabel('observed')
ax1.set_ylabel('predicted')
ax2.scatter(veljudgmnt_world,velpredict_world, alpha=0.2)
ax2.plot([0,1],[0,1],':',color='xkcd:black')
ax2.set_title('world-motion judgments')
ax2.set_xlabel('observed')
ax2.set_ylabel('predicted')
plt.show()
#@title Data generation code (needs to go on OSF and deleted here)
def my_simulate_data(repetitions=100, conditions=[(0,-1),(+1,0)] ):
"""
Generate simulated data for this tutorial. You do not need to run this
yourself.
Args:
      repetitions: (int) number of repetitions of each condition (default: 100)
      conditions: list of 2-tuples of floats, indicating the self velocity and
        world velocity in each condition (default: [(0,-1),(+1,0)], which returns
        data that is good for exploration, but can be flexibly
        extended)
The total number of trials used (ntrials) is equal to:
repetitions * len(conditions)
Returns:
dict with three entries:
'judgments': ntrials * 5 matrix
'opticflow': ntrials * 100 matrix
'vestibular': ntrials * 100 matrix
  The default settings would result in data where the first 100 trials reflect a
  situation where the world (other train) moves in one direction, supposedly
  at 1 m/s (perhaps to the left: -1) while the participant does not move at
  all (0), and 100 trials from a second condition, where the world does not
move, while the participant moves with 1 m/s in the opposite direction from
where the world is moving in the first condition (0,+1). The optic flow
should be the same, but the vestibular input is not.
"""
# reproducible output
np.random.seed(1937)
# set up some variables:
ntrials = repetitions * len(conditions)
# the following arrays will contain the simulated data:
judgments = np.empty(shape=(ntrials,5))
opticflow = np.empty(shape=(ntrials,100))
vestibular = np.empty(shape=(ntrials,100))
# acceleration:
a = gamma.pdf(np.arange(0,10,.1), 2.5, 0 )
# divide by 10 so that velocity scales from 0 to 1 (m/s)
# max acceleration ~ .308 m/s^2
# not realistic! should be about 1/10 of that
# velocity:
v = np.cumsum(a*.1)
# position: (not necessary)
#x = np.cumsum(v)
#################################
# REMOVE ARBITRARY SCALING & CORRECT NOISE PARAMETERS
vest_amp = 1
optf_amp = 1
# we start at the first trial:
trialN = 0
# we start with only a single velocity, but it should be possible to extend this
for conditionno in range(len(conditions)):
condition = conditions[conditionno]
for repetition in range(repetitions):
#
# generate optic flow signal
OF = v * np.diff(condition) # optic flow: difference between self & world motion
OF = (OF * optf_amp) # fairly large spike range
OF = OF + (np.random.randn(len(OF)) * .1) # adding noise
# generate vestibular signal
VS = a * condition[0] # vestibular signal: only self motion
VS = (VS * vest_amp) # less range
VS = VS + (np.random.randn(len(VS)) * 1.) # acceleration is a smaller signal, what is a good noise level?
# store in matrices, corrected for sign
#opticflow[trialN,:] = OF * -1 if (np.sign(np.diff(condition)) < 0) else OF
#vestibular[trialN,:] = VS * -1 if (np.sign(condition[1]) < 0) else VS
opticflow[trialN,:], vestibular[trialN,:] = OF, VS
#########################################################
# store conditions in judgments matrix:
judgments[trialN,0:3] = [ conditionno, condition[0], condition[1] ]
# vestibular SD: 1.0916052957046194 and 0.9112684509277528
# visual SD: 0.10228834313079663 and 0.10975472557444346
# generate judgments:
if (abs(np.average(np.cumsum(medfilt(VS/vest_amp,5)*.1)[70:90])) < 1):
###########################
# NO self motion detected
###########################
selfmotion_weights = np.array([.01,.01]) # there should be low/no self motion
worldmotion_weights = np.array([.01,.99]) # world motion is dictated by optic flow
else:
########################
# self motion DETECTED
########################
#if (abs(np.average(np.cumsum(medfilt(VS/vest_amp,15)*.1)[70:90]) - np.average(medfilt(OF,15)[70:90])) < 5):
if True:
####################
# explain all self motion by optic flow
selfmotion_weights = np.array([.01,.99]) # there should be lots of self motion, but determined by optic flow
worldmotion_weights = np.array([.01,.01]) # very low world motion?
else:
# we use both optic flow and vestibular info to explain both
selfmotion_weights = np.array([ 1, 0]) # motion, but determined by vestibular signal
worldmotion_weights = np.array([ 1, 1]) # very low world motion?
#
integrated_signals = np.array([
np.average( np.cumsum(medfilt(VS/vest_amp,15))[90:100]*.1 ),
np.average((medfilt(OF/optf_amp,15))[90:100])
])
selfmotion = np.sum(integrated_signals * selfmotion_weights)
worldmotion = np.sum(integrated_signals * worldmotion_weights)
#print(worldmotion,selfmotion)
judgments[trialN,3] = abs(selfmotion)
judgments[trialN,4] = abs(worldmotion)
# this ends the trial loop, so we increment the counter:
trialN += 1
return {'judgments':judgments,
'opticflow':opticflow,
'vestibular':vestibular}
simulated_data = my_simulate_data()
judgments = simulated_data['judgments']
opticflow = simulated_data['opticflow']
vestibular = simulated_data['vestibular']
###Output
_____no_output_____
###Markdown
Micro-tutorial 6 - planning the model
###Code
#@title Video: Planning the model
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='daEtkVporBE', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=daEtkVporBE
###Markdown
**Goal:** Identify the key components of the model and how they work together.

Our goal all along has been to model our perceptual estimates of sensory data. Now that we have some idea of what we want to do, we need to line up the components of the model: what are the input and output? Which computations are done and in what order? The figure below shows a generic model we will use to guide our code construction. Our model will have:
* **inputs**: the values the system has available - for this tutorial the sensory information in a trial. We want to gather these together and plan how to process them.
* **parameters**: unless we are lucky, our functions will have unknown parameters - we want to identify these and plan for them.
* **outputs**: these are the predictions our model will make - for this tutorial these are the perceptual judgments on each trial. Ideally these are directly comparable to our data.
* **Model functions**: A set of functions that perform the hypothesized computations.

>Using Python (with Numpy and Scipy) we will define a set of functions that take our data and some parameters as input, can run our model, and output a prediction for the judgment data.

Recap of what we've accomplished so far: to model perceptual estimates from our sensory data, we need to
1. _integrate_ to ensure sensory information is in appropriate units
2. _reduce noise and set timescale_ by filtering
3. _threshold_ to model detection

Remember the kind of operations we identified:
* integration: `np.cumsum()`
* filtering: `my_moving_window()`
* threshold: `if` with a comparison (`>` or `<`) and `else`

We will collect all the components we've developed and design the code by:
1. **identifying the key functions** we need
2. **sketching the operations** needed in each.

**_Planning our model:_**

We know what we want the model to do, but we need to plan and organize the model into functions and operations. We're providing a draft of the first function. For each of the two other code chunks, write mostly comments and help text first. This should put into words what role each of the functions plays in the overall model, implementing one of the steps decided above.
_______
Below is the main function with a detailed explanation of what the function is supposed to do: what input is expected, and what output will be generated. The code is not complete, and only returns nans for now. However, this outlines how most model code works: it gets some measured data (the sensory signals) and a set of parameters as input, and as output returns a prediction on other measured data (the velocity judgments). The goal of this function is to define the top level of a simulation model which:
* receives all input
* loops through the cases
* calls functions that compute predicted values for each case
* outputs the predictions

**TD 6.1**: Complete main model function

The function `my_train_illusion_model()` below should call one other function: `my_perceived_motion()`. What input do you think this function should get?

**Complete main model function**
###Code
def my_train_illusion_model(sensorydata, params):
'''
Generate output predictions of perceived self-motion and perceived world-motion velocity
based on input visual and vestibular signals.
Args (Input variables passed into function):
sensorydata: (dict) dictionary with two named entries:
opticflow: (numpy.ndarray of float) NxM array with N trials on rows
and M visual signal samples in columns
vestibular: (numpy.ndarray of float) NxM array with N trials on rows
and M vestibular signal samples in columns
params: (dict) dictionary with named entries:
threshold: (float) vestibular threshold for credit assignment
filterwindow: (list of int) determines the strength of filtering for
the visual and vestibular signals, respectively
integrate (bool): whether to integrate the vestibular signals, will
be set to True if absent
FUN (function): function used in the filter, will be set to
np.mean if absent
samplingrate (float): the number of samples per second in the
sensory data, will be set to 10 if absent
Returns:
dict with two entries:
selfmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived self motion
worldmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived world motion
'''
# sanitize input a little
if not('FUN' in params.keys()):
params['FUN'] = np.mean
if not('integrate' in params.keys()):
params['integrate'] = True
if not('samplingrate' in params.keys()):
params['samplingrate'] = 10
# number of trials:
ntrials = sensorydata['opticflow'].shape[0]
# set up variables to collect output
selfmotion = np.empty(ntrials)
worldmotion = np.empty(ntrials)
# loop through trials?
for trialN in range(ntrials):
#these are our sensory variables (inputs)
vis = sensorydata['opticflow'][trialN,:]
ves = sensorydata['vestibular'][trialN,:]
########################################################
# generate output predicted perception:
########################################################
#our inputs our vis, ves, and params
selfmotion[trialN], worldmotion[trialN] = [np.nan, np.nan]
########################################################
# replace above with
# selfmotion[trialN], worldmotion[trialN] = my_perceived_motion( ???, ???, params=params)
# and fill in question marks
########################################################
# comment this out when you've filled
raise NotImplementedError("Student excercise: generate predictions")
return {'selfmotion':selfmotion, 'worldmotion':worldmotion}
# uncomment the following lines to run the main model function:
## here is a mock version of my_perceived motion.
## so you can test my_train_illusion_model()
#def my_perceived_motion(*args, **kwargs):
#return np.random.rand(2)
##let's look at the predictions we generated for two sample trials (0,100)
##we should get a 1x2 vector of self-motion prediction and another for world-motion
#sensorydata={'opticflow':opticflow[[0,100],:], 'vestibular':vestibular[[0,100],:]}
#params={'threshold':0.33, 'filterwindow':[100,50]}
#my_train_illusion_model(sensorydata=sensorydata, params=params)
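# Illustrative sketch (added for clarity, not part of the exercise) of the three planned operations
# on a toy vestibular trace; the names and the 0.33 threshold below are placeholders only.
_ves_demo = vestibular[0, :]                                      # one trial of vestibular acceleration
_vel_demo = np.cumsum(_ves_demo * .1)                             # 1. integrate acceleration into velocity (dt = 0.1 s)
_vel_demo = my_moving_window(_vel_demo, window=50, FUN=np.mean)   # 2. filter to reduce noise
_selfmotion_detected = np.max(np.abs(_vel_demo)) > 0.33           # 3. threshold to model detection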
# to_remove solution
def my_train_illusion_model(sensorydata, params):
'''
Generate predictions of perceived self motion and perceived world motion
based on the visual and vestibular signals.
Args:
sensorydata: (dict) dictionary with two named entries:
        opticflow: (numpy.ndarray of float) NxM array with N trials on rows
and M visual signal samples in columns
vestibular: (numpy.ndarray of float) NxM array with N trials on rows and
M vestibular signal samples in columns
params: (dict) dictionary with named entries:
threshold: (float) vestibular threshold for credit assignment
filterwindow: (list of int) determines the strength of filtering for
the visual and vestibular signals, respectively
integrate (bool): whether to integrate the vestibular signals, will
be set to True if absent
FUN (function): function used in the filter, will be set to
np.mean if absent
samplingrate (float): the number of samples per second in the
sensory data, will be set to 10 if absent
Returns:
dict with two entries:
selfmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived self motion
worldmotion: (numpy.ndarray) vector array of length N, with predictions
of perceived world motion
'''
# sanitize input a little
if not('FUN' in params.keys()):
params['FUN'] = np.mean
if not('integrate' in params.keys()):
params['integrate'] = True
if not('samplingrate' in params.keys()):
params['samplingrate'] = 10
# number of trials:
ntrials = sensorydata['opticflow'].shape[0]
# set up variables to collect output
selfmotion = np.empty(ntrials)
worldmotion = np.empty(ntrials)
# loop through trials
for trialN in range(ntrials):
vis = sensorydata['opticflow'][trialN,:]
ves = sensorydata['vestibular'][trialN,:]
########################################################
# get predicted perception in respective output vectors:
########################################################
selfmotion[trialN], worldmotion[trialN] = my_perceived_motion( vis=vis, ves=ves, params=params)
return {'selfmotion':selfmotion, 'worldmotion':worldmotion}
# here is a mock version of my_perceived motion
# now you can test my_train_illusion_model()
def my_perceived_motion(*args, **kwargs):
return np.random.rand(2)
##let's look at the predictions we generated for n=2 sample trials (0,100)
##we should get a 1x2 vector of self-motion prediction and another for world-motion
sensorydata={'opticflow':opticflow[[0,100],:], 'vestibular':vestibular[[0,100],:]}
params={'threshold':0.33, 'filterwindow':[100,50]}
my_train_illusion_model(sensorydata=sensorydata, params=params)
###Output
_____no_output_____
###Markdown
**TD 6.2**: Draft perceived motion functions. Now we draft a set of functions, the first of which is used in the main model function (see above) and serves to generate perceived velocities. The other two are used in the first one. Only write help text and/or comments; you don't have to write the whole function. Each time ask yourself these questions:* what sensory data is necessary? * what other input does the function need, if any?* which operations are performed on the input?* what is the output?(the number of arguments is correct) **Template perceived motion**
###Code
# fill in the input arguments the function should have:
# write the help text for the function:
def my_perceived_motion(arg1, arg2, arg3):
'''
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
argument 3: explain the format and content of the third argument
Returns:
what output does the function generate?
Any further description?
'''
# structure your code into two functions: "my_selfmotion" and "my_worldmotion"
# write comments outlining the operations to be performed on the inputs by each of these functions
# use the elements from micro-tutorials 3, 4, and 5 (found in W1D2 Tutorial Part 1)
#
#
#
# what kind of output should this function produce?
return output
###Output
_____no_output_____
###Markdown
We've completed the `my_perceived_motion()` function for you below. Follow this example to complete the template for `my_selfmotion()` and `my_worldmotion()`. Write out the inputs and outputs, and the steps required to calculate the outputs from the inputs.**Perceived motion function**
###Code
#Full perceived motion function
def my_perceived_motion(vis, ves, params):
'''
Takes sensory data and parameters and returns predicted percepts
Args:
vis (numpy.ndarray): 1xM array of optic flow velocity data
ves (numpy.ndarray): 1xM array of vestibular acceleration data
params: (dict) dictionary with named entries:
see my_train_illusion_model() for details
Returns:
[list of floats]: prediction for perceived self-motion based on
vestibular data, and prediction for perceived world-motion based on
perceived self-motion and visual data
'''
# estimate self motion based on only the vestibular data
# pass on the parameters
selfmotion = my_selfmotion(ves=ves,
params=params)
# estimate the world motion, based on the selfmotion and visual data
# pass on the parameters as well
worldmotion = my_worldmotion(vis=vis,
selfmotion=selfmotion,
params=params)
return [selfmotion, worldmotion]
###Output
_____no_output_____
###Markdown
**Template calculate self motion**: Put notes in the function below that describe the inputs, the outputs, and the steps that transform the input into the output, using elements from micro-tutorials 3, 4, and 5.
###Code
def my_selfmotion(arg1, arg2):
'''
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
Returns:
what output does the function generate?
Any further description?
'''
# what operations do we perform on the input?
# use the elements from micro-tutorials 3, 4, and 5
# 1.
# 2.
# 3.
# 4.
# what output should this function produce?
return output
# to_remove solution
def my_selfmotion(ves, params):
'''
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
'''
# 1. integrate vestibular signal
# 2. running window function
# 3. take final value
# 4. compare to threshold
# if higher than threshold: return value
# if lower than threshold: return 0
return output
###Output
_____no_output_____
###Markdown
**Template calculate world motion**: Put notes in the function below that describe the inputs, the outputs, and the steps that transform the input into the output, using elements from micro-tutorials 3, 4, and 5.
###Code
def my_worldmotion(arg1, arg2, arg3):
'''
Short description of the function
Args:
argument 1: explain the format and content of the first argument
argument 2: explain the format and content of the second argument
argument 3: explain the format and content of the third argument
Returns:
what output does the function generate?
Any further description?
'''
# what operations do we perform on the input?
# use the elements from micro-tutorials 3, 4, and 5
# 1.
# 2.
# 3.
# what output should this function produce?
return output
# to_remove solution
def my_worldmotion(vis, selfmotion, params):
'''
Estimates world motion based on the visual signal, the estimate of
self motion, and the parameters
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
'''
# 1. running window function
# 2. take final value
# 3. subtract selfmotion from value
# return final value
return output
###Output
_____no_output_____
###Markdown
Micro-tutorial 7 - implement model
###Code
#@title Video: implement the model
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='gtSOekY8jkw', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=gtSOekY8jkw
###Markdown
**Goal:** We write the components of the model in actual code. For the operations we picked, there are functions ready to use:* integration: `np.cumsum(data, axis=1)` (axis=1: per trial and over samples)* filtering: `my_moving_window(data, window)` (window: int, default 3)* average: `np.mean(data)`* threshold: if (value > thr): else: **TD 7.1:** Write code to estimate self motion. Use the operations to finish writing the function that will calculate an estimate of self motion. Fill in the descriptive list of items with actual operations. Use the function for estimating world-motion below, which we've filled in for you!**Template finish self motion function**
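One of these helpers, `my_moving_window()`, is defined in the tutorial's setup code rather than shown in this section. Purely as an illustration of the assumed behaviour (apply `FUN` over a trailing window ending at each sample), a sketch could look like this:

```python
import numpy as np

# Illustrative sketch only: the tutorial ships its own my_moving_window().
def my_moving_window_sketch(data, window=3, FUN=np.mean):
    data = np.asarray(data, dtype=float)
    out = np.zeros_like(data)
    for i in range(data.size):
        # trailing window ending at sample i
        out[i] = FUN(data[max(0, i - window + 1):i + 1])
    return out
```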
###Code
def my_selfmotion(ves, params):
'''
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
'''
###uncomment the code below and fill in with your code
## 1. integrate vestibular signal
#ves = np.cumsum(ves*(1/params['samplingrate']))
## 2. running window function to accumulate evidence:
#selfmotion = YOUR CODE HERE
## 3. take final value of self-motion vector as our estimate
#selfmotion =
## 4. compare to threshold. Hint: the threshold is stored in params['threshold']
## if selfmotion is higher than threshold: return value
## if it's lower than threshold: return 0
#if YOURCODEHERE
#selfmotion = YOURCODEHERE
# comment this out when you've filled in the code above
raise NotImplementedError("Student exercise: estimate my_selfmotion")
return output
# to_remove solution
def my_selfmotion(ves, params):
'''
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
'''
# integrate signal:
ves = np.cumsum(ves*(1/params['samplingrate']))
# use running window to accumulate evidence:
selfmotion = my_moving_window(ves,
window=params['filterwindows'][0],
FUN=params['FUN'])
# take the final value as our estimate:
selfmotion = selfmotion[-1]
# compare to threshold, set to 0 if lower
if selfmotion < params['threshold']:
selfmotion = 0
return selfmotion
###Output
_____no_output_____
###Markdown
Estimate world motion. We have completed the `my_worldmotion()` function for you.**World motion function**
###Code
# World motion function
def my_worldmotion(vis, selfmotion, params):
'''
Estimates world motion based on the visual signal, the estimate of self motion, and the parameters
Args:
vis (numpy.ndarray): 1xM array with the optic flow signal
selfmotion (float): estimate of self motion
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of world motion in m/s
'''
# running average to smooth/accumulate sensory evidence
visualmotion = my_moving_window(vis,
window=params['filterwindows'][1],
FUN=np.mean)
# take final value
visualmotion = visualmotion[-1]
# subtract selfmotion from value
worldmotion = visualmotion + selfmotion
# return final value
return worldmotion
###Output
_____no_output_____
###Markdown
Micro-tutorial 8 - completing the model
###Code
#@title Video: completing the model
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='-NiHSv4xCDs', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=-NiHSv4xCDs
###Markdown
**Goal:** Make sure the model can speak to the hypothesis. Eliminate all the parameters that do not speak to the hypothesis. Now that we have a working model, we can keep improving it, but at some point we need to decide that it is finished. Once we have a model that displays the properties of a system we are interested in, it should be possible to say something about our hypothesis and question. Keeping the model simple makes it easier to understand the phenomenon and answer the research question. Here that means that our model should have illusory perception, and perhaps make similar judgments to those of the participants, but not much more. To test this, we will run the model, store the output and plot the model's perceived self motion over perceived world motion, like we did with the actual perceptual judgments (it even uses the same plotting function). **TD 8.1:** See if the model produces illusions
###Code
#@title Run to plot model predictions of motion estimates
# prepare to run the model again:
data = {'opticflow':opticflow, 'vestibular':vestibular}
params = {'threshold':0.6, 'filterwindows':[100,50], 'FUN':np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:,0:3] = judgments[:,0:3]
predictions[:,3] = modelpredictions['selfmotion']
predictions[:,4] = modelpredictions['worldmotion'] *-1
my_plot_percepts(datasets={'predictions':predictions}, plotconditions=True)
###Output
_____no_output_____
###Markdown
**Questions:*** Why is the data distributed this way? How does it compare to the plot in TD 1.2?* Did you expect to see this?* Where do the model's predicted judgments for each of the two conditions fall?* How does this compare to the behavioral data?However, the main observation should be that **there are illusions**: the blue and red data points are mixed in each of the two sets of data. Does this mean the model can help us understand the phenomenon? Micro-tutorial 9 - testing and evaluating the model
###Code
#@title Video: Background
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='5vnDOxN3M_k', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=5vnDOxN3M_k
###Markdown
**Goal:** Once we have finished the model, we need a description of how good it is. The question and goals we set in micro-tutorials 1 and 4 help here. There are multiple ways to evaluate a model. Aside from the obvious fact that we want to get insight into the phenomenon that is not directly accessible without the model, we always want to quantify how well the model agrees with the data. Quantify model quality with $R^2$. Let's look at how well our model matches the actual judgment data.
###Code
#@title Run to plot predictions over data
my_plot_predictions_data(judgments, predictions)
###Output
_____no_output_____
###Markdown
When model predictions are correct, the red points in the figure above should lie along the identity line (a dotted black line here). Points off the identity line represent model prediction errors. While in each plot we see two clusters of dots that are fairly close to the identity line, there are also two clusters that are not. For the trials that those points represent, the model has an illusion while the participants don't or vice versa.We will use a straightforward, quantitative measure of how good the model is: $R^2$ (pronounced: "R-squared"), which can take values between 0 and 1, and expresses how much variance is explained by the relationship between two variables (here the model's predictions and the actual judgments). It is also called [coefficient of determination](https://en.wikipedia.org/wiki/Coefficient_of_determination), and is calculated here as the square of the correlation coefficient (r or $\rho$). Just run the chunk below:
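In symbols, writing $\rho$ for the Pearson correlation between the model's predictions $\hat{y}$ and the participants' judgments $y$:

$$R^2 = \rho^2 = \left(\frac{\operatorname{cov}(y, \hat{y})}{\sigma_{y}\,\sigma_{\hat{y}}}\right)^2$$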
###Code
#@title Run to calculate R^2
conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2])))
veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4]))
velpredict = np.concatenate((predictions[:,3],predictions[:,4]))
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(conditions,veljudgmnt)
print('conditions -> judgments R^2: %0.3f'%( r_value**2 ))
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(veljudgmnt,velpredict)
print('predictions -> judgments R^2: %0.3f'%( r_value**2 ))
###Output
conditions -> judgments R^2: 0.032
predictions -> judgments R^2: 0.256
###Markdown
These $R^2$s express how well the experimental conditions explain the participants' judgments and how well the model's predicted judgments explain the participants' judgments. You will learn much more about model fitting, quantitative model evaluation and model comparison tomorrow! Perhaps the $R^2$ values don't seem very impressive, but the judgments produced by the participants are explained by the model's predictions better than by the actual conditions. In other words: the model tends to have the same illusions as the participants. **TD 9.1** Varying the threshold parameter to improve the model. In the code below, see if you can find a better value for the threshold parameter, to reduce errors in the model's predictions.**Testing thresholds**
###Code
# Testing thresholds
def test_threshold(threshold=0.33):
# prepare to run model
data = {'opticflow':opticflow, 'vestibular':vestibular}
params = {'threshold':threshold, 'filterwindows':[100,50], 'FUN':np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# get predictions in matrix
predictions = np.zeros(judgments.shape)
predictions[:,0:3] = judgments[:,0:3]
predictions[:,3] = modelpredictions['selfmotion']
predictions[:,4] = modelpredictions['worldmotion'] *-1
# get percepts from participants and model
conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2])))
veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4]))
velpredict = np.concatenate((predictions[:,3],predictions[:,4]))
# calculate R2
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(veljudgmnt,velpredict)
print('predictions -> judgments R2: %0.3f'%( r_value**2 ))
test_threshold(threshold=0.5)
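# Added sketch (not part of the original exercise): sweep a few arbitrary
# candidate thresholds with the helper defined above and compare the R^2 values.
for thr in [0.2, 0.33, 0.4, 0.6, 0.8]:
    print('threshold = %0.2f' % thr)
    test_threshold(threshold=thr)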
###Output
predictions -> judgments R2: 0.267
###Markdown
**TD 9.2:** Credit assignment of self motion. When we look at the figure in **TD 8.1**, we can see a cluster that does seem very close to (1,0), just like in the actual data. The cluster of points at (1,0) is from the case where we conclude there is no self motion, and then set the self motion to 0. That value of 0 removes a lot of noise from the world-motion estimates, and all noise from the self-motion estimate. In the other case, where there is self motion, we still have a lot of noise (see also micro-tutorial 4). Let's change our `my_selfmotion()` function to return a self motion of 1 when the vestibular signal indicates we are above threshold, and 0 when we are below threshold. Edit the function here.**Template function for credit assignment of self motion**
###Code
# Template binary self-motion estimates
def my_selfmotion(ves, params):
'''
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
'''
# integrate signal:
ves = np.cumsum(ves*(1/params['samplingrate']))
# use running window to accumulate evidence:
selfmotion = my_moving_window(ves,
window=params['filterwindows'][0],
FUN=params['FUN'])
## take the final value as our estimate:
selfmotion = selfmotion[-1]
##########################################
# this last part will have to be changed
# compare to threshold, set to 0 if lower and else...
if selfmotion < params['threshold']:
selfmotion = 0
#uncomment the lines below and fill in with your code
#else:
#YOUR CODE HERE
# comment this out when you've filled in the code above
raise NotImplementedError("Student exercise: modify with credit assignment")
return selfmotion
# to_remove solution
def my_selfmotion(ves, params):
'''
Estimates self motion for one vestibular signal
Args:
ves (numpy.ndarray): 1xM array with a vestibular signal
params (dict): dictionary with named entries:
see my_train_illusion_model() for details
Returns:
(float): an estimate of self motion in m/s
'''
# integrate signal:
ves = np.cumsum(ves*(1/params['samplingrate']))
# use running window to accumulate evidence:
selfmotion = my_moving_window(ves,
window=params['filterwindows'][0],
FUN=params['FUN'])
# final value:
selfmotion = selfmotion[-1]
# compare to threshold, set to 0 if lower
if selfmotion < params['threshold']:
selfmotion = 0
else:
selfmotion = 1
return selfmotion
###Output
_____no_output_____
###Markdown
The function you just wrote will be used when we run the model again below.
###Code
#@title Run model credit assignment of self motion
# prepare to run the model again:
data = {'opticflow':opticflow, 'vestibular':vestibular}
params = {'threshold':0.33, 'filterwindows':[100,50], 'FUN':np.mean}
modelpredictions = my_train_illusion_model(sensorydata=data, params=params)
# now process the data to allow plotting...
predictions = np.zeros(judgments.shape)
predictions[:,0:3] = judgments[:,0:3]
predictions[:,3] = modelpredictions['selfmotion']
predictions[:,4] = modelpredictions['worldmotion'] *-1
my_plot_percepts(datasets={'predictions':predictions}, plotconditions=False)
###Output
_____no_output_____
###Markdown
That looks much better, and closer to the actual data. Let's see if the $R^2$ values have improved:
###Code
#@title Run to calculate R^2 for model with self motion credit assignment
conditions = np.concatenate((np.abs(judgments[:,1]),np.abs(judgments[:,2])))
veljudgmnt = np.concatenate((judgments[:,3],judgments[:,4]))
velpredict = np.concatenate((predictions[:,3],predictions[:,4]))
my_plot_predictions_data(judgments, predictions)
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(conditions,veljudgmnt)
print('conditions -> judgments R2: %0.3f'%( r_value**2 ))
slope, intercept, r_value, p_value, std_err = sp.stats.linregress(velpredict,veljudgmnt)
print('predictions -> judgments R2: %0.3f'%( r_value**2 ))
###Output
_____no_output_____
###Markdown
While the model still predicts velocity judgments better than the conditions (i.e. the model predicts illusions in somewhat similar cases), the $R^2$ values are actually worse than those of the simpler model. What's really going on is that the same set of points that were model prediction errors in the previous model are also errors here. All we have done is reduce the spread. Interpret the model's meaning. Here's what you should have learned: 1. A noisy vestibular acceleration signal can give rise to illusory motion.2. However, disambiguating the optic flow by adding the vestibular signal simply adds a lot of noise. This is not a plausible thing for the brain to do.3. Our other hypothesis - credit assignment - is more qualitatively correct, but our simulations were not able to match the frequency of the illusion on a trial-by-trial basis._It's always possible to refine our models to improve the fits._ There are many ways to try to do this. A few examples: we could implement a full sensory cue integration model, perhaps with Kalman filters (Week 2, Day 3), or we could add prior knowledge (at what time do the trains depart?). However, we decided that for now we have learned enough, so it's time to write it up. Micro-tutorial 10 - publishing the model
###Code
#@title Video: Background
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='kf4aauCr5vA', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
###Output
Video available at https://youtube.com/watch?v=kf4aauCr5vA
|
.ipynb_checkpoints/2019.12.17_rna-seq_pan_diseased-tissue_metastatic-SKCM-checkpoint.ipynb | ###Markdown
Sample Prep
###Code
import numpy as np
import pandas as pd
samples = pd.read_csv('../data/TCGA/rna-seq_pan/meta/gdc_sample_sheet.2019-12-12.tsv', sep="\t")
# get file type
samples['data'] = [val[1] for i,val in samples['File Name'].str.split(".").items()]
samples['project'] = [val[1] for i,val in samples['Project ID'].str.split("-").items()]
samples['project'].value_counts()
samples['Sample Type'].value_counts()
###Output
_____no_output_____
###Markdown
New Model based on Tissues with available metastatic samples
###Code
samples[samples['Sample Type']=='Metastatic']['project'].value_counts()
samples[samples['Sample Type']=='Primary Tumor']['project'].value_counts().head(9)
proj = np.append(samples['project'].value_counts().head(9).index.values, ['SKCM'])
cases = samples[samples['Sample Type']=='Primary Tumor'].sample(frac=1).copy()
cases.shape
cases = cases[cases['project'].isin(proj)]
cases['project'].value_counts()
cases.shape
###Output
_____no_output_____
###Markdown
Dataset Prep
###Code
from sklearn.model_selection import train_test_split
target = 'project'
cases[target] = cases[target].astype('category')
train, test = train_test_split(cases)
import torch
import torch.nn as nn
from torch.optim import lr_scheduler
import torch.optim as optim
from torch.autograd import Variable
from trainer import fit
import visualization as vis
import numpy as np
cuda = torch.cuda.is_available()
if cuda:
print("{} GPUs available".format(torch.cuda.device_count()))
classes = train[target].cat.categories.values
from tcga_datasets import TCGA, SiameseTCGA
root_dir = "../data/TCGA/rna-seq_pan/"
batch_size = 1
train_dataset = TCGA(root_dir, samples=train, train=True, target=target)
test_dataset = TCGA(root_dir, samples=test, train=False, target=target)
print('Loaded')
kwargs = {'num_workers': 1, 'pin_memory': True} if cuda else {}
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=False, **kwargs)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False, **kwargs)
###Output
Loaded
###Markdown
Siamese Network
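The training below uses a `ContrastiveLoss` imported from this repository's `losses` module. As an illustration only (an assumption about its behaviour, not the repository's exact code), a margin-based contrastive loss for pairs of Siamese embeddings in PyTorch could look like this:

```python
import torch
import torch.nn.functional as F

class ContrastiveLossSketch(torch.nn.Module):
    """Hypothetical margin-based contrastive loss for pairs of embeddings."""
    def __init__(self, margin=1.0):
        super().__init__()
        self.margin = margin

    def forward(self, emb1, emb2, target):
        # target is 1 for pairs from the same class, 0 for different classes
        target = target.float()
        dist = F.pairwise_distance(emb1, emb2)
        pos = target * dist.pow(2)
        neg = (1 - target) * F.relu(self.margin - dist).pow(2)
        return 0.5 * (pos + neg).mean()
```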
###Code
# Step 1 set up dataloader
siamese_train_dataset = SiameseTCGA(train_dataset) # Returns pairs of images and target same/different
siamese_test_dataset = SiameseTCGA(test_dataset)
batch_size = 64
kwargs = {'num_workers': 20, 'pin_memory': True} if cuda else {}
siamese_train_loader = torch.utils.data.DataLoader(siamese_train_dataset, batch_size=batch_size, shuffle=True, **kwargs)
siamese_test_loader = torch.utils.data.DataLoader(siamese_test_dataset, batch_size=batch_size, shuffle=False, **kwargs)
# Set up the network and training parameters
from tcga_networks import EmbeddingNet, SiameseNet
from losses import ContrastiveLoss
from metrics import AccumulatedAccuracyMetric
# Step 2
embedding_net = EmbeddingNet()
# Step 3
model = SiameseNet(embedding_net)
if cuda:
model = nn.DataParallel(model)
model.cuda()
# Step 4
margin = 1.
loss_fn = ContrastiveLoss(margin)
lr = 1e-3
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.StepLR(optimizer, 8, gamma=0.1, last_epoch=-1)
n_epochs = 8
# print training metrics every log_interval * batch_size
log_interval = 4
train_loss, val_loss = fit(siamese_train_loader, siamese_test_loader, model, loss_fn, optimizer, scheduler,
n_epochs, cuda, log_interval)
import matplotlib.pyplot as plt
plt.plot(range(0, n_epochs), train_loss, 'rx-')
plt.plot(range(0, n_epochs), val_loss, 'bx-')
classes = train[target].cat.categories.values
train_embeddings_cl, train_labels_cl = vis.extract_embeddings(train_loader, model)
vis.plot_embeddings(train_embeddings_cl, train_labels_cl, classes)
val_embeddings_baseline, val_labels_baseline = vis.extract_embeddings(test_loader, model)
vis.plot_embeddings(val_embeddings_baseline, val_labels_baseline, classes)
###Output
_____no_output_____
###Markdown
Project Metastatic SKCM onto learned space
###Code
skcm_cat = np.where(cases['project'].cat.categories.values=='SKCM')[0][0]
ms = samples[(samples['Sample Type']=='Metastatic') & (samples['project']=='SKCM')].sample(frac=1).copy()
ms[target] = [i + '-MET' for i in ms[target]]
ms[target] = ms[target].astype('category')
met_classes = ms[target].cat.categories.values
root_dir = "../data/TCGA/rna-seq_pan/"
batch_size = 1
ms_dataset = TCGA(root_dir, samples=ms, train=False, target=target)
print('Loaded')
kwargs = {'num_workers': 1, 'pin_memory': True} if cuda else {}
ms_loader = torch.utils.data.DataLoader(ms_dataset, batch_size=batch_size, shuffle=False, **kwargs)
ms_embeddings_baseline, ms_labels_baseline = vis.extract_embeddings(ms_loader, model)
comb_classes = np.append(classes, met_classes)
comb_embeddings = np.append(train_embeddings_cl, ms_embeddings_baseline, axis=0)
comb_embeddings.shape
ms_labels = np.repeat(np.unique(train_labels_cl).max() + 1, len(ms_labels_baseline))
comb_labels = np.append(train_labels_cl, ms_labels, axis=0)
comb_labels.shape
vis.plot_embeddings(comb_embeddings, comb_labels, comb_classes)
###Output
_____no_output_____ |
others/generator.ipynb | ###Markdown
Create minority samples using the translation model
###Code
from __future__ import print_function, division
import scipy
from keras.models import load_model
import matplotlib.pyplot as plt
import sys
import numpy as np
import os
from tqdm import tqdm
import keras
import pandas as pd
from keras.datasets import mnist
from keras_contrib.layers.normalization.instancenormalization import InstanceNormalization
from keras.layers import Input, Dense, Reshape, Flatten, Dropout, Concatenate
from keras.layers import BatchNormalization, Activation, ZeroPadding2D
from keras.layers.advanced_activations import LeakyReLU
from keras.layers.convolutional import UpSampling2D, Conv2D
from keras.models import Sequential, Model
from keras.optimizers import Adam
from keras.utils import np_utils
import datetime
import matplotlib.pyplot as plt
import sys
import numpy as np
import os
import cv2
# Root directory of the project
ROOT_DIR = os.path.abspath("../")
sys.path.append(ROOT_DIR)
import helpers
# Training file directory
DATASET = os.path.join(ROOT_DIR, 'dataset')
PATH = "{}/{}".format(DATASET, "isic2016numpy")
# load data
x_train = np.load("{}/x_train.npy".format(PATH))
y_train = np.load("{}/y_train.npy".format(PATH))
x_train.shape, y_train.shape
MODEL_PATH = os.path.join(ROOT_DIR, "models")
print(ROOT_DIR)
print(os.listdir(MODEL_PATH))
#b2m_510 done
#b2m_597 done
#b2m_784 done
model_name = 'generator_isic2016_b2m_100.h5'
model = load_model(os.path.join(MODEL_PATH, model_name), custom_objects={'InstanceNormalization':InstanceNormalization})
#model.summary()
def predict(model, img):
if img.shape[0] != 256:
print("Resizing image..")
img = cv2.resize(img, (256, 256))
# Normalize image as the trained distribution
img = img/127.5 - 1.
# Normalize imgae [0, 1]
#img = img.astype('float32')
#img /= 255.
img = np.expand_dims(img, axis=0)
img = model.predict(img)
img = np.squeeze(img, axis=0)
# Rescale to [0,1]
#img = 0.5 * img + 0.5
img = (img - np.min(img))/np.ptp(img)
return img
def oversample(x, y, model):
'''
Oversample the minority (malignant) class by translating every majority
(benign) image with the trained benign-to-malignant generator.
INPUT
x: array of training images
y: one-hot labels of shape (N, 2); label[1] == 0 marks the majority class
model: benign-to-malignant translation generator
OUTPUT
majority samples, synthetic minority samples, and the shuffled, balanced
x and y arrays (x rescaled to [0, 1]).
'''
print("Before oversampling :", x.shape, y.shape)
# majority class
majority_samples = []
for img, label in zip(x, y):
if label[1] == 0:
majority_samples.append(img)
else:
pass
# numpy array of majority classes
majority_samples = np.array(majority_samples)
# minority generated samples
synthetic_samples = []
# iterate over majority samples and generate minority class
for img in tqdm(majority_samples):
# translate to malignant
pred = predict(model, img)
synthetic_samples.append(pred)
# make labels for generated minority classes
y_syn = np.array([1 for _ in range(len(synthetic_samples))])
y_syn = np_utils.to_categorical(y_syn, 2)
# Scale training set to [0, 1]
x = x.astype('float32')
x /= 255
# merge and shuffle training and generated samples
x_balanced = np.concatenate( (x, synthetic_samples), axis = 0)
y_balanced = np.concatenate( (y, y_syn), axis = 0)
x_balanced, y_balanced = helpers.shuffle_dataset(x_balanced, y_balanced)
assert len(majority_samples) == len(synthetic_samples), "This should be same! If not, check model code"
assert len(x_balanced) == len(synthetic_samples) + len(x), "Check oversampler code"
print("After oversampling: ", x_balanced.shape, y_balanced.shape)
return majority_samples, synthetic_samples, x_balanced, y_balanced
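# For reference: helpers.shuffle_dataset (used above) is assumed to apply one
# joint random permutation to images and labels. A minimal sketch, not the
# repository's implementation:
def shuffle_dataset_sketch(x, y):
    idx = np.random.permutation(len(x))
    return x[idx], y[idx]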
raw, gen, x_new, y_new = oversample(x_train, y_train, model)
###Output
_____no_output_____
###Markdown
Divide the synthetic malignant samples from the raw dataset for visualization
###Code
gen = np.array(gen)
print(gen.shape)
# make new label for plotting
gen_label = np.array([2 for _ in range(len(gen))])
gen_label = np_utils.to_categorical(gen_label, 3)
print(gen_label.shape)
# change original label to 3 onehot encoded vector
y_3 = np.array([np.argmax(x) for x in y_train])
print(y_3.shape)
y_3 = np_utils.to_categorical(y_3, 3)
print(y_3.shape)
# Scale training set to [0, 1] as synthetic data is in that range
x_train = x_train.astype('float32')
x_train /= 255
# merge and shuffle training and generated samples
x_balanced = np.concatenate( (x_train, gen), axis = 0)
y_balanced = np.concatenate( (y_3, gen_label), axis = 0)
#x3, y3 = helpers.shuffle_dataset(x_balanced, y_balanced)
x3, y3 = x_balanced, y_balanced
print(x3.shape, y3.shape)
from keras import backend as K
K.tensorflow_backend.clear_session()
model = None
model_name = "MelaNet.h5"
model = load_model(os.path.join(MODEL_PATH, model_name), custom_objects={'InstanceNormalization':InstanceNormalization}, compile=False)
model.summary()
min(x3[0].flatten()), max(x3[0].flatten())
from keras.models import Model
layer_name = 'global_average_pooling2d_1'
intermediate_layer_model = Model(inputs=model.input,
outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(x3, verbose=1)
intermediate_output.shape
intermediate_output.shape, y3.shape
x3.shape
import cv2
resized_images = []
for i in range(len(x3)):
img = cv2.resize(x3[i], (20,20), interpolation = cv2.INTER_AREA)
resized_images.append(img)
resized_images = np.array(resized_images)
resized_images.shape
import sklearn, sklearn.manifold
X_embedded = sklearn.manifold.TSNE(n_components=2, random_state=42).fit_transform(intermediate_output)
X_embedded.shape
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
fig, ax = plt.subplots(figsize=(16, 16))
for item in range(X_embedded.shape[0]):
ax.scatter(X_embedded[item,0], X_embedded[item,1])
#plt.annotate(str(item),(X_embedded[item,0], X_embedded[item,1]))
ab = AnnotationBbox(OffsetImage(resized_images[item], cmap="Greys_r"), #resized_images[item][0]
(X_embedded[item,0], X_embedded[item,1]), frameon=False)
ax.add_artist(ab)
plt.figure(0, figsize=(7, 7), dpi=100)
plt.scatter(X_embedded[:,0], X_embedded[:,1])
x = np.linspace(-70,70,2)
y = 0*x+40
plt.plot(x, y, '-r', label='y=40');
###Output
_____no_output_____
###Markdown
Plot raw data UMAP
###Code
import umap
import time
from sklearn.datasets import fetch_openml
import matplotlib.pyplot as plt
import seaborn as sns
np.random.seed(42)
sns.set(context="paper", style="white")
raw_train = intermediate_output #x3
raw_annot = y3
print(raw_train.shape)
raw_t_s = np.array([img.flatten() for img in raw_train])
print(raw_t_s.shape)
print(raw_annot.shape)
raw_annot_flat = np.argmax(raw_annot, axis=1)
print(raw_annot_flat.shape)
raw_annot_flat_3 = raw_annot_flat
print(np.unique(raw_annot_flat_3))
print(raw_t_s.shape, raw_annot_flat_3.shape)
data = raw_t_s
reducer = umap.UMAP(n_neighbors=15, random_state=42)
embedding = reducer.fit_transform(data)
colour_map = raw_annot_flat_3
tsneFigure = plt.figure(figsize=(12, 10))
fig, ax = plt.subplots(figsize=(12, 10))
for colour in range(2): # 1 - benign only, 2- malig benign, 3 - malig benign synth malig
indices = np.where(colour_map==colour)
indices = indices[0]
if colour == 0:
l = "Benign"
if colour == 1:
l = "Malignant"
if colour == 2:
l = "Generated Malignant"
plt.setp(ax, xticks=[], yticks=[])
plt.scatter(embedding[:, 0][indices],
embedding[:, 1][indices],
label=None, cmap="Spectral", s=50)
#plt.legend(loc='lower left', prop={'size': 20})
plt.axis('off')
#plt.savefig("raw_UMAP.pdf", bbox_inches = 'tight', pad_inches = 0, dpi=1000)
plt.show()
import umap
import time
from sklearn.datasets import fetch_openml
import matplotlib.pyplot as plt
import seaborn as sns
np.random.seed(42)
sns.set(context="paper", style="white")
raw_train = intermediate_output #x3
raw_annot = y3
print(raw_train.shape)
raw_t_s = np.array([img.flatten() for img in raw_train])
print(raw_t_s.shape)
print(raw_annot.shape)
raw_annot_flat = np.argmax(raw_annot, axis=1)
print(raw_annot_flat.shape)
raw_annot_flat_3 = raw_annot_flat
print(np.unique(raw_annot_flat_3))
print(raw_t_s.shape, raw_annot_flat_3.shape)
data = raw_t_s
reducer = umap.UMAP(n_neighbors=15, random_state=42)
embedding = reducer.fit_transform(data)
colour_map = raw_annot_flat_3
tsneFigure = plt.figure(figsize=(12, 10))
fig, ax = plt.subplots(figsize=(12, 10))
for colour in range(3): # 1 - benign only, 2- malig benign, 3 - malig benign synth malig
indices = np.where(colour_map==colour)
indices = indices[0]
if colour == 0:
l = "Benign"
if colour == 1:
l = "Malignant"
if colour == 2:
l = "Generated Malignant"
plt.setp(ax, xticks=[], yticks=[])
plt.scatter(embedding[:, 0][indices],
embedding[:, 1][indices],
label=None, cmap="Spectral", s=50)
#plt.legend(loc='lower left', prop={'size': 20})
plt.axis('off')
#plt.savefig("gan_UMAP.pdf", bbox_inches = 'tight', pad_inches = 0, dpi=1000)
plt.show()
###Output
_____no_output_____
###Markdown
Visualize and save the oversampled dataset
###Code
# inital dataset + generated samples
x_new.shape, y_new.shape
#max(np.array(gen).flatten()), min(np.array(gen).flatten())
#max(x_new.flatten()), min(x_new.flatten())
###Output
_____no_output_____
###Markdown
Raw data
###Code
from numpy.random import rand
import matplotlib.pyplot as plt
index = np.random.choice(np.array(gen).shape[0], 30, replace=False)
raw = np.array(raw)
x = raw[index]
a, b = 5, 6
x = np.reshape(x, (a, b, 256, 256, 3))
test_data = x
r, c = test_data.shape[0], test_data.shape[1]
cmaps = [['viridis', 'binary'], ['plasma', 'coolwarm'], ['Greens', 'copper']]
heights = [a[0].shape[0] for a in test_data]
widths = [a.shape[1] for a in test_data[0]]
fig_width = 15. # inches
fig_height = fig_width * sum(heights) / sum(widths)
f, axarr = plt.subplots(r,c, figsize=(fig_width, fig_height),
gridspec_kw={'height_ratios':heights})
for i in range(r):
for j in range(c):
axarr[i, j].imshow(test_data[i][j])
axarr[i, j].axis('off')
plt.subplots_adjust(wspace=0, hspace=0, left=0, right=1, bottom=0, top=1)
plt.savefig('{}/{}.png'.format("{}/outputs/".format(ROOT_DIR), "beforegan"), dpi=300)
plt.show()
###Output
_____no_output_____
###Markdown
Synthesized data
###Code
from numpy.random import rand
import matplotlib.pyplot as plt
gen = np.array(gen)
x = gen[index]
a, b = 5, 6
x = np.reshape(x, (a, b, 256, 256, 3))
test_data = x
r, c = test_data.shape[0], test_data.shape[1]
cmaps = [['viridis', 'binary'], ['plasma', 'coolwarm'], ['Greens', 'copper']]
heights = [a[0].shape[0] for a in test_data]
widths = [a.shape[1] for a in test_data[0]]
fig_width = 15. # inches
fig_height = fig_width * sum(heights) / sum(widths)
f, axarr = plt.subplots(r,c, figsize=(fig_width, fig_height),
gridspec_kw={'height_ratios':heights})
for i in range(r):
for j in range(c):
axarr[i, j].imshow(test_data[i][j])
axarr[i, j].axis('off')
plt.subplots_adjust(wspace=0, hspace=0, left=0, right=1, bottom=0, top=1)
plt.savefig('{}/{}.png'.format("{}/outputs/".format(ROOT_DIR), "aftergan"), dpi=300)
plt.show()
#helpers.show_images(raw[-20:], cols = 3, titles = None, save_fig = "default")
#helpers.show_images(gen[-20:], cols = 3, titles = None, save_fig = "default")
a = np.array([np.argmax(y) for y in y_new])
len(a)
np.unique(a)
np.count_nonzero(a == 0), np.count_nonzero(a == 1)
#np.count_nonzero(a == 0), np.count_nonzero(a == 1), np.count_nonzero(a == 2)
x_new.shape, y_new.shape
# Create directory
helpers.create_directory("{}/dataset/isic2016gan/".format(ROOT_DIR))
# Save
np.save("{}/dataset/isic2016gan/{}{}.npy".format(ROOT_DIR, "x_", model_name[:-3]), x_new)
np.save("{}/dataset/isic2016gan/{}{}.npy".format(ROOT_DIR, "y_", model_name[:-3]), y_new)
###Output
_____no_output_____ |
keras-nn/06_Conv_NN/CIFAR10 dataset.ipynb | ###Markdown
Convolutional Neural Network - Keras> **CIFAR10 dataset** - The CIFAR10 dataset contains 60,000 color images in 10 classes, with 6,000 images in each class. The dataset is divided into 50,000 training images and 10,000 testing images. The classes are mutually exclusive and there is no overlap between them. Imports
###Code
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import datasets
cifar10 = datasets.cifar10.load_data()
(train_images, train_labels), (test_images, test_labels) = cifar10
train_images[0].shape
### Visualising one of the training images (index 4)
plt.imshow(train_images[4], cmap=plt.cm.binary)
###Output
_____no_output_____
###Markdown
Scaling images > We want to scale the image pixel values so that they are normalised to numbers between `0` and `1`
###Code
train_images, test_images = train_images/255.0, test_images/255.0
class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
'dog', 'frog', 'horse', 'ship', 'truck']
###Output
_____no_output_____
###Markdown
Creating a CNN> As input, a CNN takes tensors of shape **(image_height, image_width, color_channels)**, ignoring the batch size.
###Code
input_shape = train_images[0].shape
input_shape
model = keras.Sequential([
keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
keras.layers.MaxPooling2D((2, 2)),
keras.layers.Conv2D(64, (3,3), activation='relu'),
keras.layers.MaxPooling2D((2, 2)),
keras.layers.Conv2D(64, (3,3), activation='relu'),
keras.layers.MaxPooling2D((2,2)),
keras.layers.Flatten(),
keras.layers.Dense(32, activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_2 (Conv2D) (None, 30, 30, 32) 896
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 15, 15, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 13, 13, 64) 18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 6, 6, 64) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 4, 4, 64) 36928
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 2, 2, 64) 0
_________________________________________________________________
flatten (Flatten) (None, 256) 0
_________________________________________________________________
dense (Dense) (None, 32) 8224
_________________________________________________________________
dense_1 (Dense) (None, 10) 330
=================================================================
Total params: 64,874
Trainable params: 64,874
Non-trainable params: 0
_________________________________________________________________
###Markdown
Compile the model
###Code
model.compile(
optimizer='adam',
# the final Dense layer already applies softmax, so the loss receives probabilities
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
metrics=['accuracy']
)
###Output
_____no_output_____
###Markdown
Fitting the model
###Code
EPOCHS = 3
BATCH_SIZE = 8
VALIDATION_DATA = (test_images, test_labels)
history = model.fit(train_images, train_labels, epochs=EPOCHS,
validation_data=VALIDATION_DATA,
batch_size=BATCH_SIZE
)
###Output
Epoch 1/3
6250/6250 [==============================] - 120s 17ms/step - loss: 2.3030 - accuracy: 0.0987 - val_loss: 2.3027 - val_accuracy: 0.1000
Epoch 2/3
2413/6250 [==========>...................] - ETA: 54s - loss: 2.3030 - accuracy: 0.096 |
docs/notebooks/04_routing.ipynb | ###Markdown
Routing. Routing allows you to route waveguides between component ports.
###Code
import gdsfactory as gf
gf.config.set_plot_options(show_subports=False)
gf.CONF.plotter = 'matplotlib'
c = gf.Component()
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((100, 50))
c.plot()
###Output
_____no_output_____
###Markdown
get_route. `get_route` returns a Manhattan route between 2 ports.
###Code
gf.routing.get_route?
c = gf.Component("sample_connect")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((100, 50))
route = gf.routing.get_route(mmi1.ports["o2"], mmi2.ports["o1"])
c.add(route.references)
c.plot()
route
###Output
_____no_output_____
###Markdown
**Connect strip: Problem** Sometimes there are obstacles that connect strip does not see!
###Code
c = gf.Component("sample_problem")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((110, 50))
x = c << gf.components.cross(length=20)
x.move((135, 20))
route = gf.routing.get_route(mmi1.ports["o2"], mmi2.ports["o2"])
c.add(route.references)
c.plot()
###Output
_____no_output_____
###Markdown
**Solution: Connect strip waypoints** You can also specify the points along the route
###Code
gf.routing.get_route_waypoints?
c = gf.Component("sample_avoid_obstacle")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((110, 50))
x = c << gf.components.cross(length=20)
x.move((135, 20))
x0 = mmi1.ports["o3"].x
y0 = mmi1.ports["o3"].y
x2 = mmi2.ports["o3"].x
y2 = mmi2.ports["o3"].y
route = gf.routing.get_route_from_waypoints(
[(x0, y0), (x2 + 40, y0), (x2 + 40, y2), (x2, y2)]
)
c.add(route.references)
c.plot()
route.length
route.ports
route.references
###Output
_____no_output_____
###Markdown
Lets say that we want to extrude the waveguide using a different waveguide crosssection, for example using a different layer
###Code
import gdsfactory as gf
c = gf.Component("sample_connect")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((100, 50))
route = gf.routing.get_route(
mmi1.ports["o3"], mmi2.ports["o1"], cross_section=gf.cross_section.metal1
)
c.add(route.references)
c.plot()
###Output
_____no_output_____
###Markdown
auto-widen. To reduce loss and phase errors you can also auto-widen the straight sections of waveguide routes that are longer than a certain length.
###Code
import gdsfactory as gf
c = gf.Component("sample_connect")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((200, 50))
route = gf.routing.get_route(
mmi1.ports["o3"],
mmi2.ports["o1"],
cross_section=gf.cross_section.strip,
auto_widen=True,
width_wide=2,
auto_widen_minimum_length=100,
)
c.add(route.references)
c.plot()
###Output
_____no_output_____
###Markdown
get_route_from_waypoints. Sometimes you need to set up a route with custom waypoints. `get_route_from_waypoints` is a manual version of `get_route`.
###Code
import gdsfactory as gf
c = gf.Component("waypoints_sample")
w = gf.components.straight()
left = c << w
right = c << w
right.move((100, 80))
obstacle = gf.components.rectangle(size=(100, 10))
obstacle1 = c << obstacle
obstacle2 = c << obstacle
obstacle1.ymin = 40
obstacle2.xmin = 25
p0x, p0y = left.ports["o2"].midpoint
p1x, p1y = right.ports["o2"].midpoint
o = 10 # vertical offset to overcome bottom obstacle
ytop = 20
routes = gf.routing.get_route_from_waypoints(
[
(p0x, p0y),
(p0x + o, p0y),
(p0x + o, ytop),
(p1x + o, ytop),
(p1x + o, p1y),
(p1x, p1y),
],
)
c.add(routes.references)
c.plot()
###Output
_____no_output_____
###Markdown
get_route_from_steps. As you can see, waypoints can only change one point (x or y) at a time, making the waypoint definition a bit redundant. You can also use `get_route_from_steps`, a more concise route definition that supports defining only the new steps `x` or `y` together with increments `dx` or `dy`. `get_route_from_steps` is a manual version of `get_route` and a more concise and convenient version of `get_route_from_waypoints`.
###Code
import gdsfactory as gf
c = gf.Component("get_route_from_steps")
w = gf.components.straight()
left = c << w
right = c << w
right.move((100, 80))
obstacle = gf.components.rectangle(size=(100, 10))
obstacle1 = c << obstacle
obstacle2 = c << obstacle
obstacle1.ymin = 40
obstacle2.xmin = 25
port1 = left.ports["o2"]
port2 = right.ports["o2"]
routes = gf.routing.get_route_from_steps(
port1=port1,
port2=port2,
steps=[
{"x": 20, "y": 0},
{"x": 20, "y": 20},
{"x": 120, "y": 20},
{"x": 120, "y": 80},
],
)
c.add(routes.references)
c.plot()
import gdsfactory as gf
c = gf.Component("get_route_from_steps_shorter_syntax")
w = gf.components.straight()
left = c << w
right = c << w
right.move((100, 80))
obstacle = gf.components.rectangle(size=(100, 10))
obstacle1 = c << obstacle
obstacle2 = c << obstacle
obstacle1.ymin = 40
obstacle2.xmin = 25
port1 = left.ports["o2"]
port2 = right.ports["o2"]
routes = gf.routing.get_route_from_steps(
port1=port1,
port2=port2,
steps=[
{"x": 20},
{"y": 20},
{"x": 120},
{"y": 80},
],
)
c.add(routes.references)
c.plot()
###Output
_____no_output_____
###Markdown
get_bundle **Problem** See the route collisions when connecting groups of ports using a regular manhattan single-route router such as `get_route`
###Code
import gdsfactory as gf
xs_top = [0, 10, 20, 40, 50, 80]
pitch = 127
N = len(xs_top)
xs_bottom = [(i - N / 2) * pitch for i in range(N)]
top_ports = [gf.Port(f"top_{i}", (xs_top[i], 0), 0.5, 270) for i in range(N)]
bottom_ports = [gf.Port(f"bottom_{i}", (xs_bottom[i], -100), 0.5, 90) for i in range(N)]
c = gf.Component(name="connect_bundle")
for p1, p2 in zip(top_ports, bottom_ports):
route = gf.routing.get_route(p1, p2)
c.add(route.references)
c.plot()
###Output
_____no_output_____
###Markdown
**Solution** `get_bundle` provides you with river routing capabilities that you can use to route bundles of ports without collisions
###Code
c = gf.Component(name="connect_bundle")
routes = gf.routing.get_bundle(top_ports, bottom_ports)
for route in routes:
c.add(route.references)
c.plot()
import gdsfactory as gf
ys_right = [0, 10, 20, 40, 50, 80]
pitch = 127.0
N = len(ys_right)
ys_left = [(i - N / 2) * pitch for i in range(N)]
right_ports = [gf.Port(f"R_{i}", (0, ys_right[i]), 0.5, 180) for i in range(N)]
left_ports = [gf.Port(f"L_{i}", (-200, ys_left[i]), 0.5, 0) for i in range(N)]
# you can also mess up the port order and it will sort them by default
left_ports.reverse()
c = gf.Component(name="connect_bundle2")
routes = gf.routing.get_bundle(
left_ports, right_ports, sort_ports=True, start_straight_length=100
)
for route in routes:
c.add(route.references)
c.plot()
xs_top = [0, 10, 20, 40, 50, 80]
pitch = 127.0
N = len(xs_top)
xs_bottom = [(i - N / 2) * pitch for i in range(N)]
top_ports = [gf.Port(f"top_{i}", (xs_top[i], 0), 0.5, 270) for i in range(N)]
bot_ports = [gf.Port(f"bot_{i}", (xs_bottom[i], -300), 0.5, 90) for i in range(N)]
c = gf.Component(name="connect_bundle")
routes = gf.routing.get_bundle(
top_ports, bot_ports, separation=5.0, end_straight_length=100
)
for route in routes:
c.add(route.references)
c.plot()
###Output
_____no_output_____
###Markdown
`get_bundle` can also route bundles through corners
###Code
import gdsfactory as gf
from gdsfactory.cell import cell
from gdsfactory.component import Component
from gdsfactory.port import Port
@cell
def test_connect_corner(N=6, config="A"):
d = 10.0
sep = 5.0
top_cell = gf.Component(name="connect_corner")
if config in ["A", "B"]:
a = 100.0
ports_A_TR = [
Port("A_TR_{}".format(i), (d, a / 2 + i * sep), 0.5, 0) for i in range(N)
]
ports_A_TL = [
Port("A_TL_{}".format(i), (-d, a / 2 + i * sep), 0.5, 180) for i in range(N)
]
ports_A_BR = [
Port("A_BR_{}".format(i), (d, -a / 2 - i * sep), 0.5, 0) for i in range(N)
]
ports_A_BL = [
Port("A_BL_{}".format(i), (-d, -a / 2 - i * sep), 0.5, 180)
for i in range(N)
]
ports_A = [ports_A_TR, ports_A_TL, ports_A_BR, ports_A_BL]
ports_B_TR = [
Port("B_TR_{}".format(i), (a / 2 + i * sep, d), 0.5, 90) for i in range(N)
]
ports_B_TL = [
Port("B_TL_{}".format(i), (-a / 2 - i * sep, d), 0.5, 90) for i in range(N)
]
ports_B_BR = [
Port("B_BR_{}".format(i), (a / 2 + i * sep, -d), 0.5, 270) for i in range(N)
]
ports_B_BL = [
Port("B_BL_{}".format(i), (-a / 2 - i * sep, -d), 0.5, 270)
for i in range(N)
]
ports_B = [ports_B_TR, ports_B_TL, ports_B_BR, ports_B_BL]
elif config in ["C", "D"]:
a = N * sep + 2 * d
ports_A_TR = [
Port("A_TR_{}".format(i), (a, d + i * sep), 0.5, 0) for i in range(N)
]
ports_A_TL = [
Port("A_TL_{}".format(i), (-a, d + i * sep), 0.5, 180) for i in range(N)
]
ports_A_BR = [
Port("A_BR_{}".format(i), (a, -d - i * sep), 0.5, 0) for i in range(N)
]
ports_A_BL = [
Port("A_BL_{}".format(i), (-a, -d - i * sep), 0.5, 180) for i in range(N)
]
ports_A = [ports_A_TR, ports_A_TL, ports_A_BR, ports_A_BL]
ports_B_TR = [
Port("B_TR_{}".format(i), (d + i * sep, a), 0.5, 90) for i in range(N)
]
ports_B_TL = [
Port("B_TL_{}".format(i), (-d - i * sep, a), 0.5, 90) for i in range(N)
]
ports_B_BR = [
Port("B_BR_{}".format(i), (d + i * sep, -a), 0.5, 270) for i in range(N)
]
ports_B_BL = [
Port("B_BL_{}".format(i), (-d - i * sep, -a), 0.5, 270) for i in range(N)
]
ports_B = [ports_B_TR, ports_B_TL, ports_B_BR, ports_B_BL]
if config in ["A", "C"]:
for ports1, ports2 in zip(ports_A, ports_B):
routes = gf.routing.get_bundle(ports1, ports2, layer=(2, 0), radius=5)
for route in routes:
top_cell.add(route.references)
elif config in ["B", "D"]:
for ports1, ports2 in zip(ports_A, ports_B):
routes = gf.routing.get_bundle(ports2, ports1, layer=(2, 0), radius=5)
for route in routes:
top_cell.add(route.references)
return top_cell
c = test_connect_corner(config="A")
c.plot()
c = test_connect_corner(config="C")
c.plot()
@cell
def test_connect_bundle_udirect(dy=200, angle=270):
xs1 = [-100, -90, -80, -55, -35, 24, 0] + [200, 210, 240]
axis = "X" if angle in [0, 180] else "Y"
pitch = 10.0
N = len(xs1)
xs2 = [70 + i * pitch for i in range(N)]
if axis == "X":
ports1 = [Port(f"top_{i}", (0, xs1[i]), 0.5, angle) for i in range(N)]
ports2 = [Port(f"bottom_{i}", (dy, xs2[i]), 0.5, angle) for i in range(N)]
else:
ports1 = [Port(f"top_{i}", (xs1[i], 0), 0.5, angle) for i in range(N)]
ports2 = [Port(f"bottom_{i}", (xs2[i], dy), 0.5, angle) for i in range(N)]
top_cell = Component(name="connect_bundle_udirect")
routes = gf.routing.get_bundle(ports1, ports2, radius=10.0)
for route in routes:
top_cell.add(route.references)
return top_cell
c = test_connect_bundle_udirect()
c.plot()
@cell
def test_connect_bundle_u_indirect(dy=-200, angle=180):
xs1 = [-100, -90, -80, -55, -35] + [200, 210, 240]
axis = "X" if angle in [0, 180] else "Y"
pitch = 10.0
N = len(xs1)
xs2 = [50 + i * pitch for i in range(N)]
a1 = angle
a2 = a1 + 180
if axis == "X":
ports1 = [Port("top_{}".format(i), (0, xs1[i]), 0.5, a1) for i in range(N)]
ports2 = [Port("bot_{}".format(i), (dy, xs2[i]), 0.5, a2) for i in range(N)]
else:
ports1 = [Port("top_{}".format(i), (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [Port("bot_{}".format(i), (xs2[i], dy), 0.5, a2) for i in range(N)]
top_cell = Component("connect_bundle_u_indirect")
routes = gf.routing.get_bundle(
ports1,
ports2,
bend=gf.components.bend_euler,
radius=10,
)
for route in routes:
top_cell.add(route.references)
return top_cell
c = test_connect_bundle_u_indirect(angle=0)
c.plot()
import gdsfactory as gf
@gf.cell
def test_north_to_south():
dy = 200.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 10.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N // 2)]
xs2 += [400 + i * pitch for i in range(N // 2)]
a1 = 90
a2 = a1 + 180
ports1 = [gf.Port("top_{}".format(i), (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [gf.Port("bot_{}".format(i), (xs2[i], dy), 0.5, a2) for i in range(N)]
c = gf.Component()
routes = gf.routing.get_bundle(ports1, ports2, auto_widen=False)
for route in routes:
c.add(route.references)
return c
c = test_north_to_south()
c.plot()
def demo_connect_bundle():
"""combines all the connect_bundle tests"""
y = 400.0
x = 500
y0 = 900
dy = 200.0
c = Component("connect_bundle")
for j, s in enumerate([-1, 1]):
for i, angle in enumerate([0, 90, 180, 270]):
_cmp = test_connect_bundle_u_indirect(dy=s * dy, angle=angle)
_cmp_ref = _cmp.ref(position=(i * x, j * y))
c.add(_cmp_ref)
_cmp = test_connect_bundle_udirect(dy=s * dy, angle=angle)
_cmp_ref = _cmp.ref(position=(i * x, j * y + y0))
c.add(_cmp_ref)
for i, config in enumerate(["A", "B", "C", "D"]):
_cmp = test_connect_corner(config=config)
_cmp_ref = _cmp.ref(position=(i * x, 1700))
c.add(_cmp_ref)
# _cmp = test_facing_ports()
# _cmp_ref = _cmp.ref(position=(800, 1820))
# c.add(_cmp_ref)
return c
c = demo_connect_bundle()
c.plot()
import gdsfactory as gf
c = gf.Component("route_bend_5um")
c1 = c << gf.components.mmi2x2()
c2 = c << gf.components.mmi2x2()
c2.move((100, 50))
routes = gf.routing.get_bundle(
[c1.ports["o4"], c1.ports["o3"]], [c2.ports["o1"], c2.ports["o2"]], radius=5
)
for route in routes:
c.add(route.references)
c.plot()
import gdsfactory as gf
c = gf.Component("electrical")
c1 = c << gf.components.pad()
c2 = c << gf.components.pad()
c2.move((200, 100))
routes = gf.routing.get_bundle(
[c1.ports["e3"]], [c2.ports["e1"]], cross_section=gf.cross_section.metal1
)
for route in routes:
c.add(route.references)
c.plot()
c = gf.Component("get_bundle_with_ubends_bend_from_top")
pad_array = gf.components.pad_array()
c1 = c << pad_array
c2 = c << pad_array
c2.rotate(90)
c2.movex(1000)
c2.ymax = -200
routes_bend180 = gf.routing.get_routes_bend180(
ports=c2.get_ports_list(),
radius=75 / 2,
cross_section=gf.cross_section.metal1,
bend_port1="e1",
bend_port2="e2",
)
c.add(routes_bend180.references)
routes = gf.routing.get_bundle(
c1.get_ports_list(), routes_bend180.ports, cross_section=gf.cross_section.metal1
)
for route in routes:
c.add(route.references)
c.plot()
c = gf.Component("get_bundle_with_ubends_bend_from_bottom")
pad_array = gf.components.pad_array()
c1 = c << pad_array
c2 = c << pad_array
c2.rotate(90)
c2.movex(1000)
c2.ymax = -200
routes_bend180 = gf.routing.get_routes_bend180(
ports=c2.get_ports_list(),
radius=75 / 2,
cross_section=gf.cross_section.metal1,
bend_port1="e2",
bend_port2="e1",
)
c.add(routes_bend180.references)
routes = gf.routing.get_bundle(
c1.get_ports_list(), routes_bend180.ports, cross_section=gf.cross_section.metal1
)
for route in routes:
c.add(route.references)
c.plot()
###Output
_____no_output_____
###Markdown
**Problem** Sometimes 90-degree routes do not have enough space for a Manhattan route
###Code
import gdsfactory as gf
c = gf.Component("route_fail_1")
c1 = c << gf.components.nxn(east=3, ysize=20)
c2 = c << gf.components.nxn(west=3)
c2.move((80, 0))
c.plot()
import gdsfactory as gf
c = gf.Component("route_fail_1")
c1 = c << gf.components.nxn(east=3, ysize=20)
c2 = c << gf.components.nxn(west=3)
c2.move((80, 0))
routes = gf.routing.get_bundle(
c1.get_ports_list(orientation=0),
c2.get_ports_list(orientation=180),
auto_widen=False,
)
for route in routes:
c.add(route.references)
c.plot()
c = gf.Component("route_fail_2")
pitch = 2.0
ys_left = [0, 10, 20]
N = len(ys_left)
ys_right = [(i - N / 2) * pitch for i in range(N)]
right_ports = [gf.Port(f"R_{i}", (0, ys_right[i]), 0.5, 180) for i in range(N)]
left_ports = [gf.Port(f"L_{i}", (-50, ys_left[i]), 0.5, 0) for i in range(N)]
left_ports.reverse()
routes = gf.routing.get_bundle(right_ports, left_ports, radius=5)
for i, route in enumerate(routes):
c.add(route.references)
c.plot()
###Output
_____no_output_____
###Markdown
**Solution** Add Sbend routes using `get_bundle_sbend`
###Code
import gdsfactory as gf
c = gf.Component("route_solution_1_get_bundle_sbend")
c1 = c << gf.components.nxn(east=3, ysize=20)
c2 = c << gf.components.nxn(west=3)
c2.move((80, 0))
routes = gf.routing.get_bundle_sbend(
c1.get_ports_list(orientation=0), c2.get_ports_list(orientation=180)
)
c.add(routes.references)
c.plot()
routes
c = gf.Component("route_solution_2_get_bundle_sbend")
route = gf.routing.get_bundle_sbend(right_ports, left_ports)
c.add(route.references)
###Output
_____no_output_____
###Markdown
get_bundle_from_waypoints
While `get_bundle` routes bundles of ports automatically, you can also use `get_bundle_from_waypoints` to manually specify the route waypoints. You can think of `get_bundle_from_waypoints` as a manual version of `get_bundle`.
###Code
import numpy as np
import gdsfactory as gf
@gf.cell
def test_connect_bundle_waypoints():
"""Connect bundle of ports with bundle of routes following a list of waypoints."""
ys1 = np.array([0, 5, 10, 15, 30, 40, 50, 60]) + 0.0
ys2 = np.array([0, 10, 20, 30, 70, 90, 110, 120]) + 500.0
N = ys1.size
ports1 = [
gf.Port(name=f"A_{i}", midpoint=(0, ys1[i]), width=0.5, orientation=0)
for i in range(N)
]
ports2 = [
gf.Port(
name=f"B_{i}",
midpoint=(500, ys2[i]),
width=0.5,
orientation=180,
)
for i in range(N)
]
p0 = ports1[0].position
c = gf.Component("B")
c.add_ports(ports1)
c.add_ports(ports2)
waypoints = [
p0 + (200, 0),
p0 + (200, -200),
p0 + (400, -200),
(p0[0] + 400, ports2[0].y),
]
routes = gf.routing.get_bundle_from_waypoints(ports1, ports2, waypoints)
lengths = {}
for i, route in enumerate(routes):
c.add(route.references)
lengths[i] = route.length
return c
cell = test_connect_bundle_waypoints()
cell.plot()
import numpy as np
import gdsfactory as gf
c = gf.Component()
r = c << gf.components.array(component=gf.components.straight, rows=2, columns=1, spacing=(0, 20))
r.movex(60)
r.movey(40)
lt = c << gf.components.straight(length=15)
lb = c << gf.components.straight(length=5)
lt.movey(5)
ports1 = lt.get_ports_list(orientation=0) + lb.get_ports_list(orientation=0)
ports2 = r.get_ports_list(orientation=180)
dx = 20
p0 = ports1[0].midpoint + (dx, 0)
p1 = (ports1[0].midpoint[0] + dx, ports2[0].midpoint[1])
waypoints = (p0, p1)
routes = gf.routing.get_bundle_from_waypoints(ports1, ports2, waypoints=waypoints)
for route in routes:
c.add(route.references)
c.plot()
###Output
_____no_output_____
###Markdown
get_bundle_from_steps
###Code
import gdsfactory as gf
c = gf.Component("get_route_from_steps_sample")
w = gf.components.array(
gf.partial(gf.components.straight, layer=(2, 0)),
rows=3,
columns=1,
spacing=(0, 50),
)
left = c << w
right = c << w
right.move((200, 100))
p1 = left.get_ports_list(orientation=0)
p2 = right.get_ports_list(orientation=180)
routes = gf.routing.get_bundle_from_steps(
p1,
p2,
steps=[{"x": 150}],
)
for route in routes:
c.add(route.references)
c.plot()
###Output
_____no_output_____
###Markdown
get_bundle_path_length_match
Sometimes you need to route a bundle of ports while keeping all the routes the same length.
###Code
import gdsfactory as gf
c = gf.Component("path_length_match_sample")
dy = 2000.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 100.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N)]
a1 = 90
a2 = a1 + 180
ports1 = [gf.Port(f"top_{i}", (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [gf.Port(f"bottom_{i}", (xs2[i], dy), 0.5, a2) for i in range(N)]
routes = gf.routing.get_bundle_path_length_match(ports1, ports2)
for route in routes:
c.add(route.references)
print(route.length)
c.plot()
###Output
_____no_output_____
###Markdown
Add extra length
You can also add some extra length to all the routes.
###Code
import gdsfactory as gf
c = gf.Component("path_length_match_sample")
dy = 2000.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 100.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N)]
a1 = 90
a2 = a1 + 180
ports1 = [gf.Port(f"top_{i}", (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [gf.Port(f"bot_{i}", (xs2[i], dy), 0.5, a2) for i in range(N)]
routes = gf.routing.get_bundle_path_length_match(ports1, ports2, extra_length=44)
for route in routes:
c.add(route.references)
print(route.length)
c.show() # Klayout show
c.plot()
###Output
_____no_output_____
###Markdown
Increase number of loops
You can also increase the number of loops.
###Code
c = gf.Component("path_length_match_sample")
dy = 2000.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 200.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N)]
a1 = 90
a2 = a1 + 180
ports1 = [gf.Port(f"top_{i}", (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [gf.Port(f"bot_{i}", (xs2[i], dy), 0.5, a2) for i in range(N)]
routes = gf.routing.get_bundle_path_length_match(
ports1, ports2, nb_loops=2, auto_widen=False
)
for route in routes:
c.add(route.references)
print(route.length)
c.plot()
# Problem, sometimes when you do path length matching you need to increase the separation
import gdsfactory as gf
c = gf.Component()
c1 = c << gf.components.straight_array(spacing=90)
c2 = c << gf.components.straight_array(spacing=5)
c2.movex(200)
c1.y = 0
c2.y = 0
routes = gf.routing.get_bundle_path_length_match(
c1.get_ports_list(orientation=0),
c2.get_ports_list(orientation=180),
end_straight_length=0,
start_straight_length=0,
separation=30,
radius=5,
)
for route in routes:
c.add(route.references)
c.plot()
# Solution: increase separation
import gdsfactory as gf
c = gf.Component()
c1 = c << gf.components.straight_array(spacing=90)
c2 = c << gf.components.straight_array(spacing=5)
c2.movex(200)
c1.y = 0
c2.y = 0
routes = gf.routing.get_bundle_path_length_match(
c1.get_ports_list(orientation=0),
c2.get_ports_list(orientation=180),
end_straight_length=0,
start_straight_length=0,
separation=80, # increased
radius=5,
)
for route in routes:
c.add(route.references)
c.plot()
###Output
_____no_output_____
###Markdown
Route to IO (Pads, grating couplers ...)
Route to electrical pads
###Code
import gdsfactory as gf
mzi = gf.components.straight_heater_metal(length=30)
mzi.plot()
import gdsfactory as gf
mzi = gf.components.mzi_phase_shifter(
length_x=30, straight_x_top=gf.components.straight_heater_metal_90_90
)
mzi_te = gf.routing.add_electrical_pads_top(component=mzi, layer=(41, 0))
mzi_te.plot()
import gdsfactory as gf
hr = gf.components.straight_heater_metal()
cc = gf.routing.add_electrical_pads_shortest(component=hr, layer=(41, 0))
cc.plot()
# Problem: Sometimes the shortest path does not work well
import gdsfactory as gf
c = gf.components.mzi_phase_shifter_top_heater_metal(length_x=70)
cc = gf.routing.add_electrical_pads_shortest(component=c, layer=(41, 0))
cc.show()
cc.plot()
# Solution: you can define the pads separately and route metal lines to them
c = gf.Component("mzi_with_pads")
c1 = c << gf.components.mzi_phase_shifter_top_heater_metal(length_x=70)
c2 = c << gf.components.pad_array(columns=2)
c2.ymin = c1.ymax + 20
c2.x = 0
c1.x = 0
c.plot()
c = gf.Component("mzi_with_pads")
c1 = c << gf.components.mzi_phase_shifter(
straight_x_top=gf.components.straight_heater_metal_90_90, length_x=70
)
c2 = c << gf.components.pad_array(columns=2)
c2.ymin = c1.ymax + 20
c2.x = 0
c1.x = 0
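# filter ports by width so that only the heater's electrical ports get routed to the pads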
ports1 = c1.get_ports_list(width=11)
ports2 = c2.get_ports_list()
routes = gf.routing.get_bundle(
ports1=ports1,
ports2=ports2,
cross_section=gf.cross_section.metal1,
width=5,
bend=gf.components.wire_corner,
)
for route in routes:
c.add(route.references)
c.plot()
###Output
_____no_output_____
###Markdown
Route to Fiber Array
Routing allows you to define routes to optical or electrical IO (grating couplers or electrical pads).
###Code
import numpy as np
import gdsfactory as gf
from gdsfactory import LAYER
from gdsfactory import Port
@gf.cell
def big_device(w=400.0, h=400.0, N=16, port_pitch=15.0, layer=LAYER.WG, wg_width=0.5):
"""big component with N ports on each side"""
component = gf.Component()
p0 = np.array((0, 0))
dx = w / 2
dy = h / 2
points = [[dx, dy], [dx, -dy], [-dx, -dy], [-dx, dy]]
component.add_polygon(points, layer=layer)
port_params = {"layer": layer, "width": wg_width}
for i in range(N):
port = Port(
name="W{}".format(i),
midpoint=p0 + (-dx, (i - N / 2) * port_pitch),
orientation=180,
**port_params,
)
component.add_port(port)
for i in range(N):
port = Port(
name="E{}".format(i),
midpoint=p0 + (dx, (i - N / 2) * port_pitch),
orientation=0,
**port_params,
)
component.add_port(port)
for i in range(N):
port = Port(
name="N{}".format(i),
midpoint=p0 + ((i - N / 2) * port_pitch, dy),
orientation=90,
**port_params,
)
component.add_port(port)
for i in range(N):
port = Port(
name="S{}".format(i),
midpoint=p0 + ((i - N / 2) * port_pitch, -dy),
orientation=-90,
**port_params,
)
component.add_port(port)
return component
component = big_device(N=10)
c = gf.routing.add_fiber_array(component=component, radius=10.0, fanout_length=60.0)
c.plot()
import gdsfactory as gf
c = gf.components.ring_double(width=0.8)
cc = gf.routing.add_fiber_array(component=c, taper_length=150)
cc.plot()
cc.pprint()
###Output
_____no_output_____
###Markdown
You can also mix and match `TE` and `TM` grating couplers
###Code
c = gf.components.mzi_phase_shifter()
gcte = gf.components.grating_coupler_te
gctm = gf.components.grating_coupler_tm
cc = gf.routing.add_fiber_array(
component=c,
optical_routing_type=2,
grating_coupler=[gctm, gcte, gctm, gcte],
radius=20,
)
cc.plot()
###Output
_____no_output_____
###Markdown
Route to fiber single
###Code
import gdsfactory as gf
c = gf.components.ring_single()
cc = gf.routing.add_fiber_single(component=c)
cc.plot()
import gdsfactory as gf
c = gf.components.ring_single()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc.plot()
c = gf.components.mmi2x2()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc.plot()
c = gf.components.mmi1x2()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False, fiber_spacing=150)
cc.plot()
c = gf.components.mmi1x2()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False, fiber_spacing=50)
cc.plot()
c = gf.components.crossing()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc.plot()
c = gf.components.cross(length=200, width=2, port_type='optical')
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc.plot()
c = gf.components.spiral()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc.plot()
###Output
_____no_output_____
###Markdown
Routing optical and RF ports
Optical and high speed RF ports have an orientation that routes need to follow to avoid sharp turns that produce reflections.
###Code
import gdsfactory as gf
gf.config.set_plot_options(show_subports=False)
gf.CONF.plotter = "matplotlib"
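# show_subports=False hides the ports of sub-references when plotting,
# and selecting the matplotlib plotter renders components inline in the notebook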
c = gf.Component()
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((100, 50))
c
###Output
_____no_output_____
###Markdown
get_route
`get_route` returns a Manhattan route between 2 ports.
###Code
gf.routing.get_route?
c = gf.Component("sample_connect")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((100, 50))
route = gf.routing.get_route(mmi1.ports["o2"], mmi2.ports["o1"])
c.add(route.references)
c
route
###Output
_____no_output_____
###Markdown
**Problem**: get_route with obstacles
Sometimes there are obstacles that connect strip does not see!
###Code
c = gf.Component("sample_problem")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((110, 50))
x = c << gf.components.cross(length=20)
x.move((135, 20))
route = gf.routing.get_route(mmi1.ports["o2"], mmi2.ports["o2"])
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
**Solutions:**
- specify the route waypoints
- specify the route steps
###Code
c = gf.Component("sample_avoid_obstacle")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((110, 50))
x = c << gf.components.cross(length=20)
x.move((135, 20))
x0 = mmi1.ports["o3"].x
y0 = mmi1.ports["o3"].y
x2 = mmi2.ports["o3"].x
y2 = mmi2.ports["o3"].y
route = gf.routing.get_route_from_waypoints(
[(x0, y0), (x2 + 40, y0), (x2 + 40, y2), (x2, y2)]
)
c.add(route.references)
c
route.length
route.ports
route.references
###Output
_____no_output_____
###Markdown
Let's say that we want to extrude the waveguide using a different waveguide cross-section, for example using a different layer.
###Code
import gdsfactory as gf
c = gf.Component("sample_connect")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((100, 50))
route = gf.routing.get_route(
mmi1.ports["o3"], mmi2.ports["o1"], cross_section=gf.cross_section.metal1
)
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
auto_widen
To reduce loss and phase errors you can also auto-widen the straight sections of waveguide routes that are longer than a certain length.
###Code
import gdsfactory as gf
c = gf.Component("sample_connect")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((200, 50))
route = gf.routing.get_route(
mmi1.ports["o3"],
mmi2.ports["o1"],
cross_section=gf.cross_section.strip,
auto_widen=True,
width_wide=2,
auto_widen_minimum_length=100,
)
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
get_route_from_waypoints
Sometimes you need to set up a route with custom waypoints. `get_route_from_waypoints` is a manual version of `get_route`.
###Code
import gdsfactory as gf
c = gf.Component("waypoints_sample")
w = gf.components.straight()
left = c << w
right = c << w
right.move((100, 80))
obstacle = gf.components.rectangle(size=(100, 10))
obstacle1 = c << obstacle
obstacle2 = c << obstacle
obstacle1.ymin = 40
obstacle2.xmin = 25
p0x, p0y = left.ports["o2"].midpoint
p1x, p1y = right.ports["o2"].midpoint
o = 10 # vertical offset to overcome bottom obstacle
ytop = 20
routes = gf.routing.get_route_from_waypoints(
[
(p0x, p0y),
(p0x + o, p0y),
(p0x + o, ytop),
(p1x + o, ytop),
(p1x + o, p1y),
(p1x, p1y),
],
)
c.add(routes.references)
c
###Output
_____no_output_____
###Markdown
get_route_from_steps
As you can see, waypoints can only change one coordinate (x or y) at a time, which makes the waypoint definition a bit redundant. You can also use `get_route_from_steps`, a more concise route definition that only specifies the new `x` or `y` (or the relative increments `dx` or `dy`) at each step. `get_route_from_steps` is a manual version of `get_route` and a more concise and convenient version of `get_route_from_waypoints`.
###Code
import gdsfactory as gf
c = gf.Component("get_route_from_steps")
w = gf.components.straight()
left = c << w
right = c << w
right.move((100, 80))
obstacle = gf.components.rectangle(size=(100, 10))
obstacle1 = c << obstacle
obstacle2 = c << obstacle
obstacle1.ymin = 40
obstacle2.xmin = 25
port1 = left.ports["o2"]
port2 = right.ports["o2"]
routes = gf.routing.get_route_from_steps(
port1=port1,
port2=port2,
steps=[
{"x": 20, "y": 0},
{"x": 20, "y": 20},
{"x": 120, "y": 20},
{"x": 120, "y": 80},
],
)
c.add(routes.references)
c
import gdsfactory as gf
c = gf.Component("get_route_from_steps_shorter_syntax")
w = gf.components.straight()
left = c << w
right = c << w
right.move((100, 80))
obstacle = gf.components.rectangle(size=(100, 10))
obstacle1 = c << obstacle
obstacle2 = c << obstacle
obstacle1.ymin = 40
obstacle2.xmin = 25
port1 = left.ports["o2"]
port2 = right.ports["o2"]
routes = gf.routing.get_route_from_steps(
port1=port1,
port2=port2,
steps=[
{"x": 20},
{"y": 20},
{"x": 120},
{"y": 80},
],
)
c.add(routes.references)
c
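# A minimal sketch (not from the original notebook), assuming, as described
# above, that each step may also be given as a relative increment dx/dy
# instead of an absolute x/y coordinate.
c_rel = gf.Component("get_route_from_steps_relative_increments")
left_rel = c_rel << w
right_rel = c_rel << w
right_rel.move((100, 80))
route_rel = gf.routing.get_route_from_steps(
    port1=left_rel.ports["o2"],
    port2=right_rel.ports["o2"],
    steps=[
        {"dx": 10},   # 10 um further along x from the start port
        {"dy": 20},   # then 20 um up
        {"dx": 100},  # then 100 um along x
        {"dy": 60},   # then 60 um up; the router completes the final connection
    ],
)
c_rel.add(route_rel.references)
c_rel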
###Output
_____no_output_____
###Markdown
get_bundle
**Problem**
See the route collisions when connecting groups of ports using the `get_route` Manhattan single-route router.
###Code
import gdsfactory as gf
xs_top = [0, 10, 20, 40, 50, 80]
pitch = 127
N = len(xs_top)
xs_bottom = [(i - N / 2) * pitch for i in range(N)]
layer = (1, 0)
top_ports = [
gf.Port(f"top_{i}", (xs_top[i], 0), 0.5, 270, layer=layer) for i in range(N)
]
bottom_ports = [
gf.Port(f"bottom_{i}", (xs_bottom[i], -100), 0.5, 90, layer=layer) for i in range(N)
]
c = gf.Component(name="connect_bundle")
for p1, p2 in zip(top_ports, bottom_ports):
route = gf.routing.get_route(p1, p2)
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
**Solution**
`get_bundle` provides you with river routing capabilities that you can use to route bundles of ports without collisions.
###Code
c = gf.Component(name="connect_bundle")
routes = gf.routing.get_bundle(top_ports, bottom_ports)
for route in routes:
c.add(route.references)
c
import gdsfactory as gf
ys_right = [0, 10, 20, 40, 50, 80]
pitch = 127.0
N = len(ys_right)
ys_left = [(i - N / 2) * pitch for i in range(N)]
layer = (1, 0)
right_ports = [
gf.Port(f"R_{i}", (0, ys_right[i]), width=0.5, orientation=180, layer=layer)
for i in range(N)
]
left_ports = [
gf.Port(
f"L_{i}".format(i), (-200, ys_left[i]), width=0.5, orientation=0, layer=layer
)
for i in range(N)
]
# you can also mess up the port order and it will sort them by default
left_ports.reverse()
c = gf.Component(name="connect_bundle2")
routes = gf.routing.get_bundle(
left_ports, right_ports, sort_ports=True, start_straight_length=100
)
for route in routes:
c.add(route.references)
c
xs_top = [0, 10, 20, 40, 50, 80]
pitch = 127.0
N = len(xs_top)
xs_bottom = [(i - N / 2) * pitch for i in range(N)]
layer = (1, 0)
top_ports = [
gf.Port(
f"top_{i}", midpoint=(xs_top[i], 0), width=0.5, orientation=270, layer=layer
)
for i in range(N)
]
bot_ports = [
gf.Port(
f"bot_{i}",
midpoint=(xs_bottom[i], -300),
width=0.5,
orientation=90,
layer=layer,
)
for i in range(N)
]
c = gf.Component(name="connect_bundle")
routes = gf.routing.get_bundle(
top_ports, bot_ports, separation=5.0, end_straight_length=100
)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
`get_bundle` can also route bundles through corners
###Code
import gdsfactory as gf
from gdsfactory.cell import cell
from gdsfactory.component import Component
from gdsfactory.port import Port
@cell
def test_connect_corner(N=6, config="A"):
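    # configs A/B and C/D use two different port geometries; A and C route
    # ports_A -> ports_B while B and D route in the opposite direction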
d = 10.0
sep = 5.0
top_cell = gf.Component(name="connect_corner")
layer = (1, 0)
if config in ["A", "B"]:
a = 100.0
ports_A_TR = [
Port(
f"A_TR_{i}",
midpoint=(d, a / 2 + i * sep),
width=0.5,
orientation=0,
layer=layer,
)
for i in range(N)
]
ports_A_TL = [
Port(
f"A_TL_{i}",
midpoint=(-d, a / 2 + i * sep),
width=0.5,
orientation=180,
layer=layer,
)
for i in range(N)
]
ports_A_BR = [
Port(
f"A_BR_{i}",
midpoint=(d, -a / 2 - i * sep),
width=0.5,
orientation=0,
layer=layer,
)
for i in range(N)
]
ports_A_BL = [
Port(
f"A_BL_{i}",
midpoint=(-d, -a / 2 - i * sep),
width=0.5,
orientation=180,
layer=layer,
)
for i in range(N)
]
ports_A = [ports_A_TR, ports_A_TL, ports_A_BR, ports_A_BL]
ports_B_TR = [
Port(
f"B_TR_{i}",
midpoint=(a / 2 + i * sep, d),
width=0.5,
orientation=90,
layer=layer,
)
for i in range(N)
]
ports_B_TL = [
Port(
f"B_TL_{i}",
midpoint=(-a / 2 - i * sep, d),
width=0.5,
orientation=90,
layer=layer,
)
for i in range(N)
]
ports_B_BR = [
Port(
f"B_BR_{i}",
midpoint=(a / 2 + i * sep, -d),
width=0.5,
orientation=270,
layer=layer,
)
for i in range(N)
]
ports_B_BL = [
Port(
f"B_BL_{i}",
midpoint=(-a / 2 - i * sep, -d),
width=0.5,
orientation=270,
layer=layer,
)
for i in range(N)
]
ports_B = [ports_B_TR, ports_B_TL, ports_B_BR, ports_B_BL]
elif config in ["C", "D"]:
a = N * sep + 2 * d
ports_A_TR = [
Port(
f"A_TR_{i}",
midpoint=(a, d + i * sep),
width=0.5,
orientation=0,
layer=layer,
)
for i in range(N)
]
ports_A_TL = [
Port(
f"A_TL_{i}",
midpoint=(-a, d + i * sep),
width=0.5,
orientation=180,
layer=layer,
)
for i in range(N)
]
ports_A_BR = [
Port(
f"A_BR_{i}",
midpoint=(a, -d - i * sep),
width=0.5,
orientation=0,
layer=layer,
)
for i in range(N)
]
ports_A_BL = [
Port(
f"A_BL_{i}",
midpoint=(-a, -d - i * sep),
width=0.5,
orientation=180,
layer=layer,
)
for i in range(N)
]
ports_A = [ports_A_TR, ports_A_TL, ports_A_BR, ports_A_BL]
ports_B_TR = [
Port(
f"B_TR_{i}",
midpoint=(d + i * sep, a),
width=0.5,
orientation=90,
layer=layer,
)
for i in range(N)
]
ports_B_TL = [
Port(
f"B_TL_{i}",
midpoint=(-d - i * sep, a),
width=0.5,
orientation=90,
layer=layer,
)
for i in range(N)
]
ports_B_BR = [
Port(
f"B_BR_{i}",
midpoint=(d + i * sep, -a),
width=0.5,
orientation=270,
layer=layer,
)
for i in range(N)
]
ports_B_BL = [
Port(
f"B_BL_{i}",
midpoint=(-d - i * sep, -a),
width=0.5,
orientation=270,
layer=layer,
)
for i in range(N)
]
ports_B = [ports_B_TR, ports_B_TL, ports_B_BR, ports_B_BL]
if config in ["A", "C"]:
for ports1, ports2 in zip(ports_A, ports_B):
routes = gf.routing.get_bundle(ports1, ports2, layer=(2, 0), radius=5)
for route in routes:
top_cell.add(route.references)
elif config in ["B", "D"]:
for ports1, ports2 in zip(ports_A, ports_B):
routes = gf.routing.get_bundle(ports2, ports1, layer=(2, 0), radius=5)
for route in routes:
top_cell.add(route.references)
return top_cell
c = test_connect_corner(config="A")
c
c = test_connect_corner(config="C")
c
@cell
def test_connect_bundle_udirect(dy=200, angle=270, layer=(1, 0)):
xs1 = [-100, -90, -80, -55, -35, 24, 0] + [200, 210, 240]
axis = "X" if angle in [0, 180] else "Y"
pitch = 10.0
N = len(xs1)
xs2 = [70 + i * pitch for i in range(N)]
if axis == "X":
ports1 = [
Port(f"top_{i}", (0, xs1[i]), 0.5, angle, layer=layer) for i in range(N)
]
ports2 = [
Port(f"bottom_{i}", (dy, xs2[i]), 0.5, angle, layer=layer) for i in range(N)
]
else:
ports1 = [
Port(f"top_{i}", (xs1[i], 0), 0.5, angle, layer=layer) for i in range(N)
]
ports2 = [
Port(f"bottom_{i}", (xs2[i], dy), 0.5, angle, layer=layer) for i in range(N)
]
top_cell = Component(name="connect_bundle_udirect")
routes = gf.routing.get_bundle(ports1, ports2, radius=10.0)
for route in routes:
top_cell.add(route.references)
return top_cell
c = test_connect_bundle_udirect()
c
@cell
def test_connect_bundle_u_indirect(dy=-200, angle=180, layer=(1, 0)):
xs1 = [-100, -90, -80, -55, -35] + [200, 210, 240]
axis = "X" if angle in [0, 180] else "Y"
pitch = 10.0
N = len(xs1)
xs2 = [50 + i * pitch for i in range(N)]
a1 = angle
a2 = a1 + 180
if axis == "X":
ports1 = [Port(f"top_{i}", (0, xs1[i]), 0.5, a1, layer=layer) for i in range(N)]
ports2 = [
Port(f"bot_{i}", (dy, xs2[i]), 0.5, a2, layer=layer) for i in range(N)
]
else:
ports1 = [Port(f"top_{i}", (xs1[i], 0), 0.5, a1, layer=layer) for i in range(N)]
ports2 = [
Port(f"bot_{i}", (xs2[i], dy), 0.5, a2, layer=layer) for i in range(N)
]
top_cell = Component("connect_bundle_u_indirect")
routes = gf.routing.get_bundle(
ports1,
ports2,
bend=gf.components.bend_euler,
radius=5,
)
for route in routes:
top_cell.add(route.references)
return top_cell
c = test_connect_bundle_u_indirect(angle=0)
c
import gdsfactory as gf
@gf.cell
def test_north_to_south(layer=(1, 0)):
dy = 200.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 10.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N // 2)]
xs2 += [400 + i * pitch for i in range(N // 2)]
a1 = 90
a2 = a1 + 180
ports1 = [gf.Port(f"top_{i}", (xs1[i], 0), 0.5, a1, layer=layer) for i in range(N)]
ports2 = [gf.Port(f"bot_{i}", (xs2[i], dy), 0.5, a2, layer=layer) for i in range(N)]
c = gf.Component()
routes = gf.routing.get_bundle(ports1, ports2, auto_widen=False)
for route in routes:
c.add(route.references)
return c
c = test_north_to_south()
c
def demo_connect_bundle():
"""combines all the connect_bundle tests"""
y = 400.0
x = 500
y0 = 900
dy = 200.0
c = gf.Component("connect_bundle")
for j, s in enumerate([-1, 1]):
for i, angle in enumerate([0, 90, 180, 270]):
ci = test_connect_bundle_u_indirect(dy=s * dy, angle=angle)
ref = ci.ref(position=(i * x, j * y))
c.add(ref)
ci = test_connect_bundle_udirect(dy=s * dy, angle=angle)
ref = ci.ref(position=(i * x, j * y + y0))
c.add(ref)
for i, config in enumerate(["A", "B", "C", "D"]):
ci = test_connect_corner(config=config)
ref = ci.ref(position=(i * x, 1700))
c.add(ref)
return c
c = demo_connect_bundle()
c
import gdsfactory as gf
c = gf.Component("route_bend_5um")
c1 = c << gf.components.mmi2x2()
c2 = c << gf.components.mmi2x2()
c2.move((100, 50))
routes = gf.routing.get_bundle(
[c1.ports["o4"], c1.ports["o3"]], [c2.ports["o1"], c2.ports["o2"]], radius=5
)
for route in routes:
c.add(route.references)
c
import gdsfactory as gf
c = gf.Component("electrical")
c1 = c << gf.components.pad()
c2 = c << gf.components.pad()
c2.move((200, 100))
routes = gf.routing.get_bundle(
[c1.ports["e3"]], [c2.ports["e1"]], cross_section=gf.cross_section.metal1
)
for route in routes:
c.add(route.references)
c
c = gf.Component("get_bundle_with_ubends_bend_from_top")
pad_array = gf.components.pad_array()
c1 = c << pad_array
c2 = c << pad_array
c2.rotate(90)
c2.movex(1000)
c2.ymax = -200
routes_bend180 = gf.routing.get_routes_bend180(
ports=c2.get_ports_list(),
radius=75 / 2,
cross_section=gf.cross_section.metal1,
bend_port1="e1",
bend_port2="e2",
)
c.add(routes_bend180.references)
routes = gf.routing.get_bundle(
c1.get_ports_list(), routes_bend180.ports, cross_section=gf.cross_section.metal1
)
for route in routes:
c.add(route.references)
c
c = gf.Component("get_bundle_with_ubends_bend_from_bottom")
pad_array = gf.components.pad_array()
c1 = c << pad_array
c2 = c << pad_array
c2.rotate(90)
c2.movex(1000)
c2.ymax = -200
routes_bend180 = gf.routing.get_routes_bend180(
ports=c2.get_ports_list(),
radius=75 / 2,
cross_section=gf.cross_section.metal1,
bend_port1="e2",
bend_port2="e1",
)
c.add(routes_bend180.references)
routes = gf.routing.get_bundle(
c1.get_ports_list(), routes_bend180.ports, cross_section=gf.cross_section.metal1
)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
**Problem**
Sometimes 90 degree routes do not have enough space for a Manhattan route.
###Code
import gdsfactory as gf
c = gf.Component("route_fail_1")
c1 = c << gf.components.nxn(east=3, ysize=20)
c2 = c << gf.components.nxn(west=3)
c2.move((80, 0))
c
import gdsfactory as gf
c = gf.Component("route_fail_1")
c1 = c << gf.components.nxn(east=3, ysize=20)
c2 = c << gf.components.nxn(west=3)
c2.move((80, 0))
routes = gf.routing.get_bundle(
c1.get_ports_list(orientation=0),
c2.get_ports_list(orientation=180),
auto_widen=False,
)
for route in routes:
c.add(route.references)
c
c = gf.Component("route_fail_2")
pitch = 2.0
ys_left = [0, 10, 20]
N = len(ys_left)
ys_right = [(i - N / 2) * pitch for i in range(N)]
layer = (1, 0)
right_ports = [
gf.Port(f"R_{i}", (0, ys_right[i]), 0.5, 180, layer=layer) for i in range(N)
]
left_ports = [
gf.Port(f"L_{i}", (-50, ys_left[i]), 0.5, 0, layer=layer) for i in range(N)
]
left_ports.reverse()
routes = gf.routing.get_bundle(right_ports, left_ports, radius=5)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
**Solution**
Add Sbend routes using `get_bundle_sbend`.
###Code
import gdsfactory as gf
c = gf.Component("route_solution_1_get_bundle_sbend")
c1 = c << gf.components.nxn(east=3, ysize=20)
c2 = c << gf.components.nxn(west=3)
c2.move((80, 0))
routes = gf.routing.get_bundle_sbend(
c1.get_ports_list(orientation=0), c2.get_ports_list(orientation=180)
)
c.add(routes.references)
c
routes
c = gf.Component("route_solution_2_get_bundle_sbend")
route = gf.routing.get_bundle_sbend(right_ports, left_ports)
c.add(route.references)
###Output
_____no_output_____
###Markdown
get_bundle_from_waypoints
While `get_bundle` routes bundles of ports automatically, you can also use `get_bundle_from_waypoints` to manually specify the route waypoints. You can think of `get_bundle_from_waypoints` as a manual version of `get_bundle`.
###Code
import numpy as np
import gdsfactory as gf
@gf.cell
def test_connect_bundle_waypoints(layer=(1, 0)):
"""Connect bundle of ports with bundle of routes following a list of waypoints."""
ys1 = np.array([0, 5, 10, 15, 30, 40, 50, 60]) + 0.0
ys2 = np.array([0, 10, 20, 30, 70, 90, 110, 120]) + 500.0
N = ys1.size
ports1 = [
gf.Port(
name=f"A_{i}", midpoint=(0, ys1[i]), width=0.5, orientation=0, layer=layer
)
for i in range(N)
]
ports2 = [
gf.Port(
name=f"B_{i}",
midpoint=(500, ys2[i]),
width=0.5,
orientation=180,
layer=layer,
)
for i in range(N)
]
p0 = ports1[0].position
c = gf.Component("B")
c.add_ports(ports1)
c.add_ports(ports2)
waypoints = [
p0 + (200, 0),
p0 + (200, -200),
p0 + (400, -200),
(p0[0] + 400, ports2[0].y),
]
routes = gf.routing.get_bundle_from_waypoints(ports1, ports2, waypoints)
lengths = {}
for i, route in enumerate(routes):
c.add(route.references)
lengths[i] = route.length
return c
cell = test_connect_bundle_waypoints()
cell
import numpy as np
import gdsfactory as gf
c = gf.Component()
r = c << gf.components.array(
component=gf.components.straight, rows=2, columns=1, spacing=(0, 20)
)
r.movex(60)
r.movey(40)
lt = c << gf.components.straight(length=15)
lb = c << gf.components.straight(length=5)
lt.movey(5)
ports1 = lt.get_ports_list(orientation=0) + lb.get_ports_list(orientation=0)
ports2 = r.get_ports_list(orientation=180)
dx = 20
p0 = ports1[0].midpoint + (dx, 0)
p1 = (ports1[0].midpoint[0] + dx, ports2[0].midpoint[1])
waypoints = (p0, p1)
routes = gf.routing.get_bundle_from_waypoints(ports1, ports2, waypoints=waypoints)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
get_bundle_from_steps
###Code
import gdsfactory as gf
c = gf.Component("get_route_from_steps_sample")
w = gf.components.array(
gf.partial(gf.components.straight, layer=(2, 0)),
rows=3,
columns=1,
spacing=(0, 50),
)
left = c << w
right = c << w
right.move((200, 100))
p1 = left.get_ports_list(orientation=0)
p2 = right.get_ports_list(orientation=180)
routes = gf.routing.get_bundle_from_steps(
p1,
p2,
steps=[{"x": 150}],
)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
get_bundle_path_length_match
Sometimes you need to route a bundle of ports while keeping all the routes the same length.
###Code
import gdsfactory as gf
c = gf.Component("path_length_match_sample")
dy = 2000.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 100.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N)]
a1 = 90
a2 = a1 + 180
layer = (1, 0)
ports1 = [gf.Port(f"top_{i}", (xs1[i], 0), 0.5, a1, layer=layer) for i in range(N)]
ports2 = [gf.Port(f"bot_{i}", (xs2[i], dy), 0.5, a2, layer=layer) for i in range(N)]
routes = gf.routing.get_bundle_path_length_match(ports1, ports2)
for route in routes:
c.add(route.references)
print(route.length)
c
###Output
_____no_output_____
###Markdown
Add extra length
You can also add some extra length to all the routes.
###Code
import gdsfactory as gf
c = gf.Component("path_length_match_sample")
dy = 2000.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 100.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N)]
a1 = 90
a2 = a1 + 180
layer = (1, 0)
ports1 = [gf.Port(f"top_{i}", (xs1[i], 0), 0.5, a1, layer=layer) for i in range(N)]
ports2 = [gf.Port(f"bot_{i}", (xs2[i], dy), 0.5, a2, layer=layer) for i in range(N)]
routes = gf.routing.get_bundle_path_length_match(ports1, ports2, extra_length=44)
for route in routes:
c.add(route.references)
print(route.length)
c
###Output
_____no_output_____
###Markdown
Increase number of loops
You can also increase the number of loops.
###Code
c = gf.Component("path_length_match_sample")
dy = 2000.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 200.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N)]
a1 = 90
a2 = a1 + 180
layer = (1, 0)
ports1 = [gf.Port(f"top_{i}", (xs1[i], 0), 0.5, a1, layer=layer) for i in range(N)]
ports2 = [gf.Port(f"bot_{i}", (xs2[i], dy), 0.5, a2, layer=layer) for i in range(N)]
routes = gf.routing.get_bundle_path_length_match(
ports1, ports2, nb_loops=2, auto_widen=False
)
for route in routes:
c.add(route.references)
print(route.length)
c
# Problem, sometimes when you do path length matching you need to increase the separation
import gdsfactory as gf
c = gf.Component()
c1 = c << gf.components.straight_array(spacing=90)
c2 = c << gf.components.straight_array(spacing=5)
c2.movex(200)
c1.y = 0
c2.y = 0
routes = gf.routing.get_bundle_path_length_match(
c1.get_ports_list(orientation=0),
c2.get_ports_list(orientation=180),
end_straight_length=0,
start_straight_length=0,
separation=30,
radius=5,
)
for route in routes:
c.add(route.references)
c
# Solution: increase separation
import gdsfactory as gf
c = gf.Component()
c1 = c << gf.components.straight_array(spacing=90)
c2 = c << gf.components.straight_array(spacing=5)
c2.movex(200)
c1.y = 0
c2.y = 0
routes = gf.routing.get_bundle_path_length_match(
c1.get_ports_list(orientation=0),
c2.get_ports_list(orientation=180),
end_straight_length=0,
start_straight_length=0,
separation=80, # increased
radius=5,
)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
Route to IO (Pads, grating couplers ...)
Route to electrical pads
###Code
import gdsfactory as gf
mzi = gf.components.straight_heater_metal(length=30)
mzi
import gdsfactory as gf
mzi = gf.components.mzi_phase_shifter(
length_x=30, straight_x_top=gf.components.straight_heater_metal_90_90
)
mzi_te = gf.routing.add_electrical_pads_top(component=mzi, layer=(41, 0))
mzi_te
import gdsfactory as gf
hr = gf.components.straight_heater_metal()
cc = gf.routing.add_electrical_pads_shortest(component=hr, layer=(41, 0))
cc
# Problem: Sometimes the shortest path does not work well
import gdsfactory as gf
c = gf.components.mzi_phase_shifter_top_heater_metal(length_x=70)
cc = gf.routing.add_electrical_pads_shortest(component=c, layer=(41, 0))
cc
# Solution: you can define the pads separately and route metal lines to them
c = gf.Component("mzi_with_pads")
c1 = c << gf.components.mzi_phase_shifter_top_heater_metal(length_x=70)
c2 = c << gf.components.pad_array(columns=2)
c2.ymin = c1.ymax + 20
c2.x = 0
c1.x = 0
c
c = gf.Component("mzi_with_pads")
c1 = c << gf.components.mzi_phase_shifter(
straight_x_top=gf.components.straight_heater_metal_90_90, length_x=70 # 150
)
c2 = c << gf.components.pad_array(columns=2, orientation=270)
c2.ymin = c1.ymax + 30
c2.x = 0
c1.x = 0
ports1 = c1.get_ports_list(port_type="electrical")
ports2 = c2.get_ports_list()
routes = gf.routing.get_bundle(
ports1=ports1,
ports2=ports2,
cross_section=gf.cross_section.metal1,
width=10,
bend=gf.components.wire_corner,
)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
Route to Fiber Array
Routing allows you to define routes to optical or electrical IO (grating couplers or electrical pads).
###Code
import numpy as np
import gdsfactory as gf
from gdsfactory import LAYER
from gdsfactory import Port
@gf.cell
def big_device(w=400.0, h=400.0, N=16, port_pitch=15.0, layer=LAYER.WG, wg_width=0.5):
"""big component with N ports on each side"""
component = gf.Component()
p0 = np.array((0, 0))
dx = w / 2
dy = h / 2
points = [[dx, dy], [dx, -dy], [-dx, -dy], [-dx, dy]]
component.add_polygon(points, layer=layer)
port_params = {"layer": layer, "width": wg_width}
for i in range(N):
port = Port(
name=f"W{i}",
midpoint=p0 + (-dx, (i - N / 2) * port_pitch),
orientation=180,
**port_params,
)
component.add_port(port)
for i in range(N):
port = Port(
name=f"E{i}",
midpoint=p0 + (dx, (i - N / 2) * port_pitch),
orientation=0,
**port_params,
)
component.add_port(port)
for i in range(N):
port = Port(
name=f"N{i}",
midpoint=p0 + ((i - N / 2) * port_pitch, dy),
orientation=90,
**port_params,
)
component.add_port(port)
for i in range(N):
port = Port(
name=f"S{i}",
midpoint=p0 + ((i - N / 2) * port_pitch, -dy),
orientation=-90,
**port_params,
)
component.add_port(port)
return component
component = big_device(N=10)
c = gf.routing.add_fiber_array(component=component, radius=10.0, fanout_length=60.0)
c
import gdsfactory as gf
c = gf.components.ring_double(width=0.8)
cc = gf.routing.add_fiber_array(component=c, taper_length=150)
cc
cc.pprint()
###Output
_____no_output_____
###Markdown
You can also mix and match `TE` and `TM` grating couplers
###Code
c = gf.components.mzi_phase_shifter()
gcte = gf.components.grating_coupler_te
gctm = gf.components.grating_coupler_tm
cc = gf.routing.add_fiber_array(
component=c,
optical_routing_type=2,
grating_coupler=[gctm, gcte, gctm, gcte],
radius=20,
)
cc
###Output
_____no_output_____
###Markdown
Route to fiber single
###Code
import gdsfactory as gf
c = gf.components.ring_single()
cc = gf.routing.add_fiber_single(component=c)
cc
import gdsfactory as gf
c = gf.components.ring_single()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
c = gf.components.mmi2x2()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
c = gf.components.mmi1x2()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False, fiber_spacing=150)
cc
c = gf.components.mmi1x2()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False, fiber_spacing=50)
cc
c = gf.components.crossing()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
c = gf.components.cross(length=200, width=2, port_type="optical")
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
c = gf.components.spiral()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
###Output
_____no_output_____
###Markdown
Routing
Routing allows you to route waveguides between component ports.
###Code
import gdsfactory as gf
gf.config.set_plot_options(show_subports=False)
c = gf.Component()
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((100, 50))
c
###Output
_____no_output_____
###Markdown
get_route
`get_route` returns a Manhattan route between 2 ports.
###Code
gf.routing.get_route?
c = gf.Component("sample_connect")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((100, 50))
route = gf.routing.get_route(mmi1.ports["o2"], mmi2.ports["o1"])
c.add(route.references)
c
route
###Output
_____no_output_____
###Markdown
**Connect strip: Problem**
Sometimes there are obstacles that connect strip does not see!
###Code
c = gf.Component("sample_problem")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((110, 50))
x = c << gf.components.cross(length=20)
x.move((135, 20))
route = gf.routing.get_route(mmi1.ports["o2"], mmi2.ports["o2"])
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
**Solution: Connect strip waypoints**
You can also specify the points along the route.
###Code
gf.routing.get_route_waypoints?
c = gf.Component("sample_avoid_obstacle")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((110, 50))
x = c << gf.components.cross(length=20)
x.move((135, 20))
x0 = mmi1.ports["o3"].x
y0 = mmi1.ports["o3"].y
x2 = mmi2.ports["o3"].x
y2 = mmi2.ports["o3"].y
route = gf.routing.get_route_from_waypoints(
[(x0, y0), (x2 + 40, y0), (x2 + 40, y2), (x2, y2)]
)
c.add(route.references)
c
route.length
route.ports
route.references
###Output
_____no_output_____
###Markdown
Let's say that we want to extrude the waveguide using a different waveguide cross-section, for example using a different layer.
###Code
import gdsfactory as gf
c = gf.Component("sample_connect")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((100, 50))
route = gf.routing.get_route(
mmi1.ports["o3"], mmi2.ports["o1"], cross_section=gf.cross_section.metal1
)
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
auto-widen
To reduce loss and phase errors you can also auto-widen the straight sections of waveguide routes that are longer than a certain length.
###Code
import gdsfactory as gf
c = gf.Component("sample_connect")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((200, 50))
route = gf.routing.get_route(
mmi1.ports["o3"],
mmi2.ports["o1"],
cross_section=gf.cross_section.strip,
auto_widen=True,
width_wide=2,
auto_widen_minimum_length=100,
)
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
get_route_from_waypoints
Sometimes you need to set up a route with custom waypoints. `get_route_from_waypoints` is a manual version of `get_route`.
###Code
import gdsfactory as gf
c = gf.Component("waypoints_sample")
w = gf.components.straight()
left = c << w
right = c << w
right.move((100, 80))
obstacle = gf.components.rectangle(size=(100, 10))
obstacle1 = c << obstacle
obstacle2 = c << obstacle
obstacle1.ymin = 40
obstacle2.xmin = 25
p0x, p0y = left.ports["o2"].midpoint
p1x, p1y = right.ports["o2"].midpoint
o = 10 # vertical offset to overcome bottom obstacle
ytop = 20
routes = gf.routing.get_route_from_waypoints(
[
(p0x, p0y),
(p0x + o, p0y),
(p0x + o, ytop),
(p1x + o, ytop),
(p1x + o, p1y),
(p1x, p1y),
],
)
c.add(routes.references)
c
###Output
_____no_output_____
###Markdown
get_route_from_steps
As you can see, waypoints can only change one coordinate (x or y) at a time, which makes the waypoint definition a bit redundant. You can also use `get_route_from_steps`, a more concise route definition that only specifies the new `x` or `y` (or the relative increments `dx` or `dy`) at each step. `get_route_from_steps` is a manual version of `get_route` and a more concise and convenient version of `get_route_from_waypoints`.
###Code
import gdsfactory as gf
c = gf.Component("get_route_from_steps")
w = gf.components.straight()
left = c << w
right = c << w
right.move((100, 80))
obstacle = gf.components.rectangle(size=(100, 10))
obstacle1 = c << obstacle
obstacle2 = c << obstacle
obstacle1.ymin = 40
obstacle2.xmin = 25
port1 = left.ports["o2"]
port2 = right.ports["o2"]
routes = gf.routing.get_route_from_steps(
port1=port1,
port2=port2,
steps=[
{"x": 20, "y": 0},
{"x": 20, "y": 20},
{"x": 120, "y": 20},
{"x": 120, "y": 80},
],
)
c.add(routes.references)
c
import gdsfactory as gf
c = gf.Component("get_route_from_steps_shorter_syntax")
w = gf.components.straight()
left = c << w
right = c << w
right.move((100, 80))
obstacle = gf.components.rectangle(size=(100, 10))
obstacle1 = c << obstacle
obstacle2 = c << obstacle
obstacle1.ymin = 40
obstacle2.xmin = 25
port1 = left.ports["o2"]
port2 = right.ports["o2"]
routes = gf.routing.get_route_from_steps(
port1=port1,
port2=port2,
steps=[
{"x": 20},
{"y": 20},
{"x": 120},
{"y": 80},
],
)
c.add(routes.references)
c
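# A minimal sketch (not from the original notebook), assuming, as described
# above, that absolute coordinates (x/y) can be mixed with relative
# increments (dx/dy) in the same list of steps.
c_rel = gf.Component("get_route_from_steps_dx_dy")
left_rel = c_rel << w
right_rel = c_rel << w
right_rel.move((100, 80))
route_rel = gf.routing.get_route_from_steps(
    port1=left_rel.ports["o2"],
    port2=right_rel.ports["o2"],
    steps=[{"x": 20}, {"dy": 20}, {"dx": 100}, {"y": 80}],
)
c_rel.add(route_rel.references)
c_rel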
###Output
_____no_output_____
###Markdown
get_bundle
**Problem**
See the route collisions when connecting groups of ports using a regular Manhattan single-route router such as `get_route`.
###Code
import gdsfactory as gf
xs_top = [0, 10, 20, 40, 50, 80]
pitch = 127
N = len(xs_top)
xs_bottom = [(i - N / 2) * pitch for i in range(N)]
top_ports = [gf.Port(f"top_{i}", (xs_top[i], 0), 0.5, 270) for i in range(N)]
bottom_ports = [gf.Port(f"bottom_{i}", (xs_bottom[i], -100), 0.5, 90) for i in range(N)]
c = gf.Component(name="connect_bundle")
for p1, p2 in zip(top_ports, bottom_ports):
route = gf.routing.get_route(p1, p2)
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
**Solution**
`get_bundle` provides you with river routing capabilities that you can use to route bundles of ports without collisions.
###Code
c = gf.Component(name="connect_bundle")
routes = gf.routing.get_bundle(top_ports, bottom_ports)
for route in routes:
c.add(route.references)
c
import gdsfactory as gf
ys_right = [0, 10, 20, 40, 50, 80]
pitch = 127.0
N = len(ys_right)
ys_left = [(i - N / 2) * pitch for i in range(N)]
right_ports = [gf.Port(f"R_{i}", (0, ys_right[i]), 0.5, 180) for i in range(N)]
left_ports = [gf.Port(f"L_{i}".format(i), (-400, ys_left[i]), 0.5, 0) for i in range(N)]
# you can also mess up the port order and it will sort them by default
left_ports.reverse()
c = gf.Component(name="connect_bundle2")
routes = gf.routing.get_bundle(right_ports, left_ports, sort_ports=True)
for route in routes:
c.add(route.references)
c
xs_top = [0, 10, 20, 40, 50, 80]
pitch = 127.0
N = len(xs_top)
xs_bottom = [(i - N / 2) * pitch for i in range(N)]
top_ports = [gf.Port(f"top_{i}", (xs_top[i], 0), 0.5, 270) for i in range(N)]
bottom_ports = [gf.Port(f"bottom_{i}", (xs_bottom[i], -400), 0.5, 90) for i in range(N)]
c = gf.Component(name="connect_bundle")
routes = gf.routing.get_bundle(top_ports, bottom_ports, separation=5.)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
`get_bundle` can also route bundles through corners
###Code
import gdsfactory as gf
from gdsfactory.cell import cell
from gdsfactory.component import Component
from gdsfactory.port import Port
@cell
def test_connect_corner(N=6, config="A"):
d = 10.0
sep = 5.0
top_cell = gf.Component(name="connect_corner")
if config in ["A", "B"]:
a = 100.0
ports_A_TR = [
Port("A_TR_{}".format(i), (d, a / 2 + i * sep), 0.5, 0) for i in range(N)
]
ports_A_TL = [
Port("A_TL_{}".format(i), (-d, a / 2 + i * sep), 0.5, 180) for i in range(N)
]
ports_A_BR = [
Port("A_BR_{}".format(i), (d, -a / 2 - i * sep), 0.5, 0) for i in range(N)
]
ports_A_BL = [
Port("A_BL_{}".format(i), (-d, -a / 2 - i * sep), 0.5, 180)
for i in range(N)
]
ports_A = [ports_A_TR, ports_A_TL, ports_A_BR, ports_A_BL]
ports_B_TR = [
Port("B_TR_{}".format(i), (a / 2 + i * sep, d), 0.5, 90) for i in range(N)
]
ports_B_TL = [
Port("B_TL_{}".format(i), (-a / 2 - i * sep, d), 0.5, 90) for i in range(N)
]
ports_B_BR = [
Port("B_BR_{}".format(i), (a / 2 + i * sep, -d), 0.5, 270) for i in range(N)
]
ports_B_BL = [
Port("B_BL_{}".format(i), (-a / 2 - i * sep, -d), 0.5, 270)
for i in range(N)
]
ports_B = [ports_B_TR, ports_B_TL, ports_B_BR, ports_B_BL]
elif config in ["C", "D"]:
a = N * sep + 2 * d
ports_A_TR = [
Port("A_TR_{}".format(i), (a, d + i * sep), 0.5, 0) for i in range(N)
]
ports_A_TL = [
Port("A_TL_{}".format(i), (-a, d + i * sep), 0.5, 180) for i in range(N)
]
ports_A_BR = [
Port("A_BR_{}".format(i), (a, -d - i * sep), 0.5, 0) for i in range(N)
]
ports_A_BL = [
Port("A_BL_{}".format(i), (-a, -d - i * sep), 0.5, 180) for i in range(N)
]
ports_A = [ports_A_TR, ports_A_TL, ports_A_BR, ports_A_BL]
ports_B_TR = [
Port("B_TR_{}".format(i), (d + i * sep, a), 0.5, 90) for i in range(N)
]
ports_B_TL = [
Port("B_TL_{}".format(i), (-d - i * sep, a), 0.5, 90) for i in range(N)
]
ports_B_BR = [
Port("B_BR_{}".format(i), (d + i * sep, -a), 0.5, 270) for i in range(N)
]
ports_B_BL = [
Port("B_BL_{}".format(i), (-d - i * sep, -a), 0.5, 270) for i in range(N)
]
ports_B = [ports_B_TR, ports_B_TL, ports_B_BR, ports_B_BL]
if config in ["A", "C"]:
for ports1, ports2 in zip(ports_A, ports_B):
routes = gf.routing.get_bundle(ports1, ports2, layer=(2, 0), radius=5)
for route in routes:
top_cell.add(route.references)
elif config in ["B", "D"]:
for ports1, ports2 in zip(ports_A, ports_B):
routes = gf.routing.get_bundle(ports2, ports1, layer=(2, 0), radius=5)
for route in routes:
top_cell.add(route.references)
return top_cell
c = test_connect_corner(config="A")
c
c = test_connect_corner(config="C")
c
@cell
def test_connect_bundle_udirect(dy=200, angle=270):
xs1 = [-100, -90, -80, -55, -35, 24, 0] + [200, 210, 240]
axis = "X" if angle in [0, 180] else "Y"
pitch = 10.0
N = len(xs1)
xs2 = [70 + i * pitch for i in range(N)]
if axis == "X":
ports1 = [Port(f"top_{i}", (0, xs1[i]), 0.5, angle) for i in range(N)]
ports2 = [Port(f"bottom_{i}", (dy, xs2[i]), 0.5, angle) for i in range(N)]
else:
ports1 = [Port(f"top_{i}", (xs1[i], 0), 0.5, angle) for i in range(N)]
ports2 = [Port(f"bottom_{i}", (xs2[i], dy), 0.5, angle) for i in range(N)]
top_cell = Component(name="connect_bundle_udirect")
routes = gf.routing.get_bundle(ports1, ports2, radius=10.0)
for route in routes:
top_cell.add(route.references)
return top_cell
c = test_connect_bundle_udirect()
c
@cell
def test_connect_bundle_u_indirect(dy=-200, angle=180):
xs1 = [-100, -90, -80, -55, -35] + [200, 210, 240]
axis = "X" if angle in [0, 180] else "Y"
pitch = 10.0
N = len(xs1)
xs2 = [50 + i * pitch for i in range(N)]
a1 = angle
a2 = a1 + 180
if axis == "X":
ports1 = [Port("top_{}".format(i), (0, xs1[i]), 0.5, a1) for i in range(N)]
ports2 = [Port("bottom_{}".format(i), (dy, xs2[i]), 0.5, a2) for i in range(N)]
else:
ports1 = [Port("top_{}".format(i), (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [Port("bottom_{}".format(i), (xs2[i], dy), 0.5, a2) for i in range(N)]
top_cell = Component("connect_bundle_u_indirect")
routes = gf.routing.get_bundle(
ports1, ports2, bend=gf.components.bend_euler, radius=10
)
for route in routes:
top_cell.add(route.references)
return top_cell
c = test_connect_bundle_u_indirect(angle=0)
c
import gdsfactory as gf
@gf.cell
def test_north_to_south():
dy = 200.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 10.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N // 2)]
xs2 += [400 + i * pitch for i in range(N // 2)]
a1 = 90
a2 = a1 + 180
ports1 = [gf.Port("top_{}".format(i), (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [gf.Port("bottom_{}".format(i), (xs2[i], dy), 0.5, a2) for i in range(N)]
c = gf.Component()
routes = gf.routing.get_bundle(ports1, ports2, auto_widen=False)
for route in routes:
c.add(route.references)
return c
c = test_north_to_south()
c
def demo_connect_bundle():
"""combines all the connect_bundle tests"""
y = 400.0
x = 500
y0 = 900
dy = 200.0
c = Component("connect_bundle")
for j, s in enumerate([-1, 1]):
for i, angle in enumerate([0, 90, 180, 270]):
_cmp = test_connect_bundle_u_indirect(dy=s * dy, angle=angle)
_cmp_ref = _cmp.ref(position=(i * x, j * y))
c.add(_cmp_ref)
_cmp = test_connect_bundle_udirect(dy=s * dy, angle=angle)
_cmp_ref = _cmp.ref(position=(i * x, j * y + y0))
c.add(_cmp_ref)
for i, config in enumerate(["A", "B", "C", "D"]):
_cmp = test_connect_corner(config=config)
_cmp_ref = _cmp.ref(position=(i * x, 1700))
c.add(_cmp_ref)
# _cmp = test_facing_ports()
# _cmp_ref = _cmp.ref(position=(800, 1820))
# c.add(_cmp_ref)
return c
c = demo_connect_bundle()
c
import gdsfactory as gf
c = gf.Component("route_bend_5um")
c1 = c << gf.components.mmi2x2()
c2 = c << gf.components.mmi2x2()
c2.move((100, 50))
routes = gf.routing.get_bundle(
[c1.ports["o4"], c1.ports["o3"]], [c2.ports["o1"], c2.ports["o2"]], radius=5
)
for route in routes:
c.add(route.references)
c
import gdsfactory as gf
c = gf.Component("electrical")
c1 = c << gf.components.pad()
c2 = c << gf.components.pad()
c2.move((200, 100))
routes = gf.routing.get_bundle(
[c1.ports["e3"]], [c2.ports["e1"]], cross_section=gf.cross_section.metal1
)
for route in routes:
c.add(route.references)
c
c = gf.Component("get_bundle_with_ubends_bend_from_top")
pad_array = gf.components.pad_array()
c1 = c << pad_array
c2 = c << pad_array
c2.rotate(90)
c2.movex(1000)
c2.ymax = -200
routes_bend180 = gf.routing.get_routes_bend180(
ports=c2.get_ports_list(),
radius=75 / 2,
cross_section=gf.cross_section.metal1,
bend_port1="e1",
bend_port2="e2",
)
c.add(routes_bend180.references)
routes = gf.routing.get_bundle(
c1.get_ports_list(), routes_bend180.ports, cross_section=gf.cross_section.metal1
)
for route in routes:
c.add(route.references)
c
c = gf.Component("get_bundle_with_ubends_bend_from_bottom")
pad_array = gf.components.pad_array()
c1 = c << pad_array
c2 = c << pad_array
c2.rotate(90)
c2.movex(1000)
c2.ymax = -200
routes_bend180 = gf.routing.get_routes_bend180(
ports=c2.get_ports_list(),
radius=75 / 2,
cross_section=gf.cross_section.metal1,
bend_port1="e2",
bend_port2="e1",
)
c.add(routes_bend180.references)
routes = gf.routing.get_bundle(
c1.get_ports_list(), routes_bend180.ports, cross_section=gf.cross_section.metal1
)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
**Problem**
Sometimes 90 degree routes do not have enough space for a Manhattan route.
###Code
import gdsfactory as gf
c = gf.Component("route_fail_1")
c1 = c << gf.components.nxn(east=3, ysize=20)
c2 = c << gf.components.nxn(west=3)
c2.move((80, 0))
c
import gdsfactory as gf
c = gf.Component("route_fail_1")
c1 = c << gf.components.nxn(east=3, ysize=20)
c2 = c << gf.components.nxn(west=3)
c2.move((80, 0))
routes = gf.routing.get_bundle(
c1.get_ports_list(orientation=0),
c2.get_ports_list(orientation=180),
auto_widen=False,
)
for route in routes:
c.add(route.references)
c
c = gf.Component("route_fail_2")
pitch = 2.0
ys_left = [0, 10, 20]
N = len(ys_left)
ys_right = [(i - N / 2) * pitch for i in range(N)]
right_ports = [gf.Port(f"R_{i}", (0, ys_right[i]), 0.5, 180) for i in range(N)]
left_ports = [gf.Port(f"L_{i}", (-50, ys_left[i]), 0.5, 0) for i in range(N)]
left_ports.reverse()
routes = gf.routing.get_bundle(right_ports, left_ports, radius=5)
for i, route in enumerate(routes):
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
**Solution**
Add Sbend routes using `get_bundle_sbend`.
###Code
import gdsfactory as gf
c = gf.Component("route_solution_1_get_bundle_sbend")
c1 = c << gf.components.nxn(east=3, ysize=20)
c2 = c << gf.components.nxn(west=3)
c2.move((80, 0))
routes = gf.routing.get_bundle_sbend(
c1.get_ports_list(orientation=0), c2.get_ports_list(orientation=180)
)
c.add(routes.references)
c
routes
c = gf.Component("route_solution_2_get_bundle_sbend")
route = gf.routing.get_bundle_sbend(right_ports, left_ports)
c.add(route.references)
###Output
_____no_output_____
###Markdown
get_bundle_from_waypoints
While `get_bundle` routes bundles of ports automatically, you can also use `get_bundle_from_waypoints` to manually specify the route waypoints. You can think of `get_bundle_from_waypoints` as a manual version of `get_bundle`.
###Code
import numpy as np
import gdsfactory as gf
@gf.cell
def test_connect_bundle_waypoints():
"""Connect bundle of ports with bundle of routes following a list of waypoints."""
ys1 = np.array([0, 5, 10, 15, 30, 40, 50, 60]) + 0.0
ys2 = np.array([0, 10, 20, 30, 70, 90, 110, 120]) + 500.0
N = ys1.size
ports1 = [
Port(name=f"A_{i}", midpoint=(0, ys1[i]), width=0.5, orientation=0)
for i in range(N)
]
ports2 = [
Port(
name=f"B_{i}",
midpoint=(500, ys2[i]),
width=0.5,
orientation=180,
)
for i in range(N)
]
p0 = ports1[0].position
c = gf.Component("B")
c.add_ports(ports1)
c.add_ports(ports2)
waypoints = [
p0 + (200, 0),
p0 + (200, -200),
p0 + (400, -200),
(p0[0] + 400, ports2[0].y),
]
routes = gf.routing.get_bundle_from_waypoints(ports1, ports2, waypoints)
lengths = {}
for i, route in enumerate(routes):
c.add(route.references)
lengths[i] = route.length
return c
cell = test_connect_bundle_waypoints()
cell
import numpy as np
import gdsfactory as gf
c = gf.Component()
r = c << gf.c.array(component=gf.c.straight, rows=2, columns=1, spacing=(0, 20))
r.movex(60)
r.movey(40)
lt = c << gf.c.straight(length=15)
lb = c << gf.c.straight(length=5)
lt.movey(5)
ports1 = lt.get_ports_list(orientation=0) + lb.get_ports_list(orientation=0)
ports2 = r.get_ports_list(orientation=180)
dx = 20
p0 = ports1[0].midpoint + (dx, 0)
p1 = (ports1[0].midpoint[0] + dx, ports2[0].midpoint[1])
waypoints = (p0, p1)
routes = gf.routing.get_bundle_from_waypoints(ports1, ports2, waypoints=waypoints)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
get_bundle_path_length_match
Sometimes you need to route a bundle of ports while keeping all the routes the same length.
###Code
import gdsfactory as gf
c = gf.Component("path_length_match_sample")
dy = 2000.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 100.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N)]
a1 = 90
a2 = a1 + 180
ports1 = [gf.Port(f"top_{i}", (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [gf.Port(f"bottom_{i}", (xs2[i], dy), 0.5, a2) for i in range(N)]
routes = gf.routing.get_bundle_path_length_match(ports1, ports2)
for route in routes:
c.add(route.references)
print(route.length)
c
###Output
_____no_output_____
###Markdown
Add extra length
You can also add some extra length to all the routes.
###Code
import gdsfactory as gf
c = gf.Component("path_length_match_sample")
dy = 2000.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 100.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N)]
a1 = 90
a2 = a1 + 180
ports1 = [gf.Port(f"top_{i}", (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [gf.Port(f"bot_{i}", (xs2[i], dy), 0.5, a2) for i in range(N)]
routes = gf.routing.get_bundle_path_length_match(ports1, ports2, extra_length=44)
for route in routes:
c.add(route.references)
print(route.length)
c
###Output
_____no_output_____
###Markdown
Increase number of loops
You can also increase the number of loops.
###Code
c = gf.Component("path_length_match_sample")
dy = 2000.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 200.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N)]
a1 = 90
a2 = a1 + 180
ports1 = [gf.Port(f"top_{i}", (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [gf.Port(f"bot_{i}", (xs2[i], dy), 0.5, a2) for i in range(N)]
routes = gf.routing.get_bundle_path_length_match(
ports1, ports2, nb_loops=2, auto_widen=False
)
for route in routes:
c.add(route.references)
print(route.length)
c
# Problem, sometimes when you do path length matching you need to increase the separation
import gdsfactory as gf
c = gf.Component()
c1 = c << gf.c.straight_array(spacing=90)
c2 = c << gf.c.straight_array(spacing=5)
c2.movex(200)
c1.y = 0
c2.y = 0
routes = gf.routing.get_bundle_path_length_match(
c1.get_ports_list(orientation=0),
c2.get_ports_list(orientation=180),
end_straight_offset=0,
start_straight=0,
separation=30,
radius=5,
)
for route in routes:
c.add(route.references)
c
# Solution: increase separation
import gdsfactory as gf
c = gf.Component()
c1 = c << gf.c.straight_array(spacing=90)
c2 = c << gf.c.straight_array(spacing=5)
c2.movex(200)
c1.y = 0
c2.y = 0
routes = gf.routing.get_bundle_path_length_match(
c1.get_ports_list(orientation=0),
c2.get_ports_list(orientation=180),
end_straight_offset=0,
start_straight=0,
separation=80, # increased
radius=5,
)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
Route to IO (Pads, grating couplers ...) Route to electrical pads
###Code
import gdsfactory as gf
mzi = gf.components.straight_heater_metal(length=30)
mzi
import gdsfactory as gf
mzi = gf.components.mzi_phase_shifter(
length_x=30, straight_x_top=gf.c.straight_heater_metal_90_90
)
gf.routing.add_electrical_pads_top(component=mzi)
import gdsfactory as gf
hr = gf.components.straight_heater_metal()
cc = gf.routing.add_electrical_pads_shortest(component=hr)
cc
# Problem: Sometimes the shortest path does not work well
import gdsfactory as gf
c = gf.components.mzi_phase_shifter(length_x=250)
cc = gf.routing.add_electrical_pads_shortest(component=c)
cc
# Solution: you can define the pads separately and route metal lines to them
c = gf.Component("mzi_with_pads")
c1 = c << gf.components.mzi_phase_shifter(length_x=70)
c2 = c << gf.components.pad_array(columns=2)
c2.ymin = c1.ymax + 20
c2.x = 0
c1.x = 0
c
c = gf.Component("mzi_with_pads")
c1 = c << gf.components.mzi_phase_shifter(
straight_x_top=gf.c.straight_heater_metal_90_90, length_x=70
)
c2 = c << gf.components.pad_array(columns=2)
c2.ymin = c1.ymax + 20
c2.x = 0
c1.x = 0
ports1 = c1.get_ports_list(width=11)
ports2 = c2.get_ports_list()
routes = gf.routing.get_bundle(
ports1=ports1,
ports2=ports2,
cross_section=gf.cross_section.metal1,
width=5,
bend=gf.c.wire_corner,
)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
Route to Fiber ArrayRouting allows you to define routes to optical or electrical IO (grating couplers or electrical pads)
###Code
import numpy as np
import gdsfactory as gf
from gdsfactory import LAYER
from gdsfactory import Port
@gf.cell
def big_device(w=400.0, h=400.0, N=16, port_pitch=15.0, layer=LAYER.WG, wg_width=0.5):
"""big component with N ports on each side"""
component = gf.Component()
p0 = np.array((0, 0))
dx = w / 2
dy = h / 2
points = [[dx, dy], [dx, -dy], [-dx, -dy], [-dx, dy]]
component.add_polygon(points, layer=layer)
port_params = {"layer": layer, "width": wg_width}
for i in range(N):
port = Port(
name="W{}".format(i),
midpoint=p0 + (-dx, (i - N / 2) * port_pitch),
orientation=180,
**port_params,
)
component.add_port(port)
for i in range(N):
port = Port(
name="E{}".format(i),
midpoint=p0 + (dx, (i - N / 2) * port_pitch),
orientation=0,
**port_params,
)
component.add_port(port)
for i in range(N):
port = Port(
name="N{}".format(i),
midpoint=p0 + ((i - N / 2) * port_pitch, dy),
orientation=90,
**port_params,
)
component.add_port(port)
for i in range(N):
port = Port(
name="S{}".format(i),
midpoint=p0 + ((i - N / 2) * port_pitch, -dy),
orientation=-90,
**port_params,
)
component.add_port(port)
return component
component = big_device(N=10)
c = gf.routing.add_fiber_array(component=component, radius=10.0, fanout_length=60.0)
c
import gdsfactory as gf
c = gf.components.ring_double(width=0.8)
cc = gf.routing.add_fiber_array(component=c, taper_length=150)
cc
cc.pprint
###Output
_____no_output_____
###Markdown
You can also mix and match `TE` and `TM` grating couplers
###Code
c = gf.components.mzi_phase_shifter()
gcte = gf.components.grating_coupler_te
gctm = gf.components.grating_coupler_tm
cc = gf.routing.add_fiber_array(
component=c,
optical_routing_type=2,
grating_coupler=[gctm, gcte, gctm, gcte],
radius=20,
)
cc
###Output
_____no_output_____
###Markdown
Route to fiber single
###Code
import gdsfactory as gf
c = gf.components.ring_single()
cc = gf.routing.add_fiber_single(component=c)
cc
import gdsfactory as gf
c = gf.components.ring_single()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
c = gf.components.mmi2x2()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
c = gf.components.mmi1x2()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False, fiber_spacing=150)
cc
c = gf.components.mmi1x2()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False, fiber_spacing=50)
cc
c = gf.components.crossing()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
c = gf.components.cross(length=200, width=2)
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
c = gf.components.spiral()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
###Output
_____no_output_____
###Markdown
RoutingRouting allows you to route waveguides between component ports
###Code
import gdsfactory as gf
gf.config.set_plot_options(show_subports=False)
c = gf.Component()
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((100, 50))
c
###Output
_____no_output_____
###Markdown
get_route`get_route` returns a Manhattan route between 2 ports
###Code
gf.routing.get_route?
c = gf.Component("sample_connect")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((100, 50))
route = gf.routing.get_route(mmi1.ports["o2"], mmi2.ports["o1"])
c.add(route.references)
c
route
###Output
_____no_output_____
###Markdown
**Connect strip: Problem**Sometimes there are obstacles that the connect strip router does not see!
###Code
c = gf.Component("sample_problem")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((110, 50))
x = c << gf.components.cross(length=20)
x.move((135, 20))
route = gf.routing.get_route(mmi1.ports["o2"], mmi2.ports["o2"])
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
**Solution: Connect strip waypoints**You can also specify the points along the route
###Code
gf.routing.get_route_waypoints?
c = gf.Component("sample_avoid_obstacle")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((110, 50))
x = c << gf.components.cross(length=20)
x.move((135, 20))
x0 = mmi1.ports["o3"].x
y0 = mmi1.ports["o3"].y
x2 = mmi2.ports["o3"].x
y2 = mmi2.ports["o3"].y
route = gf.routing.get_route_from_waypoints(
[(x0, y0), (x2 + 40, y0), (x2 + 40, y2), (x2, y2)]
)
c.add(route.references)
c
route.length
route.ports
route.references
###Output
_____no_output_____
###Markdown
Let's say we want to extrude the waveguide using a different cross-section, for example on a different layer
###Code
import gdsfactory as gf
c = gf.Component("sample_connect")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((100, 50))
route = gf.routing.get_route(
mmi1.ports["o3"], mmi2.ports["o1"], cross_section=gf.cross_section.metal1
)
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
auto-widenTo reduce loss and phase errors you can also auto-widen the straight sections of waveguide routes that are longer than a certain length.
###Code
import gdsfactory as gf
c = gf.Component("sample_connect")
mmi1 = c << gf.components.mmi1x2()
mmi2 = c << gf.components.mmi1x2()
mmi2.move((200, 50))
route = gf.routing.get_route(
mmi1.ports["o3"],
mmi2.ports["o1"],
cross_section=gf.cross_section.strip,
auto_widen=True,
width_wide=2,
auto_widen_minimum_length=100,
)
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
get_route_from_waypointsSometimes you need to set up a route with custom waypoints. `get_route_from_waypoints` is a manual version of `get_route`
###Code
import gdsfactory as gf
c = gf.Component("waypoints_sample")
w = gf.components.straight()
left = c << w
right = c << w
right.move((100, 80))
obstacle = gf.components.rectangle(size=(100, 10))
obstacle1 = c << obstacle
obstacle2 = c << obstacle
obstacle1.ymin = 40
obstacle2.xmin = 25
p0x, p0y = left.ports["o2"].midpoint
p1x, p1y = right.ports["o2"].midpoint
o = 10 # vertical offset to overcome bottom obstacle
ytop = 20
routes = gf.routing.get_route_from_waypoints(
[
(p0x, p0y),
(p0x + o, p0y),
(p0x + o, ytop),
(p1x + o, ytop),
(p1x + o, p1y),
(p1x, p1y),
],
)
c.add(routes.references)
c
###Output
_____no_output_____
###Markdown
get_route_from_stepsAs you can see, waypoints can only change one coordinate (x or y) at a time, which makes the waypoint definition a bit redundant. You can instead use `get_route_from_steps`, a more concise route definition that supports defining only the new `x` or `y` of each step, or increments `dx` or `dy`. `get_route_from_steps` is a manual version of `get_route` and a more concise and convenient version of `get_route_from_waypoints`.
###Code
import gdsfactory as gf
c = gf.Component("get_route_from_steps")
w = gf.components.straight()
left = c << w
right = c << w
right.move((100, 80))
obstacle = gf.components.rectangle(size=(100, 10))
obstacle1 = c << obstacle
obstacle2 = c << obstacle
obstacle1.ymin = 40
obstacle2.xmin = 25
port1 = left.ports["o2"]
port2 = right.ports["o2"]
routes = gf.routing.get_route_from_steps(
port1=port1,
port2=port2,
steps=[
{"x": 20, "y": 0},
{"x": 20, "y": 20},
{"x": 120, "y": 20},
{"x": 120, "y": 80},
],
)
c.add(routes.references)
c
import gdsfactory as gf
c = gf.Component("get_route_from_steps_shorter_syntax")
w = gf.components.straight()
left = c << w
right = c << w
right.move((100, 80))
obstacle = gf.components.rectangle(size=(100, 10))
obstacle1 = c << obstacle
obstacle2 = c << obstacle
obstacle1.ymin = 40
obstacle2.xmin = 25
port1 = left.ports["o2"]
port2 = right.ports["o2"]
routes = gf.routing.get_route_from_steps(
port1=port1,
port2=port2,
steps=[
{"x": 20},
{"y": 20},
{"x": 120},
{"y": 80},
],
)
c.add(routes.references)
c
###Output
_____no_output_____
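###Markdown
The steps can also be given as relative increments `dx`/`dy` measured from the previous waypoint, as mentioned above. The cell below is a sketch that reproduces the route from the previous cells using increments (same two straights and obstacles as before).
###Code
import gdsfactory as gf
c = gf.Component("get_route_from_steps_relative_increments")
w = gf.components.straight()
left = c << w
right = c << w
right.move((100, 80))
obstacle = gf.components.rectangle(size=(100, 10))
obstacle1 = c << obstacle
obstacle2 = c << obstacle
obstacle1.ymin = 40
obstacle2.xmin = 25
port1 = left.ports["o2"]
port2 = right.ports["o2"]
routes = gf.routing.get_route_from_steps(
    port1=port1,
    port2=port2,
    steps=[
        {"dx": 10},  # 10 um to the right of the start port
        {"dy": 20},  # then 20 um up, clearing the bottom obstacle
        {"dx": 100},  # across, past both obstacles
        {"dy": 60},  # up to the height of the destination port
    ],
)
c.add(routes.references)
c
###Output
_____no_output_____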
###Markdown
get_bundle**Problem**See the route collisions when connecting groups of ports with a regular Manhattan single-route router such as `get_route`
###Code
import gdsfactory as gf
xs_top = [0, 10, 20, 40, 50, 80]
pitch = 127
N = len(xs_top)
xs_bottom = [(i - N / 2) * pitch for i in range(N)]
top_ports = [gf.Port(f"top_{i}", (xs_top[i], 0), 0.5, 270) for i in range(N)]
bottom_ports = [gf.Port(f"bottom_{i}", (xs_bottom[i], -100), 0.5, 90) for i in range(N)]
c = gf.Component(name="connect_bundle")
for p1, p2 in zip(top_ports, bottom_ports):
route = gf.routing.get_route(p1, p2)
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
**Solution**`get_bundle` provides river routing capabilities that you can use to route bundles of ports without collisions
###Code
c = gf.Component(name="connect_bundle")
routes = gf.routing.get_bundle(top_ports, bottom_ports)
for route in routes:
c.add(route.references)
c
import gdsfactory as gf
ys_right = [0, 10, 20, 40, 50, 80]
pitch = 127.0
N = len(ys_right)
ys_left = [(i - N / 2) * pitch for i in range(N)]
right_ports = [gf.Port(f"R_{i}", (0, ys_right[i]), 0.5, 180) for i in range(N)]
left_ports = [gf.Port(f"L_{i}", (-200, ys_left[i]), 0.5, 0) for i in range(N)]
# you can also mess up the port order and it will sort them by default
left_ports.reverse()
c = gf.Component(name="connect_bundle2")
routes = gf.routing.get_bundle(
left_ports, right_ports, sort_ports=True, start_straight_length=100
)
for route in routes:
c.add(route.references)
c
xs_top = [0, 10, 20, 40, 50, 80]
pitch = 127.0
N = len(xs_top)
xs_bottom = [(i - N / 2) * pitch for i in range(N)]
top_ports = [gf.Port(f"top_{i}", (xs_top[i], 0), 0.5, 270) for i in range(N)]
bot_ports = [gf.Port(f"bot_{i}", (xs_bottom[i], -300), 0.5, 90) for i in range(N)]
c = gf.Component(name="connect_bundle")
routes = gf.routing.get_bundle(
top_ports, bot_ports, separation=5.0, end_straight_length=100
)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
`get_bundle` can also route bundles through corners
###Code
import gdsfactory as gf
from gdsfactory.cell import cell
from gdsfactory.component import Component
from gdsfactory.port import Port
@cell
def test_connect_corner(N=6, config="A"):
d = 10.0
sep = 5.0
top_cell = gf.Component(name="connect_corner")
if config in ["A", "B"]:
a = 100.0
ports_A_TR = [
Port("A_TR_{}".format(i), (d, a / 2 + i * sep), 0.5, 0) for i in range(N)
]
ports_A_TL = [
Port("A_TL_{}".format(i), (-d, a / 2 + i * sep), 0.5, 180) for i in range(N)
]
ports_A_BR = [
Port("A_BR_{}".format(i), (d, -a / 2 - i * sep), 0.5, 0) for i in range(N)
]
ports_A_BL = [
Port("A_BL_{}".format(i), (-d, -a / 2 - i * sep), 0.5, 180)
for i in range(N)
]
ports_A = [ports_A_TR, ports_A_TL, ports_A_BR, ports_A_BL]
ports_B_TR = [
Port("B_TR_{}".format(i), (a / 2 + i * sep, d), 0.5, 90) for i in range(N)
]
ports_B_TL = [
Port("B_TL_{}".format(i), (-a / 2 - i * sep, d), 0.5, 90) for i in range(N)
]
ports_B_BR = [
Port("B_BR_{}".format(i), (a / 2 + i * sep, -d), 0.5, 270) for i in range(N)
]
ports_B_BL = [
Port("B_BL_{}".format(i), (-a / 2 - i * sep, -d), 0.5, 270)
for i in range(N)
]
ports_B = [ports_B_TR, ports_B_TL, ports_B_BR, ports_B_BL]
elif config in ["C", "D"]:
a = N * sep + 2 * d
ports_A_TR = [
Port("A_TR_{}".format(i), (a, d + i * sep), 0.5, 0) for i in range(N)
]
ports_A_TL = [
Port("A_TL_{}".format(i), (-a, d + i * sep), 0.5, 180) for i in range(N)
]
ports_A_BR = [
Port("A_BR_{}".format(i), (a, -d - i * sep), 0.5, 0) for i in range(N)
]
ports_A_BL = [
Port("A_BL_{}".format(i), (-a, -d - i * sep), 0.5, 180) for i in range(N)
]
ports_A = [ports_A_TR, ports_A_TL, ports_A_BR, ports_A_BL]
ports_B_TR = [
Port("B_TR_{}".format(i), (d + i * sep, a), 0.5, 90) for i in range(N)
]
ports_B_TL = [
Port("B_TL_{}".format(i), (-d - i * sep, a), 0.5, 90) for i in range(N)
]
ports_B_BR = [
Port("B_BR_{}".format(i), (d + i * sep, -a), 0.5, 270) for i in range(N)
]
ports_B_BL = [
Port("B_BL_{}".format(i), (-d - i * sep, -a), 0.5, 270) for i in range(N)
]
ports_B = [ports_B_TR, ports_B_TL, ports_B_BR, ports_B_BL]
if config in ["A", "C"]:
for ports1, ports2 in zip(ports_A, ports_B):
routes = gf.routing.get_bundle(ports1, ports2, layer=(2, 0), radius=5)
for route in routes:
top_cell.add(route.references)
elif config in ["B", "D"]:
for ports1, ports2 in zip(ports_A, ports_B):
routes = gf.routing.get_bundle(ports2, ports1, layer=(2, 0), radius=5)
for route in routes:
top_cell.add(route.references)
return top_cell
c = test_connect_corner(config="A")
c
c = test_connect_corner(config="C")
c
@cell
def test_connect_bundle_udirect(dy=200, angle=270):
xs1 = [-100, -90, -80, -55, -35, 24, 0] + [200, 210, 240]
axis = "X" if angle in [0, 180] else "Y"
pitch = 10.0
N = len(xs1)
xs2 = [70 + i * pitch for i in range(N)]
if axis == "X":
ports1 = [Port(f"top_{i}", (0, xs1[i]), 0.5, angle) for i in range(N)]
ports2 = [Port(f"bottom_{i}", (dy, xs2[i]), 0.5, angle) for i in range(N)]
else:
ports1 = [Port(f"top_{i}", (xs1[i], 0), 0.5, angle) for i in range(N)]
ports2 = [Port(f"bottom_{i}", (xs2[i], dy), 0.5, angle) for i in range(N)]
top_cell = Component(name="connect_bundle_udirect")
routes = gf.routing.get_bundle(ports1, ports2, radius=10.0)
for route in routes:
top_cell.add(route.references)
return top_cell
c = test_connect_bundle_udirect()
c
@cell
def test_connect_bundle_u_indirect(dy=-200, angle=180):
xs1 = [-100, -90, -80, -55, -35] + [200, 210, 240]
axis = "X" if angle in [0, 180] else "Y"
pitch = 10.0
N = len(xs1)
xs2 = [50 + i * pitch for i in range(N)]
a1 = angle
a2 = a1 + 180
if axis == "X":
ports1 = [Port("top_{}".format(i), (0, xs1[i]), 0.5, a1) for i in range(N)]
ports2 = [Port("bot_{}".format(i), (dy, xs2[i]), 0.5, a2) for i in range(N)]
else:
ports1 = [Port("top_{}".format(i), (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [Port("bot_{}".format(i), (xs2[i], dy), 0.5, a2) for i in range(N)]
top_cell = Component("connect_bundle_u_indirect")
routes = gf.routing.get_bundle(
ports1,
ports2,
bend=gf.components.bend_euler,
radius=10,
)
for route in routes:
top_cell.add(route.references)
return top_cell
c = test_connect_bundle_u_indirect(angle=0)
c
import gdsfactory as gf
@gf.cell
def test_north_to_south():
dy = 200.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 10.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N // 2)]
xs2 += [400 + i * pitch for i in range(N // 2)]
a1 = 90
a2 = a1 + 180
ports1 = [gf.Port("top_{}".format(i), (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [gf.Port("bot_{}".format(i), (xs2[i], dy), 0.5, a2) for i in range(N)]
c = gf.Component()
routes = gf.routing.get_bundle(ports1, ports2, auto_widen=False)
for route in routes:
c.add(route.references)
return c
c = test_north_to_south()
c
def demo_connect_bundle():
"""combines all the connect_bundle tests"""
y = 400.0
x = 500
y0 = 900
dy = 200.0
c = Component("connect_bundle")
for j, s in enumerate([-1, 1]):
for i, angle in enumerate([0, 90, 180, 270]):
_cmp = test_connect_bundle_u_indirect(dy=s * dy, angle=angle)
_cmp_ref = _cmp.ref(position=(i * x, j * y))
c.add(_cmp_ref)
_cmp = test_connect_bundle_udirect(dy=s * dy, angle=angle)
_cmp_ref = _cmp.ref(position=(i * x, j * y + y0))
c.add(_cmp_ref)
for i, config in enumerate(["A", "B", "C", "D"]):
_cmp = test_connect_corner(config=config)
_cmp_ref = _cmp.ref(position=(i * x, 1700))
c.add(_cmp_ref)
# _cmp = test_facing_ports()
# _cmp_ref = _cmp.ref(position=(800, 1820))
# c.add(_cmp_ref)
return c
c = demo_connect_bundle()
c
import gdsfactory as gf
c = gf.Component("route_bend_5um")
c1 = c << gf.components.mmi2x2()
c2 = c << gf.components.mmi2x2()
c2.move((100, 50))
routes = gf.routing.get_bundle(
[c1.ports["o4"], c1.ports["o3"]], [c2.ports["o1"], c2.ports["o2"]], radius=5
)
for route in routes:
c.add(route.references)
c
import gdsfactory as gf
c = gf.Component("electrical")
c1 = c << gf.components.pad()
c2 = c << gf.components.pad()
c2.move((200, 100))
routes = gf.routing.get_bundle(
[c1.ports["e3"]], [c2.ports["e1"]], cross_section=gf.cross_section.metal1
)
for route in routes:
c.add(route.references)
c
c = gf.Component("get_bundle_with_ubends_bend_from_top")
pad_array = gf.components.pad_array()
c1 = c << pad_array
c2 = c << pad_array
c2.rotate(90)
c2.movex(1000)
c2.ymax = -200
routes_bend180 = gf.routing.get_routes_bend180(
ports=c2.get_ports_list(),
radius=75 / 2,
cross_section=gf.cross_section.metal1,
bend_port1="e1",
bend_port2="e2",
)
c.add(routes_bend180.references)
routes = gf.routing.get_bundle(
c1.get_ports_list(), routes_bend180.ports, cross_section=gf.cross_section.metal1
)
for route in routes:
c.add(route.references)
c
c = gf.Component("get_bundle_with_ubends_bend_from_bottom")
pad_array = gf.components.pad_array()
c1 = c << pad_array
c2 = c << pad_array
c2.rotate(90)
c2.movex(1000)
c2.ymax = -200
routes_bend180 = gf.routing.get_routes_bend180(
ports=c2.get_ports_list(),
radius=75 / 2,
cross_section=gf.cross_section.metal1,
bend_port1="e2",
bend_port2="e1",
)
c.add(routes_bend180.references)
routes = gf.routing.get_bundle(
c1.get_ports_list(), routes_bend180.ports, cross_section=gf.cross_section.metal1
)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
**Problem**Sometimes 90-degree routes do not have enough space for a Manhattan route
###Code
import gdsfactory as gf
c = gf.Component("route_fail_1")
c1 = c << gf.components.nxn(east=3, ysize=20)
c2 = c << gf.components.nxn(west=3)
c2.move((80, 0))
c
import gdsfactory as gf
c = gf.Component("route_fail_1")
c1 = c << gf.components.nxn(east=3, ysize=20)
c2 = c << gf.components.nxn(west=3)
c2.move((80, 0))
routes = gf.routing.get_bundle(
c1.get_ports_list(orientation=0),
c2.get_ports_list(orientation=180),
auto_widen=False,
)
for route in routes:
c.add(route.references)
c
c = gf.Component("route_fail_2")
pitch = 2.0
ys_left = [0, 10, 20]
N = len(ys_left)
ys_right = [(i - N / 2) * pitch for i in range(N)]
right_ports = [gf.Port(f"R_{i}", (0, ys_right[i]), 0.5, 180) for i in range(N)]
left_ports = [gf.Port(f"L_{i}", (-50, ys_left[i]), 0.5, 0) for i in range(N)]
left_ports.reverse()
routes = gf.routing.get_bundle(right_ports, left_ports, radius=5)
for i, route in enumerate(routes):
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
**Solution**Add Sbend routes using `get_bundle_sbend`
###Code
import gdsfactory as gf
c = gf.Component("route_solution_1_get_bundle_sbend")
c1 = c << gf.components.nxn(east=3, ysize=20)
c2 = c << gf.components.nxn(west=3)
c2.move((80, 0))
routes = gf.routing.get_bundle_sbend(
c1.get_ports_list(orientation=0), c2.get_ports_list(orientation=180)
)
c.add(routes.references)
c
routes
c = gf.Component("route_solution_2_get_bundle_sbend")
route = gf.routing.get_bundle_sbend(right_ports, left_ports)
c.add(route.references)
###Output
_____no_output_____
###Markdown
get_bundle_from_waypointsWhile `get_bundle` routes bundles of ports automatically, you can also use `get_bundle_from_waypoints` to manually specify the route waypoints. You can think of `get_bundle_from_waypoints` as a manual version of `get_bundle`
###Code
import numpy as np
import gdsfactory as gf
@gf.cell
def test_connect_bundle_waypoints():
"""Connect bundle of ports with bundle of routes following a list of waypoints."""
ys1 = np.array([0, 5, 10, 15, 30, 40, 50, 60]) + 0.0
ys2 = np.array([0, 10, 20, 30, 70, 90, 110, 120]) + 500.0
N = ys1.size
ports1 = [
gf.Port(name=f"A_{i}", midpoint=(0, ys1[i]), width=0.5, orientation=0)
for i in range(N)
]
ports2 = [
gf.Port(
name=f"B_{i}",
midpoint=(500, ys2[i]),
width=0.5,
orientation=180,
)
for i in range(N)
]
p0 = ports1[0].position
c = gf.Component("B")
c.add_ports(ports1)
c.add_ports(ports2)
waypoints = [
p0 + (200, 0),
p0 + (200, -200),
p0 + (400, -200),
(p0[0] + 400, ports2[0].y),
]
routes = gf.routing.get_bundle_from_waypoints(ports1, ports2, waypoints)
lengths = {}
for i, route in enumerate(routes):
c.add(route.references)
lengths[i] = route.length
return c
cell = test_connect_bundle_waypoints()
cell
import numpy as np
import gdsfactory as gf
c = gf.Component()
r = c << gf.c.array(component=gf.c.straight, rows=2, columns=1, spacing=(0, 20))
r.movex(60)
r.movey(40)
lt = c << gf.c.straight(length=15)
lb = c << gf.c.straight(length=5)
lt.movey(5)
ports1 = lt.get_ports_list(orientation=0) + lb.get_ports_list(orientation=0)
ports2 = r.get_ports_list(orientation=180)
dx = 20
p0 = ports1[0].midpoint + (dx, 0)
p1 = (ports1[0].midpoint[0] + dx, ports2[0].midpoint[1])
waypoints = (p0, p1)
routes = gf.routing.get_bundle_from_waypoints(ports1, ports2, waypoints=waypoints)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
get_bundle_from_steps
###Code
import gdsfactory as gf
c = gf.Component("get_route_from_steps_sample")
w = gf.components.array(
gf.partial(gf.c.straight, layer=(2, 0)),
rows=3,
columns=1,
spacing=(0, 50),
)
left = c << w
right = c << w
right.move((200, 100))
p1 = left.get_ports_list(orientation=0)
p2 = right.get_ports_list(orientation=180)
routes = gf.routing.get_bundle_from_steps(
p1,
p2,
steps=[{"x": 150}],
)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
get_bundle_path_length_matchSometimes you need to route a bundle of ports while keeping the same length across all the routes
###Code
import gdsfactory as gf
c = gf.Component("path_length_match_sample")
dy = 2000.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 100.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N)]
a1 = 90
a2 = a1 + 180
ports1 = [gf.Port(f"top_{i}", (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [gf.Port(f"bottom_{i}", (xs2[i], dy), 0.5, a2) for i in range(N)]
routes = gf.routing.get_bundle_path_length_match(ports1, ports2)
for route in routes:
c.add(route.references)
print(route.length)
c
###Output
_____no_output_____
###Markdown
Add extra lengthYou can also add some extra length to all the routes
###Code
import gdsfactory as gf
c = gf.Component("path_length_match_sample")
dy = 2000.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 100.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N)]
a1 = 90
a2 = a1 + 180
ports1 = [gf.Port(f"top_{i}", (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [gf.Port(f"bot_{i}", (xs2[i], dy), 0.5, a2) for i in range(N)]
routes = gf.routing.get_bundle_path_length_match(ports1, ports2, extra_length=44)
for route in routes:
c.add(route.references)
print(route.length)
c
###Output
_____no_output_____
###Markdown
increase number of loopsYou can also increase the number of loops
###Code
c = gf.Component("path_length_match_sample")
dy = 2000.0
xs1 = [-500, -300, -100, -90, -80, -55, -35, 200, 210, 240, 500, 650]
pitch = 200.0
N = len(xs1)
xs2 = [-20 + i * pitch for i in range(N)]
a1 = 90
a2 = a1 + 180
ports1 = [gf.Port(f"top_{i}", (xs1[i], 0), 0.5, a1) for i in range(N)]
ports2 = [gf.Port(f"bot_{i}", (xs2[i], dy), 0.5, a2) for i in range(N)]
routes = gf.routing.get_bundle_path_length_match(
ports1, ports2, nb_loops=2, auto_widen=False
)
for route in routes:
c.add(route.references)
print(route.length)
c
# Problem, sometimes when you do path length matching you need to increase the separation
import gdsfactory as gf
c = gf.Component()
c1 = c << gf.c.straight_array(spacing=90)
c2 = c << gf.c.straight_array(spacing=5)
c2.movex(200)
c1.y = 0
c2.y = 0
routes = gf.routing.get_bundle_path_length_match(
c1.get_ports_list(orientation=0),
c2.get_ports_list(orientation=180),
end_straight_length=0,
start_straight_length=0,
separation=30,
radius=5,
)
for route in routes:
c.add(route.references)
c
# Solution: increase separation
import gdsfactory as gf
c = gf.Component()
c1 = c << gf.c.straight_array(spacing=90)
c2 = c << gf.c.straight_array(spacing=5)
c2.movex(200)
c1.y = 0
c2.y = 0
routes = gf.routing.get_bundle_path_length_match(
c1.get_ports_list(orientation=0),
c2.get_ports_list(orientation=180),
end_straight_length=0,
start_straight_length=0,
separation=80, # increased
radius=5,
)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
Route to IO (Pads, grating couplers ...) Route to electrical pads
###Code
import gdsfactory as gf
mzi = gf.components.straight_heater_metal(length=30)
mzi
import gdsfactory as gf
mzi = gf.components.mzi_phase_shifter(
length_x=30, straight_x_top=gf.c.straight_heater_metal_90_90
)
gf.routing.add_electrical_pads_top(component=mzi)
import gdsfactory as gf
hr = gf.components.straight_heater_metal()
cc = gf.routing.add_electrical_pads_shortest(component=hr)
cc
# Problem: Sometimes the shortest path does not work well
import gdsfactory as gf
c = gf.components.mzi_phase_shifter(length_x=250)
cc = gf.routing.add_electrical_pads_shortest(component=c)
cc
# Solution: you can define the pads separately and route metal lines to them
c = gf.Component("mzi_with_pads")
c1 = c << gf.components.mzi_phase_shifter(length_x=70)
c2 = c << gf.components.pad_array(columns=2)
c2.ymin = c1.ymax + 20
c2.x = 0
c1.x = 0
c
c = gf.Component("mzi_with_pads")
c1 = c << gf.components.mzi_phase_shifter(
straight_x_top=gf.c.straight_heater_metal_90_90, length_x=70
)
c2 = c << gf.components.pad_array(columns=2)
c2.ymin = c1.ymax + 20
c2.x = 0
c1.x = 0
ports1 = c1.get_ports_list(width=11)
ports2 = c2.get_ports_list()
routes = gf.routing.get_bundle(
ports1=ports1,
ports2=ports2,
cross_section=gf.cross_section.metal1,
width=5,
bend=gf.c.wire_corner,
)
for route in routes:
c.add(route.references)
c
###Output
_____no_output_____
###Markdown
Route to Fiber ArrayRouting allows you to define routes to optical or electrical IO (grating couplers or electrical pads)
###Code
import numpy as np
import gdsfactory as gf
from gdsfactory import LAYER
from gdsfactory import Port
@gf.cell
def big_device(w=400.0, h=400.0, N=16, port_pitch=15.0, layer=LAYER.WG, wg_width=0.5):
"""big component with N ports on each side"""
component = gf.Component()
p0 = np.array((0, 0))
dx = w / 2
dy = h / 2
points = [[dx, dy], [dx, -dy], [-dx, -dy], [-dx, dy]]
component.add_polygon(points, layer=layer)
port_params = {"layer": layer, "width": wg_width}
for i in range(N):
port = Port(
name="W{}".format(i),
midpoint=p0 + (-dx, (i - N / 2) * port_pitch),
orientation=180,
**port_params,
)
component.add_port(port)
for i in range(N):
port = Port(
name="E{}".format(i),
midpoint=p0 + (dx, (i - N / 2) * port_pitch),
orientation=0,
**port_params,
)
component.add_port(port)
for i in range(N):
port = Port(
name="N{}".format(i),
midpoint=p0 + ((i - N / 2) * port_pitch, dy),
orientation=90,
**port_params,
)
component.add_port(port)
for i in range(N):
port = Port(
name="S{}".format(i),
midpoint=p0 + ((i - N / 2) * port_pitch, -dy),
orientation=-90,
**port_params,
)
component.add_port(port)
return component
component = big_device(N=10)
c = gf.routing.add_fiber_array(component=component, radius=10.0, fanout_length=60.0)
c
import gdsfactory as gf
c = gf.components.ring_double(width=0.8)
cc = gf.routing.add_fiber_array(component=c, taper_length=150)
cc
cc.pprint()
###Output
_____no_output_____
###Markdown
You can also mix and match `TE` and `TM` grating couplers
###Code
c = gf.components.mzi_phase_shifter()
gcte = gf.components.grating_coupler_te
gctm = gf.components.grating_coupler_tm
cc = gf.routing.add_fiber_array(
component=c,
optical_routing_type=2,
grating_coupler=[gctm, gcte, gctm, gcte],
radius=20,
)
cc
###Output
_____no_output_____
###Markdown
Route to fiber single
###Code
import gdsfactory as gf
c = gf.components.ring_single()
cc = gf.routing.add_fiber_single(component=c)
cc
import gdsfactory as gf
c = gf.components.ring_single()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
c = gf.components.mmi2x2()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
c = gf.components.mmi1x2()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False, fiber_spacing=150)
cc
c = gf.components.mmi1x2()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False, fiber_spacing=50)
cc
c = gf.components.crossing()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
c = gf.components.cross(length=200, width=2, port_type='optical')
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
c = gf.components.spiral()
cc = gf.routing.add_fiber_single(component=c, with_loopback=False)
cc
###Output
_____no_output_____ |
book/tutorials/lidar/ASO_data_tutorial.ipynb | ###Markdown
Lidar remote sensing of snow Intro ASOSee an overview of ASO operations [here](https://www.cbrfc.noaa.gov/report/AWRA2019_Pres3.pdf)ASO set-up: Riegl Q1560 dual laser scanning lidar 1064nm (image credit ASO) ASO data collection (image credit ASO)Laser reflections together create a 3D point cloud of the Earth's surface (image credit ASO) Point clouds can be classified and processed using specialised software such as [pdal](https://pdal.io/). We won't cover that here, because ASO has already processed all the snow depth datasets for us. ASO rasterises the point clouds to produce snow depth maps as rasters. Point clouds can also be rasterised to create canopy height models (CHMs) or digital terrain models (DTMs). These formats allow us to analyse the information more easily. ASO states "Snow depths in exposed areas are within 1-2 cm at the 50 m scale". However, point-to-point variability can exist between manual and lidar measurements due to:- vegetation, particularly shrubs- geo-location accuracy of manual measurements- combination of both in forests Basic data inspection Import the packages needed for this tutorial
###Code
# general purpose data manipulation and analysis
import numpy as np
# packages for working with raster datasets
import rasterio
from rasterio.mask import mask
from rasterio.plot import show
from rasterio.enums import Resampling
import xarray # allows us to work with raster data as arrays
# packages for working with geospatial data
import geopandas as gpd
import pycrs
from shapely.geometry import box
# import packages for viewing the data
import matplotlib.pyplot as pyplot
#define paths
import os
CURDIR = os.path.dirname(os.path.realpath("__file__"))
# matplotlib functionality
%matplotlib inline
# %matplotlib notebook
###Output
_____no_output_____
###Markdown
The command *%matplotlib notebook* allows you to plot data interactively, which makes things way more interesting. If you want, you can test to see if this works for you. If not, go back to *%matplotlib inline* Data overview and visualisation
###Code
# open the raster
fparts_SD_GM_3m = "data/ASO_GrandMesa_2020Feb1-2_snowdepth_3m_clipped.tif"
SD_GM_3m = rasterio.open(fparts_SD_GM_3m)
# check the CRS - is it consistent with other datasets we want to use?
SD_GM_3m.crs
###Output
_____no_output_____
###Markdown
ASO datasets are in EPSG: 32612. However, you might find other SnowEx datasets are in EPSG:26912. This can be changed using reproject in rioxarray. See [here](https://corteva.github.io/rioxarray/stable/examples/reproject.html) for an example. For now, we'll stay in 32612. With the above raster open, you can look at the different attributes of the raster. For example, the cellsize:
###Code
SD_GM_3m.res
###Output
_____no_output_____
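###Markdown
If you did need to match another dataset's CRS, a minimal sketch using rioxarray's reproject would look like the commented lines below (this assumes rioxarray is installed; we don't use the reprojected raster in this tutorial).
###Code
# minimal sketch (assumption: rioxarray is available in your environment)
# import rioxarray
# sd = rioxarray.open_rasterio(fparts_SD_GM_3m)
# sd_nad83 = sd.rio.reproject("EPSG:26912")
# print(sd_nad83.rio.crs)
###Output
_____no_output_____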
###Markdown
The raster boundaries...
###Code
SD_GM_3m.bounds
###Output
_____no_output_____
###Markdown
And the dimensions. Note this is in pixels, not in meters. To get the total size, you can multiply the dimensions by the resolution.
###Code
print(SD_GM_3m.width,SD_GM_3m.height)
###Output
_____no_output_____
###Markdown
rasterio.open allows you to quickly look at the data...
###Code
fig1, ax1 = pyplot.subplots(1, figsize=(5, 5))
show((SD_GM_3m, 1), cmap='Blues', interpolation='none', ax=ax1)
###Output
_____no_output_____
###Markdown
While this can allow us to very quickly visualise the data, it doesn't show us a lot about the data itself. We can also open the data from the geotiff as a data array, giving us more flexibility in the data analysis.
###Code
# First, close the rasterio file
SD_GM_3m.close()
###Output
_____no_output_____
###Markdown
Now we can re-open the data as an array and visualise it using pyplot.
###Code
dat_array_3m = xarray.open_rasterio(fparts_SD_GM_3m)
# plot the raster
fig2, ax2 = pyplot.subplots()
pos2 = ax2.imshow(dat_array_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5);
ax2.set_title('GM Snow Depth 3m')
fig2.colorbar(pos2, ax=ax2)
###Output
_____no_output_____
###Markdown
We set the figure to display the colorbar with a maximum of 1.5m. But you can see in the north of the area there are some very deep snow depths.
###Code
np.nanmax(dat_array_3m)
###Output
_____no_output_____
###Markdown
Optional - use the interactive plot to pan and zoom in and out to have a look at the snow depth distribution across the Grand Mesa. This should work for you if you run your notebook locally. We can clip the larger domain to smaller areas to better visualise the snow depth distributions in the areas we're interested in. Depending on the field site, you could look at distributions in different slope classes, vegetation classes (bush vs forest vs open) or aspect classes. For now, we'll focus on a forest-dominated area and use the canopy height model (CHM) to clip the snow depth data. Canopy height modelsWe will use an existing raster of a canopy height model (CHM) to clip our snow depth map. This CHM covers an area investigated by [Mazzotti et al. 2019](https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019WR024898). You can also access the data [here](https://www.envidat.ch//metadata/als-based-snow-depth).
###Code
# load the chm
chm = xarray.open_rasterio('data/CHM_20160926GMb_700x700_EPSG32612.tif')
# check the crs is the same as the snow depth data
chm.crs
###Output
_____no_output_____
###Markdown
Don't forget that if the coordinate systems in your datasets don't match then you will need to transform one of them. You can change the coordinate systems using the links above. (Note, I've already transformed this dataset from EPSG 32613). Let's have a quick look at the chm data as an xarray.
###Code
chm
###Output
_____no_output_____
###Markdown
You can see the resolution of the CHM is 0.5m, which is much higher than the snow depth dataset. Can you think why we would want the CHM at such a high resolution? There are two main reasons:- resolution high enough to represent individual trees- maximum canopy height can be misrepresented in lower resolution CHMs We can extract simple statistics from the dataset the same way you would with a numpy array. For example:
###Code
chm.data.max()
# plot the CHM, setting the maximum color value to the maximum canopy height in the dataset
fig3, ax3 = pyplot.subplots()
pos3 = ax3.imshow(chm.data[0,:,:], cmap='viridis', vmin=0, vmax=chm.data.max())
ax3.set_title('CHM Grand Mesa B')
fig3.colorbar(pos3, ax=ax3)
###Output
_____no_output_____
###Markdown
If you play around and zoom in, you can see individual trees. If you wanted to investigate the role of canopy structure at the individual tree level on snow depth distribution, this is the level of detail you would want to work with. Clipping rasters Let's clip the snow depth dataset to the same boundaries as the CHM. One way to clip the snow depth raster is to use another raster as an area of interest. We will use the CHM as a mask, following [this](https://automating-gis-processes.github.io/CSC18/lessons/L6/clipping-raster.html) tutorial. You can also use shapefiles (see [here](https://rasterio.readthedocs.io/en/latest/topics/masking-by-shapefile.html) for another example) if you want to use more complicated geometry, or you can manually define your coordinates. We can extract the boundaries of the CHM and create a bounding box using the Shapely package
###Code
bbox = box(chm.x.min(),chm.y.min(),chm.x.max(),chm.y.max())
print(bbox)
###Output
_____no_output_____
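###Markdown
As mentioned above, you could also clip with more complicated geometry from a shapefile. The commented sketch below shows the idea; the shapefile path is hypothetical, so replace it with your own area of interest.
###Code
# sketch with a hypothetical shapefile (replace the path with your own file)
# aoi = gpd.read_file("data/my_area_of_interest.shp").to_crs(epsg=32612)
# out_img_aoi, out_transform_aoi = mask(rasterio.open(fparts_SD_GM_3m), aoi.geometry, crop=True)
###Output
_____no_output_____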
###Markdown
If you want to come back and do this later, you don't need a raster or shapefile. If you only know the min/max coordinates of the area you're interested in, that's fine too.
###Code
# bbox = box(minx,miny,maxx,maxy)
###Output
_____no_output_____
###Markdown
You could also add a buffer around your CHM, if you wanted to see a bigger area:
###Code
#buffer = 200
#cb = [chm.x.min(), chm.y.min(), chm.x.max(), chm.y.max()]  # CHM bounds
#bbox = box(cb[0]-buffer, cb[1]-buffer, cb[2]+buffer, cb[3]+buffer)
###Output
_____no_output_____
###Markdown
But for now let's just stay with the same limits as the CHM. We need to put the bounding box into a GeoDataFrame
###Code
geo = gpd.GeoDataFrame({'geometry': bbox}, index=[0], crs=chm.crs)
###Output
_____no_output_____
###Markdown
And then extract the coordinates to a format that we can use with rasterio.
###Code
def getFeatures(gdf):
"""Function to parse features from GeoDataFrame in such a manner that rasterio wants them"""
import json
return [json.loads(gdf.to_json())['features'][0]['geometry']]
coords = getFeatures(geo)
print(coords)
###Output
_____no_output_____
###Markdown
After all that, we're ready to clip the raster. We do this using the mask function from rasterio, specifying crop=True. We also need to re-open the dataset as a rasterio object.
###Code
SD_GM_3m.close()
SD_GM_3m = rasterio.open(fparts_SD_GM_3m)
out_img, out_transform = mask(SD_GM_3m, coords, crop=True)
###Output
_____no_output_____
###Markdown
We also need to copy the meta information across to the new raster
###Code
out_meta = SD_GM_3m.meta.copy()
epsg_code = int(SD_GM_3m.crs.data['init'][5:])
###Output
_____no_output_____
###Markdown
And update the metadata with the new dimensions etc.
###Code
out_meta.update(
    {
        "driver": "GTiff",
        "height": out_img.shape[1],
        "width": out_img.shape[2],
        "transform": out_transform,
        "crs": pycrs.parse.from_epsg_code(epsg_code).to_proj4(),
    }
)
###Output
_____no_output_____
###Markdown
Next, we should save this new raster. Let's call the area 'GMb', to match the name of the CHM.
###Code
out_tif = "data/ASO_GrandMesa_2020Feb1-2_snowdepth_3m_clip_GMb.tif"
with rasterio.open(out_tif, "w", **out_meta) as dest:
dest.write(out_img)
###Output
_____no_output_____
###Markdown
To check the result is correct, we can read the data back in.
###Code
SD_GMb_3m = xarray.open_rasterio(out_tif)
# plot the new SD map
fig4, ax4 = pyplot.subplots()
pos4 = ax4.imshow(SD_GMb_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax4.set_title('GMb Snow Depth 3m')
fig4.colorbar(pos4, ax=ax4)
###Output
_____no_output_____
###Markdown
Here's an aerial image of the same area. What patterns do you see in the snow depth map when compared to the aerial image? (Image from Google Earth.) If you plotted snow depth compared to canopy height, what do you think you'd see in the graph? Raster resolution ASO also creates a 50m SD data product. So, let's have a look at that in the same area.
###Code
SD_GM_50m = rasterio.open("data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m.tif")
out_img_50, out_transform_50 = mask(SD_GM_50m, coords, crop=True)
out_meta_50 = SD_GM_50m.meta.copy()
epsg_code_50 = int(SD_GM_50m.crs.data['init'][5:])
out_meta_50.update(
    {
        "driver": "GTiff",
        "height": out_img_50.shape[1],
        "width": out_img_50.shape[2],
        "transform": out_transform_50,
        "crs": pycrs.parse.from_epsg_code(epsg_code_50).to_proj4(),  # use the 50 m EPSG code
    }
)
out_tif_50 = "data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m_clip_GMb.tif"
with rasterio.open(out_tif_50, "w", **out_meta_50) as dest:
dest.write(out_img_50)
SD_GM_50m.close()
SD_GMb_50m = xarray.open_rasterio(out_tif_50)
###Output
_____no_output_____
###Markdown
Now that we have the two rasters clipped to the same area, we can compare them.
###Code
### plot them side by side with minimum and maximum values of 0 m and 1.5 m
fig5, ax5 = pyplot.subplots()
pos5 = ax5.imshow(SD_GMb_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax5.set_title('GMb Snow Depth 3m')
fig5.colorbar(pos5, ax=ax5)
fig6, ax6 = pyplot.subplots()
pos6 = ax6.imshow(SD_GMb_50m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax6.set_title('GM Snow Depth 50m')
fig6.colorbar(pos6, ax=ax6)
###Output
_____no_output_____
###Markdown
Let's have a look at the two resolutions next to each other. What do you notice? We can look at the data in more detail. For example, histograms show us the snow depth distribution across the area.
###Code
# plot histograms of the snow depth distributions across a range from 0 to 1.5 m in 2.5 cm increments
fig7, ax7 = pyplot.subplots(figsize=(5, 5))
pyplot.hist(SD_GMb_3m.data.flatten(),bins=np.arange(0, 1.5 + 0.025, 0.025));
ax7.set_title('GM Snow Depth 3m')
ax7.set_xlim((0,1.5))
fig8, ax8 = pyplot.subplots(figsize=(5, 5))
pyplot.hist(SD_GMb_50m.data.flatten(),bins=np.arange(0, 1.5 + 0.025, 0.025));
ax8.set_title('GM Snow Depth 50m')
ax8.set_xlim((0,1.5))
###Output
_____no_output_____
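###Markdown
Before closing the datasets below, you can also pull out a few summary statistics to compare the two resolutions directly. This is a quick numpy sketch; note that if the rasters use a no-data value such as -9999 instead of NaN, it needs to be masked first (as done here).
###Code
# quick comparison of the two clipped datasets
for name, sd in [("3 m", SD_GMb_3m), ("50 m", SD_GMb_50m)]:
    data = np.array(sd.data[0, :, :], dtype=float)
    data[data < -1] = np.nan  # treat large negative no-data values (e.g. -9999) as missing
    print(
        f"{name}: min={np.nanmin(data):.2f} m, "
        f"max={np.nanmax(data):.2f} m, "
        f"mean={np.nanmean(data):.2f} m"
    )
###Output
_____no_output_____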
###Markdown
Things to think about:- What are the maximum and minimum snow depths between the two datasets?- Does the distribution in snow depths across the area change with resolution?- How representative are the different datasets for snow depth at different process scales? Can you see the forest in the 50m data?- There are snow free areas in the 3m data, but not in the 50m. What do you think this means for validating modelled snow depletion?
###Code
SD_GMb_3m.close()
SD_GMb_50m.close()
chm.close()
###Output
_____no_output_____
###Markdown
Resampling If you are looking to compare your modelled snow depth, you can resample your lidar snow depth to the same resolution as your model. You can see the code [here](https://rasterio.readthedocs.io/en/latest/topics/resampling.html). Let's say we want to resample the whole domain to 250 m resolution.
###Code
# Resample your raster
# select your upscale_factor - this is related to the resolution of your raster
# upscale_factor = old_resolution/desired_resolution
upscale_factor = 50/250
SD_GMb_50m_rio = rasterio.open("data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m_clip_GMb.tif")
# resample data to target shape using the bilinear method
new_res = SD_GMb_50m_rio.read(
out_shape=(
SD_GMb_50m_rio.count,
int(SD_GMb_50m_rio.height * upscale_factor),
int(SD_GMb_50m_rio.width * upscale_factor)
),
resampling=Resampling.bilinear
)
# scale image transform
transform = SD_GMb_50m_rio.transform * SD_GMb_50m_rio.transform.scale(
(SD_GMb_50m_rio.width / new_res.shape[-1]),
(SD_GMb_50m_rio.height / new_res.shape[-2])
)
# display the raster
fig9, ax9 = pyplot.subplots()
pos9 = ax9.imshow(new_res[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax9.set_title('GMb Snow Depth resampled to 250 m')
fig9.colorbar(pos9, ax=ax9)
###Output
_____no_output_____
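###Markdown
If you want to keep the resampled product (for example to compare against model output later), you can write it back out using the scaled transform from the previous cell. This is a sketch; the output filename is just an example.
###Code
# sketch: save the resampled snow depth (reuses new_res and transform from the cell above)
out_profile = SD_GMb_50m_rio.profile.copy()
out_profile.update(
    {
        "height": new_res.shape[-2],
        "width": new_res.shape[-1],
        "transform": transform,
    }
)
out_tif_250 = "data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_250m_GMb.tif"  # example name
with rasterio.open(out_tif_250, "w", **out_profile) as dest:
    dest.write(new_res)
###Output
_____no_output_____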
###Markdown
Play around with different upscaling factors and see what sort of results you get. How do the maximum and minimum values across the area change? Other possibilities: - Load the 3 m dataset and resample from the higher resolution. - You can clip to larger areas, such as a model domain, to resample to larger pixel sizes.- Load another dataset and see if you see the same patterns.
###Code
SD_GMb_50m_rio.close()
###Output
_____no_output_____
###Markdown
Lidar remote sensing of snow Intro ASOSee an overview of ASO operations [here](https://www.cbrfc.noaa.gov/report/AWRA2019_Pres3.pdf)ASO set-up: Riegl Q1560 dual laser scanning lidar 1064nm (image credit ASO) ASO data collection (image credit ASO)Laser reflections together create a 3D point cloud of the earth surface (image credit ASO) Point clouds can be classified and processed using specialised software such as [pdal](https://pdal.io/). We won't cover that here, because ASO has already processed all the snow depth datasets for us. ASO rasterises the point clouds to produce snow depth maps as rasters. Point clouds can also be rasterised to create canopy height models (CHMs) or digital terrain models (DTMs). These formats allow us to analyse the information easier. ASO states "Snow depths in exposed areas are within 1-2 cm at the 50 m scale" However, point-to-point variability can exist between manual and lidar measurements due to:- vegetation, particularly shrubs- geo-location accuracy of manual measurements- combination of both in forests Basic data inspection Import the packages needed for this tutorial
###Code
# general purpose data manipulation and analysis
import numpy as np
# packages for working with raster datasets
import rasterio
from rasterio.mask import mask
from rasterio.plot import show
from rasterio.enums import Resampling
import xarray # allows us to work with raster data as arrays
# packages for working with geospatial data
import geopandas as gpd
import pycrs
from shapely.geometry import box
# import packages for viewing the data
import matplotlib.pyplot as pyplot
# matplotlib functionality
%matplotlib inline
# %matplotlib notebook
###Output
_____no_output_____
###Markdown
The command *%matplotlib notebook* allows you to plot data interactively, which makes things way more interesting. If you want, you can test to see if this works for you. If not, go back to *%matplotlib inline* Data overview and visualisation
###Code
# open the raster
fparts_SD_GM_3m = "data/ASO_GrandMesa_2020Feb1-2_snowdepth_3m_clipped.tif"
SD_GM_3m = rasterio.open(fparts_SD_GM_3m)
# check the CRS - is it consistent with other datasets we want to use?
SD_GM_3m.crs
###Output
_____no_output_____
###Markdown
ASO datasets are in EPSG: 32612. However, you might find other SnowEx datasets are in EPGS:26912. This can be changed using reproject in rioxarray. See [here](https://corteva.github.io/rioxarray/stable/examples/reproject.html) for an example. For now, we'll stay in 32612. With the above raster open, you can look at the different attributes of the raster. For example, the cellsize:
###Code
SD_GM_3m.res
###Output
_____no_output_____
###Markdown
The raster boundaries...
###Code
SD_GM_3m.bounds
###Output
_____no_output_____
###Markdown
And the dimensions. Note this is in pixels, not in meters. To get the total size, you can multiply the dimensions by the resolution.
###Code
print(SD_GM_3m.width,SD_GM_3m.height)
###Output
_____no_output_____
###Markdown
rasterio.open allows you to quickly look at the data...
###Code
fig1, ax1 = pyplot.subplots(1, figsize=(5, 5))
show((SD_GM_3m, 1), cmap='Blues', interpolation='none', ax=ax1)
###Output
_____no_output_____
###Markdown
While this can allow us to very quickly visualise the data, it doesn't show us a lot about the data itself. We can also open the data from the geotiff as as a data array, giving us more flexibility in the data analysis.
###Code
# First, close the rasterio file
SD_GM_3m.close()
###Output
_____no_output_____
###Markdown
Now we can re-open the data as an array and visualise it using pyplot.
###Code
dat_array_3m = xarray.open_rasterio(fparts_SD_GM_3m)
# plot the raster
fig2, ax2 = pyplot.subplots()
pos2 = ax2.imshow(dat_array_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5);
ax2.set_title('GM Snow Depth 3m')
fig2.colorbar(pos2, ax=ax2)
###Output
_____no_output_____
###Markdown
We set the figure to display the colorbar with a maximum of 1.5m. But you can see in the north of the area there are some very deep snow depths.
###Code
np.nanmax(dat_array_3m)
###Output
_____no_output_____
###Markdown
Optional - use the interactive plot to pan and zoom in and out to have a look at the snow depth distribution across the Grand Mesa. This should work for you if you run your notebook locally. We can clip the larger domain to a smaller areas to better visualise the snow depth distributions in the areas we're interested in. Depending on the field site, you could look at distributions in different slope classes, vegetation classes (bush vs forest vs open) or aspect classes. For now, we'll focus on a forest-dominated area and use the canopy height model (CHM) to clip the snow depth data. Canopy height modelsWe will use an existing raster of a canopy height model (CHM) to clip our snow depth map. This CHM is an area investigated by [Mazzotti et al. 2019](https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019WR024898). You can also access the data [here](https://www.envidat.ch//metadata/als-based-snow-depth).
###Code
# load the chm
chm = xarray.open_rasterio('data/CHM_20160926GMb_700x700_EPSG32612.tif')
# check the crs is the same as the snow depth data
chm.crs
###Output
_____no_output_____
###Markdown
Don't forget that if the coordinate systems in your datasets don't match then you will need to transform one of them. You can change the coordinate systems using the links above. (Note, I've already transformed this dataset from EPSG 32613). Let's have a quick look at the chm data as an xarray.
###Code
chm
###Output
_____no_output_____
###Markdown
You can see the resolution of the CHM is 0.5m, which is much higher than the snow depth dataset. Can you think why we would want to have CHM at such a high resolution? There are two main reasons:- resolution high enough to represent individual trees- maximum canopy height can mis-represented in lower resolution CHMs We can extract simple statistics from the dataset the same way you would with a numpy dataset. For example:
###Code
chm.data.max()
# plot the CHM, setting the maximum color value to the maximum canopy height in the dataset
fig3, ax3 = pyplot.subplots()
pos3 = ax3.imshow(chm.data[0,:,:], cmap='viridis', vmin=0, vmax=chm.data.max())
ax3.set_title('CHM Grand Mesa B')
fig3.colorbar(pos3, ax=ax3)
###Output
_____no_output_____
###Markdown
If you play around and zoom in, you can see individual trees. If you were wanting to investigate the role of canopy structure at the individual tree level on snow depth distribution, this is the level of detail you would want to work with. Clipping rasters Let's clip the snow depth dataset to the same boundaries as the CHM. One way to clip the snow depth raster is to use another raster as an area of interest. We will use the CHM as a mask, following [this](https://automating-gis-processes.github.io/CSC18/lessons/L6/clipping-raster.html) tutorial. You can also use shapefiles (see [here](https://rasterio.readthedocs.io/en/latest/topics/masking-by-shapefile.html) for another example) if you want to use more complicated geometry, or you can manually define your coordinates.We can extract the boundaries of the CHM and create a bounding box using the Shapely package
###Code
bbox = box(chm.x.min(),chm.y.min(),chm.x.max(),chm.y.max())
print(bbox)
###Output
_____no_output_____
###Markdown
If you want to come back and do this later, you don't need a raster or shapefile. If you only know the min/max coordinates of the area you're interested in, that's fine too.
###Code
# bbox = box(minx,miny,maxx,maxy)
###Output
_____no_output_____
###Markdown
You could also add a buffer around your CHM, if you wanted to see a bigger area:
###Code
#buffer = 200
#bbox = box(cb[0]-buffer,cb[1]-buffer,cb[2]+buffer,cb[3]+buffer)
###Output
_____no_output_____
###Markdown
But for now let's just stay with the same limits as the CHM.We need to put the bounding box into a geodataframe
###Code
geo = gpd.GeoDataFrame({'geometry': bbox}, index=[0], crs=chm.crs)
###Output
_____no_output_____
###Markdown
And then extract the coordinates to a format that we can use with rasterio.
###Code
def getFeatures(gdf):
"""Function to parse features from GeoDataFrame in such a manner that rasterio wants them"""
import json
return [json.loads(gdf.to_json())['features'][0]['geometry']]
coords = getFeatures(geo)
print(coords)
###Output
_____no_output_____
###Markdown
After all that, we're ready to clip the raster. We do this using the mask function from rasterio, and specifying crop=TRUEWe also need to re-open the dataset as a rasterio object.
###Code
SD_GM_3m.close()
SD_GM_3m = rasterio.open(fparts_SD_GM_3m)
out_img, out_transform = mask(SD_GM_3m, coords, crop=True)
###Output
_____no_output_____
###Markdown
We also need to copy the meta information across to the new raster
###Code
out_meta = SD_GM_3m.meta.copy()
epsg_code = int(SD_GM_3m.crs.data['init'][5:])
###Output
_____no_output_____
###Markdown
And update the metadata with the new dimsions etc.
###Code
out_meta.update({"driver": "GTiff",
....: "height": out_img.shape[1],
....: "width": out_img.shape[2],
....: "transform": out_transform,
....: "crs": pycrs.parse.from_epsg_code(epsg_code).to_proj4()}
....: )
....:
###Output
_____no_output_____
###Markdown
Next, we should save this new raster. Let's call the area 'GMb', to match the name of the CHM.
###Code
out_tif = "data/ASO_GrandMesa_2020Feb1-2_snowdepth_3m_clip_GMb.tif"
with rasterio.open(out_tif, "w", **out_meta) as dest:
dest.write(out_img)
###Output
_____no_output_____
###Markdown
To check the result is correct, we can read the data back in.
###Code
SD_GMb_3m = xarray.open_rasterio(out_tif)
# plot the new SD map
fig4, ax4 = pyplot.subplots()
pos4 = ax4.imshow(SD_GMb_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax4.set_title('GMb Snow Depth 3m')
fig4.colorbar(pos4, ax=ax4)
###Output
_____no_output_____
###Markdown
Here's an aerial image of the same area. What patterns do you see in the snow depth map when compared to the aerial image?(Image from Google Earth)If you plotted snow depth compared to canopy height, what do you think you'd see in the graph? Raster resolution ASO also creates a 50m SD data product. So, let's have a look at that in the same area.
###Code
SD_GM_50m = rasterio.open("data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m.tif")
out_img_50, out_transform_50 = mask(SD_GM_50m, coords, crop=True)
out_meta_50 = SD_GM_50m.meta.copy()
epsg_code_50 = int(SD_GM_50m.crs.data['init'][5:])
out_meta_50.update({"driver": "GTiff",
....: "height": out_img_50.shape[1],
....: "width": out_img_50.shape[2],
....: "transform": out_transform_50,
....: "crs": pycrs.parse.from_epsg_code(epsg_code).to_proj4()}
....: )
....:
out_tif_50 = "data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m_clip_GMb.tif"
with rasterio.open(out_tif_50, "w", **out_meta_50) as dest:
dest.write(out_img_50)
SD_GM_50m.close()
SD_GMb_50m = xarray.open_rasterio(out_tif_50)
###Output
_____no_output_____
###Markdown
Now we have the two rasters clipped to the same area, we can compare them.
###Code
### plot them side by side with a minimum and maximum values of 0m and 1.5m
fig5, ax5 = pyplot.subplots()
pos5 = ax5.imshow(SD_GMb_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax5.set_title('GMb Snow Depth 3m')
fig5.colorbar(pos5, ax=ax5)
fig6, ax6 = pyplot.subplots()
pos6 = ax6.imshow(SD_GMb_50m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax6.set_title('GM Snow Depth 50m')
fig6.colorbar(pos6, ax=ax6)
###Output
_____no_output_____
###Markdown
Let's have a look at the two resolutions next to each other. What do you notice? We can look at the data in more detail. For example, histograms show us the snow depth distribution across the area.
###Code
# plot histograms of the snow depth distributions across a range from 0 to 1.5m in 25cm increments
fig7, ax7 = pyplot.subplots(figsize=(5, 5))
pyplot.hist(SD_GMb_3m.data.flatten(),bins=np.arange(0, 1.5 + 0.025, 0.025));
ax7.set_title('GM Snow Depth 3m')
ax7.set_xlim((0,1.5))
fig8, ax8 = pyplot.subplots(figsize=(5, 5))
pyplot.hist(SD_GMb_50m.data.flatten(),bins=np.arange(0, 1.5 + 0.025, 0.025));
ax8.set_title('GM Snow Depth 50m')
ax8.set_xlim((0,1.5))
###Output
_____no_output_____
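###Markdown
One quick added check (a sketch, not from the original tutorial) before working through the questions below: print the value range of each clipped raster. Note that any nodata values are not masked here.
###Code
# added sketch: compare the value ranges of the two clipped rasters
print('3 m range :', float(SD_GMb_3m.min()), float(SD_GMb_3m.max()))
print('50 m range:', float(SD_GMb_50m.min()), float(SD_GMb_50m.max()))
###Output
_____no_output_____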
###Markdown
Things to think about:- What are the maximum and minimum snow depths between the two datasets?- Does the distribution in snow depths across the area change with resolution?- How representative are the different datasets for snow depth at different process scales? Can you see the forest in the 50m data?- There are snow free areas in the 3m data, but not in the 50m. What do you think this means for validating modelled snow depletion?
###Code
SD_GMb_3m.close()
SD_GMb_50m.close()
chm.close()
###Output
_____no_output_____
###Markdown
Resampling If you are looking to compare your modelled snow depth, you can resample your lidar snow depth to the same resolution as your model. You can see the code [here](https://rasterio.readthedocs.io/en/latest/topics/resampling.html). Let's say we want to sample the whole domain at 250 m resolution.
###Code
# Resample your raster
# select your upscale_factor - this is related to the resolution of your raster
# upscale_factor = old_resolution/desired_resolution
upscale_factor = 50/250
SD_GMb_50m_rio = rasterio.open("data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m_clip_GMb.tif")
# resample data to target shape using the bilinear method
new_res = SD_GMb_50m_rio.read(
out_shape=(
SD_GMb_50m_rio.count,
int(SD_GMb_50m_rio.height * upscale_factor),
int(SD_GMb_50m_rio.width * upscale_factor)
),
resampling=Resampling.bilinear
)
# scale image transform
transform = SD_GMb_50m_rio.transform * SD_GMb_50m_rio.transform.scale(
(SD_GMb_50m_rio.width / new_res.shape[-1]),
(SD_GMb_50m_rio.height / new_res.shape[-2])
)
# display the raster
fig9, ax9 = pyplot.subplots()
pos9 = ax9.imshow(new_res[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax9.set_title('GM Snow Depth 50m')
fig9.colorbar(pos9, ax=ax9)
###Output
_____no_output_____
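###Markdown
Before closing the file, here is an added sketch (assuming SD_GMb_50m_rio is still open from the cell above) for experimenting with a few different target resolutions.
###Code
# added sketch: try a few target resolutions and see how the grid and maximum change
# (nodata values are not masked here, so minima may be misleading)
for target_res in [100, 250, 500]:
    factor = 50 / target_res
    resampled = SD_GMb_50m_rio.read(
        out_shape=(SD_GMb_50m_rio.count,
                   int(SD_GMb_50m_rio.height * factor),
                   int(SD_GMb_50m_rio.width * factor)),
        resampling=Resampling.bilinear
    )
    print(target_res, 'm grid:', resampled.shape[1:], 'max:', float(np.nanmax(resampled)))
###Output
_____no_output_____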
###Markdown
Play around with different upscaling factors and see what sort of results you get. How do the maximum and minimum values across the area change? Other possibilities: - Load the 3 m dataset and resample from the higher resolution. - You can clip to larger areas, such as a model domain, to resample to larger pixel sizes.- Load another dataset and see if you see the same patterns.
###Code
SD_GMb_50m_rio.close()
###Output
_____no_output_____
###Markdown
Lidar remote sensing of snow Intro ASO See an overview of ASO operations [here](https://www.cbrfc.noaa.gov/report/AWRA2019_Pres3.pdf). ASO set-up: Riegl Q1560 dual laser scanning lidar 1064nm (image credit ASO) ASO data collection (image credit ASO) Laser reflections together create a 3D point cloud of the earth surface (image credit ASO) Point clouds can be classified and processed using specialised software such as [pdal](https://pdal.io/). We won't cover that here, because ASO has already processed all the snow depth datasets for us. ASO rasterises the point clouds to produce snow depth maps as rasters. Point clouds can also be rasterised to create canopy height models (CHMs) or digital terrain models (DTMs). These formats allow us to analyse the information more easily. ASO states "Snow depths in exposed areas are within 1-2 cm at the 50 m scale." However, point-to-point variability can exist between manual and lidar measurements due to:- vegetation, particularly shrubs- geo-location accuracy of manual measurements- combination of both in forests Basic data inspection Import the packages needed for this tutorial
###Code
# general purpose data manipulation and analysis
import numpy as np
# packages for working with raster datasets
import rasterio
from rasterio.mask import mask
from rasterio.plot import show
from rasterio.enums import Resampling
import xarray # allows us to work with raster data as arrays
# packages for working with geospatial data
import geopandas as gpd
import pycrs
from shapely.geometry import box
# import packages for viewing the data
import matplotlib.pyplot as pyplot
#define paths
import os
CURDIR = os.path.dirname(os.path.realpath("__file__"))
# matplotlib functionality
%matplotlib inline
# %matplotlib notebook
%matplotlib widget
###Output
_____no_output_____
###Markdown
The command *%matplotlib notebook* allows you to plot data interactively, which makes things way more interesting. If you want, you can test to see if this works for you. If not, go back to *%matplotlib inline* Data overview and visualisation
###Code
# open the raster
fparts_SD_GM_3m = "data/ASO_GrandMesa_2020Feb1-2_snowdepth_3m_clipped.tif"
SD_GM_3m = rasterio.open(fparts_SD_GM_3m)
# check the CRS - is it consistent with other datasets we want to use?
SD_GM_3m.crs
###Output
_____no_output_____
###Markdown
ASO datasets are in EPSG:32612. However, you might find other SnowEx datasets are in EPSG:26912. This can be changed using reproject in rioxarray. See [here](https://corteva.github.io/rioxarray/stable/examples/reproject.html) for an example. For now, we'll stay in 32612. With the above raster open, you can look at the different attributes of the raster. For example, the cellsize:
###Code
SD_GM_3m.res
###Output
_____no_output_____
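###Markdown
As an aside on the reprojection note above, a minimal hypothetical rioxarray sketch (left commented out, since rioxarray is not imported in this tutorial) might look like this; EPSG:26912 is just an example target.
###Code
# hypothetical sketch, not run here:
# import rioxarray
# sd = rioxarray.open_rasterio(fparts_SD_GM_3m)
# sd_reprojected = sd.rio.reproject("EPSG:26912")
###Output
_____no_output_____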
###Markdown
The raster boundaries...
###Code
SD_GM_3m.bounds
###Output
_____no_output_____
###Markdown
And the dimensions. Note this is in pixels, not in meters. To get the total size, you can multiply the dimensions by the resolution.
###Code
print(SD_GM_3m.width,SD_GM_3m.height)
###Output
2667 1667
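###Markdown
For example, a quick added sketch of the total extent in metres, multiplying the pixel dimensions by the cell size:
###Code
# added sketch: total size in metres = number of pixels * cell size
print(SD_GM_3m.width * SD_GM_3m.res[0], 'm x', SD_GM_3m.height * SD_GM_3m.res[1], 'm')
###Output
_____no_output_____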
###Markdown
rasterio.open allows you to quickly look at the data...
###Code
fig1, ax1 = pyplot.subplots(1, figsize=(5, 5))
show((SD_GM_3m, 1), cmap='Blues', interpolation='none', ax=ax1)
###Output
_____no_output_____
###Markdown
While this can allow us to very quickly visualise the data, it doesn't show us a lot about the data itself. We can also open the data from the geotiff as a data array, giving us more flexibility in the data analysis.
###Code
# First, close the rasterio file
SD_GM_3m.close()
###Output
_____no_output_____
###Markdown
Now we can re-open the data as an array and visualise it using pyplot.
###Code
dat_array_3m = xarray.open_rasterio(fparts_SD_GM_3m)
# plot the raster
fig2, ax2 = pyplot.subplots()
pos2 = ax2.imshow(dat_array_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5);
ax2.set_title('GM Snow Depth 3m')
fig2.colorbar(pos2, ax=ax2)
###Output
_____no_output_____
###Markdown
We set the figure to display the colorbar with a maximum of 1.5m. But you can see in the north of the area there are some very deep snow depths.
###Code
np.nanmax(dat_array_3m)
###Output
_____no_output_____
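###Markdown
To put a rough number on that, here is an added sketch; it assumes that any nodata values are negative, which may not hold for every product.
###Code
# added sketch: fraction of (non-negative) pixels deeper than the 1.5 m colour limit
vals = dat_array_3m.data[0, :, :]
print(np.sum(vals > 1.5) / np.sum(vals >= 0))
###Output
_____no_output_____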
###Markdown
Optional - use the interactive plot to pan and zoom in and out to have a look at the snow depth distribution across the Grand Mesa. This should work for you if you run your notebook locally. We can clip the larger domain to smaller areas to better visualise the snow depth distributions in the areas we're interested in. Depending on the field site, you could look at distributions in different slope classes, vegetation classes (bush vs forest vs open) or aspect classes. For now, we'll focus on a forest-dominated area and use the canopy height model (CHM) to clip the snow depth data. Canopy height models We will use an existing raster of a canopy height model (CHM) to clip our snow depth map. This CHM covers an area investigated by [Mazzotti et al. 2019](https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019WR024898). You can also access the data [here](https://www.envidat.ch//metadata/als-based-snow-depth).
###Code
# load the chm
chm = xarray.open_rasterio('data/CHM_20160926GMb_700x700_EPSG32612.tif')
# check the crs is the same as the snow depth data
chm.crs
###Output
_____no_output_____
###Markdown
Don't forget that if the coordinate systems in your datasets don't match then you will need to transform one of them. You can change the coordinate systems using the links above. (Note, I've already transformed this dataset from EPSG 32613). Let's have a quick look at the chm data as an xarray.
###Code
chm
###Output
_____no_output_____
###Markdown
You can see the resolution of the CHM is 0.5m, which is much higher than the snow depth dataset. Can you think why we would want to have the CHM at such a high resolution? There are two main reasons:- resolution high enough to represent individual trees- maximum canopy height can be misrepresented in lower resolution CHMs We can extract simple statistics from the dataset the same way you would with a numpy dataset. For example:
###Code
chm.data.max()
# plot the CHM, setting the maximum color value to the maximum canopy height in the dataset
fig3, ax3 = pyplot.subplots()
pos3 = ax3.imshow(chm.data[0,:,:], cmap='viridis', vmin=0, vmax=chm.data.max())
ax3.set_title('CHM Grand Mesa B')
fig3.colorbar(pos3, ax=ax3)
###Output
_____no_output_____
###Markdown
If you play around and zoom in, you can see individual trees. If you were wanting to investigate the role of canopy structure at the individual tree level on snow depth distribution, this is the level of detail you would want to work with. Clipping rasters Let's clip the snow depth dataset to the same boundaries as the CHM. One way to clip the snow depth raster is to use another raster as an area of interest. We will use the CHM as a mask, following [this](https://automating-gis-processes.github.io/CSC18/lessons/L6/clipping-raster.html) tutorial. You can also use shapefiles (see [here](https://rasterio.readthedocs.io/en/latest/topics/masking-by-shapefile.html) for another example) if you want to use more complicated geometry, or you can manually define your coordinates.We can extract the boundaries of the CHM and create a bounding box using the Shapely package
###Code
bbox = box(chm.x.min(),chm.y.min(),chm.x.max(),chm.y.max())
print(bbox)
###Output
POLYGON ((753719.9471975124 4321584.199675269, 753719.9471975124 4322328.699675269, 752975.4471975124 4322328.699675269, 752975.4471975124 4321584.199675269, 753719.9471975124 4321584.199675269))
###Markdown
If you want to come back and do this later, you don't need a raster or shapefile. If you only know the min/max coordinates of the area you're interested in, that's fine too.
###Code
# bbox = box(minx,miny,maxx,maxy)
###Output
_____no_output_____
###Markdown
You could also add a buffer around your CHM, if you wanted to see a bigger area:
###Code
#buffer = 200
#cb = (chm.x.min(), chm.y.min(), chm.x.max(), chm.y.max())  # CHM bounds (minx, miny, maxx, maxy)
#bbox = box(cb[0]-buffer,cb[1]-buffer,cb[2]+buffer,cb[3]+buffer)
###Output
_____no_output_____
###Markdown
But for now let's just stay with the same limits as the CHM. We need to put the bounding box into a geodataframe
###Code
geo = gpd.GeoDataFrame({'geometry': bbox}, index=[0], crs=chm.crs)
###Output
/srv/conda/envs/notebook/lib/python3.8/site-packages/pyproj/crs/crs.py:292: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6
projstring = _prepare_from_string(projparams)
###Markdown
And then extract the coordinates to a format that we can use with rasterio.
###Code
def getFeatures(gdf):
"""Function to parse features from GeoDataFrame in such a manner that rasterio wants them"""
import json
return [json.loads(gdf.to_json())['features'][0]['geometry']]
coords = getFeatures(geo)
print(coords)
###Output
[{'type': 'Polygon', 'coordinates': [[[753719.9471975124, 4321584.199675269], [753719.9471975124, 4322328.699675269], [752975.4471975124, 4322328.699675269], [752975.4471975124, 4321584.199675269], [753719.9471975124, 4321584.199675269]]]}]
###Markdown
After all that, we're ready to clip the raster. We do this using the mask function from rasterio, specifying crop=True. We also need to re-open the dataset as a rasterio object.
###Code
SD_GM_3m.close()
SD_GM_3m = rasterio.open(fparts_SD_GM_3m)
out_img, out_transform = mask(SD_GM_3m, coords, crop=True)
###Output
_____no_output_____
###Markdown
We also need to copy the meta information across to the new raster
###Code
out_meta = SD_GM_3m.meta.copy()
epsg_code = int(SD_GM_3m.crs.data['init'][5:])
###Output
_____no_output_____
###Markdown
And update the metadata with the new dimensions etc.
###Code
out_meta.update({"driver": "GTiff",
                 "height": out_img.shape[1],
                 "width": out_img.shape[2],
                 "transform": out_transform,
                 "crs": pycrs.parse.from_epsg_code(epsg_code).to_proj4()})
###Output
_____no_output_____
###Markdown
Next, we should save this new raster. Let's call the area 'GMb', to match the name of the CHM.
###Code
out_tif = "data/ASO_GrandMesa_2020Feb1-2_snowdepth_3m_clip_GMb.tif"
with rasterio.open(out_tif, "w", **out_meta) as dest:
dest.write(out_img)
###Output
_____no_output_____
###Markdown
To check the result is correct, we can read the data back in.
###Code
SD_GMb_3m = xarray.open_rasterio(out_tif)
# plot the new SD map
fig4, ax4 = pyplot.subplots()
pos4 = ax4.imshow(SD_GMb_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax4.set_title('GMb Snow Depth 3m')
fig4.colorbar(pos4, ax=ax4)
###Output
_____no_output_____
###Markdown
Here's an aerial image of the same area. What patterns do you see in the snow depth map when compared to the aerial image? (Image from Google Earth) If you plotted snow depth compared to canopy height, what do you think you'd see in the graph? Raster resolution ASO also creates a 50m SD data product. So, let's have a look at that in the same area.
###Code
SD_GM_50m = rasterio.open("data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m.tif")
out_img_50, out_transform_50 = mask(SD_GM_50m, coords, crop=True)
out_meta_50 = SD_GM_50m.meta.copy()
epsg_code_50 = int(SD_GM_50m.crs.data['init'][5:])
out_meta_50.update({"driver": "GTiff",
                    "height": out_img_50.shape[1],
                    "width": out_img_50.shape[2],
                    "transform": out_transform_50,
                    "crs": pycrs.parse.from_epsg_code(epsg_code_50).to_proj4()})
out_tif_50 = "data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m_clip_GMb.tif"
with rasterio.open(out_tif_50, "w", **out_meta_50) as dest:
dest.write(out_img_50)
SD_GM_50m.close()
SD_GMb_50m = xarray.open_rasterio(out_tif_50)
###Output
_____no_output_____
###Markdown
Now we have the two rasters clipped to the same area, we can compare them.
###Code
### plot them side by side with a minimum and maximum values of 0m and 1.5m
fig5, ax5 = pyplot.subplots()
pos5 = ax5.imshow(SD_GMb_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax5.set_title('GMb Snow Depth 3m')
fig5.colorbar(pos5, ax=ax5)
fig6, ax6 = pyplot.subplots()
pos6 = ax6.imshow(SD_GMb_50m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax6.set_title('GM Snow Depth 50m')
fig6.colorbar(pos6, ax=ax6)
###Output
_____no_output_____
###Markdown
Let's have a look at the two resolutions next to each other. What do you notice? We can look at the data in more detail. For example, histograms show us the snow depth distribution across the area.
###Code
# plot histograms of the snow depth distributions across a range from 0 to 1.5m in 25cm increments
fig7, ax7 = pyplot.subplots(figsize=(5, 5))
pyplot.hist(SD_GMb_3m.data.flatten(),bins=np.arange(0, 1.5 + 0.025, 0.025));
ax7.set_title('GM Snow Depth 3m')
ax7.set_xlim((0,1.5))
fig8, ax8 = pyplot.subplots(figsize=(5, 5))
pyplot.hist(SD_GMb_50m.data.flatten(),bins=np.arange(0, 1.5 + 0.025, 0.025));
ax8.set_title('GM Snow Depth 50m')
ax8.set_xlim((0,1.5))
###Output
_____no_output_____
###Markdown
Things to think about:- What are the maximum and minimum snow depths between the two datasets?- Does the distribution in snow depths across the area change with resolution?- How representative are the different datasets for snow depth at different process scales? Can you see the forest in the 50m data?- There are snow free areas in the 3m data, but not in the 50m. What do you think this means for validating modelled snow depletion?
###Code
SD_GMb_3m.close()
SD_GMb_50m.close()
chm.close()
###Output
_____no_output_____
###Markdown
Resampling If you are looking to compare your modelled snow depth, you can resample your lidar snow depth to the same resolution as your model. You can see the code [here](https://rasterio.readthedocs.io/en/latest/topics/resampling.html). Let's say we want to sample the whole domain at 250 m resolution.
###Code
# Resample your raster
# select your upscale_factor - this is related to the resolution of your raster
# upscale_factor = old_resolution/desired_resolution
upscale_factor = 50/250
SD_GMb_50m_rio = rasterio.open("data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m_clip_GMb.tif")
# resample data to target shape using the bilinear method
new_res = SD_GMb_50m_rio.read(
out_shape=(
SD_GMb_50m_rio.count,
int(SD_GMb_50m_rio.height * upscale_factor),
int(SD_GMb_50m_rio.width * upscale_factor)
),
resampling=Resampling.bilinear
)
# scale image transform
transform = SD_GMb_50m_rio.transform * SD_GMb_50m_rio.transform.scale(
(SD_GMb_50m_rio.width / new_res.shape[-1]),
(SD_GMb_50m_rio.height / new_res.shape[-2])
)
# display the raster
fig9, ax9 = pyplot.subplots()
pos9 = ax9.imshow(new_res[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax9.set_title('GM Snow Depth 50m')
fig9.colorbar(pos9, ax=ax9)
###Output
_____no_output_____
###Markdown
Play around with different upscaling factors and see what sort of results you get. How do the maximum and minimum values across the area change? Other possibilities: - Load the 3 m dataset and resample from the higher resolution. - You can clip to larger areas, such as a model domain, to resample to larger pixel sizes.- Load another dataset and see if you see the same patterns.
###Code
SD_GMb_50m_rio
SD_GMb_50m_rio.close()
###Output
_____no_output_____
###Markdown
Lidar remote sensing of snow Intro ASO See an overview of ASO operations [here](https://www.cbrfc.noaa.gov/report/AWRA2019_Pres3.pdf). ASO set-up: Riegl Q1560 dual laser scanning lidar 1064nm (image credit ASO) ASO data collection (image credit ASO) Laser reflections together create a 3D point cloud of the earth surface (image credit ASO) Point clouds can be classified and processed using specialised software such as [pdal](https://pdal.io/). We won't cover that here, because ASO has already processed all the snow depth datasets for us. ASO rasterises the point clouds to produce snow depth maps as rasters. Point clouds can also be rasterised to create canopy height models (CHMs) or digital terrain models (DTMs). These formats allow us to analyse the information more easily. ASO states "Snow depths in exposed areas are within 1-2 cm at the 50 m scale." However, point-to-point variability can exist between manual and lidar measurements due to:- vegetation, particularly shrubs- geo-location accuracy of manual measurements- combination of both in forests Basic data inspection Import the packages needed for this tutorial
###Code
!pip3 install pycrs
# general purpose data manipulation and analysis
import numpy as np
# packages for working with raster datasets
import rasterio
from rasterio.mask import mask
from rasterio.plot import show
from rasterio.enums import Resampling
import xarray # allows us to work with raster data as arrays
# packages for working with geospatial data
import geopandas as gpd
import pycrs
from shapely.geometry import box
# import packages for viewing the data
import matplotlib.pyplot as pyplot
#define paths
import os
CURDIR = os.path.dirname(os.path.realpath("__file__"))
# matplotlib functionality
%matplotlib inline
# %matplotlib notebook
###Output
_____no_output_____
###Markdown
The command *%matplotlib notebook* allows you to plot data interactively, which makes things way more interesting. If you want, you can test to see if this works for you. If not, go back to *%matplotlib inline* Data overview and visualisation
###Code
# open the raster
fparts_SD_GM_3m = "data/ASO_GrandMesa_2020Feb1-2_snowdepth_3m_clipped.tif"
SD_GM_3m = rasterio.open(fparts_SD_GM_3m)
# check the CRS - is it consistent with other datasets we want to use?
SD_GM_3m.crs
###Output
_____no_output_____
###Markdown
ASO datasets are in EPSG:32612. However, you might find other SnowEx datasets are in EPSG:26912. This can be changed using reproject in rioxarray. See [here](https://corteva.github.io/rioxarray/stable/examples/reproject.html) for an example. For now, we'll stay in 32612. With the above raster open, you can look at the different attributes of the raster. For example, the cellsize:
###Code
SD_GM_3m.res
###Output
_____no_output_____
###Markdown
The raster boundaries...
###Code
SD_GM_3m.bounds
###Output
_____no_output_____
###Markdown
And the dimensions. Note this is in pixels, not in meters. To get the total size, you can multiply the dimensions by the resolution.
###Code
print(SD_GM_3m.width,SD_GM_3m.height)
###Output
2667 1667
###Markdown
rasterio.open allows you to quickly look at the data...
###Code
fig1, ax1 = pyplot.subplots(1, figsize=(5, 5))
show((SD_GM_3m, 1), cmap='Blues', interpolation='none', ax=ax1)
###Output
_____no_output_____
###Markdown
While this can allow us to very quickly visualise the data, it doesn't show us a lot about the data itself. We can also open the data from the geotiff as a data array, giving us more flexibility in the data analysis.
###Code
# First, close the rasterio file
SD_GM_3m.close()
###Output
_____no_output_____
###Markdown
Now we can re-open the data as an array and visualise it using pyplot.
###Code
dat_array_3m = xarray.open_rasterio(fparts_SD_GM_3m)
# plot the raster
fig2, ax2 = pyplot.subplots()
pos2 = ax2.imshow(dat_array_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5);
ax2.set_title('GM Snow Depth 3m')
fig2.colorbar(pos2, ax=ax2)
###Output
_____no_output_____
###Markdown
We set the figure to display the colorbar with a maximum of 1.5m. But you can see in the north of the area there are some very deep snow depths.
###Code
np.nanmax(dat_array_3m)
###Output
_____no_output_____
###Markdown
Optional - use the interactive plot to pan and zoom in and out to have a look at the snow depth distribution across the Grand Mesa. This should work for you if you run your notebook locally. We can clip the larger domain to smaller areas to better visualise the snow depth distributions in the areas we're interested in. Depending on the field site, you could look at distributions in different slope classes, vegetation classes (bush vs forest vs open) or aspect classes. For now, we'll focus on a forest-dominated area and use the canopy height model (CHM) to clip the snow depth data. Canopy height models We will use an existing raster of a canopy height model (CHM) to clip our snow depth map. This CHM covers an area investigated by [Mazzotti et al. 2019](https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019WR024898). You can also access the data [here](https://www.envidat.ch//metadata/als-based-snow-depth).
###Code
# load the chm
chm = xarray.open_rasterio('data/CHM_20160926GMb_700x700_EPSG32612.tif')
# check the crs is the same as the snow depth data
chm.crs
###Output
_____no_output_____
###Markdown
Don't forget that if the coordinate systems in your datasets don't match then you will need to transform one of them. You can change the coordinate systems using the links above. (Note, I've already transformed this dataset from EPSG 32613). Let's have a quick look at the chm data as an xarray.
###Code
chm
###Output
_____no_output_____
###Markdown
You can see the resolution of the CHM is 0.5m, which is much higher than the snow depth dataset. Can you think why we would want to have the CHM at such a high resolution? There are two main reasons:- resolution high enough to represent individual trees- maximum canopy height can be misrepresented in lower resolution CHMs We can extract simple statistics from the dataset the same way you would with a numpy dataset. For example:
###Code
chm.data.max()
# plot the CHM, setting the maximum color value to the maximum canopy height in the dataset
fig3, ax3 = pyplot.subplots()
pos3 = ax3.imshow(chm.data[0,:,:], cmap='viridis', vmin=0, vmax=chm.data.max())
ax3.set_title('CHM Grand Mesa B')
fig3.colorbar(pos3, ax=ax3)
###Output
_____no_output_____
###Markdown
If you play around and zoom in, you can see individual trees. If you were wanting to investigate the role of canopy structure at the individual tree level on snow depth distribution, this is the level of detail you would want to work with. Clipping rasters Let's clip the snow depth dataset to the same boundaries as the CHM. One way to clip the snow depth raster is to use another raster as an area of interest. We will use the CHM as a mask, following [this](https://automating-gis-processes.github.io/CSC18/lessons/L6/clipping-raster.html) tutorial. You can also use shapefiles (see [here](https://rasterio.readthedocs.io/en/latest/topics/masking-by-shapefile.html) for another example) if you want to use more complicated geometry, or you can manually define your coordinates.We can extract the boundaries of the CHM and create a bounding box using the Shapely package
###Code
bbox = box(chm.x.min(),chm.y.min(),chm.x.max(),chm.y.max())
print(bbox)
###Output
POLYGON ((753719.9471975124 4321584.199675269, 753719.9471975124 4322328.699675269, 752975.4471975124 4322328.699675269, 752975.4471975124 4321584.199675269, 753719.9471975124 4321584.199675269))
###Markdown
If you want to come back and do this later, you don't need a raster or shapefile. If you only know the min/max coordinates of the area you're interested in, that's fine too.
###Code
# bbox = box(minx,miny,maxx,maxy)
###Output
_____no_output_____
###Markdown
You could also add a buffer around your CHM, if you wanted to see a bigger area:
###Code
#buffer = 200
#cb = (chm.x.min(), chm.y.min(), chm.x.max(), chm.y.max())  # CHM bounds (minx, miny, maxx, maxy)
#bbox = box(cb[0]-buffer,cb[1]-buffer,cb[2]+buffer,cb[3]+buffer)
###Output
_____no_output_____
###Markdown
But for now let's just stay with the same limits as the CHM. We need to put the bounding box into a geodataframe
###Code
geo = gpd.GeoDataFrame({'geometry': bbox}, index=[0], crs=chm.crs)
###Output
/srv/conda/envs/notebook/lib/python3.8/site-packages/pyproj/crs/crs.py:292: FutureWarning: '+init=<authority>:<code>' syntax is deprecated. '<authority>:<code>' is the preferred initialization method. When making the change, be mindful of axis order changes: https://pyproj4.github.io/pyproj/stable/gotchas.html#axis-order-changes-in-proj-6
projstring = _prepare_from_string(projparams)
###Markdown
And then extract the coordinates to a format that we can use with rasterio.
###Code
def getFeatures(gdf):
"""Function to parse features from GeoDataFrame in such a manner that rasterio wants them"""
import json
return [json.loads(gdf.to_json())['features'][0]['geometry']]
coords = getFeatures(geo)
print(coords)
###Output
[{'type': 'Polygon', 'coordinates': [[[753719.9471975124, 4321584.199675269], [753719.9471975124, 4322328.699675269], [752975.4471975124, 4322328.699675269], [752975.4471975124, 4321584.199675269], [753719.9471975124, 4321584.199675269]]]}]
###Markdown
After all that, we're ready to clip the raster. We do this using the mask function from rasterio, specifying crop=True. We also need to re-open the dataset as a rasterio object.
###Code
SD_GM_3m.close()
SD_GM_3m = rasterio.open(fparts_SD_GM_3m)
out_img, out_transform = mask(SD_GM_3m, coords, crop=True)
###Output
_____no_output_____
###Markdown
We also need to copy the meta information across to the new raster
###Code
out_meta = SD_GM_3m.meta.copy()
epsg_code = int(SD_GM_3m.crs.data['init'][5:])
###Output
_____no_output_____
###Markdown
And update the metadata with the new dimensions etc.
###Code
out_meta.update({"driver": "GTiff",
                 "height": out_img.shape[1],
                 "width": out_img.shape[2],
                 "transform": out_transform,
                 "crs": pycrs.parse.from_epsg_code(epsg_code).to_proj4()})
###Output
_____no_output_____
###Markdown
Next, we should save this new raster. Let's call the area 'GMb', to match the name of the CHM.
###Code
out_tif = "data/ASO_GrandMesa_2020Feb1-2_snowdepth_3m_clip_GMb.tif"
with rasterio.open(out_tif, "w", **out_meta) as dest:
dest.write(out_img)
###Output
_____no_output_____
###Markdown
To check the result is correct, we can read the data back in.
###Code
SD_GMb_3m = xarray.open_rasterio(out_tif)
# plot the new SD map
fig4, ax4 = pyplot.subplots()
pos4 = ax4.imshow(SD_GMb_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax4.set_title('GMb Snow Depth 3m')
fig4.colorbar(pos4, ax=ax4)
###Output
_____no_output_____
###Markdown
Here's an aerial image of the same area. What patterns do you see in the snow depth map when compared to the aerial image? (Image from Google Earth) If you plotted snow depth compared to canopy height, what do you think you'd see in the graph? Raster resolution ASO also creates a 50m SD data product. So, let's have a look at that in the same area.
###Code
SD_GM_50m = rasterio.open("data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m.tif")
out_img_50, out_transform_50 = mask(SD_GM_50m, coords, crop=True)
out_meta_50 = SD_GM_50m.meta.copy()
epsg_code_50 = int(SD_GM_50m.crs.data['init'][5:])
out_meta_50.update({"driver": "GTiff",
                    "height": out_img_50.shape[1],
                    "width": out_img_50.shape[2],
                    "transform": out_transform_50,
                    "crs": pycrs.parse.from_epsg_code(epsg_code_50).to_proj4()})
out_tif_50 = "data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m_clip_GMb.tif"
with rasterio.open(out_tif_50, "w", **out_meta_50) as dest:
dest.write(out_img_50)
SD_GM_50m.close()
SD_GMb_50m = xarray.open_rasterio(out_tif_50)
###Output
_____no_output_____
###Markdown
Now we have the two rasters clipped to the same area, we can compare them.
###Code
### plot them side by side with a minimum and maximum values of 0m and 1.5m
fig5, ax5 = pyplot.subplots()
pos5 = ax5.imshow(SD_GMb_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax5.set_title('GMb Snow Depth 3m')
fig5.colorbar(pos5, ax=ax5)
fig6, ax6 = pyplot.subplots()
pos6 = ax6.imshow(SD_GMb_50m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax6.set_title('GM Snow Depth 50m')
fig6.colorbar(pos6, ax=ax6)
###Output
_____no_output_____
###Markdown
Let's have a look at the two resolutions next to each other. What do you notice? We can look at the data in more detail. For example, histograms show us the snow depth distribution across the area.
###Code
# plot histograms of the snow depth distributions across a range from 0 to 1.5m in 25cm increments
fig7, ax7 = pyplot.subplots(figsize=(5, 5))
pyplot.hist(SD_GMb_3m.data.flatten(),bins=np.arange(0, 1.5 + 0.025, 0.025));
ax7.set_title('GM Snow Depth 3m')
ax7.set_xlim((0,1.5))
fig8, ax8 = pyplot.subplots(figsize=(5, 5))
pyplot.hist(SD_GMb_50m.data.flatten(),bins=np.arange(0, 1.5 + 0.025, 0.025));
ax8.set_title('GM Snow Depth 50m')
ax8.set_xlim((0,1.5))
###Output
_____no_output_____
###Markdown
Things to think about:- What are the maximum and minimum snow depths between the two datasets?- Does the distribution in snow depths across the area change with resolution?- How representative are the different datasets for snow depth at different process scales? Can you see the forest in the 50m data?- There are snow free areas in the 3m data, but not in the 50m. What do you think this means for validating modelled snow depletion?
###Code
SD_GMb_3m.close()
SD_GMb_50m.close()
chm.close()
###Output
_____no_output_____
###Markdown
Resampling If you are looking to compare your modelled snow depth, you can resample your lidar snow depth to the same resolution as your model. You can see the code [here](https://rasterio.readthedocs.io/en/latest/topics/resampling.html). Let's say we want to sample the whole domain at 250 m resolution.
###Code
# Resample your raster
# select your upscale_factor - this is related to the resolution of your raster
# upscale_factor = old_resolution/desired_resolution
upscale_factor = 50/250
SD_GMb_50m_rio = rasterio.open("data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m_clip_GMb.tif")
# resample data to target shape using the bilinear method
new_res = SD_GMb_50m_rio.read(
out_shape=(
SD_GMb_50m_rio.count,
int(SD_GMb_50m_rio.height * upscale_factor),
int(SD_GMb_50m_rio.width * upscale_factor)
),
resampling=Resampling.bilinear
)
# scale image transform
transform = SD_GMb_50m_rio.transform * SD_GMb_50m_rio.transform.scale(
(SD_GMb_50m_rio.width / new_res.shape[-1]),
(SD_GMb_50m_rio.height / new_res.shape[-2])
)
# display the raster
fig9, ax9 = pyplot.subplots()
pos9 = ax9.imshow(new_res[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax9.set_title('GM Snow Depth 50m')
fig9.colorbar(pos9, ax=ax9)
###Output
_____no_output_____
###Markdown
Play around with different upscaling factors and see what sort of results you get. How do the maximum and minimum values across the area change? Other possibilities: - Load the 3 m dataset and resample from the higher resolution. - You can clip to larger areas, such as a model domain, to resample to larger pixel sizes.- Load another dataset and see if you see the same patterns.
###Code
SD_GMb_50m_rio.close()
###Output
_____no_output_____
###Markdown
Lidar remote sensing of snow Intro ASO See an overview of ASO operations [here](https://www.cbrfc.noaa.gov/report/AWRA2019_Pres3.pdf). ASO set-up: Riegl Q1560 dual laser scanning lidar 1064nm (image credit ASO) ASO data collection (image credit ASO) Laser reflections together create a 3D point cloud of the earth surface (image credit ASO) Point clouds can be classified and processed using specialised software such as [pdal](https://pdal.io/). We won't cover that here, because ASO has already processed all the snow depth datasets for us. ASO rasterises the point clouds to produce snow depth maps as rasters. Point clouds can also be rasterised to create canopy height models (CHMs) or digital terrain models (DTMs). These formats allow us to analyse the information more easily. ASO states "Snow depths in exposed areas are within 1-2 cm at the 50 m scale." However, point-to-point variability can exist between manual and lidar measurements due to:- vegetation, particularly shrubs- geo-location accuracy of manual measurements- combination of both in forests Basic data inspection Import the packages needed for this tutorial
###Code
!pip install "pycrs>=1" --no-deps
# general purpose data manipulation and analysis
import numpy as np
# packages for working with raster datasets
import rasterio
from rasterio.mask import mask
from rasterio.plot import show
from rasterio.enums import Resampling
import xarray # allows us to work with raster data as arrays
# packages for working with geospatial data
import geopandas as gpd
import pycrs
from shapely.geometry import box
# import packages for viewing the data
import matplotlib.pyplot as pyplot
#define paths
import os
CURDIR = os.path.dirname(os.path.realpath("__file__"))
# matplotlib functionality
%matplotlib inline
# %matplotlib notebook
###Output
_____no_output_____
###Markdown
The command *%matplotlib notebook* allows you to plot data interactively, which makes things way more interesting. If you want, you can test to see if this works for you. If not, go back to *%matplotlib inline* Data overview and visualisation
###Code
# open the raster
fparts_SD_GM_3m = "data/ASO_GrandMesa_2020Feb1-2_snowdepth_3m_clipped.tif"
SD_GM_3m = rasterio.open(fparts_SD_GM_3m)
# check the CRS - is it consistent with other datasets we want to use?
SD_GM_3m.crs
###Output
_____no_output_____
###Markdown
ASO datasets are in EPSG:32612. However, you might find other SnowEx datasets are in EPSG:26912. This can be changed using reproject in rioxarray. See [here](https://corteva.github.io/rioxarray/stable/examples/reproject.html) for an example. For now, we'll stay in 32612. With the above raster open, you can look at the different attributes of the raster. For example, the cellsize:
###Code
SD_GM_3m.res
###Output
_____no_output_____
###Markdown
The raster boundaries...
###Code
SD_GM_3m.bounds
###Output
_____no_output_____
###Markdown
And the dimensions. Note this is in pixels, not in meters. To get the total size, you can multiply the dimensions by the resolution.
###Code
print(SD_GM_3m.width,SD_GM_3m.height)
###Output
_____no_output_____
###Markdown
rasterio.open allows you to quickly look at the data...
###Code
fig1, ax1 = pyplot.subplots(1, figsize=(5, 5))
show((SD_GM_3m, 1), cmap='Blues', interpolation='none', ax=ax1)
###Output
_____no_output_____
###Markdown
While this can allow us to very quickly visualise the data, it doesn't show us a lot about the data itself. We can also open the data from the geotiff as a data array, giving us more flexibility in the data analysis.
###Code
# First, close the rasterio file
SD_GM_3m.close()
###Output
_____no_output_____
###Markdown
Now we can re-open the data as an array and visualise it using pyplot.
###Code
dat_array_3m = xarray.open_rasterio(fparts_SD_GM_3m)
# plot the raster
fig2, ax2 = pyplot.subplots()
pos2 = ax2.imshow(dat_array_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5);
ax2.set_title('GM Snow Depth 3m')
fig2.colorbar(pos2, ax=ax2)
###Output
_____no_output_____
###Markdown
We set the figure to display the colorbar with a maximum of 1.5m. But you can see in the north of the area there are some very deep snow depths.
###Code
np.nanmax(dat_array_3m)
###Output
_____no_output_____
###Markdown
Optional - use the interactive plot to pan and zoom in and out to have a look at the snow depth distribution across the Grand Mesa. This should work for you if you run your notebook locally. We can clip the larger domain to smaller areas to better visualise the snow depth distributions in the areas we're interested in. Depending on the field site, you could look at distributions in different slope classes, vegetation classes (bush vs forest vs open) or aspect classes. For now, we'll focus on a forest-dominated area and use the canopy height model (CHM) to clip the snow depth data. Canopy height models We will use an existing raster of a canopy height model (CHM) to clip our snow depth map. This CHM covers an area investigated by [Mazzotti et al. 2019](https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2019WR024898). You can also access the data [here](https://www.envidat.ch//metadata/als-based-snow-depth).
###Code
# load the chm
chm = xarray.open_rasterio('data/CHM_20160926GMb_700x700_EPSG32612.tif')
# check the crs is the same as the snow depth data
chm.crs
###Output
_____no_output_____
###Markdown
Don't forget that if the coordinate systems in your datasets don't match then you will need to transform one of them. You can change the coordinate systems using the links above. (Note, I've already transformed this dataset from EPSG 32613). Let's have a quick look at the chm data as an xarray.
###Code
chm
###Output
_____no_output_____
###Markdown
You can see the resolution of the CHM is 0.5m, which is much higher than the snow depth dataset. Can you think why we would want to have the CHM at such a high resolution? There are two main reasons:- resolution high enough to represent individual trees- maximum canopy height can be misrepresented in lower resolution CHMs We can extract simple statistics from the dataset the same way you would with a numpy dataset. For example:
###Code
chm.data.max()
# plot the CHM, setting the maximum color value to the maximum canopy height in the dataset
fig3, ax3 = pyplot.subplots()
pos3 = ax3.imshow(chm.data[0,:,:], cmap='viridis', vmin=0, vmax=chm.data.max())
ax3.set_title('CHM Grand Mesa B')
fig3.colorbar(pos3, ax=ax3)
###Output
_____no_output_____
###Markdown
If you play around and zoom in, you can see individual trees. If you were wanting to investigate the role of canopy structure at the individual tree level on snow depth distribution, this is the level of detail you would want to work with. Clipping rasters Let's clip the snow depth dataset to the same boundaries as the CHM. One way to clip the snow depth raster is to use another raster as an area of interest. We will use the CHM as a mask, following [this](https://automating-gis-processes.github.io/CSC18/lessons/L6/clipping-raster.html) tutorial. You can also use shapefiles (see [here](https://rasterio.readthedocs.io/en/latest/topics/masking-by-shapefile.html) for another example) if you want to use more complicated geometry, or you can manually define your coordinates.We can extract the boundaries of the CHM and create a bounding box using the Shapely package
###Code
bbox = box(chm.x.min(),chm.y.min(),chm.x.max(),chm.y.max())
print(bbox)
###Output
_____no_output_____
###Markdown
If you want to come back and do this later, you don't need a raster or shapefile. If you only know the min/max coordinates of the area you're interested in, that's fine too.
###Code
# bbox = box(minx,miny,maxx,maxy)
###Output
_____no_output_____
###Markdown
You could also add a buffer around your CHM, if you wanted to see a bigger area:
###Code
#buffer = 200
#cb = (chm.x.min(), chm.y.min(), chm.x.max(), chm.y.max())  # CHM bounds (minx, miny, maxx, maxy)
#bbox = box(cb[0]-buffer,cb[1]-buffer,cb[2]+buffer,cb[3]+buffer)
###Output
_____no_output_____
###Markdown
But for now let's just stay with the same limits as the CHM. We need to put the bounding box into a geodataframe
###Code
geo = gpd.GeoDataFrame({'geometry': bbox}, index=[0], crs=chm.crs)
###Output
_____no_output_____
###Markdown
And then extract the coordinates to a format that we can use with rasterio.
###Code
def getFeatures(gdf):
"""Function to parse features from GeoDataFrame in such a manner that rasterio wants them"""
import json
return [json.loads(gdf.to_json())['features'][0]['geometry']]
coords = getFeatures(geo)
print(coords)
###Output
_____no_output_____
###Markdown
After all that, we're ready to clip the raster. We do this using the mask function from rasterio, specifying crop=True. We also need to re-open the dataset as a rasterio object.
###Code
SD_GM_3m.close()
SD_GM_3m = rasterio.open(fparts_SD_GM_3m)
out_img, out_transform = mask(SD_GM_3m, coords, crop=True)
###Output
_____no_output_____
###Markdown
We also need to copy the meta information across to the new raster
###Code
out_meta = SD_GM_3m.meta.copy()
epsg_code = int(SD_GM_3m.crs.data['init'][5:])
###Output
_____no_output_____
###Markdown
And update the metadata with the new dimensions etc.
###Code
out_meta.update({"driver": "GTiff",
                 "height": out_img.shape[1],
                 "width": out_img.shape[2],
                 "transform": out_transform,
                 "crs": pycrs.parse.from_epsg_code(epsg_code).to_proj4()})
###Output
_____no_output_____
###Markdown
Next, we should save this new raster. Let's call the area 'GMb', to match the name of the CHM.
###Code
out_tif = "data/ASO_GrandMesa_2020Feb1-2_snowdepth_3m_clip_GMb.tif"
with rasterio.open(out_tif, "w", **out_meta) as dest:
dest.write(out_img)
###Output
_____no_output_____
###Markdown
To check the result is correct, we can read the data back in.
###Code
SD_GMb_3m = xarray.open_rasterio(out_tif)
# plot the new SD map
fig4, ax4 = pyplot.subplots()
pos4 = ax4.imshow(SD_GMb_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax4.set_title('GMb Snow Depth 3m')
fig4.colorbar(pos4, ax=ax4)
###Output
_____no_output_____
###Markdown
Here's an aerial image of the same area. What patterns do you see in the snow depth map when compared to the aerial image? (Image from Google Earth) If you plotted snow depth compared to canopy height, what do you think you'd see in the graph? Raster resolution ASO also creates a 50m SD data product. So, let's have a look at that in the same area.
###Code
SD_GM_50m = rasterio.open("data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m.tif")
out_img_50, out_transform_50 = mask(SD_GM_50m, coords, crop=True)
out_meta_50 = SD_GM_50m.meta.copy()
epsg_code_50 = int(SD_GM_50m.crs.data['init'][5:])
out_meta_50.update({"driver": "GTiff",
                    "height": out_img_50.shape[1],
                    "width": out_img_50.shape[2],
                    "transform": out_transform_50,
                    "crs": pycrs.parse.from_epsg_code(epsg_code_50).to_proj4()})
out_tif_50 = "data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m_clip_GMb.tif"
with rasterio.open(out_tif_50, "w", **out_meta_50) as dest:
dest.write(out_img_50)
SD_GM_50m.close()
SD_GMb_50m = xarray.open_rasterio(out_tif_50)
###Output
_____no_output_____
###Markdown
Now we have the two rasters clipped to the same area, we can compare them.
###Code
### plot them side by side with a minimum and maximum values of 0m and 1.5m
fig5, ax5 = pyplot.subplots()
pos5 = ax5.imshow(SD_GMb_3m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax5.set_title('GMb Snow Depth 3m')
fig5.colorbar(pos5, ax=ax5)
fig6, ax6 = pyplot.subplots()
pos6 = ax6.imshow(SD_GMb_50m.data[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax6.set_title('GM Snow Depth 50m')
fig6.colorbar(pos6, ax=ax6)
###Output
_____no_output_____
###Markdown
Let's have a look at the two resolutions next to each other. What do you notice? We can look at the data in more detail. For example, histograms show us the snow depth distribution across the area.
###Code
# plot histograms of the snow depth distributions across a range from 0 to 1.5m in 25cm increments
fig7, ax7 = pyplot.subplots(figsize=(5, 5))
pyplot.hist(SD_GMb_3m.data.flatten(),bins=np.arange(0, 1.5 + 0.025, 0.025));
ax7.set_title('GM Snow Depth 3m')
ax7.set_xlim((0,1.5))
fig8, ax8 = pyplot.subplots(figsize=(5, 5))
pyplot.hist(SD_GMb_50m.data.flatten(),bins=np.arange(0, 1.5 + 0.025, 0.025));
ax8.set_title('GM Snow Depth 50m')
ax8.set_xlim((0,1.5))
###Output
_____no_output_____
###Markdown
Things to think about:- What are the maximum and minimum snow depths between the two datasets?- Does the distribution in snow depths across the area change with resolution?- How representative are the different datasets for snow depth at different process scales? Can you see the forest in the 50m data?- There are snow free areas in the 3m data, but not in the 50m. What do you think this means for validating modelled snow depletion?
###Code
SD_GMb_3m.close()
SD_GMb_50m.close()
chm.close()
###Output
_____no_output_____
###Markdown
Resampling If you are looking to compare your modelled snow depth, you can resample your lidar snow depth to the same resolution as your model. You can see the code [here](https://rasterio.readthedocs.io/en/latest/topics/resampling.html). Let's say we want to sample the whole domain at 250 m resolution.
###Code
# Resample your raster
# select your upscale_factor - this is related to the resolution of your raster
# upscale_factor = old_resolution/desired_resolution
upscale_factor = 50/250
SD_GMb_50m_rio = rasterio.open("data/ASO_GrandMesa_Mosaic_2020Feb1-2_snowdepth_50m_clip_GMb.tif")
# resample data to target shape using the bilinear method
new_res = SD_GMb_50m_rio.read(
out_shape=(
SD_GMb_50m_rio.count,
int(SD_GMb_50m_rio.height * upscale_factor),
int(SD_GMb_50m_rio.width * upscale_factor)
),
resampling=Resampling.bilinear
)
# scale image transform
transform = SD_GMb_50m_rio.transform * SD_GMb_50m_rio.transform.scale(
(SD_GMb_50m_rio.width / new_res.shape[-1]),
(SD_GMb_50m_rio.height / new_res.shape[-2])
)
# display the raster
fig9, ax9 = pyplot.subplots()
pos9 = ax9.imshow(new_res[0,:,:], cmap='Blues', vmin=0, vmax=1.5)
ax9.set_title('GM Snow Depth 50m')
fig9.colorbar(pos9, ax=ax9)
###Output
_____no_output_____
###Markdown
Play around with different upscaling factors and see what sort of results you get. How do the maximum and minimum values across the area change? Other possibilities: - Load the 3 m dataset and resample from the higher resolution. - You can clip to larger areas, such as a model domain, to resample to larger pixel sizes.- Load another dataset and see if you see the same patterns.
###Code
SD_GMb_50m_rio.close()
###Output
_____no_output_____ |
7. Translator.ipynb | ###Markdown
Translator using Python Install the required package
###Code
#!pip install googletrans
###Output
_____no_output_____
###Markdown
Import the required libraries
###Code
import googletrans
from googletrans import Translator
#print(googletrans.LANGUAGES)
###Output
_____no_output_____
###Markdown
Create a Translator instance and call the 'translate' function
###Code
translator = Translator()
result = translator.translate('Quiero comprar un pan por favor', src='es', dest='de')
print("Idioma original: ", result.src)
print("Idioma de destino: ", result.dest)
print()
print(result.origin)
print(result.text)
###Output
Idioma original: es
Idioma de destino: de
Quiero comprar un pan por favor
Ich möchte bitte ein Brot kaufen
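###Markdown
As an extra, hypothetical snippet (not in the original notebook): googletrans can also try to guess the source language. The detect method and its attributes may vary between googletrans versions.
###Code
# hypothetical extra: let googletrans guess the source language
detected = translator.detect('Quiero comprar un pan por favor')
print(detected.lang)
###Output
_____no_output_____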
|
_site/lectures/Week 03 - Functions, Loops, Comprehensions and Generators/01.a - Python Functions.ipynb | ###Markdown
Python Functions[Python Function Tutorial](https://www.datacamp.com/community/tutorials/functions-python-tutorial)Functions are used to encapsulate a set of related instructions that you want to use within your program to carry out a specific task. This helps with the organization of your code as well as code reusability. Oftentimes functions accept parameters and return values, but that's not always the case. There are three types of functions in Python:1. Built-in functions, such as help() to ask for help, min() to get the minimum value, print() to print an object to the terminal, … You can find an overview with more of these functions here.1. User-Defined Functions (UDFs), which are functions that users create to help them out; And1. Anonymous functions, which are also called lambda functions because they are not declared with the standard def keyword. Functions vs. MethodsA method is a function that's part of a class which we'll discuss in another lecture. Keep in mind all methods are functions but not all functions are methods. Parameters vs. ArgumentsParameters are the names used when defining a function or a method, and into which arguments will be mapped. In other words, arguments are the things which are supplied to any function or method call, while the function or method code refers to the arguments by their parameter names. Consider the function```pythondef sum(a, b): return a + b``````sum``` has 2 parameters *a* and *b*. If you call the ```sum``` function with values **2** and **3** then **2** and **3** are the arguments. Defining User FunctionsThe four steps to defining a function in Python are the following:1. Use the keyword ```def``` to declare the function and follow this up with the function name.1. Add parameters to the function: they should be within the parentheses of the function. End your line with a colon.1. Add statements that the functions should execute.1. End your function with a return statement if the function should output something. Without the return statement, your function will return an object None.
###Code
# takes no parameters, returns none
def say_hello():
print('hello')
say_hello()
x = say_hello()
print(type(x), x)
# takes parameters, returns none
def say_something(message):
print(message)
x = say_something('hello class')
print(type(x), x)
###Output
hello class
<class 'NoneType'> None
###Markdown
The return Statement Sometimes it's useful to return values from functions. We'll refactor our code to return values.
###Code
def get_message():
return 'hello class'
def say_something():
message = get_message()
print(message)
x = get_message()
print(type(x), x)
say_something()
def ask_user_to_say_something():
message = input('say something')
print(message)
ask_user_to_say_something()
def say_anything(fn):
message = fn()
print(message)
fn = get_message
say_anything(fn)
fn = input
say_anything(fn)
print(type(say_something()))
x = say_something()
print(type(x), x)
###Output
hello class
<class 'NoneType'> None
###Markdown
returning multiple valuesIn python you can return values in a variety of data types including [primitive data structures](https://www.datacamp.com/community/tutorials/data-structures-pythonprimitive) such as integers, floats, strings, & booleans as well as [non-primitive data structures](https://www.datacamp.com/community/tutorials/data-structures-pythonnonprimitive) such as arrays, lists, tuples, dictionaries, sets, and files.
###Code
# returning a list
def get_messages():
return ['hello class', 'python is great', 'here we\'re retuning a list']
messages = get_messages()
print(type(messages), messages)
for message in messages:
print(type(message), message)
# returning a tuple... more on tuples later
def get_message():
return ('hello class', 3)
message = get_message()
print(type(message), message)
for i in range(0, message[1]):
print(message[0])
def get_message():
return 'hello class', 3 # ('s are optional
message = get_message()
print(type(message), message)
for i in range(0, message[1]):
print(message[0])
message, iterations = get_message()
print(type(message), message)
for i in range(0, iterations):
print(message)
###Output
<class 'str'> hello class
hello class
hello class
hello class
###Markdown
Function Arguments in PythonThere are four types of arguments that Python functions can take:1. Default arguments1. Required arguments1. Keyword arguments1. Variable number of arguments Default ArgumentsDefault arguments are those that take a default value if no argument value is passed during the function call. You can assign this default value with the assignment operator =, just like in the following example:
###Code
import random
def get_random_numbers(n=1):
if n == 1:
return random.random()
elif n > 1:
numbers = []
for i in range(0, n):
numbers.append(random.random())
return numbers
w = get_random_numbers()
print('w:', type(w), w)
x = get_random_numbers(1)
print('x:', type(x), x)
y = get_random_numbers(n=3)
print('y:', type(y), y)
z = get_random_numbers(n=-1)
print('z:', type(z), z)
# note : this might be a better implementation
def get_random_numbers(n=1):
if n == 1:
return [random.random()]
elif n > 1:
numbers = []
for i in range(0, n):
numbers.append(random.random())
return numbers
else:
return []
w = get_random_numbers()
print('w:', type(w), w)
x = get_random_numbers(1)
print('x:', type(x), x)
y = get_random_numbers(n=3)
print('y:', type(y), y)
z = get_random_numbers(n=-1)
print('z:', type(z), z)
###Output
w: <class 'list'> [0.10411388008487865]
x: <class 'list'> [0.7998484189289874]
y: <class 'list'> [0.4288972071605044, 0.8683514593535158, 0.05420177266859705]
z: <class 'list'> []
###Markdown
Required ArgumentsRequired arguments are mandatory and you will generate an error if they're not present. Required arguments must be passed in precisely the right order, just like in the following example:
###Code
def say_something(message, number_of_times):
for i in range(0, number_of_times):
print(message)
# arguments passed in the proper order
say_something('hello', 3)
# arguments passed incorrectly
say_something(3, 'hello')
###Output
_____no_output_____
###Markdown
Keyword ArgumentsYou can use keyword arguments to make sure that you call all the parameters in the right order. You can do so by specifying their parameter name in the function call.
###Code
say_something(message='hello', number_of_times=3)
say_something(number_of_times=3, message='hello')
###Output
hello
hello
hello
hello
hello
hello
###Markdown
Variable Number of Arguments In cases where you don't know the exact number of arguments that you want to pass to a function, you can use the following syntax with *args:
###Code
def add(*x):
print(type(x), x)
total = 0
for i in x:
total += i
return total
total = add(1)
print(total)
total = add(1, 1)
print(total)
total = add(1, 2, 3, 4, 5)
print(total)
###Output
<class 'tuple'> (1,)
1
<class 'tuple'> (1, 1)
2
<class 'tuple'> (1, 2, 3, 4, 5)
15
###Markdown
The asterisk ```*``` is placed before the variable name that holds the values of all nonkeyword variable arguments. Note here that you might as well have passed ```*varint```, ```*var_int_args``` or any other name to the ```add()``` function.
###Code
# You can specify any combination of required, keyword, and variable arguments.
def add(a, b, *args):
total = a + b
for arg in args:
total += arg
return total
total = add(1, 1)
print(total)
total = add(1, 1, 2)
print(total)
total = add(1, 2, 3, 4, 5)
print(total)
###Output
2
4
15
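###Markdown
To illustrate the note above that the name after the asterisk is arbitrary, here is a small added example using a different parameter name:
###Code
# added example: the parameter name after * is arbitrary
def multiply(*numbers):
    result = 1
    for n in numbers:
        result *= n
    return result

print(multiply(2, 3, 4))
###Output
_____no_output_____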
###Markdown
Global vs Local VariablesIn general, variables that are defined inside a function body have a local scope, and those defined outside have a global scope. That means that local variables are defined within a function block and can only be accessed inside that function, while global variables can be obtained by all functions that might be in your script:
###Code
# global variable
score = 0
def player_hit():
global score
hit_points = -10 # local variable
score += hit_points
def enemy_hit():
global score
hit_points = 5 # local variable
score += hit_points
enemy_hit()
enemy_hit()
enemy_hit()
enemy_hit()
player_hit()
enemy_hit()
player_hit()
print(score)
###Output
5
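###Markdown
A small added illustration of why the global keyword matters in the example above: without it, assignment inside the function creates a new local variable.
###Code
# added illustration: without `global`, assignment creates a local name
counter = 0

def increment_local():
    counter = 1  # local to this function; the global counter is untouched
    return counter

increment_local()
print(counter)  # still 0
###Output
_____no_output_____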
###Markdown
Anonymous Functions in PythonAnonymous functions are also called lambda functions in Python because instead of declaring them with the standard def keyword, you use the ```lambda``` keyword.
###Code
def double(x):
return x * 2
y = double(3)
print(y)
d = lambda x: x * 2
y = d(3)
print(y)
sdlfjsdk = lambda x, n: x if n < 5 else 0
result = sdlfjsdk(4, 6)
print(result)
a = lambda x: x ** 2 if x < 0 else x
print(a(-1))
print(a(-2))
print(a(3))
add = lambda x, y: x + y
x = add(2, 3)
print(x)
###Output
5
###Markdown
You use anonymous functions when you need a nameless function for a short period of time, created at runtime. Typical contexts are ```filter()```, ```map()``` and ```reduce()```:* ```filter()``` keeps only the items of the original list for which the lambda returns True (here, x * 2 > 10). * ```map()``` applies a function to every item of the list. * ```reduce()```, from the functools module, applies a function cumulatively to the items of my_list from left to right, reducing the sequence to a single value.
###Code
from functools import reduce
my_list = [1,2,3,4,5,6,7,8,9,10]
# Use lambda function with `filter()`
filtered_list = list(filter(lambda x: (x * 2 > 10), my_list))
# Use lambda function with `map()`
mapped_list = list(map(lambda x: x * 2, my_list))
# Use lambda function with `reduce()`
reduced_list = reduce(lambda x, y: x + y, my_list)
print(filtered_list)
print(mapped_list)
print(reduced_list)
###Output
[6, 7, 8, 9, 10]
[2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
55
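###Markdown
For illustration, the cumulative behaviour of ```reduce()``` can be made explicit by printing the running total at each step (a small sketch added for illustration, not part of the original notebook):
###Code
from functools import reduce

def add_and_trace(x, y):
    print('{} + {} = {}'.format(x, y, x + y))  # show each left-to-right reduction step
    return x + y

reduce(add_and_trace, [1, 2, 3, 4, 5])  # evaluates to 15
###Output
_____no_output_____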
###Markdown
Using main() as a FunctionYou can easily define a main() function and call it just like you have done with all of the other functions above:
###Code
# Define `main()` function
def main():
print("This is a main function")
main()
###Output
This is a main function
###Markdown
However, as it stands now, the body of your ```main()``` function would also run when the file is imported as a module. To prevent that, only call the function when ```__name__ == '__main__'```, i.e. when the file is executed directly (the function below is named ```start_here``` to show the guard works for any entry-point name).
###Code
# Define the entry-point function (here named `start_here`)
def start_here():
print("This is a main function")
# Call it only when this file is run directly, not when it is imported
if __name__ == '__main__':
start_here()
###Output
This is a main function
|
02_inpainting/inpainting_gmcnn_train.ipynb | ###Markdown
Most code here is taken directly from https://github.com/shepnerd/inpainting_gmcnn/tree/master/pytorch with minor adjustments and refactoring into a Jupyter notebook, including a convenient way of providing arguments for train_options. The code cell under "Create elliptical masks" is original code. Otherwise, the cell titles refer to the module at the above GitHub link that the code was originally taken from. Load libraries and mount Google Drive
###Code
from google.colab import drive
## data.data
import torch
import numpy as np
import cv2
import os
from torch.utils.data import Dataset
## model.basemodel
# import os
# import torch
import torch.nn as nn
## model.basenet
# import os
# import torch
# import torch.nn as nn
## model.layer
# import torch
# import torch.nn as nn
# import torch.nn.functional as F
# from util.utils import gauss_kernel
import torchvision.models as models
# import numpy as np
## model.loss
# import torch
# import torch.nn as nn
import torch.autograd as autograd
import torch.nn.functional as F
# from model.layer import VGG19FeatLayer
from functools import reduce
## model.net
# import torch
# import torch.nn as nn
# import torch.nn.functional as F
# from model.basemodel import BaseModel
# from model.basenet import BaseNet
# from model.loss import WGANLoss, IDMRFLoss
# from model.layer import init_weights, PureUpsampling, ConfidenceDrivenMaskLayer, SpectralNorm
# import numpy as np
## options.train_options
import argparse
# import os
import time
## Original code - elliptical mask
# import cv2
# import numpy as np
from numpy import random
from numpy.random import randint
# from matplotlib import pyplot as plt
import math
## utils.utils
# import numpy as np
import scipy.stats as st
# import cv2
# import time
# import os
import glob
## Dependencies from train.py
# import os
from torch.utils.data import DataLoader
from torchvision import transforms
import torchvision.utils as vutils
try:
from tensorboardX import SummaryWriter
except:
!pip install tensorboardX
from tensorboardX import SummaryWriter
# from data.data import InpaintingDataset, ToTensor
# from model.net import InpaintingModel_GMCNN
# from options.train_options import TrainOptions
# from util.utils import getLatest
drive.mount("/content/drive")
dir_path = "/content/drive/MyDrive/redi-detecting-cheating"
###Output
_____no_output_____
###Markdown
Arguments for training
###Code
train_args = {'--dataset': 'expanded_isic_no_patch_fifth_run',
'--data_file': '{}'.format(os.path.join(dir_path, 'models', 'train_files.txt')),
'--mask_dir': '{}'.format(os.path.join(dir_path, 'data', 'masks', 'dilated-masks-224')),
'--load_model_dir': '{}'.format(os.path.join(dir_path, 'models', 'inpainting_gmcnn', \
'20210607-102044_GMCNN_expanded_isic_no_patch_fourth_run_b8_s224x224_gc32_dc64_randmask-ellipse')),
'--train_spe': '650',
'--viz_steps': '25' # Print a training update to screen after this many iterations.
}
# train_spe:
# Expect roughly 1,250 iterations per epoch for training set size of approx. 10,000 images (without patches) in batches of 8.
# Thus train_spe set to 650 to checkpoint the model halfway through each epoch, and then this is overwritten when the epoch is complete.
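# (Roughly: 10,000 images / batch size of 8 = 1,250 iterations per epoch; half of that is 625, set to 650 here for convenience.)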
###Output
_____no_output_____
###Markdown
data.data
###Code
class ToTensor(object):
def __call__(self, sample):
entry = {}
for k in sample:
if k == 'rect':
entry[k] = torch.IntTensor(sample[k])
else:
entry[k] = torch.FloatTensor(sample[k])
return entry
class InpaintingDataset(Dataset):
def __init__(self, info_list, root_dir='', mask_dir=None, im_size=(224, 224), transform=None):
if os.path.isfile(info_list):
filenames = open(info_list, 'rt').read().splitlines()
elif os.path.isdir(info_list):
filenames = glob.glob(os.path.join(info_list, '*.jpg')) # Changed from png.
if mask_dir:
mask_files = os.listdir(mask_dir) # Get a list of all mask filenames.
# Take only files that do not have a corresponding mask.
filenames = [file for file in filenames if os.path.basename(file) not in mask_files]
self.filenames = filenames
self.root_dir = root_dir
self.transform = transform
self.im_size = im_size
np.random.seed(2018)
def __len__(self):
return len(self.filenames)
def read_image(self, filepath):
image = cv2.imread(filepath)
if image is None: # Some images are empty
return None
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
h, w, c = image.shape
if h != self.im_size[0] or w != self.im_size[1]:
ratio = max(1.0*self.im_size[0]/h, 1.0*self.im_size[1]/w)
im_scaled = cv2.resize(image, None, fx=ratio, fy=ratio)
h, w, _ = im_scaled.shape
h_idx = (h-self.im_size[0]) // 2
w_idx = (w-self.im_size[1]) // 2
im_scaled = im_scaled[h_idx:h_idx+self.im_size[0], w_idx:w_idx+self.im_size[1],:]
im_scaled = np.transpose(im_scaled, [2, 0, 1])
else:
im_scaled = np.transpose(image, [2, 0, 1])
return im_scaled
def __getitem__(self, idx):
image = self.read_image(os.path.join(self.root_dir, self.filenames[idx]))
sample = {'gt': image}
if self.transform:
sample = self.transform(sample)
return sample
###Output
_____no_output_____
###Markdown
model.basemodel
###Code
# a composite model consisting of several nets; each net is defined explicitly in its own module section
class BaseModel(nn.Module):
def __init__(self):
super(BaseModel,self).__init__()
def init(self, opt):
self.opt = opt
self.gpu_ids = opt.gpu_ids
self.save_dir = opt.model_folder
self.device = torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu')
self.model_names = []
def setInput(self, inputData):
self.input = inputData
def forward(self):
pass
def optimize_parameters(self):
pass
def get_current_visuals(self):
pass
def get_current_losses(self):
pass
def update_learning_rate(self):
pass
def test(self):
with torch.no_grad():
self.forward()
# save models to the disk
def save_networks(self, which_epoch):
for name in self.model_names:
if isinstance(name, str):
save_filename = '%s_net_%s.pth' % (which_epoch, name)
save_path = os.path.join(self.save_dir, save_filename)
net = getattr(self, 'net' + name)
if len(self.gpu_ids) > 0 and torch.cuda.is_available():
torch.save(net.state_dict(), save_path)
# net.cuda(self.gpu_ids[0])
else:
torch.save(net.state_dict(), save_path)
def __patch_instance_norm_state_dict(self, state_dict, module, keys, i=0):
key = keys[i]
if i + 1 == len(keys): # at the end, pointing to a parameter/buffer
if module.__class__.__name__.startswith('InstanceNorm') and \
(key == 'running_mean' or key == 'running_var'):
if getattr(module, key) is None:
state_dict.pop('.'.join(keys))
else:
self.__patch_instance_norm_state_dict(state_dict, getattr(module, key), keys, i + 1)
# load models from the disk
def load_networks(self, load_path):
for name in self.model_names:
if isinstance(name, str):
net = getattr(self, 'net' + name)
if isinstance(net, torch.nn.DataParallel):
net = net.module
print('loading the model from %s' % load_path)
# if you are using PyTorch newer than 0.4 (e.g., built from
# GitHub source), you can remove str() on self.device
state_dict = torch.load(load_path)
# patch InstanceNorm checkpoints prior to 0.4
for key in list(state_dict.keys()): # need to copy keys here because we mutate in loop
self.__patch_instance_norm_state_dict(state_dict, net, key.split('.'))
net.load_state_dict(state_dict)
# print network information
def print_networks(self, verbose=True):
print('---------- Networks initialized -------------')
for name in self.model_names:
if isinstance(name, str):
net = getattr(self, 'net' + name)
num_params = 0
for param in net.parameters():
num_params += param.numel()
if verbose:
print(net)
print('[Network %s] Total number of parameters : %.3f M' % (name, num_params / 1e6))
print('-----------------------------------------------')
# set requies_grad=Fasle to avoid computation
def set_requires_grad(self, nets, requires_grad=False):
if not isinstance(nets, list):
nets = [nets]
for net in nets:
if net is not None:
for param in net.parameters():
param.requires_grad = requires_grad
###Output
_____no_output_____
###Markdown
model.basenet
###Code
class BaseNet(nn.Module):
def __init__(self):
super(BaseNet, self).__init__()
def init(self, opt):
self.opt = opt
self.gpu_ids = opt.gpu_ids
self.save_dir = opt.checkpoint_dir
self.device = torch.device('cuda:{}'.format(self.gpu_ids[0])) if self.gpu_ids else torch.device('cpu')
def forward(self, *input):
return super(BaseNet, self).forward(*input)
def test(self, *input):
with torch.no_grad():
self.forward(*input)
def save_network(self, network_label, epoch_label):
save_filename = '%s_net_%s.pth' % (epoch_label, network_label)
save_path = os.path.join(self.save_dir, save_filename)
torch.save(self.cpu().state_dict(), save_path)
def load_network(self, network_label, epoch_label):
save_filename = '%s_net_%s.pth' % (epoch_label, network_label)
save_path = os.path.join(self.save_dir, save_filename)
if not os.path.isfile(save_path):
print('%s not exists yet!' % save_path)
else:
try:
self.load_state_dict(torch.load(save_path))
except:
pretrained_dict = torch.load(save_path)
model_dict = self.state_dict()
try:
pretrained_dict = {k: v for k, v in pretrained_dict.items() if k in model_dict}
self.load_state_dict(pretrained_dict)
print('Pretrained network %s has excessive layers; Only loading layers that are used' % network_label)
except:
print('Pretrained network %s has fewer layers; The following are not initialized: ' % network_label)
for k, v in pretrained_dict.items():
if v.size() == model_dict[k].size():
model_dict[k] = v
for k, v in model_dict.items():
if k not in pretrained_dict or v.size() != pretrained_dict[k].size():
print(k.split('.')[0])
self.load_state_dict(model_dict)
###Output
_____no_output_____
###Markdown
model.layer
###Code
class Conv2d_BN(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True):
super(Conv2d_BN, self).__init__()
        self.model = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding, dilation, groups, bias),
            nn.BatchNorm2d(out_channels)
        )
def forward(self, *input):
return self.model(*input)
class upsampling(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, scale=2):
super(upsampling, self).__init__()
assert isinstance(scale, int)
self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding,
dilation=dilation, groups=groups, bias=bias)
self.scale = scale
def forward(self, x):
h, w = x.size(2) * self.scale, x.size(3) * self.scale
        xout = self.conv(F.interpolate(input=x, size=(h, w), mode='nearest'))  # align_corners is not valid for 'nearest' mode
return xout
class PureUpsampling(nn.Module):
def __init__(self, scale=2, mode='bilinear'):
super(PureUpsampling, self).__init__()
assert isinstance(scale, int)
self.scale = scale
self.mode = mode
def forward(self, x):
h, w = x.size(2) * self.scale, x.size(3) * self.scale
if self.mode == 'nearest':
xout = F.interpolate(input=x, size=(h, w), mode=self.mode)
else:
xout = F.interpolate(input=x, size=(h, w), mode=self.mode, align_corners=True)
return xout
class GaussianBlurLayer(nn.Module):
def __init__(self, size, sigma, in_channels=1, stride=1, pad=1):
super(GaussianBlurLayer, self).__init__()
self.size = size
self.sigma = sigma
self.ch = in_channels
self.stride = stride
self.pad = nn.ReflectionPad2d(pad)
def forward(self, x):
kernel = gauss_kernel(self.size, self.sigma, self.ch, self.ch)
kernel_tensor = torch.from_numpy(kernel)
kernel_tensor = kernel_tensor.cuda()
x = self.pad(x)
blurred = F.conv2d(x, kernel_tensor, stride=self.stride)
return blurred
class ConfidenceDrivenMaskLayer(nn.Module):
def __init__(self, size=65, sigma=1.0/40, iters=7):
super(ConfidenceDrivenMaskLayer, self).__init__()
self.size = size
self.sigma = sigma
self.iters = iters
self.propagationLayer = GaussianBlurLayer(size, sigma, pad=32)
def forward(self, mask):
# here mask 1 indicates missing pixels and 0 indicates the valid pixels
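        # repeated blurring propagates confidence from the valid region into the hole,
        # producing a soft map that decays with distance from the known pixels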
init = 1 - mask
mask_confidence = None
for i in range(self.iters):
mask_confidence = self.propagationLayer(init)
mask_confidence = mask_confidence * mask
init = mask_confidence + (1 - mask)
return mask_confidence
class VGG19(nn.Module):
def __init__(self, pool='max'):
super(VGG19, self).__init__()
self.conv1_1 = nn.Conv2d(3, 64, kernel_size=3, padding=1)
self.conv1_2 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
self.conv2_1 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
self.conv2_2 = nn.Conv2d(128, 128, kernel_size=3, padding=1)
self.conv3_1 = nn.Conv2d(128, 256, kernel_size=3, padding=1)
self.conv3_2 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
self.conv3_3 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
self.conv3_4 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
self.conv4_1 = nn.Conv2d(256, 512, kernel_size=3, padding=1)
self.conv4_2 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
self.conv4_3 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
self.conv4_4 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
self.conv5_1 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
self.conv5_2 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
self.conv5_3 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
self.conv5_4 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
if pool == 'max':
self.pool1 = nn.MaxPool2d(kernel_size=2, stride=2)
self.pool2 = nn.MaxPool2d(kernel_size=2, stride=2)
self.pool3 = nn.MaxPool2d(kernel_size=2, stride=2)
self.pool4 = nn.MaxPool2d(kernel_size=2, stride=2)
self.pool5 = nn.MaxPool2d(kernel_size=2, stride=2)
elif pool == 'avg':
self.pool1 = nn.AvgPool2d(kernel_size=2, stride=2)
self.pool2 = nn.AvgPool2d(kernel_size=2, stride=2)
self.pool3 = nn.AvgPool2d(kernel_size=2, stride=2)
self.pool4 = nn.AvgPool2d(kernel_size=2, stride=2)
self.pool5 = nn.AvgPool2d(kernel_size=2, stride=2)
def forward(self, x):
out = {}
out['r11'] = F.relu(self.conv1_1(x))
out['r12'] = F.relu(self.conv1_2(out['r11']))
out['p1'] = self.pool1(out['r12'])
out['r21'] = F.relu(self.conv2_1(out['p1']))
out['r22'] = F.relu(self.conv2_2(out['r21']))
out['p2'] = self.pool2(out['r22'])
out['r31'] = F.relu(self.conv3_1(out['p2']))
out['r32'] = F.relu(self.conv3_2(out['r31']))
out['r33'] = F.relu(self.conv3_3(out['r32']))
out['r34'] = F.relu(self.conv3_4(out['r33']))
out['p3'] = self.pool3(out['r34'])
out['r41'] = F.relu(self.conv4_1(out['p3']))
out['r42'] = F.relu(self.conv4_2(out['r41']))
out['r43'] = F.relu(self.conv4_3(out['r42']))
out['r44'] = F.relu(self.conv4_4(out['r43']))
out['p4'] = self.pool4(out['r44'])
out['r51'] = F.relu(self.conv5_1(out['p4']))
out['r52'] = F.relu(self.conv5_2(out['r51']))
out['r53'] = F.relu(self.conv5_3(out['r52']))
out['r54'] = F.relu(self.conv5_4(out['r53']))
out['p5'] = self.pool5(out['r54'])
return out
class VGG19FeatLayer(nn.Module):
def __init__(self):
super(VGG19FeatLayer, self).__init__()
self.vgg19 = models.vgg19(pretrained=True).features.eval().cuda()
self.mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1).cuda()
def forward(self, x):
out = {}
x = x - self.mean
ci = 1
ri = 0
for layer in self.vgg19.children():
if isinstance(layer, nn.Conv2d):
ri += 1
name = 'conv{}_{}'.format(ci, ri)
elif isinstance(layer, nn.ReLU):
ri += 1
name = 'relu{}_{}'.format(ci, ri)
layer = nn.ReLU(inplace=False)
elif isinstance(layer, nn.MaxPool2d):
ri = 0
name = 'pool_{}'.format(ci)
ci += 1
elif isinstance(layer, nn.BatchNorm2d):
name = 'bn_{}'.format(ci)
else:
raise RuntimeError('Unrecognized layer: {}'.format(layer.__class__.__name__))
x = layer(x)
out[name] = x
# print([x for x in out])
return out
def init_weights(net, init_type='normal', gain=0.02):
def init_func(m):
classname = m.__class__.__name__
if hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
if init_type == 'normal':
nn.init.normal_(m.weight.data, 0.0, gain)
elif init_type == 'xavier':
nn.init.xavier_normal_(m.weight.data, gain=gain)
elif init_type == 'kaiming':
nn.init.kaiming_normal_(m.weight.data, a=0, mode='fan_in')
elif init_type == 'orthogonal':
nn.init.orthogonal_(m.weight.data, gain=gain)
else:
raise NotImplementedError('initialization method [%s] is not implemented' % init_type)
if hasattr(m, 'bias') and m.bias is not None:
nn.init.constant_(m.bias.data, 0.0)
elif classname.find('BatchNorm2d') != -1:
nn.init.normal_(m.weight.data, 1.0, gain)
nn.init.constant_(m.bias.data, 0.0)
print('initialize network with %s' % init_type)
net.apply(init_func)
def init_net(net, init_type='normal', gpu_ids=[]):
if len(gpu_ids) > 0:
assert(torch.cuda.is_available())
net.to(gpu_ids[0])
net = torch.nn.DataParallel(net, gpu_ids)
init_weights(net, init_type)
return net
def l2normalize(v, eps=1e-12):
return v / (v.norm()+eps)
class SpectralNorm(nn.Module):
def __init__(self, module, name='weight', power_iteration=1):
super(SpectralNorm, self).__init__()
self.module = module
self.name = name
self.power_iteration = power_iteration
if not self._made_params():
self._make_params()
def _update_u_v(self):
u = getattr(self.module, self.name + '_u')
v = getattr(self.module, self.name + '_v')
w = getattr(self.module, self.name + '_bar')
height = w.data.shape[0]
for _ in range(self.power_iteration):
v.data = l2normalize(torch.mv(torch.t(w.view(height, -1).data), u.data))
u.data = l2normalize(torch.mv(w.view(height, -1).data, v.data))
sigma = u.dot(w.view(height, -1).mv(v))
setattr(self.module, self.name, w / sigma.expand_as(w))
def _made_params(self):
try:
u = getattr(self.module, self.name + '_u')
v = getattr(self.module, self.name + '_v')
w = getattr(self.module, self.name + '_bar')
return True
except AttributeError:
return False
def _make_params(self):
w = getattr(self.module, self.name)
height = w.data.shape[0]
width = w.view(height, -1).data.shape[1]
u = nn.Parameter(w.data.new(height).normal_(0, 1), requires_grad=False)
v = nn.Parameter(w.data.new(width).normal_(0, 1), requires_grad=False)
u.data = l2normalize(u.data)
v.data = l2normalize(v.data)
w_bar = nn.Parameter(w.data)
del self.module._parameters[self.name]
self.module.register_parameter(self.name+'_u', u)
self.module.register_parameter(self.name+'_v', v)
self.module.register_parameter(self.name+'_bar', w_bar)
def forward(self, *input):
self._update_u_v()
return self.module.forward(*input)
class PartialConv(nn.Module):
def __init__(self, in_channels=3, out_channels=32, ksize=3, stride=1):
super(PartialConv, self).__init__()
self.ksize = ksize
self.stride = stride
self.fnum = 32
self.padSize = self.ksize // 2
self.pad = nn.ReflectionPad2d(self.padSize)
self.eplison = 1e-5
self.conv = nn.Conv2d(in_channels, out_channels, stride=stride, kernel_size=ksize)
def forward(self, x, mask):
mask_ch = mask.size(1)
sum_kernel_np = np.ones((mask_ch, mask_ch, self.ksize, self.ksize), dtype=np.float32)
sum_kernel = torch.from_numpy(sum_kernel_np).cuda()
x = x * mask / (F.conv2d(mask, sum_kernel, stride=1, padding=self.padSize)+self.eplison)
x = self.pad(x)
x = self.conv(x)
mask = F.max_pool2d(mask, self.ksize, stride=self.stride, padding=self.padSize)
return x, mask
class GatedConv(nn.Module):
def __init__(self, in_channels=3, out_channels=32, ksize=3, stride=1, act=F.elu):
super(GatedConv, self).__init__()
self.ksize = ksize
self.stride = stride
self.act = act
self.padSize = self.ksize // 2
self.pad = nn.ReflectionPad2d(self.padSize)
self.convf = nn.Conv2d(in_channels, out_channels, stride=stride, kernel_size=ksize)
self.convm = nn.Conv2d(in_channels, out_channels, stride=stride, kernel_size=ksize,
padding=self.padSize)
def forward(self, x):
x = self.pad(x)
x = self.convf(x)
x = self.act(x)
m = self.convm(x)
m = F.sigmoid(m)
x = x * m
return x
class GatedDilatedConv(nn.Module):
def __init__(self, in_channels, out_channels, ksize=3, stride=1, pad=1, dilation=2, act=F.elu):
super(GatedDilatedConv, self).__init__()
self.ksize = ksize
self.stride = stride
self.act = act
self.padSize = pad
self.pad = nn.ReflectionPad2d(self.padSize)
self.convf = nn.Conv2d(in_channels, out_channels, stride=stride, kernel_size=ksize, dilation=dilation)
self.convm = nn.Conv2d(in_channels, out_channels, stride=stride, kernel_size=ksize, dilation=dilation,
padding=self.padSize)
def forward(self, x):
x = self.pad(x)
x = self.convf(x)
x = self.act(x)
m = self.convm(x)
m = F.sigmoid(m)
x = x * m
return x
###Output
_____no_output_____
###Markdown
model.loss
###Code
class WGANLoss(nn.Module):
def __init__(self):
super(WGANLoss, self).__init__()
def __call__(self, input, target):
d_loss = (input - target).mean()
g_loss = -input.mean()
return {'g_loss': g_loss, 'd_loss': d_loss}
def gradient_penalty(xin, yout, mask=None):
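    # WGAN-GP style penalty: pushes the norm of the critic's gradient w.r.t. its input towards 1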
gradients = autograd.grad(yout, xin, create_graph=True,
grad_outputs=torch.ones(yout.size()).cuda(), retain_graph=True, only_inputs=True)[0]
if mask is not None:
gradients = gradients * mask
gradients = gradients.view(gradients.size(0), -1)
gp = ((gradients.norm(2, dim=1) - 1) ** 2).mean()
return gp
def random_interpolate(gt, pred):
batch_size = gt.size(0)
alpha = torch.rand(batch_size, 1, 1, 1).cuda()
# alpha = alpha.expand(gt.size()).cuda()
interpolated = gt * alpha + pred * (1 - alpha)
return interpolated
class IDMRFLoss(nn.Module):
def __init__(self, featlayer=VGG19FeatLayer):
super(IDMRFLoss, self).__init__()
self.featlayer = featlayer()
self.feat_style_layers = {'relu3_2': 1.0, 'relu4_2': 1.0}
self.feat_content_layers = {'relu4_2': 1.0}
self.bias = 1.0
self.nn_stretch_sigma = 0.5
self.lambda_style = 1.0
self.lambda_content = 1.0
def sum_normalize(self, featmaps):
reduce_sum = torch.sum(featmaps, dim=1, keepdim=True)
return featmaps / reduce_sum
def patch_extraction(self, featmaps):
patch_size = 1
patch_stride = 1
patches_as_depth_vectors = featmaps.unfold(2, patch_size, patch_stride).unfold(3, patch_size, patch_stride)
self.patches_OIHW = patches_as_depth_vectors.permute(0, 2, 3, 1, 4, 5)
dims = self.patches_OIHW.size()
self.patches_OIHW = self.patches_OIHW.view(-1, dims[3], dims[4], dims[5])
return self.patches_OIHW
def compute_relative_distances(self, cdist):
epsilon = 1e-5
div = torch.min(cdist, dim=1, keepdim=True)[0]
relative_dist = cdist / (div + epsilon)
return relative_dist
def exp_norm_relative_dist(self, relative_dist):
scaled_dist = relative_dist
dist_before_norm = torch.exp((self.bias - scaled_dist)/self.nn_stretch_sigma)
self.cs_NCHW = self.sum_normalize(dist_before_norm)
return self.cs_NCHW
def mrf_loss(self, gen, tar):
meanT = torch.mean(tar, 1, keepdim=True)
gen_feats, tar_feats = gen - meanT, tar - meanT
gen_feats_norm = torch.norm(gen_feats, p=2, dim=1, keepdim=True)
tar_feats_norm = torch.norm(tar_feats, p=2, dim=1, keepdim=True)
gen_normalized = gen_feats / gen_feats_norm
tar_normalized = tar_feats / tar_feats_norm
cosine_dist_l = []
BatchSize = tar.size(0)
for i in range(BatchSize):
tar_feat_i = tar_normalized[i:i+1, :, :, :]
gen_feat_i = gen_normalized[i:i+1, :, :, :]
patches_OIHW = self.patch_extraction(tar_feat_i)
cosine_dist_i = F.conv2d(gen_feat_i, patches_OIHW)
cosine_dist_l.append(cosine_dist_i)
cosine_dist = torch.cat(cosine_dist_l, dim=0)
cosine_dist_zero_2_one = - (cosine_dist - 1) / 2
relative_dist = self.compute_relative_distances(cosine_dist_zero_2_one)
rela_dist = self.exp_norm_relative_dist(relative_dist)
dims_div_mrf = rela_dist.size()
k_max_nc = torch.max(rela_dist.view(dims_div_mrf[0], dims_div_mrf[1], -1), dim=2)[0]
div_mrf = torch.mean(k_max_nc, dim=1)
div_mrf_sum = -torch.log(div_mrf)
div_mrf_sum = torch.sum(div_mrf_sum)
return div_mrf_sum
def forward(self, gen, tar):
gen_vgg_feats = self.featlayer(gen)
tar_vgg_feats = self.featlayer(tar)
style_loss_list = [self.feat_style_layers[layer] * self.mrf_loss(gen_vgg_feats[layer], tar_vgg_feats[layer]) for layer in self.feat_style_layers]
self.style_loss = reduce(lambda x, y: x+y, style_loss_list) * self.lambda_style
content_loss_list = [self.feat_content_layers[layer] * self.mrf_loss(gen_vgg_feats[layer], tar_vgg_feats[layer]) for layer in self.feat_content_layers]
self.content_loss = reduce(lambda x, y: x+y, content_loss_list) * self.lambda_content
return self.style_loss + self.content_loss
class StyleLoss(nn.Module):
def __init__(self, featlayer=VGG19FeatLayer, style_layers=None):
super(StyleLoss, self).__init__()
self.featlayer = featlayer()
if style_layers is not None:
self.feat_style_layers = style_layers
else:
self.feat_style_layers = {'relu2_2': 1.0, 'relu3_2': 1.0, 'relu4_2': 1.0}
def gram_matrix(self, x):
b, c, h, w = x.size()
feats = x.view(b * c, h * w)
g = torch.mm(feats, feats.t())
return g.div(b * c * h * w)
def _l1loss(self, gen, tar):
return torch.abs(gen-tar).mean()
def forward(self, gen, tar):
gen_vgg_feats = self.featlayer(gen)
tar_vgg_feats = self.featlayer(tar)
style_loss_list = [self.feat_style_layers[layer] * self._l1loss(self.gram_matrix(gen_vgg_feats[layer]), self.gram_matrix(tar_vgg_feats[layer])) for
layer in self.feat_style_layers]
style_loss = reduce(lambda x, y: x + y, style_loss_list)
return style_loss
class ContentLoss(nn.Module):
def __init__(self, featlayer=VGG19FeatLayer, content_layers=None):
super(ContentLoss, self).__init__()
self.featlayer = featlayer()
if content_layers is not None:
self.feat_content_layers = content_layers
else:
self.feat_content_layers = {'relu4_2': 1.0}
def _l1loss(self, gen, tar):
return torch.abs(gen-tar).mean()
def forward(self, gen, tar):
gen_vgg_feats = self.featlayer(gen)
tar_vgg_feats = self.featlayer(tar)
content_loss_list = [self.feat_content_layers[layer] * self._l1loss(gen_vgg_feats[layer], tar_vgg_feats[layer]) for
layer in self.feat_content_layers]
content_loss = reduce(lambda x, y: x + y, content_loss_list)
return content_loss
class TVLoss(nn.Module):
def __init__(self):
super(TVLoss, self).__init__()
def forward(self, x):
h_x, w_x = x.size()[2:]
h_tv = torch.abs(x[:, :, 1:, :] - x[:, :, :h_x-1, :])
w_tv = torch.abs(x[:, :, :, 1:] - x[:, :, :, :w_x-1])
loss = torch.sum(h_tv) + torch.sum(w_tv)
return loss
###Output
_____no_output_____
###Markdown
options.train_options
###Code
import argparse
import os
import time
class TrainOptions:
def __init__(self):
self.parser = argparse.ArgumentParser()
self.initialized = False
def initialize(self):
# experiment specifics
self.parser.add_argument('--dataset', type=str, default='isic',
help='dataset of the experiment.')
self.parser.add_argument('--data_file', type=str, default=os.path.join(dir_path, 'models', 'train_files.txt'), help='the file storing training image paths')
self.parser.add_argument('--mask_dir', type=str, default='', help='the directory storing mask files')
self.parser.add_argument('--gpu_ids', type=str, default='0', help='gpu ids: e.g. 0 0,1,2')
self.parser.add_argument('--checkpoint_dir', type=str, default=os.path.join(dir_path, 'models', 'inpainting_gmcnn'), help='models are saved here')
self.parser.add_argument('--load_model_dir', type=str, default=os.path.join(dir_path, 'models', 'inpainting_gmcnn', \
'celebahq256_rect'), help='pretrained models are given here')
self.parser.add_argument('--phase', type=str, default='train')
# input/output sizes
self.parser.add_argument('--batch_size', type=int, default=8, help='input batch size')
# for setting inputs
self.parser.add_argument('--random_crop', type=int, default=1,
help='using random crop to process input image when '
'the required size is smaller than the given size')
self.parser.add_argument('--random_mask', type=int, default=1)
self.parser.add_argument('--mask_type', type=str, default='ellipse')
self.parser.add_argument('--pretrain_network', type=int, default=0)
self.parser.add_argument('--lambda_adv', type=float, default=1e-3)
self.parser.add_argument('--lambda_rec', type=float, default=1.4)
self.parser.add_argument('--lambda_ae', type=float, default=1.2)
self.parser.add_argument('--lambda_mrf', type=float, default=0.05)
self.parser.add_argument('--lambda_gp', type=float, default=10)
self.parser.add_argument('--random_seed', type=bool, default=False)
self.parser.add_argument('--padding', type=str, default='SAME')
self.parser.add_argument('--D_max_iters', type=int, default=5)
self.parser.add_argument('--lr', type=float, default=1e-5, help='learning rate for training')
self.parser.add_argument('--train_spe', type=int, default=1000)
self.parser.add_argument('--epochs', type=int, default=40)
self.parser.add_argument('--viz_steps', type=int, default=5)
self.parser.add_argument('--spectral_norm', type=int, default=1)
self.parser.add_argument('--img_shapes', type=str, default='224,224,3',
help='given shape parameters: h,w,c or h,w')
self.parser.add_argument('--mask_shapes', type=str, default='40',
help='given mask parameters: h,w or if mask_type==ellipse then this should be a number to represent the width of the ellipse')
self.parser.add_argument('--max_delta_shapes', type=str, default='32,32')
self.parser.add_argument('--margins', type=str, default='0,0')
# for generator
self.parser.add_argument('--g_cnum', type=int, default=32,
help='# of generator filters in first conv layer')
self.parser.add_argument('--d_cnum', type=int, default=64,
help='# of discriminator filters in first conv layer')
# for id-mrf computation
self.parser.add_argument('--vgg19_path', type=str, default='vgg19_weights/imagenet-vgg-verydeep-19.mat')
# for instance-wise features
self.initialized = True
def parse(self, args=[]):
if not self.initialized:
self.initialize()
if isinstance(args, dict): # If args is supplied as a dict, flatten to a list.
args = [item for pair in args.items() for item in pair]
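            # e.g. {'--epochs': '40', '--lr': '1e-5'} becomes ['--epochs', '40', '--lr', '1e-5'], which argparse can parse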
elif not isinstance(args, list): # Otherwise, it should be a list.
            raise TypeError('args should be a dict or a list.')
self.opt = self.parser.parse_args(args=args)
self.opt.dataset_path = self.opt.data_file
str_ids = self.opt.gpu_ids.split(',')
self.opt.gpu_ids = []
for str_id in str_ids:
id = int(str_id)
if id >= 0:
self.opt.gpu_ids.append(str(id))
assert self.opt.random_crop in [0, 1]
self.opt.random_crop = True if self.opt.random_crop == 1 else False
assert self.opt.random_mask in [0, 1]
self.opt.random_mask = True if self.opt.random_mask == 1 else False
assert self.opt.pretrain_network in [0, 1]
self.opt.pretrain_network = True if self.opt.pretrain_network == 1 else False
assert self.opt.spectral_norm in [0, 1]
self.opt.spectral_norm = True if self.opt.spectral_norm == 1 else False
assert self.opt.padding in ['SAME', 'MIRROR']
assert self.opt.mask_type in ['rect', 'stroke', 'ellipse']
str_img_shapes = self.opt.img_shapes.split(',')
self.opt.img_shapes = [int(x) for x in str_img_shapes]
if self.opt.mask_type == 'ellipse':
self.opt.mask_shapes = int(self.opt.mask_shapes)
else:
str_mask_shapes = self.opt.mask_shapes.split(',')
self.opt.mask_shapes = [int(x) for x in str_mask_shapes]
str_max_delta_shapes = self.opt.max_delta_shapes.split(',')
self.opt.max_delta_shapes = [int(x) for x in str_max_delta_shapes]
str_margins = self.opt.margins.split(',')
self.opt.margins = [int(x) for x in str_margins]
# model name and date
self.opt.date_str = time.strftime('%Y%m%d-%H%M%S')
self.opt.model_name = 'GMCNN'
self.opt.model_folder = self.opt.date_str + '_' + self.opt.model_name
self.opt.model_folder += '_' + self.opt.dataset
self.opt.model_folder += '_b' + str(self.opt.batch_size)
self.opt.model_folder += '_s' + str(self.opt.img_shapes[0]) + 'x' + str(self.opt.img_shapes[1])
self.opt.model_folder += '_gc' + str(self.opt.g_cnum)
self.opt.model_folder += '_dc' + str(self.opt.d_cnum)
self.opt.model_folder += '_randmask-' + self.opt.mask_type if self.opt.random_mask else ''
self.opt.model_folder += '_pretrain' if self.opt.pretrain_network else ''
if os.path.isdir(self.opt.checkpoint_dir) is False:
os.mkdir(self.opt.checkpoint_dir)
self.opt.model_folder = os.path.join(self.opt.checkpoint_dir, self.opt.model_folder)
if os.path.isdir(self.opt.model_folder) is False:
os.mkdir(self.opt.model_folder)
# set gpu ids
if len(self.opt.gpu_ids) > 0:
os.environ['CUDA_VISIBLE_DEVICES'] = ','.join(self.opt.gpu_ids)
args = vars(self.opt)
print('------------ Options -------------')
for k, v in sorted(args.items()):
print('%s: %s' % (str(k), str(v)))
print('-------------- End ----------------')
return self.opt
###Output
_____no_output_____
###Markdown
Create elliptical masksAdded original code
###Code
def find_angle(pos1, pos2, ret_type = 'deg'):
# Find the angle between two pixel points, pos1 and pos2.
    angle_rads = math.atan2(pos2[1] - pos1[1], pos2[0] - pos1[0])  # atan2(dy, dx)
if ret_type == 'rads':
return angle_rads
elif ret_type == 'deg':
return math.degrees(angle_rads) # Convert from radians to degrees.
def sample_centre_pts(n, imsize, xlimits=(50,250), ylimits=(50,250)):
# Function to generate random sample of points for the centres of the elliptical masks.
pts = np.empty((n,2)) # Empty array to hold the final points
count=0
while count < n:
        sample = randint(0, imsize[0], (n,2))[0] # Assumes imsize is square.
        # Keep only points that fall outside the central exclusion zone defined by xlimits/ylimits,
        # i.e. mask centres are drawn towards the image borders.
        is_valid = (sample[0] < xlimits[0]) | (sample[0] > xlimits[1]) | \
                   (sample[1] < ylimits[0]) | (sample[1] > ylimits[1])
        if is_valid: # Only take the point if it lies outside the central region.
pts[count] = sample
count += 1
return pts
def generate_ellipse_mask(imsize, mask_size, seed=None):
im_centre = (int(imsize[0]/2), int(imsize[1]/2))
x_bounds = (int(0.1*imsize[0]), int(imsize[0] - 0.1*imsize[0])) # Bounds for the valid region of mask centres.
y_bounds = (int(0.1*imsize[1]), int(imsize[1] - 0.1*imsize[1]))
if seed is not None:
random.seed(seed) # Set seed for repeatability
n = 1 + random.binomial(1, 0.3) # The number of masks per image either 1 (70% of the time) or 2 (30% of the time)
centre_pts = sample_centre_pts(n, imsize, x_bounds, y_bounds) # Get a random sample for the mask centres.
startAngle = 0.0
endAngle = 360.0 # Draw full ellipses (although part may fall outside the image)
mask = np.zeros((imsize[0], imsize[1], 1), np.float32) # Create blank canvas for the mask.
for pt in centre_pts:
size = abs(int(random.normal(mask_size, mask_size/5.0))) # Randomness introduced in the mask size.
ratio = 2*random.random(1) + 1 # Ratio between length and width. Sample from Unif(1,3).
centrex = int(pt[0])
centrey = int(pt[1])
angle = find_angle(im_centre, (centrex, centrey)) # Get the angle between the centre of the image and the mask centre.
angle = int(angle + random.normal(0.0, 5.0)) # Base the angle of rotation on the above angle.
mask = cv2.ellipse(mask, (centrex,centrey), (size, int(size*ratio)),
angle, startAngle, endAngle,
color=1, thickness=-1) # Insert a ellipse with the parameters defined above.
mask = np.minimum(mask, 1.0) # This may be redundant.
mask = np.transpose(mask, [2, 0, 1]) # bring the 'channel' axis to the first axis.
    mask = np.expand_dims(mask, 0) # Add an extra axis at axis=0 - resulting shape (1, 1, H, W).
return mask
# test_mask = generate_ellipse_mask(imsize = (224,224), mask_size = 40)
# from matplotlib import pyplot as plt
# plt.imshow(test_mask[0][0], cmap='Greys_r')
# plt.show()
###Output
_____no_output_____
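###Markdown
As a quick sanity check (a small sketch added here for illustration, not part of the original notebook), the generated mask should be a binary float array of shape (1, 1, H, W) matching the training image size:
###Code
# Illustrative check of the mask generator defined above.
test_mask = generate_ellipse_mask(imsize=(224, 224), mask_size=40, seed=0)
print(test_mask.shape)       # expected: (1, 1, 224, 224)
print(np.unique(test_mask))  # expected: values drawn from {0.0, 1.0}
###Output
_____no_output_____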
###Markdown
utils.utils
###Code
def gauss_kernel(size=21, sigma=3, inchannels=3, outchannels=3):
interval = (2 * sigma + 1.0) / size
x = np.linspace(-sigma-interval/2,sigma+interval/2,size+1)
ker1d = np.diff(st.norm.cdf(x))
kernel_raw = np.sqrt(np.outer(ker1d, ker1d))
kernel = kernel_raw / kernel_raw.sum()
out_filter = np.array(kernel, dtype=np.float32)
out_filter = out_filter.reshape((1, 1, size, size))
out_filter = np.tile(out_filter, [outchannels, inchannels, 1, 1])
return out_filter
def np_free_form_mask(maxVertex, maxLength, maxBrushWidth, maxAngle, h, w):
mask = np.zeros((h, w, 1), np.float32)
numVertex = np.random.randint(maxVertex + 1)
startY = np.random.randint(h)
startX = np.random.randint(w)
brushWidth = 0
for i in range(numVertex):
angle = np.random.randint(maxAngle + 1)
angle = angle / 360.0 * 2 * np.pi
if i % 2 == 0:
angle = 2 * np.pi - angle
length = np.random.randint(maxLength + 1)
brushWidth = np.random.randint(10, maxBrushWidth + 1) // 2 * 2
nextY = startY + length * np.cos(angle)
nextX = startX + length * np.sin(angle)
        nextY = np.maximum(np.minimum(nextY, h - 1), 0).astype(int)  # np.int is deprecated; plain int works
        nextX = np.maximum(np.minimum(nextX, w - 1), 0).astype(int)
cv2.line(mask, (startY, startX), (nextY, nextX), 1, brushWidth)
cv2.circle(mask, (startY, startX), brushWidth // 2, 2)
startY, startX = nextY, nextX
cv2.circle(mask, (startY, startX), brushWidth // 2, 2)
return mask
def generate_rect_mask(im_size, mask_size, margin=8, rand_mask=True):
mask = np.zeros((im_size[0], im_size[1])).astype(np.float32)
if rand_mask:
sz0, sz1 = mask_size[0], mask_size[1]
of0 = np.random.randint(margin, im_size[0] - sz0 - margin)
of1 = np.random.randint(margin, im_size[1] - sz1 - margin)
else:
sz0, sz1 = mask_size[0], mask_size[1]
of0 = (im_size[0] - sz0) // 2
of1 = (im_size[1] - sz1) // 2
mask[of0:of0+sz0, of1:of1+sz1] = 1
mask = np.expand_dims(mask, axis=0)
mask = np.expand_dims(mask, axis=0)
rect = np.array([[of0, sz0, of1, sz1]], dtype=int)
return mask, rect
def generate_stroke_mask(im_size, parts=10, maxVertex=20, maxLength=100, maxBrushWidth=24, maxAngle=360):
mask = np.zeros((im_size[0], im_size[1], 1), dtype=np.float32)
for i in range(parts):
mask = mask + np_free_form_mask(maxVertex, maxLength, maxBrushWidth, maxAngle, im_size[0], im_size[1])
mask = np.minimum(mask, 1.0)
mask = np.transpose(mask, [2, 0, 1])
mask = np.expand_dims(mask, 0)
return mask
def generate_mask(type, im_size, mask_size):
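    # Dispatch on mask type; rect masks also return their bounding rect, the other types return None for it.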
if type == 'rect':
return generate_rect_mask(im_size, mask_size)
elif type == 'ellipse':
return generate_ellipse_mask(im_size, mask_size), None
else:
return generate_stroke_mask(im_size), None
def getLatest(folder_path):
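    # Return the most recently created file among those matching the glob pattern (e.g. '<model_dir>/*.pth').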
files = glob.glob(folder_path)
file_times = list(map(lambda x: time.ctime(os.path.getctime(x)), files))
return files[sorted(range(len(file_times)), key=lambda x: file_times[x])[-1]]
###Output
_____no_output_____
###Markdown
model.net
###Code
# generative multi-column convolutional neural net
class GMCNN(BaseNet):
def __init__(self, in_channels, out_channels, cnum=32, act=F.elu, norm=F.instance_norm, using_norm=False):
super(GMCNN, self).__init__()
self.act = act
self.using_norm = using_norm
if using_norm is True:
self.norm = norm
else:
self.norm = None
ch = cnum
# network structure
self.EB1 = []
self.EB2 = []
self.EB3 = []
self.decoding_layers = []
self.EB1_pad_rec = []
self.EB2_pad_rec = []
self.EB3_pad_rec = []
self.EB1.append(nn.Conv2d(in_channels, ch, kernel_size=7, stride=1))
self.EB1.append(nn.Conv2d(ch, ch * 2, kernel_size=7, stride=2))
self.EB1.append(nn.Conv2d(ch * 2, ch * 2, kernel_size=7, stride=1))
self.EB1.append(nn.Conv2d(ch * 2, ch * 4, kernel_size=7, stride=2))
self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1))
self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1))
self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1, dilation=2))
self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1, dilation=4))
self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1, dilation=8))
self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1, dilation=16))
self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1))
self.EB1.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=7, stride=1))
self.EB1.append(PureUpsampling(scale=4))
self.EB1_pad_rec = [3, 3, 3, 3, 3, 3, 6, 12, 24, 48, 3, 3, 0]
self.EB2.append(nn.Conv2d(in_channels, ch, kernel_size=5, stride=1))
self.EB2.append(nn.Conv2d(ch, ch * 2, kernel_size=5, stride=2))
self.EB2.append(nn.Conv2d(ch * 2, ch * 2, kernel_size=5, stride=1))
self.EB2.append(nn.Conv2d(ch * 2, ch * 4, kernel_size=5, stride=2))
self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1))
self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1))
self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1, dilation=2))
self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1, dilation=4))
self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1, dilation=8))
self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1, dilation=16))
self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1))
self.EB2.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, stride=1))
self.EB2.append(PureUpsampling(scale=2, mode='nearest'))
self.EB2.append(nn.Conv2d(ch * 4, ch * 2, kernel_size=5, stride=1))
self.EB2.append(nn.Conv2d(ch * 2, ch * 2, kernel_size=5, stride=1))
self.EB2.append(PureUpsampling(scale=2))
self.EB2_pad_rec = [2, 2, 2, 2, 2, 2, 4, 8, 16, 32, 2, 2, 0, 2, 2, 0]
self.EB3.append(nn.Conv2d(in_channels, ch, kernel_size=3, stride=1))
self.EB3.append(nn.Conv2d(ch, ch * 2, kernel_size=3, stride=2))
self.EB3.append(nn.Conv2d(ch * 2, ch * 2, kernel_size=3, stride=1))
self.EB3.append(nn.Conv2d(ch * 2, ch * 4, kernel_size=3, stride=2))
self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1))
self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1))
self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1, dilation=2))
self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1, dilation=4))
self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1, dilation=8))
self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1, dilation=16))
self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1))
self.EB3.append(nn.Conv2d(ch * 4, ch * 4, kernel_size=3, stride=1))
self.EB3.append(PureUpsampling(scale=2, mode='nearest'))
self.EB3.append(nn.Conv2d(ch * 4, ch * 2, kernel_size=3, stride=1))
self.EB3.append(nn.Conv2d(ch * 2, ch * 2, kernel_size=3, stride=1))
self.EB3.append(PureUpsampling(scale=2, mode='nearest'))
self.EB3.append(nn.Conv2d(ch * 2, ch, kernel_size=3, stride=1))
self.EB3.append(nn.Conv2d(ch, ch, kernel_size=3, stride=1))
self.EB3_pad_rec = [1, 1, 1, 1, 1, 1, 2, 4, 8, 16, 1, 1, 0, 1, 1, 0, 1, 1]
self.decoding_layers.append(nn.Conv2d(ch * 7, ch // 2, kernel_size=3, stride=1))
self.decoding_layers.append(nn.Conv2d(ch // 2, out_channels, kernel_size=3, stride=1))
self.decoding_pad_rec = [1, 1]
self.EB1 = nn.ModuleList(self.EB1)
self.EB2 = nn.ModuleList(self.EB2)
self.EB3 = nn.ModuleList(self.EB3)
self.decoding_layers = nn.ModuleList(self.decoding_layers)
# padding operations
padlen = 49
self.pads = [0] * padlen
for i in range(padlen):
self.pads[i] = nn.ReflectionPad2d(i)
self.pads = nn.ModuleList(self.pads)
def forward(self, x):
x1, x2, x3 = x, x, x
for i, layer in enumerate(self.EB1):
pad_idx = self.EB1_pad_rec[i]
x1 = layer(self.pads[pad_idx](x1))
if self.using_norm:
x1 = self.norm(x1)
if pad_idx != 0:
x1 = self.act(x1)
for i, layer in enumerate(self.EB2):
pad_idx = self.EB2_pad_rec[i]
x2 = layer(self.pads[pad_idx](x2))
if self.using_norm:
x2 = self.norm(x2)
if pad_idx != 0:
x2 = self.act(x2)
for i, layer in enumerate(self.EB3):
pad_idx = self.EB3_pad_rec[i]
x3 = layer(self.pads[pad_idx](x3))
if self.using_norm:
x3 = self.norm(x3)
if pad_idx != 0:
x3 = self.act(x3)
x_d = torch.cat((x1, x2, x3), 1)
x_d = self.act(self.decoding_layers[0](self.pads[self.decoding_pad_rec[0]](x_d)))
x_d = self.decoding_layers[1](self.pads[self.decoding_pad_rec[1]](x_d))
x_out = torch.clamp(x_d, -1, 1)
return x_out
# return one dimensional output indicating the probability of realness or fakeness
class Discriminator(BaseNet):
def __init__(self, in_channels, cnum=32, fc_channels=8*8*32*4, act=F.elu, norm=None, spectral_norm=True):
super(Discriminator, self).__init__()
self.act = act
self.norm = norm
self.embedding = None
self.logit = None
ch = cnum
self.layers = []
if spectral_norm:
self.layers.append(SpectralNorm(nn.Conv2d(in_channels, ch, kernel_size=5, padding=2, stride=2)))
self.layers.append(SpectralNorm(nn.Conv2d(ch, ch * 2, kernel_size=5, padding=2, stride=2)))
self.layers.append(SpectralNorm(nn.Conv2d(ch * 2, ch * 4, kernel_size=5, padding=2, stride=2)))
self.layers.append(SpectralNorm(nn.Conv2d(ch * 4, ch * 4, kernel_size=5, padding=2, stride=2)))
self.layers.append(SpectralNorm(nn.Linear(fc_channels, 1)))
else:
self.layers.append(nn.Conv2d(in_channels, ch, kernel_size=5, padding=2, stride=2))
self.layers.append(nn.Conv2d(ch, ch * 2, kernel_size=5, padding=2, stride=2))
self.layers.append(nn.Conv2d(ch*2, ch*4, kernel_size=5, padding=2, stride=2))
self.layers.append(nn.Conv2d(ch*4, ch*4, kernel_size=5, padding=2, stride=2))
self.layers.append(nn.Linear(fc_channels, 1))
self.layers = nn.ModuleList(self.layers)
def forward(self, x):
for layer in self.layers[:-1]:
x = layer(x)
if self.norm is not None:
x = self.norm(x)
x = self.act(x)
self.embedding = x.view(x.size(0), -1)
self.logit = self.layers[-1](self.embedding)
return self.logit
class GlobalLocalDiscriminator(BaseNet):
def __init__(self, in_channels, cnum=32, g_fc_channels=16*16*32*4, l_fc_channels=8*8*32*4, act=F.elu, norm=None,
spectral_norm=True):
super(GlobalLocalDiscriminator, self).__init__()
self.act = act
self.norm = norm
self.global_discriminator = Discriminator(in_channels=in_channels, fc_channels=g_fc_channels, cnum=cnum,
act=act, norm=norm, spectral_norm=spectral_norm)
self.local_discriminator = Discriminator(in_channels=in_channels, fc_channels=l_fc_channels, cnum=cnum,
act=act, norm=norm, spectral_norm=spectral_norm)
def forward(self, x_g, x_l):
x_global = self.global_discriminator(x_g)
x_local = self.local_discriminator(x_l)
return x_global, x_local
# from util.utils import generate_mask
class InpaintingModel_GMCNN(BaseModel):
def __init__(self, in_channels, act=F.elu, norm=None, opt=None):
super(InpaintingModel_GMCNN, self).__init__()
self.opt = opt
self.init(opt)
self.confidence_mask_layer = ConfidenceDrivenMaskLayer()
self.netGM = GMCNN(in_channels, out_channels=3, cnum=opt.g_cnum, act=act, norm=norm).cuda()
# self.netGM = GMCNN(in_channels, out_channels=3, cnum=opt.g_cnum, act=act, norm=norm).cpu()
init_weights(self.netGM)
self.model_names = ['GM']
if self.opt.phase == 'test':
return
self.netD = None
self.optimizer_G = torch.optim.Adam(self.netGM.parameters(), lr=opt.lr, betas=(0.5, 0.9))
self.optimizer_D = None
self.wganloss = None
self.recloss = nn.L1Loss()
self.aeloss = nn.L1Loss()
self.mrfloss = None
self.lambda_adv = opt.lambda_adv
self.lambda_rec = opt.lambda_rec
self.lambda_ae = opt.lambda_ae
self.lambda_gp = opt.lambda_gp
self.lambda_mrf = opt.lambda_mrf
self.G_loss = None
self.G_loss_reconstruction = None
self.G_loss_mrf = None
self.G_loss_adv, self.G_loss_adv_local = None, None
self.G_loss_ae = None
self.D_loss, self.D_loss_local = None, None
self.GAN_loss = None
self.gt, self.gt_local = None, None
self.mask, self.mask_01 = None, None
self.rect = None
self.im_in, self.gin = None, None
self.completed, self.completed_local = None, None
self.completed_logit, self.completed_local_logit = None, None
self.gt_logit, self.gt_local_logit = None, None
self.pred = None
if self.opt.pretrain_network is False:
if self.opt.mask_type == 'rect':
self.netD = GlobalLocalDiscriminator(3, cnum=opt.d_cnum, act=act,
g_fc_channels=opt.img_shapes[0]//16*opt.img_shapes[1]//16*opt.d_cnum*4,
l_fc_channels=opt.mask_shapes[0]//16*opt.mask_shapes[1]//16*opt.d_cnum*4,
spectral_norm=self.opt.spectral_norm).cuda()
else:
self.netD = GlobalLocalDiscriminator(3, cnum=opt.d_cnum, act=act,
spectral_norm=self.opt.spectral_norm,
g_fc_channels=opt.img_shapes[0]//16*opt.img_shapes[1]//16*opt.d_cnum*4,
l_fc_channels=opt.img_shapes[0]//16*opt.img_shapes[1]//16*opt.d_cnum*4).cuda()
init_weights(self.netD)
self.optimizer_D = torch.optim.Adam(filter(lambda x: x.requires_grad, self.netD.parameters()), lr=opt.lr,
betas=(0.5, 0.9))
self.wganloss = WGANLoss()
self.mrfloss = IDMRFLoss()
def initVariables(self):
self.gt = self.input['gt']
mask, rect = generate_mask(self.opt.mask_type, self.opt.img_shapes, self.opt.mask_shapes)
self.mask_01 = torch.from_numpy(mask).cuda().repeat([self.opt.batch_size, 1, 1, 1])
self.mask = self.confidence_mask_layer(self.mask_01)
if self.opt.mask_type == 'rect':
self.rect = [rect[0, 0], rect[0, 1], rect[0, 2], rect[0, 3]]
self.gt_local = self.gt[:, :, self.rect[0]:self.rect[0] + self.rect[1],
self.rect[2]:self.rect[2] + self.rect[3]]
else:
self.gt_local = self.gt
self.im_in = self.gt * (1 - self.mask_01)
self.gin = torch.cat((self.im_in, self.mask_01), 1)
def forward_G(self):
self.G_loss_reconstruction = self.recloss(self.completed * self.mask, self.gt.detach() * self.mask)
self.G_loss_reconstruction = self.G_loss_reconstruction / torch.mean(self.mask_01)
self.G_loss_ae = self.aeloss(self.pred * (1 - self.mask_01), self.gt.detach() * (1 - self.mask_01))
self.G_loss_ae = self.G_loss_ae / torch.mean(1 - self.mask_01)
self.G_loss = self.lambda_rec * self.G_loss_reconstruction + self.lambda_ae * self.G_loss_ae
if self.opt.pretrain_network is False:
# discriminator
self.completed_logit, self.completed_local_logit = self.netD(self.completed, self.completed_local)
self.G_loss_mrf = self.mrfloss((self.completed_local+1)/2.0, (self.gt_local.detach()+1)/2.0)
self.G_loss = self.G_loss + self.lambda_mrf * self.G_loss_mrf
self.G_loss_adv = -self.completed_logit.mean()
self.G_loss_adv_local = -self.completed_local_logit.mean()
self.G_loss = self.G_loss + self.lambda_adv * (self.G_loss_adv + self.G_loss_adv_local)
def forward_D(self):
self.completed_logit, self.completed_local_logit = self.netD(self.completed.detach(), self.completed_local.detach())
self.gt_logit, self.gt_local_logit = self.netD(self.gt, self.gt_local)
# hinge loss
self.D_loss_local = nn.ReLU()(1.0 - self.gt_local_logit).mean() + nn.ReLU()(1.0 + self.completed_local_logit).mean()
self.D_loss = nn.ReLU()(1.0 - self.gt_logit).mean() + nn.ReLU()(1.0 + self.completed_logit).mean()
self.D_loss = self.D_loss + self.D_loss_local
def backward_G(self):
self.G_loss.backward()
def backward_D(self):
self.D_loss.backward(retain_graph=True)
def optimize_parameters(self):
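        # Build the masked inputs, run the generator, then (unless pretraining) update the
        # discriminator D_max_iters times before taking one generator step.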
self.initVariables()
self.pred = self.netGM(self.gin)
self.completed = self.pred * self.mask_01 + self.gt * (1 - self.mask_01)
if self.opt.mask_type == 'rect':
self.completed_local = self.completed[:, :, self.rect[0]:self.rect[0] + self.rect[1],
self.rect[2]:self.rect[2] + self.rect[3]]
else:
self.completed_local = self.completed
if self.opt.pretrain_network is False:
for i in range(self.opt.D_max_iters):
self.optimizer_D.zero_grad()
self.optimizer_G.zero_grad()
self.forward_D()
self.backward_D()
self.optimizer_D.step()
self.optimizer_G.zero_grad()
self.forward_G()
self.backward_G()
self.optimizer_G.step()
def get_current_losses(self):
l = {'G_loss': self.G_loss.item(), 'G_loss_rec': self.G_loss_reconstruction.item(),
'G_loss_ae': self.G_loss_ae.item()}
if self.opt.pretrain_network is False:
l.update({'G_loss_adv': self.G_loss_adv.item(),
'G_loss_adv_local': self.G_loss_adv_local.item(),
'D_loss': self.D_loss.item(),
'G_loss_mrf': self.G_loss_mrf.item()})
return l
def get_current_visuals(self):
return {'input': self.im_in.cpu().detach().numpy(), 'gt': self.gt.cpu().detach().numpy(),
'completed': self.completed.cpu().detach().numpy()}
def get_current_visuals_tensor(self):
return {'input': self.im_in.cpu().detach(), 'gt': self.gt.cpu().detach(),
'completed': self.completed.cpu().detach()}
def evaluate(self, im_in, mask):
im_in = torch.from_numpy(im_in).type(torch.FloatTensor).cuda() / 127.5 - 1
mask = torch.from_numpy(mask).type(torch.FloatTensor).cuda()
im_in = im_in * (1-mask)
xin = torch.cat((im_in, mask), 1)
ret = self.netGM(xin) * mask + im_in * (1-mask)
ret = (ret.cpu().detach().numpy() + 1) * 127.5
return ret.astype(np.uint8)
###Output
_____no_output_____
###Markdown
train.py Set up model and data
###Code
config = TrainOptions().parse(args=train_args)
print('loading data..')
dataset = InpaintingDataset(config.dataset_path, '', config.mask_dir, im_size=config.img_shapes, transform=transforms.Compose([
ToTensor()
]))
print('Length of training dataset: {} images'.format(len(dataset)))
dataloader = DataLoader(dataset, batch_size=config.batch_size, shuffle=True, num_workers=2, drop_last=True)
print('data loaded..')
print('configuring model..')
ourModel = InpaintingModel_GMCNN(in_channels=4, opt=config)
ourModel.print_networks()
if config.load_model_dir != '':
print('Loading pretrained model from {}'.format(config.load_model_dir))
ourModel.load_networks(getLatest(os.path.join(config.load_model_dir, '*.pth')))
print('Loading done.')
# ourModel = torch.nn.DataParallel(ourModel).cuda()
print('model setting up..')
writer = SummaryWriter(log_dir=config.model_folder)
cnt = 0
###Output
_____no_output_____
###Markdown
Run training
###Code
print('training initializing..')
for epoch in range(config.epochs):
for i, data in enumerate(dataloader):
gt = data['gt'].cuda()
# normalize to values between -1 and 1
gt = gt / 127.5 - 1
data_in = {'gt': gt}
ourModel.setInput(data_in)
ourModel.optimize_parameters()
if (i+1) % config.viz_steps == 0:
ret_loss = ourModel.get_current_losses()
if config.pretrain_network is False:
print(
'[%d, %5d] G_loss: %.4f (rec: %.4f, ae: %.4f, adv: %.4f, mrf: %.4f), D_loss: %.4f'
% (epoch + 1, i + 1, ret_loss['G_loss'], ret_loss['G_loss_rec'], ret_loss['G_loss_ae'],
ret_loss['G_loss_adv'], ret_loss['G_loss_mrf'], ret_loss['D_loss']))
writer.add_scalar('adv_loss', ret_loss['G_loss_adv'], cnt)
writer.add_scalar('D_loss', ret_loss['D_loss'], cnt)
writer.add_scalar('G_mrf_loss', ret_loss['G_loss_mrf'], cnt)
else:
print('[%d, %5d] G_loss: %.4f (rec: %.4f, ae: %.4f)'
% (epoch + 1, i + 1, ret_loss['G_loss'], ret_loss['G_loss_rec'], ret_loss['G_loss_ae']))
writer.add_scalar('G_loss', ret_loss['G_loss'], cnt)
writer.add_scalar('reconstruction_loss', ret_loss['G_loss_rec'], cnt)
writer.add_scalar('autoencoder_loss', ret_loss['G_loss_ae'], cnt)
images = ourModel.get_current_visuals_tensor()
im_completed = vutils.make_grid(images['completed'], normalize=True, scale_each=True)
im_input = vutils.make_grid(images['input'], normalize=True, scale_each=True)
im_gt = vutils.make_grid(images['gt'], normalize=True, scale_each=True)
writer.add_image('gt', im_gt, cnt)
writer.add_image('input', im_input, cnt)
writer.add_image('completed', im_completed, cnt)
if (i+1) % config.train_spe == 0:
print('saving model ..')
ourModel.save_networks(epoch+1)
cnt += 1
print('Epoch Complete: overwriting saved model ...')
ourModel.save_networks(epoch+1)
writer.export_scalars_to_json(os.path.join(config.model_folder, 'GMCNN_scalars.json'))
writer.close()
###Output
_____no_output_____ |
Elo Merchant Category Recommendation/code/baseline-v8.ipynb | ###Markdown
- V1 : subsector_id - V2 : merchant_category_id - V3 : city_id - V4 : merchant_category_id + TRICK - V6 : v2 + TRICK - V7 : TRICK removed + count of -1 values - V8 : drop all records containing -1?
###Code
import gc
import logging
import datetime
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import lightgbm as lgb
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold, KFold
from sklearn.metrics import mean_squared_error, log_loss
from tqdm import tqdm
#settings
warnings.filterwarnings('ignore')
np.random.seed(2018)
version = 7
#logger
def get_logger():
FORMAT = '[%(levelname)s]%(asctime)s:%(name)s:%(message)s'
logging.basicConfig(format=FORMAT)
logger = logging.getLogger('main')
logger.setLevel(logging.DEBUG)
return logger
# reduce memory
def reduce_mem_usage(df, verbose=True):
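    # Downcast each numeric column to the smallest integer/float dtype that can represent its min/max, and report the memory saved.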
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
start_mem = df.memory_usage().sum() / 1024**2
for col in df.columns:
col_type = df[col].dtypes
if col_type in numerics:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
return df
logger = get_logger()
#load data
logger.info('Start read data')
train_df = reduce_mem_usage(pd.read_csv('input/train.csv'))
test_df = reduce_mem_usage(pd.read_csv('input/test.csv'))
historical_trans_df = reduce_mem_usage(pd.read_csv('input/historical_transactions.csv'))
new_merchant_trans_df = reduce_mem_usage(pd.read_csv('input/new_merchant_transactions.csv'))
#process NAs
logger.info('Start processing NAs')
#process NAs for merchant
#process NAs for transactions
for df in [historical_trans_df, new_merchant_trans_df]:
df['category_2'].fillna(1.0,inplace=True)
df['category_3'].fillna('A',inplace=True)
df['merchant_id'].fillna('M_ID_00a6ca8a8a',inplace=True)
#define function for aggregation
def create_new_columns(name,aggs):
return [name + '_' + k + '_' + agg for k in aggs.keys() for agg in aggs[k]]
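# e.g. create_new_columns('hist', {'purchase_amount': ['mean', 'max']})
#      -> ['hist_purchase_amount_mean', 'hist_purchase_amount_max']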
# count of -1 (missing) values
historical_trans_df['none_cnt'] = 0
historical_trans_df.loc[(historical_trans_df['city_id']==-1) | (historical_trans_df['installments']==-1) | (historical_trans_df['merchant_category_id']==-1)
| (historical_trans_df['state_id']==-1) | (historical_trans_df['subsector_id']==-1), 'none_cnt'] = 1
df = historical_trans_df[['card_id','none_cnt']]
df = df.groupby('card_id')['none_cnt'].agg(['var','mean']).reset_index()  # use a list (not a set) so the aggregated column order matches the rename below
df.columns = ['card_id','hist_none_cnt_var','hist_none_cnt_mean']
train_df = pd.merge(train_df,df,how='left',on='card_id')
test_df = pd.merge(test_df,df,how='left',on='card_id')
del df
del historical_trans_df['none_cnt']
gc.collect()
feature = 'merchant_category_id'
uniquecardidcity = historical_trans_df.groupby(['card_id'])[feature].unique().reset_index()
uniquecardidcity['histset_{}'.format(feature)] = uniquecardidcity[feature].apply(set)
newhistuniquecardidcity = new_merchant_trans_df.groupby(['card_id'])[feature].unique().reset_index()
newhistuniquecardidcity['newhistset_{}'.format(feature)] = newhistuniquecardidcity[feature].apply(set)
uniquecardidcity = uniquecardidcity.merge(newhistuniquecardidcity[['card_id','newhistset_{}'.format(feature)]], on='card_id',how='left')
uniquecardidcity_na = uniquecardidcity[uniquecardidcity['newhistset_{}'.format(feature)].isnull()]
uniquecardidcity = uniquecardidcity.dropna(axis=0)
uniquecardidcity['union'] = uniquecardidcity.apply(lambda x: len(x['histset_{}'.format(feature)].union(x['newhistset_{}'.format(feature)])), axis=1)
uniquecardidcity['hist_new_difference_{}'.format(feature)] = uniquecardidcity.apply(lambda x: len(x['histset_{}'.format(feature)].difference(x['newhistset_{}'.format(feature)])), axis=1)
uniquecardidcity['new_hist_difference_{}'.format(feature)] = uniquecardidcity.apply(lambda x: len(x['newhistset_{}'.format(feature)].difference(x['histset_{}'.format(feature)])), axis=1)
uniquecardidcity['intersection_{}'.format(feature)] = uniquecardidcity.apply(lambda x: len(x['histset_{}'.format(feature)].intersection(x['newhistset_{}'.format(feature)])), axis=1)
uniquecardidcity['hist_new_difference_{}'.format(feature)] = uniquecardidcity['hist_new_difference_{}'.format(feature)]/uniquecardidcity['union']
uniquecardidcity['new_hist_difference_{}'.format(feature)] = uniquecardidcity['new_hist_difference_{}'.format(feature)]/uniquecardidcity['union']
uniquecardidcity['intersection_{}'.format(feature)] = uniquecardidcity['intersection_{}'.format(feature)]/uniquecardidcity['union']
uniquecardidcity = uniquecardidcity[['card_id','hist_new_difference_{}'.format(feature),'new_hist_difference_{}'.format(feature),'intersection_{}'.format(feature)]]
uniquecardidcity_na['union'] = uniquecardidcity_na['histset_{}'.format(feature)].apply(lambda x : len(x))
uniquecardidcity_na['hist_new_difference_{}'.format(feature)] = 1
uniquecardidcity_na['new_hist_difference_{}'.format(feature)] = np.nan
uniquecardidcity_na['intersection_{}'.format(feature)] = np.nan
uniquecardidcity_na = uniquecardidcity_na[['card_id','hist_new_difference_{}'.format(feature),'new_hist_difference_{}'.format(feature),'intersection_{}'.format(feature)]]
unique = pd.concat([uniquecardidcity,uniquecardidcity_na])
train_df = pd.merge(train_df,unique,how='left',on='card_id')
test_df = pd.merge(test_df,unique,how='left',on='card_id')
del unique,uniquecardidcity,uniquecardidcity_na
gc.collect()
len_hist = historical_trans_df.shape[0]
hist_new_df_all = pd.concat([historical_trans_df,new_merchant_trans_df])
hist_new_df_all.head()
def frequency_encoding(frame, col):
freq_encoding = frame.groupby([col]).size()/frame.shape[0]
freq_encoding = freq_encoding.reset_index().rename(columns={0:'{}_Frequency'.format(col)})
return frame.merge(freq_encoding, on=col, how='left')
cat_cols = ['city_id','merchant_category_id','merchant_id','state_id','subsector_id']
freq_cat_cols = ['{}_Frequency'.format(col) for col in cat_cols]
for col in tqdm(cat_cols):
hist_new_df_all = frequency_encoding(hist_new_df_all, col)
historical_trans_df = hist_new_df_all[:len_hist]
new_merchant_trans_df = hist_new_df_all[len_hist:]
del hist_new_df_all
gc.collect()
###Output
_____no_output_____
###Markdown
feature = 'merchant_category_id'uniquecardidcity = historical_trans_df.groupby(['card_id'])[feature].unique().reset_index()uniquecardidcity['histcityidset'] = uniquecardidcity[feature].apply(set)newhistuniquecardidcity = new_merchant_trans_df.groupby(['card_id'])[feature].unique().reset_index() newhistuniquecardidcity['newhistcityidset'] = newhistuniquecardidcity[feature].apply(set)uniquecardidcity = uniquecardidcity.merge(newhistuniquecardidcity[['card_id','newhistcityidset']], on='card_id',how='left') uniquecardidcity = uniquecardidcity.dropna() newhist์ ์๋ cardid drop uniquecardidcity['union'] = uniquecardidcity.apply(lambda row: row['histcityidset'].union(row['newhistcityidset']), axis=1)uniquecardidcity['union'] = uniquecardidcity.apply(lambda row: len(row['histcityidset'].union(row['newhistcityid_set'])), axis=1)uniquecardidcity['intersection'] = uniquecardidcity.apply(lambda row: len(row['histcityidset'].intersection(row['newhistcityid_set'])), axis=1)uniquecardidcity['diff_hist_new_{}'.format(feature)] = uniquecardidcity.apply(lambda row: row['histcityidset'].difference(row['newhistcityid_set']), axis=1)uniquecardidcity['diff_new_hist_{}'.format(feature)] = uniquecardidcity.apply(lambda row: row['newhistcityid_set'].difference(row['histcityidset']), axis=1)uniquecardidcity['intersection'] = uniquecardidcity['intersection']/uniquecardidcity['union']uniquecardidcity['diff_hist_new_{}'.format(feature)] = uniquecardidcity['diff_hist_new_{}'.format(feature)]/uniquecardidcity['union']uniquecardidcity['diff_new_hist_{}'.format(feature)] = uniquecardidcity['diff_new_hist_{}'.format(feature)]/uniquecardidcity['union']del uniquecardidcity['union'] uniquecardidcity['intersection'] = uniquecardidcity.apply(lambda row: len(row['histcityidset'].intersection(row['newhistcityid_set'])), axis=1)uniquecardidcity['diff_hist_new_{}'.format(feature)] = uniquecardidcity.apply(lambda row: row['histcityidset'].difference(row['newhistcityid_set']), axis=1)uniquecardidcity['diff_new_hist_{}'.format(feature)] = uniquecardidcity.apply(lambda row: row['newhistcityid_set'].difference(row['histcityidset']), axis=1)uniquecardidcity['intersection'] = uniquecardidcity['intersection']/uniquecardidcity['union']uniquecardidcity['diff_hist_new_{}'.format(feature)] = uniquecardidcity['diff_hist_new_{}'.format(feature)]/uniquecardidcity['union']uniquecardidcity['diff_new_hist_{}'.format(feature)] = uniquecardidcity['diff_new_hist_{}'.format(feature)]/uniquecardidcity['union']del uniquecardidcity['union']
###Code
#data processing historical and new merchant data
logger.info('process historical and new merchant datasets')
for df in [historical_trans_df, new_merchant_trans_df]:
df['purchase_date'] = pd.to_datetime(df['purchase_date'])
df['year'] = df['purchase_date'].dt.year
df['weekofyear'] = df['purchase_date'].dt.weekofyear
df['month'] = df['purchase_date'].dt.month
df['dayofweek'] = df['purchase_date'].dt.dayofweek
df['weekend'] = (df.purchase_date.dt.weekday >=5).astype(int)
df['hour'] = df['purchase_date'].dt.hour
df['authorized_flag'] = df['authorized_flag'].map({'Y':1, 'N':0})
df['category_1'] = df['category_1'].map({'Y':1, 'N':0})
df['category_3'] = df['category_3'].map({'A':0, 'B':1, 'C':2})
df['month_diff'] = ((pd.datetime(2012,4,1) - df['purchase_date']).dt.days)//30
df['month_diff'] += df['month_lag']
    # reference_date is computed here in a YYYYMM format such as 201903.
df['reference_date'] = (df['year']+(df['month'] - df['month_lag'])//12)*100 + (((df['month'] - df['month_lag'])%12) + 1)*1
#3.691
#df['installments'].replace(-1, np.nan,inplace=True)
#df['installments'].replace(999, np.nan,inplace=True)
    # Wouldn't -1 actually mean NaN, and 999 mean more than 12 installments?
#df['installments'].replace(-1, np.nan,inplace=True)
#df['installments'].replace(999, 13, inplace=True)
# trim
    # Taken from the 3.691 single-model kernel; we think the max of purchase_amount matters, so we did not clip it.
#df['purchase_amount'] = df['purchase_amount'].apply(lambda x: min(x, 0.8))
df['price'] = df['purchase_amount'] / df['installments']
df['duration'] = df['purchase_amount']*df['month_diff']
df['amount_month_ratio'] = df['purchase_amount']/df['month_diff']
###Output
[INFO]2019-02-06 10:26:01,430:main:process historical and new merchant datasets
###Markdown
์ ์ ์ ์๋์ด. https://www.kaggle.com/prashanththangavel/c-ustomer-l-ifetime-v-aluehist = historical_trans_df[['card_id','purchase_date','purchase_amount']]hist = hist.sort_values(by=['card_id', 'purchase_date'], ascending=[True, True])from datetime import datetimez = hist.groupby('card_id')['purchase_date'].max().reset_index()q = hist.groupby('card_id')['purchase_date'].min().reset_index()z.columns = ['card_id', 'Max']q.columns = ['card_id', 'Min'] Extracting current timestampcurr_date = pd.datetime(2012,4,1)rec = pd.merge(z,q,how = 'left',on = 'card_id')rec['Min'] = pd.to_datetime(rec['Min'])rec['Max'] = pd.to_datetime(rec['Max']) Time value rec['Recency'] = (curr_date - rec['Max']).astype('timedelta64[D]') current date - most recent date Recency valuerec['Time'] = (rec['Max'] - rec['Min']).astype('timedelta64[D]') Age of customer, MAX - MINrec = rec[['card_id','Time','Recency']] Frequencyfreq = hist.groupby('card_id').size().reset_index()freq.columns = ['card_id', 'Frequency']freq.head() Monitarymon = hist.groupby('card_id')['purchase_amount'].sum().reset_index()mon.columns = ['card_id', 'Monitary']mon.head()final = pd.merge(freq,mon,how = 'left', on = 'card_id')final = pd.merge(final,rec,how = 'left', on = 'card_id')final['historic_CLV'] = final['Frequency'] * final['Monitary'] final['AOV'] = final['Monitary']/final['Frequency'] AOV - Average order value (i.e) total_purchase_amt/total_transfinal['Predictive_CLV'] = final['Time']*final['AOV']*final['Monitary']*final['Recency'] historical_trans_df = pd.merge(historical_trans_df,final,on='card_id',how='left')del historical_trans_df['Frequency']del finaldel mondel freqdel recdel zdel qdel curr_datedel histgc.collect()
###Code
# This part computes the time between one card use and the next (via diff).
# Due to memory limits we only group by card_id; also using other merchant ids would probably be better.
# We divide by 1440 because the diff is computed in minutes and we want days.
logger.info('process days between customer visits (historical)')
df = historical_trans_df[['card_id', 'purchase_date']]
df.sort_values(['card_id','purchase_date'], inplace=True)
df['purchase'] = df.groupby(['card_id'])['purchase_date'].agg(['diff']).dropna(axis=0).astype('timedelta64[m]')
df['purchase'] = df['purchase'] //1440 # number of days between customer visits
del df['purchase_date']
#del df['subsector_id']
aggs = {}
aggs['purchase'] = ['min','max','mean','std','median']
df = df.groupby('card_id')['purchase'].agg(aggs).reset_index()
new_columns = ['card_id']
new_columns1 = create_new_columns('hist_freq',aggs)
for i in new_columns1:
new_columns.append(i)
df.columns = new_columns
train_df = train_df.merge(df, on='card_id', how='left')
test_df = test_df.merge(df, on='card_id', how='left')
del df
gc.collect()
logger.info('process days between customer visits (new)')
df = new_merchant_trans_df[['card_id', 'purchase_date']]
df.sort_values(['card_id','purchase_date'], inplace=True)
df['purchase'] = df.groupby(['card_id'])['purchase_date'].agg(['diff']).dropna(axis=0).astype('timedelta64[m]')
df['purchase'] = df['purchase'] //1440 # number of days between customer visits
del df['purchase_date']
#del df['subsector_id']
aggs = {}
aggs['purchase'] = ['min','max','mean','std','median']
df = df.groupby('card_id')['purchase'].agg(aggs).reset_index()
new_columns = ['card_id']
new_columns1 = create_new_columns('new_hist_freq',aggs)
for i in new_columns1:
new_columns.append(i)
df.columns = new_columns
train_df = train_df.merge(df, on='card_id', how='left')
test_df = test_df.merge(df, on='card_id', how='left')
del df,new_columns1
gc.collect()
# Previously we aggregated by card_id; this time we aggregate the other way around, by reference_date.
logger.info('process reference_date by hist')
historical_trans_df_re = historical_trans_df[['reference_date','purchase_amount','authorized_flag','month_lag']]
aggs = {}
aggs['purchase_amount'] = ['min','max','mean','sum','std','median']
aggs['authorized_flag'] = ['min','max','mean','sum','std','median']
historical_trans_df_re = historical_trans_df_re.groupby(['reference_date'])[['purchase_amount','authorized_flag']].agg(aggs).reset_index()
new_columns = ['hist_reference_date_median']
new_columns1 = create_new_columns('hist_reference',aggs)
for i in new_columns1:
new_columns.append(i)
historical_trans_df_re.columns = new_columns
del new_columns1
gc.collect();
logger.info('process reference_date by new')
new_merchant_trans_df_re = new_merchant_trans_df[['reference_date','purchase_amount']]
aggs = {}
aggs['purchase_amount'] = ['max','mean','std','median']
new_merchant_trans_df_re = new_merchant_trans_df_re.groupby(['reference_date'])['purchase_amount'].agg(aggs).reset_index()
new_columns = ['hist_reference_date_median']
new_columns1 = create_new_columns('new_hist_reference',aggs)
for i in new_columns1:
new_columns.append(i)
new_merchant_trans_df_re.columns = new_columns
del new_columns1
gc.collect();
# Weight purchase_amount by month_lag.
# This part could probably be improved further, but for some reason it is not working well yet...
historical_trans_df1 = historical_trans_df[['card_id','month_lag','purchase_amount']]
historical_trans_df3 = historical_trans_df1.groupby(['card_id','month_lag'])['purchase_amount'].agg(['count','mean']).reset_index()  # list (not set) keeps the column order stable for the rename below
historical_trans_df3.columns = ['card_id','month_lag','month_lag_cnt','month_lag_amount_mean']
historical_trans_df3['month_lag_cnt'] = historical_trans_df3['month_lag_cnt']/(1-historical_trans_df3['month_lag'])
historical_trans_df3['month_lag_amount_mean'] = historical_trans_df3['month_lag_amount_mean']/(1-historical_trans_df3['month_lag'])
del historical_trans_df3['month_lag']
aggs = {}
aggs['month_lag_cnt'] = ['min','max','mean','sum','std']
aggs['month_lag_amount_mean'] = ['min','max','mean','sum','std']
historical_trans_df3 = historical_trans_df3.groupby(['card_id']).agg(aggs).reset_index()
new_columns = ['card_id']
new_columns1 = create_new_columns('hist_weight',aggs)
for i in new_columns1:
new_columns.append(i)
historical_trans_df3.columns = new_columns
del historical_trans_df1
#merge with train, test
train_df = train_df.merge(historical_trans_df3,on='card_id',how='left')
test_df = test_df.merge(historical_trans_df3,on='card_id',how='left')
del historical_trans_df3,new_columns1,new_columns
gc.collect();
#define aggregations with historical_trans_df
logger.info('Aggregate historical trans')
aggs = {}
for col in ['subsector_id','merchant_id','merchant_category_id']:
aggs[col] = ['nunique']
for col in ['month', 'hour', 'weekofyear', 'dayofweek', 'year']:
aggs[col] = ['nunique', 'mean', 'min', 'max']
aggs['purchase_amount'] = ['sum','max','min','mean','var']
aggs['installments'] = ['sum','max','min','mean','var']
aggs['purchase_date'] = ['max','min']
aggs['month_lag'] = ['max','min','mean','var']
aggs['month_diff'] = ['mean', 'min', 'max', 'var']
aggs['authorized_flag'] = ['sum', 'mean', 'min', 'max']
aggs['weekend'] = ['sum', 'mean', 'min', 'max']
aggs['category_1'] = ['sum', 'mean', 'min', 'max']
#aggs['category_2'] = ['sum', 'mean', 'min', 'max']
#aggs['category_3'] = ['sum', 'mean', 'min', 'max']
aggs['card_id'] = ['size', 'count']
aggs['reference_date'] = ['median']
## The part below is code taken from the 3.691 kernel.
aggs['duration']=['mean','min','max','var','skew']
aggs['amount_month_ratio']=['mean','min','max','var','skew']
aggs['price'] = ['sum','mean','max','min','var']
## Version3 Encoding
aggs['city_id_Frequency'] = ['mean','sum','var','median']
aggs['merchant_category_id_Frequency'] = ['mean','sum','var','median']
aggs['merchant_id_Frequency'] = ['mean','sum','var','median']
aggs['state_id_Frequency'] = ['mean','sum','var','median']
aggs['subsector_id_Frequency'] = ['mean','sum','var','median']
new_columns = create_new_columns('hist',aggs)
historical_trans_group_df = historical_trans_df.groupby('card_id').agg(aggs)
historical_trans_group_df.columns = new_columns
historical_trans_group_df.reset_index(drop=False,inplace=True)
historical_trans_group_df['hist_purchase_date_diff'] = (historical_trans_group_df['hist_purchase_date_max'] - historical_trans_group_df['hist_purchase_date_min']).dt.days
historical_trans_group_df['hist_purchase_date_average'] = historical_trans_group_df['hist_purchase_date_diff']/historical_trans_group_df['hist_card_id_size']
historical_trans_group_df['hist_purchase_date_uptonow'] = (pd.datetime(2012,4,1) - historical_trans_group_df['hist_purchase_date_max']).dt.days
historical_trans_group_df['hist_purchase_date_uptomin'] = (pd.datetime(2012,4,1) - historical_trans_group_df['hist_purchase_date_min']).dt.days
#merge with train, test
train_df = train_df.merge(historical_trans_group_df,on='card_id',how='left')
test_df = test_df.merge(historical_trans_group_df,on='card_id',how='left')
#cleanup memory
del historical_trans_group_df; gc.collect()
#define aggregations with new_merchant_trans_df
logger.info('Aggregate new merchant trans')
aggs = {}
for col in ['subsector_id','merchant_id','merchant_category_id']:
aggs[col] = ['nunique']
for col in ['month', 'hour', 'weekofyear', 'dayofweek', 'year']:
aggs[col] = ['nunique', 'mean', 'min', 'max']
aggs['purchase_amount'] = ['sum','max','min','mean','var']
aggs['installments'] = ['sum','max','min','mean','var']
aggs['purchase_date'] = ['max','min']
aggs['month_lag'] = ['max','min','mean','var']
aggs['month_diff'] = ['mean', 'max', 'min', 'var']
aggs['weekend'] = ['sum', 'mean', 'min', 'max']
aggs['category_1'] = ['sum', 'mean', 'min', 'max']
aggs['authorized_flag'] = ['sum']
#aggs['category_2'] = ['sum', 'mean', 'min', 'max']
#aggs['category_3'] = ['sum', 'mean', 'min', 'max']
aggs['card_id'] = ['size']
aggs['reference_date'] = ['median']
##3.691
aggs['duration']=['mean','min','max','var','skew']
aggs['amount_month_ratio']=['mean','min','max','var','skew']
aggs['price'] = ['sum','mean','max','min','var']
## Version3 Encoding
aggs['city_id_Frequency'] = ['mean','sum','var','median']
aggs['merchant_category_id_Frequency'] = ['mean','sum','var','median']
aggs['merchant_id_Frequency'] = ['mean','sum','var','median']
aggs['state_id_Frequency'] = ['mean','sum','var','median']
aggs['subsector_id_Frequency'] = ['mean','sum','var','median']
new_columns = create_new_columns('new_hist',aggs)
new_merchant_trans_group_df = new_merchant_trans_df.groupby('card_id').agg(aggs)
new_merchant_trans_group_df.columns = new_columns
new_merchant_trans_group_df.reset_index(drop=False,inplace=True)
new_merchant_trans_group_df['new_hist_purchase_date_diff'] = (new_merchant_trans_group_df['new_hist_purchase_date_max'] - new_merchant_trans_group_df['new_hist_purchase_date_min']).dt.days
new_merchant_trans_group_df['new_hist_purchase_date_average'] = new_merchant_trans_group_df['new_hist_purchase_date_diff']/new_merchant_trans_group_df['new_hist_card_id_size']
new_merchant_trans_group_df['new_hist_purchase_date_uptonow'] = (pd.datetime(2012,4,1) - new_merchant_trans_group_df['new_hist_purchase_date_max']).dt.days
new_merchant_trans_group_df['new_hist_purchase_date_uptomin'] = (pd.datetime(2012,4,1) - new_merchant_trans_group_df['new_hist_purchase_date_min']).dt.days
#merge with train, test
train_df = train_df.merge(new_merchant_trans_group_df,on='card_id',how='left')
test_df = test_df.merge(new_merchant_trans_group_df,on='card_id',how='left')
#clean-up memory
del new_merchant_trans_group_df; gc.collect()
del historical_trans_df; gc.collect()
del new_merchant_trans_df; gc.collect()
#merge with train, test
train_df = train_df.merge(historical_trans_df_re,on='hist_reference_date_median',how='left')
test_df = test_df.merge(historical_trans_df_re,on='hist_reference_date_median',how='left')
train_df = train_df.merge(new_merchant_trans_df_re,on='hist_reference_date_median',how='left')
test_df = test_df.merge(new_merchant_trans_df_re,on='hist_reference_date_median',how='left')
del historical_trans_df_re
del new_merchant_trans_df_re
gc.collect()
#process train
logger.info('Process train')
train_df['outliers'] = 0
train_df.loc[train_df['target'] < -30, 'outliers'] = 1
train_df['outliers'].value_counts()
logger.info('Process train and test')
## process both train and test
for df in [train_df, test_df]:
df['first_active_month'] = pd.to_datetime(df['first_active_month'])
df['dayofweek'] = df['first_active_month'].dt.dayofweek
df['weekofyear'] = df['first_active_month'].dt.weekofyear
df['dayofyear'] = df['first_active_month'].dt.dayofyear
df['quarter'] = df['first_active_month'].dt.quarter
df['is_month_start'] = df['first_active_month'].dt.is_month_start
df['month'] = df['first_active_month'].dt.month
df['year'] = df['first_active_month'].dt.year
df['first_active_month1'] = 100*df['year']+df['month']
df['elapsed_time'] = (pd.datetime(2012,4,1) - df['first_active_month']).dt.days
    # hist_reference_date_median is in a YYYYMM format like 201901; convert it to '2019-01' so pd.to_datetime can parse it.
df['hist_reference_date_median'] = df['hist_reference_date_median'].astype(str)
df['hist_reference_date_median'] = df['hist_reference_date_median'].apply(lambda x: x[0:4]+'-'+x[4:6])
df['hist_reference_date_median'] = pd.to_datetime(df['hist_reference_date_median'])
df['ref_year'] =df['hist_reference_date_median'].dt.year
df['ref_month'] =df['hist_reference_date_median'].dt.month
df['reference_month1'] = 100*df['ref_year']+df['ref_month']
# df['days_feature1'] = df['elapsed_time'] * df['feature_1']
# df['days_feature2'] = df['elapsed_time'] * df['feature_2']
# df['days_feature3'] = df['elapsed_time'] * df['feature_3']
# df['days_feature1_ratio'] = df['feature_1'] / df['elapsed_time']
# df['days_feature2_ratio'] = df['feature_2'] / df['elapsed_time']
# df['days_feature3_ratio'] = df['feature_3'] / df['elapsed_time']
    ## Code taken from the 3.691 kernel.
df['purchase_amount_total'] = df['new_hist_purchase_amount_sum']+df['hist_purchase_amount_sum']
df['purchase_amount_mean'] = df['new_hist_purchase_amount_mean']+df['hist_purchase_amount_mean']
df['purchase_amount_max'] = df['new_hist_purchase_amount_max']+df['hist_purchase_amount_max']
df['purchase_amount_min'] = df['new_hist_purchase_amount_min']+df['hist_purchase_amount_min']
df['purchase_amount_sum_ratio'] = df['new_hist_purchase_amount_sum']/df['hist_purchase_amount_sum']
    # RATIO added in VERSION24
    # Difference between VERSION25 and 26.
df['nh_purchase_amount_mean_ratio'] = df['new_hist_purchase_amount_mean']/df['hist_purchase_amount_mean']
    ## These features measure the span of the transaction period. Using them as ratios might also be meaningful, but we have not tried that yet.
df['hist_first_buy'] = (df['hist_purchase_date_min'] - df['first_active_month']).dt.days
df['hist_last_buy'] = (df['hist_purchase_date_max'] - df['first_active_month']).dt.days
df['new_hist_first_buy'] = (df['new_hist_purchase_date_min'] - df['first_active_month']).dt.days
df['new_hist_last_buy'] = (df['new_hist_purchase_date_max'] - df['first_active_month']).dt.days
    ## Likewise, features measuring the transaction period. The ones above are relative to first_active_month; the ones below are relative to reference_date.
df['year_month'] = df['year']*100 + df['month']
df['hist_diff_reference_date_first'] = (df['hist_reference_date_median'] - df['first_active_month']).dt.days
df['hist_diff_reference_date_min'] = (df['hist_reference_date_median'] - df['hist_purchase_date_min']).dt.days
df['hist_diff_reference_date_max'] = (df['hist_reference_date_median'] - df['hist_purchase_date_max']).dt.days
df['new_hist_diff_reference_date_min'] = (df['hist_reference_date_median'] - df['new_hist_purchase_date_min']).dt.days
df['new_hist_diff_reference_date_max'] = (df['hist_reference_date_median'] - df['new_hist_purchase_date_max']).dt.days
    ## Feature measuring the length of the transaction period.
df['hist_diff_first_last'] = df['hist_last_buy'] - df['hist_first_buy']
df['new_hist_diff_first_last'] = df['new_hist_last_buy'] - df['new_hist_first_buy']
#version11
    ## Average of how much was transacted over the transaction period.
df['hist_diff_first_last_purchase'] = df['hist_purchase_amount_sum'] / df['hist_diff_first_last']
df['new_hist_diff_first_last_purchase'] = df['new_hist_purchase_amount_sum'] / df['new_hist_diff_first_last']
    # RATIO added in VERSION24
    # Added in VERSION30.
    df['nh_purchase_mean_average_ratio'] = df['new_hist_diff_first_last_purchase']/df['hist_diff_first_last_purchase'] # low feature importance
    # Difference between VERSION25 and 27.
df['nh_merchant_id_nunique_ratio'] = df['new_hist_merchant_id_nunique']/df['hist_merchant_id_nunique']
    # VERSION4: added ratios of unique ID counts
#df['nh_city_id_nunique_ratio'] = df['new_hist_city_id_nunique']/df['hist_city_id_nunique']
#df['nh_state_id_nunique_ratio'] = df['new_hist_state_id_nunique']/df['hist_state_id_nunique']
    # Removed because the CV score got worse. LB impact unknown.
#del df['new_hist_city_id_nunique'], df['hist_city_id_nunique']
#del df['new_hist_state_id_nunique'], df['hist_state_id_nunique']
#del df['nh_city_id_nunique_ratio'], df['nh_state_id_nunique_ratio']
    ## Same as above
df['hist_card_id_size_average'] = df['new_hist_card_id_size'] / df['hist_diff_first_last']
df['new_hist_card_id_size_average'] = df['new_hist_card_id_size'] / df['new_hist_diff_first_last']
    # RATIO added in VERSION24
    # Being tested in VERSION31..
    df['nh_card_id_size_average_ratio'] = df['new_hist_card_id_size_average']/df['hist_card_id_size_average'] # low feature importance
    # Difference between VERSION25 and 28.
df['nh_freq_purchase_mean_ratio'] = df['new_hist_freq_purchase_mean']/df['hist_freq_purchase_mean']
    # Being tested in VERSION32..
    df['nh_category_1_sum_ratio'] = df['new_hist_category_1_sum']/df['hist_category_1_sum'] # low feature importance
    #df['nh_category_1_mean_ratio'] = df['new_hist_category_1_mean']/df['hist_category_1_mean'] # low feature importance
    ## Likewise, transaction-period features describing the relation between hist and new
df['diff_new_hist_date_min_max'] = (df['new_hist_purchase_date_min'] - df['hist_purchase_date_max']).dt.days
df['diff_new_hist_date_max_max'] = (df['new_hist_purchase_date_max'] - df['hist_purchase_date_max']).dt.days
df['diff_new_hist_date_max_min'] = (df['new_hist_purchase_date_max'] - df['hist_purchase_date_min']).dt.days
    # Version14: divide important variables by each other so they interact.
df['diff_new_hist_date_max_amount_max'] = df['new_hist_purchase_amount_max']/df['diff_new_hist_date_max_max']
df['hist_flag_ratio'] = df['hist_authorized_flag_sum'] / df['hist_card_id_size']
    ### Part added from the LB 3.691 kernel.
df['installments_total'] = df['new_hist_installments_sum']+df['hist_installments_sum']
df['installments_mean'] = df['new_hist_installments_mean']+df['hist_installments_mean']
df['installments_max'] = df['new_hist_installments_max']+df['hist_installments_max']
df['installments_ratio'] = df['new_hist_installments_sum']/df['hist_installments_sum']
df['price_total'] = df['purchase_amount_total'] / df['installments_total']
df['price_mean'] = df['purchase_amount_mean'] / df['installments_mean']
df['price_max'] = df['purchase_amount_max'] / df['installments_max']
df['duration_mean'] = df['new_hist_duration_mean']+df['hist_duration_mean']
    # RATIO added in VERSION24
    # Difference between VERSION25 and 29.
#df['duration_ratio'] = df['new_hist_duration_mean']/df['hist_duration_mean']
df['duration_min'] = df['new_hist_duration_min']+df['hist_duration_min']
df['duration_max'] = df['new_hist_duration_max']+df['hist_duration_max']
df['amount_month_ratio_mean']=df['new_hist_amount_month_ratio_mean']+df['hist_amount_month_ratio_mean']
df['amount_month_ratio_min']=df['new_hist_amount_month_ratio_min']+df['hist_amount_month_ratio_min']
df['amount_month_ratio_max']=df['new_hist_amount_month_ratio_max']+df['hist_amount_month_ratio_max']
df['new_CLV'] = df['new_hist_card_id_size'] * df['new_hist_purchase_amount_sum'] / df['new_hist_month_diff_mean']
df['hist_CLV'] = df['hist_card_id_size'] * df['hist_purchase_amount_sum'] / df['hist_month_diff_mean']
df['CLV_ratio'] = df['new_CLV'] / df['hist_CLV']
for f in ['hist_purchase_date_max','hist_purchase_date_min','new_hist_purchase_date_max',\
'new_hist_purchase_date_min']:
df[f] = df[f].astype(np.int64) * 1e-9
df['card_id_total'] = df['new_hist_card_id_size']+df['hist_card_id_size']
del df['year']
del df['year_month']
del df['new_hist_reference_date_median']
for f in ['feature_1','feature_2','feature_3']:
order_label = train_df.groupby([f])['outliers'].mean()
train_df[f] = train_df[f].map(order_label)
test_df[f] = test_df[f].map(order_label)
#for df in [train_df, test_df]:
# df['feature_sum'] = df['feature_1'] + df['feature_2'] + df['feature_3']
# df['feature_mean'] = df['feature_sum']/3
# df['feature_max'] = df[['feature_1', 'feature_2', 'feature_3']].max(axis=1)
# df['feature_min'] = df[['feature_1', 'feature_2', 'feature_3']].min(axis=1)
# df['feature_var'] = df[['feature_1', 'feature_2', 'feature_3']].std(axis=1)
## Drop columns that have only one unique value.
for col in train_df.columns:
if train_df[col].nunique() == 1:
print(col)
del train_df[col]
del test_df[col]
##
train_columns = [c for c in train_df.columns if c not in ['card_id', 'first_active_month','target','outliers','hist_reference_date_median']]
target = train_df['target']
#del train_df['target']
###Output
[INFO]2019-02-06 10:43:12,475:main:Process train
[INFO]2019-02-06 10:43:12,585:main:Process train and test
###Markdown
from scipy.stats import ks_2sampfrom tqdm import tqdmlist_p_value =[]for i in tqdm(train_columns): list_p_value.append(ks_2samp(test_df[i] , train_df[i])[1])Se = pd.Series(list_p_value, index = train_columns).sort_values() list_discarded = list(Se[Se < .1].index) for i in list_discarded: train_columns.remove(i)
###Code
train = train_df.copy()
train = train.loc[train['target']>-30]
target = train['target']
del train['target']
param = {'num_leaves': 31,
'min_data_in_leaf': 30,
'objective':'regression',
'max_depth': -1,
'learning_rate': 0.015,
"min_child_samples": 20,
"boosting": "gbdt",
"feature_fraction": 0.9,
"bagging_freq": 1,
"bagging_fraction": 0.9 ,
"metric": 'rmse',
"lambda_l1": 0.1,
"verbosity": -1,
"nthread": 24,
"seed": 6}
#prepare fit model with cross-validation
np.random.seed(2019)
folds = KFold(n_splits=9, shuffle=True, random_state=4950)
oof = np.zeros(len(train))
predictions = np.zeros(len(test_df))
feature_importance_df = pd.DataFrame()
for fold_, (trn_idx, val_idx) in enumerate(folds.split(train)):
strLog = "fold {}".format(fold_+1)
print(strLog)
trn_data = lgb.Dataset(train.iloc[trn_idx][train_columns], label=target.iloc[trn_idx])#, categorical_feature=categorical_feats)
val_data = lgb.Dataset(train.iloc[val_idx][train_columns], label=target.iloc[val_idx])#, categorical_feature=categorical_feats)
num_round = 10000
clf = lgb.train(param, trn_data, num_round, valid_sets = [trn_data, val_data], verbose_eval=100, early_stopping_rounds = 100)
oof[val_idx] = clf.predict(train.iloc[val_idx][train_columns], num_iteration=clf.best_iteration)
#feature importance
fold_importance_df = pd.DataFrame()
fold_importance_df["Feature"] = train_columns
fold_importance_df["importance"] = clf.feature_importance()
fold_importance_df["fold"] = fold_ + 1
feature_importance_df = pd.concat([feature_importance_df, fold_importance_df], axis=0)
#predictions
predictions += clf.predict(test_df[train_columns], num_iteration=clf.best_iteration) / folds.n_splits
cv_score = np.sqrt(mean_squared_error(oof, target))
print(cv_score)
withoutoutlier_predictions = predictions.copy()
model_without_outliers = pd.DataFrame({"card_id":test_df["card_id"].values})
model_without_outliers["target"] = withoutoutlier_predictions
model_without_outliers.to_csv('hyeonwoo_without_outlier.csv',index=False)
###Output
_____no_output_____
###Markdown
Outlier Model
###Code
train = train_df.copy()
target = train['outliers']
del train['target']
del train['outliers']
param = {'num_leaves': 31,
'min_data_in_leaf': 30,
'objective':'binary',
'max_depth': 5,
'learning_rate': 0.01,
"boosting": "gbdt",
"feature_fraction": 0.6,
"bagging_freq": 1,
"bagging_fraction": 0.7 ,
"metric": 'binary_logloss',
"lambda_l1": 0.1,
"verbosity": -1,
"nthread": 24,
"random_state": 6}
folds = KFold(n_splits=9, shuffle=True, random_state=4950)
oof = np.zeros(len(train))
predictions = np.zeros(len(test_df))
#start = time.time()
for fold_, (trn_idx, val_idx) in enumerate(folds.split(train.values, target.values)):
print("fold {}".format(fold_+1))
trn_data = lgb.Dataset(train.iloc[trn_idx][train_columns], label=target.iloc[trn_idx])#, categorical_feature=categorical_feats)
val_data = lgb.Dataset(train.iloc[val_idx][train_columns], label=target.iloc[val_idx])#, categorical_feature=categorical_feats)
num_round = 10000
clf = lgb.train(param, trn_data, num_round, valid_sets = [trn_data, val_data], verbose_eval=100, early_stopping_rounds = 200)
oof[val_idx] = clf.predict(train.iloc[val_idx][train_columns], num_iteration=clf.best_iteration)
predictions += clf.predict(test_df[train_columns], num_iteration=clf.best_iteration) / folds.n_splits
print("CV score: {:<8.5f}".format(log_loss(target, oof)))
print("CV score: {:<8.5f}".format(log_loss(target, oof)))
df_outlier_prob = pd.DataFrame({"card_id":test_df["card_id"].values})
df_outlier_prob["target"] = predictions
df_outlier_prob.sort_values('target',ascending=False)
###Output
_____no_output_____ |
Reinforcement_Learning_Specialization/Course_3_Prediction_and_Control_with_Function_Approximation/Week1/RL_C3_week1_Semi-gradient_TD(0)_with_State_Aggregation.ipynb | ###Markdown
Assignment 1 - TD with State AggregationWelcome to your Course 3 Programming Assignment 1. In this assignment, you will implement **semi-gradient TD(0) with State Aggregation** in an environment with a large state space. This assignment will focus on the **policy evaluation task** (prediction problem) where the goal is to accurately estimate state values under a given (fixed) policy.**In this assignment, you will:**1. Implement semi-gradient TD(0) with function approximation (state aggregation).2. Understand how to use supervised learning approaches to approximate value functions.3. Compare the impact of different resolutions of state aggregation, and see first hand how function approximation can speed up learning through generalization.**Note: You can create new cells for debugging purposes but please do not duplicate any Read-only cells. This may break the grader.** 500-State RandomWalk EnvironmentIn this assignment, we will implement and use a smaller 500 state version of the problem we covered in lecture (see "State Aggregation with Monte Carloโ, and Example 9.1 in the [textbook](http://www.incompleteideas.net/book/RLbook2018.pdf)). The diagram below illustrates the problem.There are 500 states numbered from 1 to 500, left to right, and all episodes begin with the agent located at the center, in state 250. For simplicity, we will consider state 0 and state 501 as the left and right terminal states respectively. The episode terminates when the agent reaches the terminal state (state 0) on the left, or the terminal state (state 501) on the right. Termination on the left (state 0) gives the agent a reward of -1, and termination on the right (state 501) gives the agent a reward of +1.The agent can take one of two actions: go left or go right. If the agent chooses the left action, then it transitions uniform randomly into one of the 100 neighboring states to its left. If the agent chooses the right action, then it transitions randomly into one of the 100 neighboring states to its right. States near the edge may have fewer than 100 neighboring states on that side. In this case, all transitions that would have taken the agent past the edge result in termination. If the agent takes the left action from state 50, then it has a 0.5 chance of terminating on the left. If it takes the right action from state 499, then it has a 0.99 chance of terminating on the right. Your GoalFor this assignment, we will consider the problem of **policy evaluation**: estimating state-value function for a fixed policy.You will evaluate a uniform random policy in the 500-State Random Walk environment. This policy takes the right action with 0.5 probability and the left with 0.5 probability, regardless of which state it is in. This environment has a relatively large number of states. Generalization can significantly speed learning as we will show in this assignment. Often in realistic environments, states are high-dimensional and continuous. For these problems, function approximation is not just useful, it is also necessary. 
PackagesYou will use the following packages in this assignment.- [numpy](www.numpy.org) : Fundamental package for scientific computing with Python.- [matplotlib](http://matplotlib.org) : Library for plotting graphs in Python.- [RL-Glue](http://www.jmlr.org/papers/v10/tanner09a.html) : Library for reinforcement learning experiments.- [jdc](https://alexhagen.github.io/jdc/) : Jupyter magic that allows defining classes over multiple jupyter notebook cells.- [tqdm](https://tqdm.github.io/) : A package to display progress bar when running experiments- plot_script : custom script to plot results**Please do not import other libraries** - this will break the autograder.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import jdc
from tqdm import tqdm
from rl_glue import RLGlue
from environment import BaseEnvironment
from agent import BaseAgent
import plot_script
###Output
_____no_output_____
###Markdown
Section 1: Create the 500-State RandomWalk EnvironmentIn this section we have provided you with the implementation of the 500-State RandomWalk Environment. It is useful to know how the environment is implemented. We will also use this environment in the next programming assignment. Once the agent chooses which direction to move, the environment determines how far the agent is moved in that direction. Assume the agent passes either 0 (indicating left) or 1 (indicating right) to the environment.Methods needed to implement the environment are: `env_init`, `env_start`, and `env_step`.- `env_init`: This method sets up the environment at the very beginning of the experiment. Relevant parameters are passed through `env_info` dictionary.- `env_start`: This is the first method called when the experiment starts, returning the start state.- `env_step`: This method takes in action and returns reward, next_state, and is_terminal.
###Code
# ---------------
# Discussion Cell
# ---------------
class RandomWalkEnvironment(BaseEnvironment):
def env_init(self, env_info={}):
"""
Setup for the environment called when the experiment first starts.
Set parameters needed to setup the 500-state random walk environment.
Assume env_info dict contains:
{
num_states: 500 [int],
start_state: 250 [int],
left_terminal_state: 0 [int],
right_terminal_state: 501 [int],
seed: int
}
"""
# set random seed for each run
self.rand_generator = np.random.RandomState(env_info.get("seed"))
# set each class attribute
self.num_states = env_info["num_states"]
self.start_state = env_info["start_state"]
self.left_terminal_state = env_info["left_terminal_state"]
self.right_terminal_state = env_info["right_terminal_state"]
def env_start(self):
"""
The first method called when the experiment starts, called before the
agent starts.
Returns:
The first state from the environment.
"""
# set self.reward_state_term tuple
reward = 0.0
state = self.start_state
is_terminal = False
self.reward_state_term = (reward, state, is_terminal)
# return first state from the environment
return self.reward_state_term[1]
def env_step(self, action):
"""A step taken by the environment.
Args:
action: The action taken by the agent
Returns:
(float, state, Boolean): a tuple of the reward, state,
and boolean indicating if it's terminal.
"""
last_state = self.reward_state_term[1]
# set reward, current_state, and is_terminal
#
# action: specifies direction of movement - 0 (indicating left) or 1 (indicating right) [int]
# current state: next state after taking action from the last state [int]
# reward: -1 if terminated left, 1 if terminated right, 0 otherwise [float]
# is_terminal: indicates whether the episode terminated [boolean]
#
# Given action (direction of movement), determine how much to move in that direction from last_state
# All transitions beyond the terminal state are absorbed into the terminal state.
if action == 0: # left
current_state = max(self.left_terminal_state, last_state + self.rand_generator.choice(range(-100,0)))
elif action == 1: # right
current_state = min(self.right_terminal_state, last_state + self.rand_generator.choice(range(1,101)))
else:
raise ValueError("Wrong action value")
# terminate left
if current_state == self.left_terminal_state:
reward = -1.0
is_terminal = True
# terminate right
elif current_state == self.right_terminal_state:
reward = 1.0
is_terminal = True
else:
reward = 0.0
is_terminal = False
self.reward_state_term = (reward, current_state, is_terminal)
return self.reward_state_term
###Output
_____no_output_____
###Markdown
Section 2: Create Semi-gradient TD(0) Agent with State AggregationNow let's create the Agent that interacts with the Environment.You will create an Agent that learns with semi-gradient TD(0) with state aggregation.For state aggregation, if the resolution (num_groups) is 10, then 500 states are partitioned into 10 groups of 50 states each (i.e., states 1-50 are one group, states 51-100 are another, and so on.)Hence, 50 states would share the same feature and value estimate, and there would be 10 distinct features. The feature vector for each state is a one-hot feature vector of length 10, with a single one indicating the group for that state. (one-hot vector of length 10) Section 2-1: Implement Useful FunctionsBefore we implement the agent, we need to define a couple of useful helper functions.**Please note all random method calls should be called through random number generator. Also do not use random method calls unless specified. In the agent, only `agent_policy` requires random method calls.** Section 2-1a: Selecting actionsIn this part we have implemented `agent_policy()` for you.This method is used in `agent_start()` and `agent_step()` to select appropriate action.Normally, the agent acts differently given state, but in this environment the agent chooses randomly to move either left or right with equal probability.Agent returns 0 for left, and 1 for right.
###Code
# ---------------
# Discussion Cell
# ---------------
def agent_policy(rand_generator, state):
"""
Given random number generator and state, returns an action according to the agent's policy.
Args:
rand_generator: Random number generator
Returns:
chosen action [int]
"""
# set chosen_action as 0 or 1 with equal probability
# state is unnecessary for this agent policy
chosen_action = rand_generator.choice([0,1])
return chosen_action
###Output
_____no_output_____
###Markdown
Section 2-1b: Processing State Features with State AggregationIn this part you will implement `get_state_feature()`This method takes in a state and returns the aggregated feature (one-hot-vector) of that state.The feature vector size is determined by `num_groups`. Use `state` and `num_states_in_group` to determine which element in the feature vector is active.`get_state_feature()` is necessary whenever the agent receives a state and needs to convert it to a feature for learning. The features will thus be used in `agent_step()` and `agent_end()` when the agent updates its state values.
###Code
(500 - 1) // 100
# -----------
# Graded Cell
# -----------
def get_state_feature(num_states_in_group, num_groups, state):
"""
Given state, return the feature of that state
Args:
num_states_in_group [int]
num_groups [int]
state [int] : 1~500
Returns:
one_hot_vector [numpy array]
"""
### Generate state feature (2~4 lines)
# Create one_hot_vector with size of the num_groups, according to state
# For simplicity, assume num_states is always perfectly divisible by num_groups
# Note that states start from index 1, not 0!
# Example:
# If num_states = 100, num_states_in_group = 20, num_groups = 5,
# one_hot_vector would be of size 5.
# For states 1~20, one_hot_vector would be: [1, 0, 0, 0, 0]
#
# one_hot_vector = ?
# ----------------
one_hot_vector = np.zeros(num_groups)
one_hot_vector[(state - 1) // num_states_in_group] = 1
# ----------------
return one_hot_vector
###Output
_____no_output_____
###Markdown
Run the following code to verify your `get_state_feature()` function.
###Code
# -----------
# Tested Cell
# -----------
# The contents of the cell will be tested by the autograder.
# If they do not pass here, they will not pass there.
# Given that num_states = 10 and num_groups = 5, test get_state_feature()
# There are states 1~10, and the state feature vector would be of size 5.
# Only one element would be active for any state feature vector.
# get_state_feature() should support various values of num_states, num_groups, not just this example
# For simplicity, assume num_states will always be perfectly divisible by num_groups
num_states = 10
num_groups = 5
num_states_in_group = int(num_states / num_groups)
# Test 1st group, state = 1
state = 1
features = get_state_feature(num_states_in_group, num_groups, state)
print("1st group: {}".format(features))
assert np.all(features == [1, 0, 0, 0, 0])
# Test 2nd group, state = 3
state = 3
features = get_state_feature(num_states_in_group, num_groups, state)
print("2nd group: {}".format(features))
assert np.all(features == [0, 1, 0, 0, 0])
# Test 3rd group, state = 6
state = 6
features = get_state_feature(num_states_in_group, num_groups, state)
print("3rd group: {}".format(features))
assert np.all(features == [0, 0, 1, 0, 0])
# Test 4th group, state = 7
state = 7
features = get_state_feature(num_states_in_group, num_groups, state)
print("4th group: {}".format(features))
assert np.all(features == [0, 0, 0, 1, 0])
# Test 5th group, state = 10
state = 10
features = get_state_feature(num_states_in_group, num_groups, state)
print("5th group: {}".format(features))
assert np.all(features == [0, 0, 0, 0, 1])
###Output
1st group: [1. 0. 0. 0. 0.]
2nd group: [0. 1. 0. 0. 0.]
3rd group: [0. 0. 1. 0. 0.]
4th group: [0. 0. 0. 1. 0.]
5th group: [0. 0. 0. 0. 1.]
###Markdown
Section 2-2: Implement Agent MethodsNow that we have implemented all the helper functions, let's create an agent. In this part, you will implement `agent_init()`, `agent_start()`, `agent_step()` and `agent_end()`. You will have to use `agent_policy()` that we implemented above. We will implement `agent_message()` later, when returning the learned state-values.To save computation time, we precompute features for all states beforehand in `agent_init()`. The pre-computed features are saved in `self.all_state_features` numpy array. Hence, you do not need to call `get_state_feature()` every time in `agent_step()` and `agent_end()`.The shape of `self.all_state_features` numpy array is `(num_states, feature_size)`, with features of states from State 1-500. Note that index 0 stores features for State 1 (Features for State 0 does not exist). Use `self.all_state_features` to access each feature vector for a state.When saving state values in the agent, recall how the state values are represented with linear function approximation.**State Value Representation**: $\hat{v}(s,\mathbf{w}) = \mathbf{w}\cdot\mathbf{x^T}$ where $\mathbf{w}$ is a weight vector and $\mathbf{x}$ is the feature vector of the state.When performing TD(0) updates with Linear Function Approximation, recall how we perform semi-gradient TD(0) updates using supervised learning.**semi-gradient TD(0) Weight Update Rule**: $\mathbf{w_{t+1}} = \mathbf{w_{t}} + \alpha [R_{t+1} + \gamma \hat{v}(S_{t+1},\mathbf{w}) - \hat{v}(S_t,\mathbf{w})] \nabla \hat{v}(S_t,\mathbf{w})$
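As a quick illustration of these two equations (a standalone sketch only, not part of the graded agent below; the group resolution of 5, the step-size, and the sampled states are made-up values), the value estimate is just a dot product of the weights with a one-hot feature vector, and the semi-gradient TD(0) update only moves the single active weight:

    import numpy as np
    w = np.zeros(5)                                    # one weight per group (assumed resolution of 5)
    x_t, x_tp1 = np.eye(5)[1], np.eye(5)[2]            # one-hot features of S_t and S_{t+1}
    alpha, gamma, R_tp1 = 0.1, 1.0, 0.0                # assumed step-size, discount, and reward
    v_t, v_tp1 = w @ x_t, w @ x_tp1                    # v_hat(S_t, w) and v_hat(S_{t+1}, w)
    w += alpha * (R_tp1 + gamma * v_tp1 - v_t) * x_t   # gradient of v_hat(S_t, w) w.r.t. w is x_t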
###Code
# -----------
# Graded Cell
# -----------
# Create TDAgent
class TDAgent(BaseAgent):
def __init__(self):
self.num_states = None
self.num_groups = None
self.step_size = None
self.discount_factor = None
def agent_init(self, agent_info={}):
"""Setup for the agent called when the experiment first starts.
Set parameters needed to setup the semi-gradient TD(0) state aggregation agent.
Assume agent_info dict contains:
{
num_states: 500 [int],
num_groups: int,
step_size: float,
discount_factor: float,
seed: int
}
"""
# set random seed for each run
self.rand_generator = np.random.RandomState(agent_info.get("seed"))
# set class attributes
self.num_states = agent_info.get("num_states")
self.num_groups = agent_info.get("num_groups")
self.step_size = agent_info.get("step_size")
self.discount_factor = agent_info.get("discount_factor")
# pre-compute all observable features
num_states_in_group = int(self.num_states / self.num_groups)
self.all_state_features = np.array([get_state_feature(num_states_in_group, self.num_groups, state) for state in range(1, self.num_states + 1)])
# ----------------
# initialize all weights to zero using numpy array with correct size
self.weights = np.zeros(self.num_groups)
# your code here
# ----------------
self.last_state = None
self.last_action = None
def agent_start(self, state):
"""The first method called when the experiment starts, called after
the environment starts.
Args:
state (Numpy array): the state from the
                environment's env_start function.
Returns:
self.last_action [int] : The first action the agent takes.
"""
# ----------------
### select action given state (using agent_policy), and save current state and action
# Use self.rand_generator for agent_policy
#
self.last_state = state
self.last_action = agent_policy(self.rand_generator, state)
# your code here
# ----------------
return self.last_action
def agent_step(self, reward, state):
"""A step taken by the agent.
Args:
reward [float]: the reward received for taking the last action taken
state [int]: the state from the environment's step, where the agent ended up after the last step
Returns:
self.last_action [int] : The action the agent is taking.
"""
# get relevant feature
current_state_feature = self.all_state_features[state-1]
last_state_feature = self.all_state_features[self.last_state-1]
### update weights and select action
# (Hint: np.dot method is useful!)
#
# Update weights:
# use self.weights, current_state_feature, and last_state_feature
#
# Select action:
# use self.rand_generator for agent_policy
#
# Current state and selected action should be saved to self.last_state and self.last_action at the end
#
# self.weights = ?
# self.last_state = ?
# self.last_action = ?
# ----------------
self.weights += self.step_size * (\
reward + self.discount_factor *\
self.weights@current_state_feature.T -\
self.weights@last_state_feature.T) *\
last_state_feature # delta V(s,w) is e.g [1,0,0,0,0]
self.last_state = state
self.last_action = agent_policy(self.rand_generator, state)
# ----------------
return self.last_action
def agent_end(self, reward):
"""Run when the agent terminates.
Args:
reward (float): the reward the agent received for entering the
terminal state.
"""
# get relevant feature
last_state_feature = self.all_state_features[self.last_state-1]
### update weights
# Update weights using self.weights and last_state_feature
# (Hint: np.dot method is useful!)
#
# Note that here you don't need to choose action since the agent has reached a terminal state
# Therefore you should not update self.last_state and self.last_action
#
# self.weights = ?
# ----------------
self.weights += self.step_size * (\
reward - self.weights@last_state_feature.T) * last_state_feature
# ----------------
return
def agent_message(self, message):
# We will implement this method later
raise NotImplementedError
###Output
_____no_output_____
###Markdown
Run the following code to verify `agent_init()`
###Code
# -----------
# Tested Cell
# -----------
# The contents of the cell will be tested by the autograder.
# If they do not pass here, they will not pass there.
agent_info = {
"num_states": 500,
"num_groups": 10,
"step_size": 0.1,
"discount_factor": 1.0,
"seed": 1,
}
agent = TDAgent()
agent.agent_init(agent_info)
assert np.all(agent.weights == 0)
assert agent.weights.shape == (10,)
# check attributes
print("num_states: {}".format(agent.num_states))
print("num_groups: {}".format(agent.num_groups))
print("step_size: {}".format(agent.step_size))
print("discount_factor: {}".format(agent.discount_factor))
print("weights shape: {}".format(agent.weights.shape))
print("weights init. value: {}".format(agent.weights))
###Output
num_states: 500
num_groups: 10
step_size: 0.1
discount_factor: 1.0
weights shape: (10,)
weights init. value: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
Run the following code to verify `agent_start()`.Although there is randomness due to `rand_generator.choice()` in `agent_policy()`, we control the seed so your output should match the expected output. Make sure `rand_generator.choice()` is called only once per `agent_policy()` call.
###Code
# -----------
# Tested Cell
# -----------
# The contents of the cell will be tested by the autograder.
# If they do not pass here, they will not pass there.
agent_info = {
"num_states": 500,
"num_groups": 10,
"step_size": 0.1,
"discount_factor": 1.0,
"seed": 1,
}
# Suppose state = 250
state = 250
agent = TDAgent()
agent.agent_init(agent_info)
action = agent.agent_start(state)
assert action == 1
assert agent.last_state == 250
assert agent.last_action == 1
print("Agent state: {}".format(agent.last_state))
print("Agent selected action: {}".format(agent.last_action))
###Output
Agent state: 250
Agent selected action: 1
###Markdown
Run the following code to verify `agent_step()`
###Code
# -----------
# Tested Cell
# -----------
# The contents of the cell will be tested by the autograder.
# If they do not pass here, they will not pass there.
agent_info = {
"num_states": 500,
"num_groups": 10,
"step_size": 0.1,
"discount_factor": 0.9,
"seed": 1,
}
agent = TDAgent()
agent.agent_init(agent_info)
# Initializing the weights to arbitrary values to verify the correctness of weight update
agent.weights = np.array([-1.5, 0.5, 1., -0.5, 1.5, -0.5, 1.5, 0.0, -0.5, -1.0])
# Assume the agent started at State 50
start_state = 50
action = agent.agent_start(start_state)
assert action == 1
# Assume the reward was 10.0 and the next state observed was State 120
reward = 10.0
next_state = 120
action = agent.agent_step(reward, next_state)
assert action == 1
print("Updated weights: {}".format(agent.weights))
assert np.allclose(agent.weights, [-0.26, 0.5, 1., -0.5, 1.5, -0.5, 1.5, 0., -0.5, -1.])
assert agent.last_state == 120
assert agent.last_action == 1
print("last state: {}".format(agent.last_state))
print("last action: {}".format(agent.last_action))
# let's do another
reward = -22
next_state = 222
action = agent.agent_step(reward, next_state)
assert action == 0
assert np.allclose(agent.weights, [-0.26, 0.5, -1.165, -0.5, 1.5, -0.5, 1.5, 0, -0.5, -1])
assert agent.last_state == 222
assert agent.last_action == 0
###Output
Updated weights: [-0.26 0.5 1. -0.5 1.5 -0.5 1.5 0. -0.5 -1. ]
last state: 120
last action: 1
###Markdown
Run the following code to verify `agent_end()`
###Code
# -----------
# Tested Cell
# -----------
# The contents of the cell will be tested by the autograder.
# If they do not pass here, they will not pass there.
agent_info = {
"num_states": 500,
"num_groups": 10,
"step_size": 0.1,
"discount_factor": 0.9,
"seed": 1,
}
agent = TDAgent()
agent.agent_init(agent_info)
# Initializing the weights to arbitrary values to verify the correctness of weight update
agent.weights = np.array([-1.5, 0.5, 1., -0.5, 1.5, -0.5, 1.5, 0.0, -0.5, -1.0])
# Assume the agent started at State 50
start_state = 50
action = agent.agent_start(start_state)
assert action == 1
# Assume the reward was 10.0 and reached the terminal state
agent.agent_end(10.0)
print("Updated weights: {}".format(agent.weights))
assert np.allclose(agent.weights, [-0.35, 0.5, 1., -0.5, 1.5, -0.5, 1.5, 0., -0.5, -1.])
###Output
Updated weights: [-0.35 0.5 1. -0.5 1.5 -0.5 1.5 0. -0.5 -1. ]
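For comparison, `agent_end()` drops the bootstrap term because the next state is terminal: with the same grouping assumption, $w_0 \leftarrow -1.5 + 0.1 \cdot \big(10 - (-1.5)\big) = -0.35$, which is why only the first element changes and the value differs from the `agent_step()` case.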
###Markdown
**Expected output** (note that only the 1st element changed, and the result is different from `agent_step()`):
Initial weights: [-1.5 0.5 1. -0.5 1.5 -0.5 1.5 0. -0.5 -1. ]
Updated weights: [-0.35 0.5 1. -0.5 1.5 -0.5 1.5 0. -0.5 -1. ]
Section 2-3: Returning Learned State Values
You are almost done! Now let's implement a code block in `agent_message()` that returns the learned state values. The method `agent_message()` will return the learned `state_value` array when `message == 'get state value'`.
**Hint**: Think about how state values are represented with linear function approximation. The `state_value` array will be a 1D array with length equal to the number of states.
###Code
%%add_to TDAgent
# -----------
# Graded Cell
# -----------
def agent_message(self, message):
if message == 'get state value':
### return state_value
# Use self.all_state_features and self.weights to return the vector of all state values
# Hint: Use np.dot()
#
state_value = self.weights @ self.all_state_features.T
# your code here
return state_value
###Output
_____no_output_____
###Markdown
Run the following code to verify `agent_message('get state value')`
###Code
# -----------
# Tested Cell
# -----------
# The contents of the cell will be tested by the autograder.
# If they do not pass here, they will not pass there.
agent_info = {
"num_states": 20,
"num_groups": 5,
"step_size": 0.1,
"discount_factor": 1.0,
}
agent = TDAgent()
agent.agent_init(agent_info)
test_state_val = agent.agent_message('get state value')
assert test_state_val.shape == (20,)
assert np.all(test_state_val == 0)
print("State value shape: {}".format(test_state_val.shape))
print("Initial State value for all states: {}".format(test_state_val))
###Output
State value shape: (20,)
Initial State value for all states: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
**Expected Output**:
State value shape: (20,)
Initial State value for all states: [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
Section 3: Run Experiment
Now that we've implemented all the components of the environment and agent, let's run an experiment! We will plot two things: (1) the learned state value function, compared against the true state values, and (2) a learning curve depicting the error in the learned value estimates over episodes. For the learning curve, what should we plot to see if the agent is learning well?
Section 3-1: Prediction Objective (Root Mean Squared Value Error)
Recall that the prediction objective in function approximation is the Mean Squared Value Error $\overline{VE}(\mathbf{w}) \doteq \sum\limits_{s \in \mathcal{S}}\mu(s)[v_\pi(s)-\hat{v}(s,\mathbf{w})]^2$. We will use the square root of this measure, the root $\overline{VE}$, to give a rough measure of how much the learned values differ from the true values. `calc_RMSVE()` computes the Root Mean Squared Value Error given the learned state values $\hat{v}(s, \mathbf{w})$. We provide you with the true state values $v_\pi(s)$ and the state distribution $\mu(s)$.
###Code
# ---------------
# Discussion Cell
# ---------------
# Here we provide you with the true state value and state distribution
true_state_val = np.load('data/true_V.npy')
state_distribution = np.load('data/state_distribution.npy')
def calc_RMSVE(learned_state_val):
assert(len(true_state_val) == len(learned_state_val) == len(state_distribution))
MSVE = np.sum(np.multiply(state_distribution, np.square(true_state_val - learned_state_val)))
RMSVE = np.sqrt(MSVE)
return RMSVE
###Output
_____no_output_____
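As a quick hand check of the `calc_RMSVE()` formula above (toy numbers, independent of the loaded `true_V.npy` data):
```python
# sum(mu * (v_true - v_hat)^2) = 0.25*0.5^2 + 0.5*0 + 0.25*0.5^2 = 0.125 ; sqrt(0.125) ~ 0.354
toy_mu      = np.array([0.25, 0.50, 0.25])
toy_true    = np.array([1.0, 0.0, -1.0])
toy_learned = np.array([0.5, 0.0, -0.5])
print(np.sqrt(np.sum(toy_mu * np.square(toy_true - toy_learned))))  # ~0.3536
```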
###Markdown
Section 3-2a: Run Experiment with 10-State Aggregation
We have provided the experiment/plot code in the cell below.
###Code
# ---------------
# Discussion Cell
# ---------------
import os
# Define function to run experiment
def run_experiment(environment, agent, environment_parameters, agent_parameters, experiment_parameters):
rl_glue = RLGlue(environment, agent)
# Sweep Agent parameters
for num_agg_states in agent_parameters["num_groups"]:
for step_size in agent_parameters["step_size"]:
# save rmsve at the end of each evaluation episode
# size: num_episode / episode_eval_frequency + 1 (includes evaluation at the beginning of training)
agent_rmsve = np.zeros(int(experiment_parameters["num_episodes"]/experiment_parameters["episode_eval_frequency"]) + 1)
# save learned state value at the end of each run
agent_state_val = np.zeros(environment_parameters["num_states"])
env_info = {"num_states": environment_parameters["num_states"],
"start_state": environment_parameters["start_state"],
"left_terminal_state": environment_parameters["left_terminal_state"],
"right_terminal_state": environment_parameters["right_terminal_state"]}
agent_info = {"num_states": environment_parameters["num_states"],
"num_groups": num_agg_states,
"step_size": step_size,
"discount_factor": environment_parameters["discount_factor"]}
print('Setting - num. agg. states: {}, step_size: {}'.format(num_agg_states, step_size))
os.system('sleep 0.2')
# one agent setting
for run in tqdm(range(1, experiment_parameters["num_runs"]+1)):
env_info["seed"] = run
agent_info["seed"] = run
rl_glue.rl_init(agent_info, env_info)
# Compute initial RMSVE before training
current_V = rl_glue.rl_agent_message("get state value")
agent_rmsve[0] += calc_RMSVE(current_V)
for episode in range(1, experiment_parameters["num_episodes"]+1):
# run episode
rl_glue.rl_episode(0) # no step limit
if episode % experiment_parameters["episode_eval_frequency"] == 0:
current_V = rl_glue.rl_agent_message("get state value")
agent_rmsve[int(episode/experiment_parameters["episode_eval_frequency"])] += calc_RMSVE(current_V)
# store only one run of state value
if run == 50:
agent_state_val = rl_glue.rl_agent_message("get state value")
# rmsve averaged over runs
agent_rmsve /= experiment_parameters["num_runs"]
save_name = "{}_agg_states_{}_step_size_{}".format('TD_agent', num_agg_states, step_size).replace('.','')
if not os.path.exists('results'):
os.makedirs('results')
# save avg. state value
np.save("results/V_{}".format(save_name), agent_state_val)
# save avg. rmsve
np.save("results/RMSVE_{}".format(save_name), agent_rmsve)
###Output
_____no_output_____
###Markdown
We will first test our implementation using state aggregation with a resolution of 10, with three different step sizes: {0.01, 0.05, 0.1}. Note that running the experiment cell below will take **_approximately 5 min_**.
###Code
# ---------------
# Discussion Cell
# ---------------
#### Run Experiment
# Experiment parameters
experiment_parameters = {
"num_runs" : 50,
"num_episodes" : 2000,
"episode_eval_frequency" : 10 # evaluate every 10 episodes
}
# Environment parameters
environment_parameters = {
"num_states" : 500,
"start_state" : 250,
"left_terminal_state" : 0,
"right_terminal_state" : 501,
"discount_factor" : 1.0
}
# Agent parameters
# Each element is an array because we will be later sweeping over multiple values
agent_parameters = {
"num_groups": [10],
"step_size": [0.01, 0.05, 0.1]
}
current_env = RandomWalkEnvironment
current_agent = TDAgent
run_experiment(current_env, current_agent, environment_parameters, agent_parameters, experiment_parameters)
plot_script.plot_result(agent_parameters, 'results')
###Output
Setting - num. agg. states: 10, step_size: 0.01
###Markdown
Is the learned state value plot with step-size=0.01 similar to Figure 9.2 (p.208) in Sutton and Barto? (Note that our environment has fewer states -- 500 -- and we ran 2000 episodes, averaging the performance over 50 runs.) Look at the plot of the learning curve. Does the RMSVE decrease over time? Would it be possible to reduce the RMSVE to 0? You should see the RMSVE decrease over time, but the error seems to plateau. It is impossible to reduce the RMSVE to 0 because of function approximation (and because we do not decay the step-size parameter to zero). With function approximation, the agent has limited resources and has to trade off the accuracy of one state against another. Run the following code to verify your experimental result.
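One way to see why the error plateaus: under the equal-width grouping assumed in this assignment, every state in a group shares a single weight, so their learned values are forced to be identical no matter how long we train. A small sketch (the grouping rule here is an assumption for illustration, not the graded code):
```python
num_states, num_groups = 500, 10
states_per_group = num_states // num_groups  # 50

def group_of(state):
    # equal-width aggregation: states 1-50 -> group 0, 51-100 -> group 1, ...
    return (state - 1) // states_per_group

print(group_of(1), group_of(50), group_of(51))  # 0 0 1 -> states 1 and 50 must share one value
```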
###Code
# -----------
# Graded Cell
# -----------
agent_parameters = {
"num_groups": [10],
"step_size": [0.01, 0.05, 0.1]
}
all_correct = True
for num_agg_states in agent_parameters["num_groups"]:
for step_size in agent_parameters["step_size"]:
filename = 'RMSVE_TD_agent_agg_states_{}_step_size_{}'.format(num_agg_states, step_size).replace('.','')
agent_RMSVE = np.load('results/{}.npy'.format(filename))
correct_RMSVE = np.load('correct_npy/{}.npy'.format(filename))
if not np.allclose(agent_RMSVE, correct_RMSVE):
all_correct=False
if all_correct:
print("Your experiment results are correct!")
else:
print("Your experiment results does not match with ours. Please check if you have implemented all methods correctly.")
###Output
Your experiment results are correct!
###Markdown
Section 3-2b: Run Experiment with Different State Aggregation Resolution and Step-Size
In this section, we will run some more experiments to see how different parameter settings affect the results! In particular, we will test several values of `num_groups` and `step_size`. Parameter sweeps, although necessary, can take a lot of time. So now that you have verified your experiment result, here we show you the results of the parameter sweeps that you would see when running the sweeps yourself. We tested several different values of `num_groups`: {10, 100, 500}, and `step_size`: {0.01, 0.05, 0.1}. As before, we performed 2000 episodes per run and averaged the results over 50 runs for each setting. Run the cell below to display the sweep results.
###Code
# ---------------
# Discussion Cell
# ---------------
# Make sure to verify your experiment result with the test cell above.
# Otherwise the sweep results will not be displayed.
# Experiment parameters
experiment_parameters = {
"num_runs" : 50,
"num_episodes" : 2000,
"episode_eval_frequency" : 10 # evaluate every 10 episodes
}
# Environment parameters
environment_parameters = {
"num_states" : 500,
"start_state" : 250,
"left_terminal_state" : 0,
"right_terminal_state" : 501,
"discount_factor" : 1.0
}
# Agent parameters
# Each element is an array because we will be sweeping over multiple values
agent_parameters = {
"num_groups": [10, 100, 500],
"step_size": [0.01, 0.05, 0.1]
}
if all_correct:
plot_script.plot_result(agent_parameters, 'correct_npy')
else:
raise ValueError("Make sure your experiment result is correct! Otherwise the sweep results will not be displayed.")
###Output
_____no_output_____ |
examples/plotting/notebook/random_walk.ipynb | ###Markdown
*To run these examples you must execute the command `python bokeh-server` in the top-level Bokeh source directory first.*
###Code
## Imports needed by this example (cursession/store_objects are part of the legacy
## bokeh-server streaming API, Bokeh < 0.11 -- adjust the imports if your version differs)
import collections
import datetime
import time

import numpy as np
from bokeh.plotting import figure, show, output_notebook, cursession

output_notebook(url="default")
TS_MULT_us = 1e6
UNIX_EPOCH = datetime.datetime(1970, 1, 1, 0, 0) #offset-naive datetime
def int2dt(ts, ts_mult=TS_MULT_us):
"""Convert timestamp (integer) to datetime"""
return(datetime.datetime.utcfromtimestamp(float(ts)/ts_mult))
def td2int(td, ts_mult=TS_MULT_us):
"""Convert timedelta to integer"""
return(int(td.total_seconds()*ts_mult))
def dt2int(dt, ts_mult=TS_MULT_us):
"""Convert datetime to integer"""
delta = dt - UNIX_EPOCH
return(int(delta.total_seconds()*ts_mult))
def int_from_last_sample(dt, td):
return(dt2int(dt) - dt2int(dt) % td2int(td))
TS_MULT = 1e3
td_delay = datetime.timedelta(seconds=0.5)
delay_s = td_delay.total_seconds()
delay_int = td2int(td_delay, TS_MULT)
value = 1000 # initial value
N = 100 # number of elements into circular buffer
buff = collections.deque([value]*N, maxlen=N)
t_now = datetime.datetime.utcnow()
ts_now = dt2int(t_now, TS_MULT)
t = collections.deque(np.arange(ts_now-N*delay_int, ts_now, delay_int), maxlen=N)
p = figure(x_axis_type="datetime")
p.line(list(t), list(buff), color="#0000FF", name="line_example")
renderer = p.select(dict(name="line_example"))[0]
ds = renderer.data_source
show(p)
while True:
ts_now = dt2int(datetime.datetime.utcnow(), 1e3)
t.append(ts_now)
ds.data['x'] = list(t)
value += np.random.uniform(-1, 1)
buff.append(value)
ds.data['y'] = list(buff)
cursession().store_objects(ds)
time.sleep(delay_s)
###Output
_____no_output_____ |
examples/misc/alanine_dipeptide_committor/4_analysis_help.ipynb | ###Markdown
Analysis help
This covers the stuff you will need to know in order to use the `committor_results.nc` file.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import openpathsampling as paths
import numpy as np
import pandas as pd
pd.options.display.max_rows = 10
storage = paths.Storage("committor_results.nc", "r")
phi = storage.cvs['phi']
psi = storage.cvs['psi']
%%time
C_7eq = storage.volumes['C_7eq']
alpha_R = storage.volumes['alpha_R']
experiments = storage.tag['experiments']
###Output
CPU times: user 49.6 s, sys: 239 ms, total: 49.8 s
Wall time: 51.4 s
###Markdown
The `experiments` object is a list of tuples `(snapshot, final_state)`. Each `snapshot` is an OPS snapshot object (a point in phase space), and the `final_state` is either the `C_7eq` object or the `alpha_R` object. Directly obtaining a committor analysis
As it happens, `experiments` is in precisely the correct format to be used in one of the approaches to constructing a committor analysis. This section requires OpenPathSampling 0.9.1 or later.
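For example, a quick sanity check on that format (a sketch using the objects loaded above; it assumes `final_state` is stored as the same volume object -- otherwise compare by name instead):
```python
n_alpha_R = sum(1 for snapshot, final_state in experiments if final_state is alpha_R)
print("ended in alpha_R: {} / {}".format(n_alpha_R, len(experiments)))
print("ended in C_7eq:   {} / {}".format(len(experiments) - n_alpha_R, len(experiments)))
```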
###Code
%%time
committor_analyzer = paths.ShootingPointAnalysis.from_individual_runs(experiments)
###Output
CPU times: user 44 s, sys: 143 ms, total: 44.2 s
Wall time: 49.1 s
###Markdown
Before going further, let's talk a little bit about the implementation of the `ShootingPointAnalysis` object. The main thing to understand is that the purpose of that object is to histogram according to configuration. The first snapshot encountered is kept as a representative of that configuration. So whereas there are 10000 snapshots in `experiments` (containing the full data, including velocities), there are only 1000 entries in the `committor_analyzer` (because, in this data set, I ran 1000 snapshots with 10 shots each). Per-configuration results
The `.to_pandas()` function creates a pandas table with configurations as the index, the final states as columns, and the number of times that configuration led to that final state as entries. With no argument, `to_pandas()` uses an integer index for each configuration.
###Code
committor_analyzer.to_pandas()
###Output
_____no_output_____
###Markdown
You can also pass it a function that takes a snapshot and returns a (hashable) value. That value will be used for the index. These collective variables return numpy arrays, so we need to cast the 1D array to a `float`.
###Code
psi_hash = lambda x : float(psi(x))
committor_analyzer.to_pandas(label_function=psi_hash)
###Output
_____no_output_____
###Markdown
You can also directly obtain the committor as a dictionary of (representative) snapshot to committor value. The committor here is defined as the probability of ending in a given state, so you must give the state.
###Code
committor = committor_analyzer.committor(alpha_R)
# show the first 10 values
{k: committor[k] for k in list(committor.keys())[:10]}
###Output
_____no_output_____
###Markdown
Committor histogram in 1D
###Code
hist1D, bins = committor_analyzer.committor_histogram(psi_hash, alpha_R, bins=20)
bin_widths = [bins[i+1]-bins[i] for i in range(len(bins)-1)]
plt.bar(left=bins[:-1], height=hist1D, width=bin_widths, log=True);
###Output
_____no_output_____
###Markdown
Committor histogram in 2D
###Code
ramachandran_hash = lambda x : (float(phi(x)), float(psi(x)))
hist2D, bins_phi, bins_psi = committor_analyzer.committor_histogram(ramachandran_hash, alpha_R, bins=20)
# not the best, since it doesn't distinguish NaNs, but that's just a matter of plotting
plt.pcolor(bins_phi, bins_psi, hist2D.T, cmap="winter")
plt.clim(0.0, 1.0)
plt.colorbar();
###Output
_____no_output_____
###Markdown
Obtaining information from the snapshots
The information in `committor_results.nc` should be *everything* you could want, including initial velocities for every system. In principle, you'll mainly access that information using collective variables (see documentation on using MDTraj to create OPS collective variables). However, you may decide to access that information directly, so here's how you do that.
###Code
# let's take the first shooting point snapshot
# experiments[N][0] gives shooting snapshot for experiment N
snapshot = experiments[0][0]
###Output
_____no_output_____
###Markdown
OpenMM-based objects come with units. So `snapshot.coordinates` is a unitted value. This can be annoying in analysis, so we have a convenience `snapshot.xyz` to get the version without units.
###Code
snapshot.coordinates
snapshot.xyz
###Output
_____no_output_____
###Markdown
For velocities, we don't have the convenience function, but if you want to remove units from velocities you can do so with `velocity / velocity.unit`.
###Code
snapshot.velocities
snapshot.velocities / snapshot.velocities.unit
###Output
_____no_output_____
###Markdown
Note that snapshots include coordinates and velocities. We have several sets of initial velocities for each initial snapshot. Taking the second shooting snapshot and comparing coordinates and velocities:
###Code
snapshot2 = experiments[1][0]
np.all(snapshot.coordinates == snapshot2.coordinates)
np.any(snapshot.velocities == snapshot2.velocities)
###Output
_____no_output_____ |
examples/3.01a-Wind-Load_Power_Curve.ipynb | ###Markdown
Create a power curve from scratch
###Code
# - Wind speeds given in m/s
# - Capacity factors given from 0 (no generation) to 1 (100% generation)
pc = rk.wind.PowerCurve(
wind_speed=[1,2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 25],
capacity_factor=[0,0,.06,.16,.3,.6,.75,.85,.91,.95,.975,.99,1.0, 1.0])
pc
###Output
_____no_output_____
###Markdown
Load a real turbine's power curve
###Code
# Consult the Turbine Library
rk.wind.TurbineLibrary().head()
# See specific Turbine Information
rk.wind.TurbineLibrary().loc["E115_2500"]
# Retrieve the power curve
pc = rk.wind.TurbineLibrary().loc["E115_2500"].PowerCurve
pc
###Output
_____no_output_____
###Markdown
Access Power Curve
###Code
# Direct access to the power curve's capacity factor values
# - use 'pc.wind_speed' for wind speed values
# - use 'pc.capacity_factor' for capacity factor values
for i,ws,cf in zip(range(10), pc.wind_speed, pc.capacity_factor):
print("The capacity factor at {:.1f} m/s is {:.3f}".format(ws, cf))
print("...")
###Output
The capacity factor at 1.0 m/s is 0.000
The capacity factor at 2.0 m/s is 0.001
The capacity factor at 3.0 m/s is 0.019
The capacity factor at 4.0 m/s is 0.062
The capacity factor at 5.0 m/s is 0.136
The capacity factor at 6.0 m/s is 0.263
The capacity factor at 7.0 m/s is 0.414
The capacity factor at 8.0 m/s is 0.620
The capacity factor at 9.0 m/s is 0.816
The capacity factor at 10.0 m/s is 0.953
...
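The tabulated curve only covers the listed wind speeds. If you need a capacity factor at an intermediate speed, a simple linear interpolation over `pc.wind_speed` / `pc.capacity_factor` is one option (a sketch, not a reskit API call):
```python
import numpy as np

# pc.wind_speed is monotonically increasing, so np.interp can be used directly
cf_9_5 = np.interp(9.5, pc.wind_speed, pc.capacity_factor)
print("Approximate capacity factor at 9.5 m/s: {:.3f}".format(cf_9_5))
```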
|
Complete-Python-Bootcamp/StringIO.ipynb | ###Markdown
StringIO
The StringIO module implements an in-memory file-like object. This object can then be used as input or output to most functions that would expect a standard file object. The best way to show this is by example:
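(Note: the `StringIO` module shown below is Python 2 only; on Python 3 the equivalent class lives in the standard `io` module. A minimal sketch:)
```python
# Python 3 equivalent of the example below
from io import StringIO

f3 = StringIO('This is just a normal string.')
print(f3.read())
```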
###Code
import StringIO
# Arbitrary String
message = 'This is just a normal string.'
# Use StringIO method to set as file object
f = StringIO.StringIO(message)
###Output
_____no_output_____
###Markdown
Now we have an object *f* that we will be able to treat just like a file. For example:
###Code
f.read()
###Output
_____no_output_____
###Markdown
We can also write to it:
###Code
f.write(' Second line written to file like object')
# Reset cursor just like you would a file
f.seek(0)
# Read again
f.read()
###Output
_____no_output_____ |
OpenFDA 2017 First Quarter Data [Ver. 1.2 10272017][NEW] - Copy.ipynb | ###Markdown
Modeling OpenFDA FAERS data for Exploratory Analysis into Adverse Events
Describing the code and abbreviations from OpenFDA data
The dataset from OpenFDA comes in the form of 7 separate ASCII text files delimited by '$'.
File Descriptions for ASCII Data Files:
1. DEMOyyQq.TXT contains patient demographic and administrative information, a single record for each event report.
2. DRUGyyQq.TXT contains drug/biologic information for as many medications as were reported for the event (1 or more per event).
3. REACyyQq.TXT contains all "Medical Dictionary for Regulatory Activities" (MedDRA) terms coded for the adverse event (1 or more). For more information on MedDRA, please contact the MSSO Help Desk at [email protected]. The website is www.meddra.org.
4. OUTCyyQq.TXT contains patient outcomes for the event (0 or more).
5. RPSRyyQq.TXT contains report sources for the event (0 or more).
6. THERyyQq.TXT contains drug therapy start dates and end dates for the reported drugs (0 or more per drug per event).
7. INDIyyQq.TXT contains the MedDRA terms coded for the indications for use (diagnoses) of the reported drugs (0 or more per drug per event).
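(Optional) A quick peek at the raw '$'-delimited format before loading it with pandas -- a sketch; adjust the path to match where your FAERS extract lives:
```python
# Print the header row and the first record of the demographics file, truncated for readability
with open('faers_ascii_2017q1/ascii/DEMO17Q1.txt') as raw:
    for _ in range(2):
        print(raw.readline().rstrip()[:120])
```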
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Part I. Load 20yy Qq FAERS data and display preview
###Code
#### Define headers for dataframes
demo_head = ['Primary_ID', 'Case_ID', 'Case_Version', 'Initial/Follow-up', 'AE_Start_dt', 'Mfr_Receive_AE_dt',
'FDA_init_Receive_Case_dt', 'FDA_Receive_Case_dt', 'Report_Type', 'Reg_Auth_Case_num', 'mfr_Unique_Report_ID',
'mfr_sender_code', 'Lit_Reference', 'Age', 'Age_Unit', 'Pt_Age_Group','SEX', 'E-submission(Y/N)', 'Pt_Weight','Pt_weight_Unit',
'Report_Send_dt', 'Report_Send_to_mfr_dt', 'Reporter_Occupation', 'Reporter_country', 'Event_country']
indi_head = ['Primary_ID', 'Case_ID', 'Drug_Seq', 'MedDRA_indi_term']
outc_head = ['Primary_ID', 'Case_ID', 'Pt_Outcome']
reac_head = ['Primary_ID', 'Case_ID', 'MedDRA_reac_term', 'ReAdmin_Event_Data']
rpsr_head = ['Primary_ID', 'Case_ID', 'RpSr_Code']
ther_head = ['Primary_ID', 'Case_ID', 'Drug_Seq', 'Start_Dt', 'End_Dt', 'Therapy_Duration', 'Ther_Units']
drug_head = ['Primary_ID', 'Case_ID', 'Drug_Seq', 'Reporter_role', 'Drug_Name', 'Active_Ingredient', 'Value_VBM',
'Drug_Name_Source', 'Route', 'Verbatim_Dose' 'Cum_Dose_to_Rxn', 'Cum_Dose_to_Rxn_Units', 'Dechall_Code',
'Rechall_Code','Lot_Numb', 'Drug_Exp_dt', 'NDA_Numn', 'Dose_Amount', 'Dose_Unit', 'Dose_Form', 'Dose_Freq' ]
###Output
_____no_output_____
###Markdown
A. Load 20yy Qq FDA FAERS data from file
###Code
## NOTE: Variables for the FAERS datasets in this notebook were initially created based on the 2017Q1 files.
## As a result, the variable names past this cell will reflect the 2017Q1 version.
## To apply this code to a different year and quarter for FAERS data, only the filepath in this cell will be redirected,
## keeping all other variables constant.
demographic_txt = pd.read_csv('faers_ascii_2017q1/ascii/DEMO17Q1.txt', delimiter="$",header = 0, names = demo_head,
low_memory = False,skipinitialspace = True, parse_dates = [6,7])
indication_txt = pd.read_csv('faers_ascii_2017q1/ascii/INDI17Q1.txt', delimiter="$", header = 0, names = indi_head,
low_memory = False, skipinitialspace = True)
outcome_txt = pd.read_csv('faers_ascii_2017q1/ascii/OUTC17Q1.txt', delimiter="$", header = 0, names = outc_head,
low_memory = False, skipinitialspace = True)
reaction_txt = pd.read_csv('faers_ascii_2017q1/ascii/REAC17Q1.txt', delimiter="$", header = 0, names = reac_head,
low_memory = False, skipinitialspace = True)
rptsource_txt = pd.read_csv('faers_ascii_2017q1/ascii/RPSR17Q1.txt', delimiter="$", header = 0, names = rpsr_head,
low_memory = False, skipinitialspace = True)
therapy_txt = pd.read_csv('faers_ascii_2017q1/ascii/THER17Q1.txt', delimiter="$", header = 0, names = ther_head,
low_memory = False, skipinitialspace = True)
drug_txt = pd.read_csv('faers_ascii_2017q1/ascii/DRUG17Q1.txt', delimiter="$", header = 0, names = drug_head,
low_memory = False, skipinitialspace = False)
###Output
_____no_output_____
###Markdown
B. Preview loaded FDA FAERS data
###Code
## NOTE: The original unassigned fillna(value='Unknown') / reset_index(level=0) calls returned new
## frames that were immediately discarded, so they had no effect; they are dropped here. The NaNs
## are deliberately left in place, since the filters below rely on them
## (e.g. str.contains(..., na=False) and dropna(thresh=...)).
#### Demographics dataframe preview
demographic_txt[:5] ## Preview first 5 rows
### Indications dataframe
indication_txt[:5] ## Preview first 5 rows
### Outcomes dataframe
outcome_txt.reset_index(inplace = True)
outcome_txt[:5] ## Preview first 5 rows
### Reaction dataframe
reaction_txt.reset_index(inplace = True)
reaction_txt[:5] ## Preview first 5 rows
### Report Sources dataframe
rptsource_txt.reset_index(inplace = True, drop = True)
rptsource_txt[:5] ## Preview first 5 rows
### Therapy dataframe
therapy_txt.reset_index(inplace = True)
therapy_txt[:5] ## Preview first 5 rows
### Drug dataframe
drug_txt[:5] ## Preview first 5 rows
###Output
_____no_output_____
###Markdown
C. Create dictionaries for referencing country codes (in demographic_txt) and patient outcome codes (in outcome_txt)
###Code
## NOTE: For more information, visit https://www.accessdata.fda.gov/scripts/inspsearch/countrycodes.cfm
### Define country code dictionary
Country_Dict = {'AD' : 'Andorra', 'AE' : 'United Arab Emirates', 'AF' : 'Afghanistan', 'AG' : 'Antigua & Barbuda',
'AI' : 'Anguilla', 'AL' : 'Albania', 'AM' : 'Armenia', 'AN' : 'Netherlands Antilles', 'AO' : 'Angola',
'AR' : 'Argentina', 'AS' : 'American Samoa', 'AT' : 'Austria', 'AU' : 'Australia', 'AW' : 'Aruba',
'AZ' : 'Azerbaijan', 'BA' : 'Bosnia-Hercegovina', 'BB' : 'Barbados', 'BD' : 'Bangladesh', 'BE' : 'Belgium',
'BF' : 'Burkina Faso', 'BG' : 'Bulgaria', 'BH' : 'Bahrain', 'BI' : 'Burundi', 'BJ' : 'Benin', 'BM' : 'Bermuda',
'BN' : 'Brunei Darussalam', 'BO' : 'Bolivia', 'BR' : 'Brazil', 'BS' : 'Bahamas', 'BT' : 'Bhutan', 'BU' : 'Burma',
'BW' : 'Botswana', 'BY' : 'Belarus', 'BZ' : 'Belize', 'CA' : 'Canada', 'CC' : 'Cocos Islands', 'CD' : 'Congo, Dem Rep of (Kinshasa)',
'CF' : 'Central African Republic', 'CG' : 'Congo (Brazzaville)', 'CH' : 'Switzerland', 'CI' : 'Ivory Coast', 'CK' : 'Cook Islands',
'CL' : 'Chile', 'CM' : 'Cameroon', 'CN' : 'China', 'CO' : 'Colombia', 'CR' : 'Costa Rica', 'CS' : 'Czechoslovakia (Do Not Use)',
'CU' : 'Cuba', 'CV' : 'Cape Verde','CX' : 'Christmas Islands (Indian Ocn)', 'CY' : 'Cyprus', 'CZ' : 'Czech Republic','DE' : 'Germany',
'DJ' : 'Djibouti','DK' : 'Denmark','DM' : 'Dominica','DO' : 'Dominican Republic','DZ' : 'Algeria','EC' : 'Ecuador',
'EE' : 'Estonia','EG' : 'Egypt','EH' : 'Western Sahara','ER' : 'Eritrea','ES' : 'Spain','ET' : 'Ethiopia','FI' : 'Finland',
'FJ' : 'Fiji','FK' : 'Falkland Islands','FM' : 'Micronesia, Federated States Of','FO' : 'Faroe Islands','FR' : 'France',
'GA' : 'Gabon','GB' : 'United Kingdom','GD' : 'Grenada','GE' : 'Georgia','GF' : 'French Guiana','GH' : 'Ghana',
'GI' : 'Gibraltar','GL' : 'Greenland','GM' : 'Gambia, The','GN' : 'Guinea','GP' : 'Guadeloupe','GQ' : 'Equatorial Guinea',
'GR' : 'Greece','GT' : 'Guatemala','GU' : 'Guam','GW' : 'Guinea-Bissau','GY' : 'Guyana','GZ' : 'Gaza Strip',
'HK' : 'Hong Kong SAR','HM' : 'Heard & McDonald Islands','HN' : 'Honduras','HR' : 'Croatia','HT' : 'Haiti',
'HU' : 'Hungary','ID' : 'Indonesia','IE' : 'Ireland','IL' : 'Israel','IN' : 'India','IO' : 'British Indian Ocean Territory',
'IQ' : 'Iraq','IR' : 'Iran','IS' : 'Iceland','IT' : 'Italy','JM' : 'Jamaica','JO' : 'Jordan','JP' : 'Japan','KE' : 'Kenya',
'KG' : 'Kyrgyzstan','KH' : 'Kampuchea','KI' : 'Kiribati','KM' : 'Comoros','KN' : 'Saint Christopher & Nevis',
'KP' : 'Korea, Democratic Peoples Republic Of (North)','KR' : 'Korea, Republic Of (South)','KV' : 'Kosovo','KW' : 'Kuwait',
'KY' : 'Cayman Islands','KZ' : 'Kazakhstan','LA' : 'Lao Peoples Democratic Repblc.','LB' : 'Lebanon','LC' : 'Saint Lucia',
'LI' : 'Liechtenstein','LK' : 'Sri Lanka','LR' : 'Liberia','LS' : 'Lesotho','LT' : 'Lithuania','LU' : 'Luxembourg',
'LV' : 'Latvia','LY' : 'Libya','MA' : 'Morocco','MC' : 'Monaco','MD' : 'Moldova','ME' : 'Montenegro','MG' : 'Madagascar',
'MH' : 'Marshall Islands','MK' : 'Macedonia','ML' : 'Mali','MM' : 'Burma (Myanmar)','MN' : 'Mongolia','MO' : 'Macau SAR',
'MP' : 'Northern Mariana Islands','MQ' : 'Martinique','MR' : 'Mauritania','MS' : 'Montserrat','MT' : 'Malta & Gozo',
'MU' : 'Mauritius','MV' : 'Maldives','MW' : 'Malawi','MX' : 'Mexico','MY' : 'Malaysia','MZ' : 'Mozambique',
'NA' : 'Namibia','NC' : 'New Caledonia','NE' : 'Niger','NF' : 'Norfolk Island','NG' : 'Nigeria','NI' : 'Nicaragua',
'NL' : 'Netherlands','NO' : 'Norway','NP' : 'Nepal','NR' : 'Nauru','NT' : 'Neutral Zone (Iraq-Saudi Arab)',
'NU' : 'Niue','NZ' : 'New Zealand','OM' : 'Oman','PA' : 'Panama','PE' : 'Peru','PF' : 'French Polynesia',
'PG' : 'Papua New Guinea','PH' : 'Philippines','PK' : 'Pakistan','PL' : 'Poland','PM' : 'Saint Pierre & Miquelon',
'PN' : 'Pitcairn Island','PR' : 'Puerto Rico','PS' : 'PALESTINIAN TERRITORY','PT' : 'Portugal','PW' : 'Palau',
'PY' : 'Paraguay','QA' : 'Qatar','RE' : 'Reunion','RO' : 'Romania','RS' : 'Serbia','RU' : 'Russia','RW' : 'Rwanda',
'SA' : 'Saudi Arabia','SB' : 'Solomon Islands','SC' : 'Seychelles','SD' : 'Sudan','SE' : 'Sweden','SG' : 'Singapore',
'SH' : 'Saint Helena','SI' : 'Slovenia','SJ' : 'Svalbard & Jan Mayen Islands','SK' : 'Slovakia','SL' : 'Sierra Leone',
'SM' : 'San Marino','SN' : 'Senegal','SO' : 'Somalia','SR' : 'Surinam','ST' : 'Sao Tome & Principe','SV' : 'El Salvador',
'SY' : 'Syrian Arab Republic','SZ' : 'Swaziland','TC' : 'Turks & Caicos Island','TD' : 'Chad','TF' : 'French Southern Antarctic',
'TG' : 'Togo','TH' : 'Thailand','TJ' : 'Tajikistan','TK' : 'Tokelau Islands','TL' : 'Timor Leste','TM' : 'Turkmenistan',
'TN' : 'Tunisia','TO' : 'Tonga','TP' : 'East Timor','TR' : 'Turkey','TT' : 'Trinidad & Tobago','TV' : 'Tuvalu',
'TW' : 'Taiwan','TZ' : 'Tanzania, United Republic Of','UA' : 'Ukraine','UG' : 'Uganda','UM' : 'United States Outlying Islands',
'US' : 'United States','UY' : 'Uruguay','UZ' : 'Uzbekistan','VA' : 'Vatican City State','VC' : 'St. Vincent & The Grenadines',
'VE' : 'Venezuela','VG' : 'British Virgin Islands','VI' : 'Virgin Islands Of The U. S.','VN' : 'Vietnam','VU' : 'Vanuatu',
'WE' : 'West Bank','WF' : 'Wallis & Futuna Islands','WS' : 'Western Samoa','YD' : 'Yemen, Democratic (South)','YE' : 'Yemen',
'YU' : 'Yugoslavia','ZA' : 'South Africa','ZM' : 'Zambia','ZW' : 'Zimbabwe'
}
### Convert country codes from abbreviations to names
### (restrict the replace to the country columns -- a whole-frame replace would also clobber
###  Reporter_Occupation codes such as 'MD', 'PH', and 'CN', which double as country codes)
demographic_txt[['Reporter_country', 'Event_country']] = demographic_txt[['Reporter_country', 'Event_country']].replace(Country_Dict)
### Define outcome code dictionary
Outcome_Dict = {'DE' : 'Death', 'LT' : 'Life-Threatening', 'HO' : 'Hospitalization', 'DS' : 'Disability', 'CA' : 'Congenital Anomaly',
'RI' : 'Required Intervention', 'OT' : 'Other Serious Event'
}
### Convert outcome codes from abbreviations to names
outcome_txt = outcome_txt.replace(Outcome_Dict)
###Output
_____no_output_____
###Markdown
Part II. Create a weekly histogram of FDA FAERS data
A. Sort demographic_txt by date
###Code
### Sort demographic_txt by FDA_Receive_Case_dt
demographic_txt_sort = demographic_txt.fillna(value = 'Unknown' ).sort_values('FDA_Receive_Case_dt')
### Pull out FDA_Receive_Case_dt as a Series indexed by Case_ID
### (the weekly Case_ID frames built below read the Case_IDs from this index)
cases_2017q1 = demographic_txt_sort.set_index('Case_ID')['FDA_Receive_Case_dt']
### Make sure the values are datetimes
cases_2017q1 = pd.to_datetime(cases_2017q1, format='%Y%m%d')
### Check types
cases_2017q1.dtypes
### Segment the Quarter 1 Data (from demographic_txt) into Weekly intervals
## NOTE: Code is designed for Q1 FAERS data. This cell will need to be corrected for Q2, Q3, and Q4 FAERS data
## Week 1
wk01 = pd.date_range(start = '2017-01-01', end = '2017-01-07', periods = None, freq = 'D' )
## Week 2
wk02 = pd.date_range(start = '2017-01-08', end = '2017-01-14', periods = None, freq = 'D' )
## Week 3
wk03 = pd.date_range(start = '2017-01-15', end = '2017-01-21', periods = None, freq = 'D' )
## Week 4
wk04 = pd.date_range(start = '2017-01-22', end = '2017-01-28', periods = None, freq = 'D' )
## Week 5
wk05 = pd.date_range(start = '2017-01-29', end = '2017-02-04', periods = None, freq = 'D' )
## Week 6
wk06 = pd.date_range(start = '2017-02-05', end = '2017-02-11', periods = None, freq = 'D' )
## Week 7
wk07 = pd.date_range(start = '2017-02-12', end = '2017-02-18', periods = None, freq = 'D' )
## Week 8
wk08 = pd.date_range(start = '2017-02-19', end = '2017-02-25', periods = None, freq = 'D' )
## Week 9
wk09 = pd.date_range(start = '2017-02-26', end = '2017-03-04', periods = None, freq = 'D' )
## Week 10
wk10 = pd.date_range(start = '2017-03-05', end = '2017-03-11', periods = None, freq = 'D' )
## Week 11
wk11 = pd.date_range(start = '2017-03-12', end = '2017-03-18', periods = None, freq = 'D' )
## Week 12
wk12 = pd.date_range(start = '2017-03-19', end = '2017-03-25', periods = None, freq = 'D' )
## Week 13
wk13 = pd.date_range(start = '2017-03-26', end = '2017-03-31', periods = None, freq = 'D' )
###Output
_____no_output_____
###Markdown
B. Method for counting the cases that fall in each weekly interval of cases_2017q1, using boolean masks
###Code
### Split data into week segments and count cases for each week
## Week 1
faers2017q1wk01 = cases_2017q1[cases_2017q1.isin(wk01)] ## Find if wkXX is in cases2017q1; if true, then boolean 1
faers2017q1wk1ct = len(faers2017q1wk01) ## Find the length of faers20yywkXX, i.e the number of cases in interval
## Repeat for each week interval
## Week 2
faers2017q1wk02 = cases_2017q1[cases_2017q1.isin(wk02)]
faers2017q1wk2ct = len(faers2017q1wk02)
## Week 3
faers2017q1wk03 = cases_2017q1[cases_2017q1.isin(wk03)]
faers2017q1wk3ct = len(faers2017q1wk03)
## Week 4
faers2017q1wk04 = cases_2017q1[cases_2017q1.isin(wk04)]
faers2017q1wk4ct = len(faers2017q1wk04)
## Week 5
faers2017q1wk05 = cases_2017q1[cases_2017q1.isin(wk05)]
faers2017q1wk5ct = len(faers2017q1wk05)
## Week 6
faers2017q1wk06 = cases_2017q1[cases_2017q1.isin(wk06)]
faers2017q1wk6ct = len(faers2017q1wk06)
## Week 7
faers2017q1wk07 = cases_2017q1[cases_2017q1.isin(wk07)]
faers2017q1wk7ct = len(faers2017q1wk07)
## Week 8
faers2017q1wk08 = cases_2017q1[cases_2017q1.isin(wk08)]
faers2017q1wk8ct = len(faers2017q1wk08)
## Week 9
faers2017q1wk09 = cases_2017q1[cases_2017q1.isin(wk09)]
faers2017q1wk9ct = len(faers2017q1wk09)
## Week 10
faers2017q1wk10 = cases_2017q1[cases_2017q1.isin(wk10)]
faers2017q1wk10ct = len(faers2017q1wk10)
## Week 11
faers2017q1wk11 = cases_2017q1[cases_2017q1.isin(wk11)]
faers2017q1wk11ct = len(faers2017q1wk11)
## Week 12
faers2017q1wk12 = cases_2017q1[cases_2017q1.isin(wk12)]
faers2017q1wk12ct = len(faers2017q1wk12)
## Week 13
faers2017q1wk13 = cases_2017q1[cases_2017q1.isin(wk13)]
faers2017q1wk13ct = len(faers2017q1wk13)
### Create a dataframe from faers2017qwk## for each week containing only Case_ID
## The purpose is to isolate Case_IDs for each week for use on drug_txt, indication_txt, and outcome_txt
## Week 1
faers2017q1wk01caseid = pd.DataFrame(faers2017q1wk01.index, columns = ['Case_ID'] )
## Week 2
faers2017q1wk02caseid = pd.DataFrame(faers2017q1wk02.index, columns = ['Case_ID'] )
## Week 3
faers2017q1wk03caseid = pd.DataFrame(faers2017q1wk03.index, columns = ['Case_ID'] )
## Week 4
faers2017q1wk04caseid = pd.DataFrame(faers2017q1wk04.index, columns = ['Case_ID'] )
## Week 5
faers2017q1wk05caseid = pd.DataFrame(faers2017q1wk05.index, columns = ['Case_ID'] )
## Week 6
faers2017q1wk06caseid = pd.DataFrame(faers2017q1wk06.index, columns = ['Case_ID'] )
## Week 7
faers2017q1wk07caseid = pd.DataFrame(faers2017q1wk07.index, columns = ['Case_ID'] )
## Week 8
faers2017q1wk08caseid = pd.DataFrame(faers2017q1wk08.index, columns = ['Case_ID'] )
## Week 9
faers2017q1wk09caseid = pd.DataFrame(faers2017q1wk09.index, columns = ['Case_ID'] )
## Week 10
faers2017q1wk10caseid = pd.DataFrame(faers2017q1wk10.index, columns = ['Case_ID'] )
## Week 11
faers2017q1wk11caseid = pd.DataFrame(faers2017q1wk11.index, columns = ['Case_ID'] )
## Week 12
faers2017q1wk12caseid = pd.DataFrame(faers2017q1wk12.index, columns = ['Case_ID'] )
## Week 13
faers2017q1wk13caseid = pd.DataFrame(faers2017q1wk13.index, columns = ['Case_ID'] )
###Output
_____no_output_____
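The 13 nearly identical blocks above can also be generated in a loop. A compact sketch that reproduces the same per-week counts and Case_ID frames (stored in dictionaries keyed by week number instead of separate `wkXX` variables; it assumes `cases_2017q1` holds day-resolution timestamps, as above):
```python
week_starts = pd.date_range('2017-01-01', '2017-03-26', freq='7D')   # 13 week starts in Q1
weekly_counts, weekly_caseids = {}, {}
for i, start in enumerate(week_starts, start=1):
    end = min(start + pd.Timedelta(days=6), pd.Timestamp('2017-03-31'))  # clamp week 13 at quarter end
    mask = (cases_2017q1 >= start) & (cases_2017q1 <= end)
    weekly_counts[i] = int(mask.sum())
    weekly_caseids[i] = pd.DataFrame(cases_2017q1[mask].index, columns=['Case_ID'])
```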
###Markdown
Part III. Create a Histogram of the FAERS 20yy Qq data
In the following cells, we will create a very simple histogram of the data contained in faers_ascii_20yyQq. The purpose is to get a quick look at the difference in case counts across each week interval. The histogram should show:
* Figure 1 - "Histogram of All Cases In 2017q1": The y-axis should represent raw counts for cases in the dataset, with the x-axis representing aggregations of cases for every week interval.
* Note: The first quarter of the year runs from January 1st to March 31st, 2017, so a total of 13 weeks are plotted on the x-axis.
* This histogram allows probing questions to be asked regarding world events that could have impacted the volume of FAERS cases submitted to the FDA.
A. Raw case counts for FAERS data by week
###Code
### Create a frame for each FAERS week interval counts
faers2017q1val = (faers2017q1wk1ct, faers2017q1wk2ct, faers2017q1wk3ct, faers2017q1wk4ct, faers2017q1wk5ct,
faers2017q1wk6ct, faers2017q1wk7ct, faers2017q1wk8ct, faers2017q1wk9ct, faers2017q1wk10ct,
faers2017q1wk11ct, faers2017q1wk12ct, faers2017q1wk13ct)
### Assign an index for week # to faers2017q1val
faers2017q1vals = pd.Series(faers2017q1val, index = ['Week 1', 'Week 2', 'Week 3', 'Week 4', 'Week 5', 'Week 6',
'Week 7', 'Week 8', 'Week 9', 'Week 10', 'Week 11', 'Week 12', 'Week 13'])
#
faers2017q1vals
###Output
_____no_output_____
###Markdown
B. Setting up the histogram characteristics
###Code
N = 13
ind = np.arange(N) # the x locations for the groups
width = 0.35 # the width of the bars
fig, ax = plt.subplots()
# add some text for labels, title and axes ticks
plt.title("Histogram of All Cases In 2017 Quarter 1 (By Week)")
plt.xlabel("Week Interval")
plt.ylabel("Frequency of Cases")
ax.set_xticks(ind)  # the bars below are drawn centered on the integer positions, so place the ticks there too
ax.set_xticklabels(('1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13'))
plt.grid(True)
plt.bar(range(len(faers2017q1vals)), faers2017q1vals, align='center', color = 'red')
plt.scatter(range(len(faers2017q1vals)), faers2017q1vals, color = 'black')
###Output
_____no_output_____
###Markdown
Part IV. Stratifying the 2017 Q1 FAERS cases based on Demographics, Indications and Medications, and Outcomes of Interest
A1. Detailing parameters for Demographics
* Demographics of interest
  * Sex
    * Male
    * Female
  * Country Reporting Adverse Event
    * Various (focus should be on the United States (US) and Canada (CA))
A2. Detailing parameters for Indications
* Indications of interest
  * Anxiety Disorders
  * Bipolar illness
  * Personality disorders
  * Post-traumatic Stress Disorder
  * Major Depressive Disorder
  * Suicide/Suicidal Ideation
  * Hypertension
  * Heart Disease
  * Irritable Bowel Syndrome
  * Traumatic Brain Injury
  * Insomnia
  * Generalized Pain
  * Arthritis Pain
  * Schizophrenia
  * Substance Use Disorder
A3. Detailing parameters for Medications
* Medications of interest
  * Antidepressants:
    * Bupropion
    * Citalopram
    * Paroxetine
    * Sertraline
    * Duloxetine
    * Fluoxetine
    * Mirtazapine
  * Bipolar Medications:
    * Lithium/Lithium Carbonate
  * Anti-seizure:
    * Lamotrigine
    * Valproate/Valproic Acid
  * Benzodiazepines
    * Alprazolam
    * Diazepam
    * Lorazepam
    * Clonazepam
    * Flurazepam
    * Quazepam
    * Triazolam
    * Estazolam
    * Temazepam
    * Oxazepam
    * Clorazepate
  * Narcotic Medications
    * Morphine
    * Hydrocodone
    * Oxycodone
    * Codeine
    * Fentanyl
    * Hydromorphone
    * Oxymorphone
    * Tapentadol
  * Misc:
    * Trazodone
    * Zolpidem
    * Quetiapine
    * Aripiprazole
    * Chlordiazepoxide
    * Meperidine
    * Tramadol
A4. Detailing parameters for Outcomes
* Outcomes of interest
  * Patient Outcome codes:
    * Death
    * Life-Threatening
    * Hospitalization - Initial or Prolonged
    * Disability
    * Congenital Anomaly
    * Required Intervention to Prevent Permanent Impairment/Damage
    * Other Serious (Important Medical Event)
  * Adverse Events tied to cases (see reaction_txt to query for adverse events as needed)
###Code
### Set some global variables for handling multiple medications for each case
seq = 99 # limit on the drug sequence numbers considered for any single case -- with seq = 99, up to the first 99 medications reported per case are included
### Create several dataframes from indication_txt with only cases that contain indications of interest
## columns to drop in each df
drop_col_indi = ['Primary_ID', 'MedDRA_indi_term']
# Anxiety Disorders
indi_anxiety = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('nxiety')])
indi_anxiety = indi_anxiety[indi_anxiety.Drug_Seq <= seq]
indi_anxiety.drop(drop_col_indi, axis = 1, inplace = True)
indi_anxiety['Anxiety Disorders'] = 1
# Bipolar Disorders
indi_bipolar = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('bipolar')])
indi_bipolar = indi_bipolar[indi_bipolar.Drug_Seq <= seq]
indi_bipolar.drop(drop_col_indi, axis = 1, inplace = True)
indi_bipolar['Bipolar Disorder'] = 1
# Personality Disorders
indi_personality = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('ersonality')])
indi_personality = indi_personality[indi_personality.Drug_Seq <= seq]
indi_personality.drop(drop_col_indi, axis = 1, inplace = True)
indi_personality['Borderline Personality Disorders'] = 1
# Post-traumatic stress disorder
indi_ptsd = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('traumatic stress')])
indi_ptsd = indi_ptsd[indi_ptsd.Drug_Seq <= seq]
indi_ptsd.drop(drop_col_indi, axis = 1, inplace = True)
indi_ptsd['Post-Traumatic Stress Disorder'] = 1
# Generalized depressive disorder
indi_mdd = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('epression')])
indi_mdd = indi_mdd[indi_mdd.Drug_Seq <= seq]
indi_mdd.drop(drop_col_indi, axis = 1, inplace = True)
indi_mdd['Generalized Depressive Disorder'] = 1
# Suicidal Ideation
indi_suicidal = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('uicid')])
indi_suicidal = indi_suicidal[indi_suicidal.Drug_Seq <= seq]
indi_suicidal.drop(drop_col_indi, axis = 1, inplace = True)
indi_suicidal['Suicidal Ideation'] = 1
# Generalized Hypertension
indi_htn = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('ypertension')])
indi_htn = indi_htn[indi_htn.Drug_Seq <= seq]
indi_htn.drop(drop_col_indi, axis = 1, inplace = True)
indi_htn['Generalized Hypertension'] = 1
# Heart Disease
indi_heartdisease = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('eart')])
indi_heartdisease = indi_heartdisease[indi_heartdisease.Drug_Seq <= seq]
indi_heartdisease.drop(drop_col_indi, axis = 1, inplace = True)
indi_heartdisease['Heart Disease'] = 1
# Irratable Bowel Syndrome
indi_ibs = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('bowel')])
indi_ibs = indi_ibs[indi_ibs.Drug_Seq <= seq]
indi_ibs.drop(drop_col_indi, axis = 1, inplace = True)
indi_ibs['Irratable Bowel Syndrome'] = 1
# Nerve Injury
indi_nerve = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('Nerve injury')])
indi_nerve = indi_nerve[indi_nerve.Drug_Seq <= seq]
indi_nerve.drop(drop_col_indi, axis = 1, inplace = True)
indi_nerve['Nerve Injury'] = 1
# Insomnia
indi_insomnia = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('insomnia')])
indi_insomnia = indi_insomnia[indi_insomnia.Drug_Seq <= seq]
indi_insomnia.drop(drop_col_indi, axis = 1, inplace = True)
indi_insomnia['Insomnia'] = 1
# Arthritis Pain
indi_apain = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('rthrit')])
indi_apain = indi_apain[indi_apain.Drug_Seq <= seq]
indi_apain.drop(drop_col_indi, axis = 1, inplace = True)
indi_apain['Arthritis Pain'] = 1
# Generalized Pain
indi_gpain = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('Pain')])
indi_gpain = indi_gpain[indi_gpain.Drug_Seq <= seq]
indi_gpain.drop(drop_col_indi, axis = 1, inplace = True)
indi_gpain['Generalized Pain'] = 1
# Epilepsy
indi_epilep = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('pilep')])
indi_epilep = indi_epilep[indi_epilep.Drug_Seq <= seq]
indi_epilep.drop(drop_col_indi, axis = 1, inplace = True)
indi_epilep['Epilepsy'] = 1
# Seizures
indi_seiz = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('eizure')])
indi_seiz = indi_seiz[indi_seiz.Drug_Seq <= seq]
indi_seiz.drop(drop_col_indi, axis = 1, inplace = True)
indi_seiz['Seizures'] = 1
# Substance Use/Abuse Disorder (SUDS)
indi_suds = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('abuse')])
indi_suds = indi_suds[indi_suds.Drug_Seq <= seq]
indi_suds.drop(drop_col_indi, axis = 1, inplace = True)
indi_suds['Substance Use/Abuse Disorder'] = 1
# Schizophrenia
indi_schiz = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('Schiz')])
indi_schiz = indi_schiz[indi_schiz.Drug_Seq <= seq]
indi_schiz.drop(drop_col_indi, axis = 1, inplace = True)
indi_schiz['Schizophrenia'] = 1
# Drug/Substace Dependence
indi_depen = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('Drug dependence')])
indi_depen = indi_depen[indi_depen.Drug_Seq <= seq]
indi_depen.drop(drop_col_indi, axis = 1, inplace = True)
indi_depen['Drug/Substace Dependence'] = 1
# Other terms of relevance (works in progress) -- DO NOT USE
# Search for connections to the term 'childhood'
indi_childhood = pd.DataFrame(indication_txt[indication_txt.MedDRA_indi_term.str.contains('childhood')])
#NOTE on indi_childhood: only two results appear for 2017 Q1 Data;
#consider looking to previous quarterly data for more samples -- come back to this later
###Output
_____no_output_____
###Markdown
C. Identifying medications of interest from 20yyQq FAERS data
C1. Find all patients in the first quarter 2017 data with any of the medications mentioned above
###Code
### Create dataframes from drug_txt containing only the cases that include the medications of interest listed above
### drop columns for variables which will not be utilized
drop_col_drug = ['Value_VBM','Drug_Name_Source','Route','Verbatim_DoseCum_Dose_to_Rxn','Cum_Dose_to_Rxn_Units',
'Dechall_Code','Rechall_Code','Lot_Numb', 'Drug_Exp_dt', 'NDA_Numn','Dose_Amount' , 'Dose_Unit',
'Dose_Form', 'Dose_Freq', 'Drug_Name', 'Reporter_role', 'Active_Ingredient', 'Primary_ID']
## Note: Do not drop drug seq for now-- we need this to specify how many sequential meds to include for each case
### Create a base key for all Case_IDs
case_key = indication_txt[indication_txt.columns[0:3]]
###Output
_____no_output_____
###Markdown
C2. Create dataframes from drug_txt with only cases that contain medications of interest
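The per-drug cells below all repeat the same four lines. A small helper like this (a sketch that reuses `drug_txt`, `drop_col_drug`, and `seq` exactly as defined above) keeps the pattern in one place; e.g. `drug_Alprazolam = flag_drug("ALPRAZOLAM", "Alprazolam")`:
```python
def flag_drug(active_ingredient, label, seq_limit=seq):
    """Return a Case_ID/Drug_Seq frame with a 0/1 indicator column for one active ingredient."""
    hits = drug_txt[drug_txt.Active_Ingredient.str.contains(active_ingredient, na=False)]
    hits = hits[hits.Drug_Seq <= seq_limit]
    hits = hits.drop(drop_col_drug, axis=1)
    hits[label] = 1
    return hits
```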
###Code
## ---Benzodiazepines---
# Alprazolam
drug_Alprazolam = drug_txt[drug_txt.Active_Ingredient.str.contains("ALPRAZOLAM", na = False)]
drug_Alprazolam = drug_Alprazolam[drug_Alprazolam.Drug_Seq <= seq]
drug_Alprazolam.drop(drop_col_drug, axis = 1, inplace = True)
drug_Alprazolam['Alprazolam'] = 1
all_indi = pd.merge(case_key, drug_Alprazolam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Lorazepam
drug_Lorazepam = drug_txt[drug_txt.Active_Ingredient.str.contains("LORAZEPAM", na = False)]
drug_Lorazepam = drug_Lorazepam[drug_Lorazepam.Drug_Seq <= seq]
drug_Lorazepam.drop(drop_col_drug, axis = 1, inplace = True)
drug_Lorazepam['Lorazepam'] = 1
all_indi = pd.merge(all_indi, drug_Lorazepam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Clonazepam
drug_Clonazepam = drug_txt[drug_txt.Active_Ingredient.str.contains("CLONAZEPAM", na = False)]
drug_Clonazepam = drug_Clonazepam[drug_Clonazepam.Drug_Seq <= seq]
drug_Clonazepam.drop(drop_col_drug, axis = 1, inplace = True)
drug_Clonazepam['Clonazepam'] = 1
all_indi = pd.merge(all_indi, drug_Clonazepam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Diazepam
drug_Diazepam = drug_txt[drug_txt.Active_Ingredient.str.contains("DIAZEPAM", na = False)]
drug_Diazepam = drug_Diazepam[drug_Diazepam.Drug_Seq <= seq]
drug_Diazepam.drop(drop_col_drug, axis = 1, inplace = True)
drug_Diazepam['Diazepam'] = 1
all_indi = pd.merge(all_indi, drug_Diazepam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Flurazepam
drug_Flurazepam = drug_txt[drug_txt.Active_Ingredient.str.contains("FLURAZEPAM", na = False)]
drug_Flurazepam = drug_Flurazepam[drug_Flurazepam.Drug_Seq <= seq]
drug_Flurazepam.drop(drop_col_drug, axis = 1, inplace = True)
drug_Flurazepam['Flurazepam'] = 1
all_indi = pd.merge(all_indi, drug_Flurazepam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Quazepam
drug_Quazepam = drug_txt[drug_txt.Active_Ingredient.str.contains("QUAZEPAM", na = False)]
drug_Quazepam = drug_Quazepam[drug_Quazepam.Drug_Seq <= seq]
drug_Quazepam.drop(drop_col_drug, axis = 1, inplace = True)
drug_Quazepam['Quazepam'] = 1
all_indi = pd.merge(all_indi, drug_Quazepam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Triazolam
drug_Triazolam = drug_txt[drug_txt.Active_Ingredient.str.contains("TRIAZOLAM", na = False)]
drug_Triazolam = drug_Triazolam[drug_Triazolam.Drug_Seq <= seq]
drug_Triazolam.drop(drop_col_drug, axis = 1, inplace = True)
drug_Triazolam['Triazolam'] = 1
all_indi = pd.merge(all_indi, drug_Triazolam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Chlordiazepoxide
drug_Chlordiazepoxide = drug_txt[drug_txt.Active_Ingredient.str.contains("CHLORDIAZEPOXIDE", na = False)]
drug_Chlordiazepoxide = drug_Chlordiazepoxide[drug_Chlordiazepoxide.Drug_Seq <= seq]
drug_Chlordiazepoxide.drop(drop_col_drug, axis = 1, inplace = True)
drug_Chlordiazepoxide['Chlordiazepoxide'] = 1
all_indi = pd.merge(all_indi, drug_Chlordiazepoxide, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Estazolam
drug_Estazolam = drug_txt[drug_txt.Active_Ingredient.str.contains("ESTAZOLAM", na = False)]
drug_Estazolam = drug_Estazolam[drug_Estazolam.Drug_Seq <= seq]
drug_Estazolam.drop(drop_col_drug, axis = 1, inplace = True)
drug_Estazolam['Estazolam'] = 1
all_indi = pd.merge(all_indi, drug_Estazolam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Temazepam
drug_Temazepam = drug_txt[drug_txt.Active_Ingredient.str.contains("TEMAZEPAM", na = False)]
drug_Temazepam = drug_Temazepam[drug_Temazepam.Drug_Seq <= seq]
drug_Temazepam.drop(drop_col_drug, axis = 1, inplace = True)
drug_Temazepam['Temazepam'] = 1
all_indi = pd.merge(all_indi, drug_Temazepam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Oxazepam
drug_Oxazepam = drug_txt[drug_txt.Active_Ingredient.str.contains("OXAZEPAM", na = False)]
drug_Oxazepam = drug_Oxazepam[drug_Oxazepam.Drug_Seq <= seq]
drug_Oxazepam.drop(drop_col_drug, axis = 1, inplace = True)
drug_Oxazepam['Oxazepam'] = 1
all_indi = pd.merge(all_indi, drug_Oxazepam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Clorazepate
drug_Clorazepate = drug_txt[drug_txt.Active_Ingredient.str.contains("CLORAZEPATE", na = False)]
drug_Clorazepate = drug_Clorazepate[drug_Clorazepate.Drug_Seq <= seq]
drug_Clorazepate.drop(drop_col_drug, axis = 1, inplace = True)
drug_Clorazepate['Clorazepate'] = 1
all_indi = pd.merge(all_indi, drug_Clorazepate, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
## Generate total count for cases containing benzodiazepines
benzo_count = (len(drug_Alprazolam) + len(drug_Lorazepam) + len(drug_Clonazepam) + len(drug_Diazepam)
+ len(drug_Flurazepam)+ len(drug_Quazepam)+ len(drug_Triazolam)+ len(drug_Chlordiazepoxide)
+ len(drug_Estazolam)+ len(drug_Temazepam)+ len(drug_Oxazepam)+ len(drug_Clorazepate))
### Generate dataframe of only Benzodiazepines
benzo_frame = indication_txt[indication_txt.columns[1:3]] ### Create a base key for all Case_IDs
benzo_frame = pd.merge(benzo_frame, drug_Alprazolam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
benzo_frame = pd.merge(benzo_frame, drug_Lorazepam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
benzo_frame = pd.merge(benzo_frame, drug_Clonazepam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
benzo_frame = pd.merge(benzo_frame, drug_Diazepam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
benzo_frame = pd.merge(benzo_frame, drug_Flurazepam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
benzo_frame = pd.merge(benzo_frame, drug_Quazepam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
benzo_frame = pd.merge(benzo_frame, drug_Triazolam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
benzo_frame = pd.merge(benzo_frame, drug_Chlordiazepoxide, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
benzo_frame = pd.merge(benzo_frame, drug_Estazolam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
benzo_frame = pd.merge(benzo_frame, drug_Temazepam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
benzo_frame = pd.merge(benzo_frame, drug_Oxazepam, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
benzo_frame = pd.merge(benzo_frame, drug_Clorazepate, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
### Drop all rows that have NaN in every medication column (thresh=3 keeps rows with Case_ID, Drug_Seq, and at least one medication flag)
benzo_frame = benzo_frame.dropna(thresh = 3)
## note: There is no case in the dataframe that has more than one benzodiazepine (an interesting point to mention..)
### Sum the number of medications in a new dataframe
benzo_frame['Benzo_Tot'] = (benzo_frame.iloc[:, 2:14]).sum(axis = 1)
### Chech the size of benzo_frame
len(benzo_frame)
## ---Narcotic Medications---
# Hydrocodone
drug_Hydrocodone = drug_txt[drug_txt.Active_Ingredient.str.contains("HYDROCODONE", na = False)]
drug_Hydrocodone = drug_Hydrocodone[drug_Hydrocodone.Drug_Seq <= seq]
drug_Hydrocodone.drop(drop_col_drug, axis = 1, inplace = True)
drug_Hydrocodone['Hydrocodone'] = 1
all_indi = pd.merge(all_indi, drug_Hydrocodone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Oxycodone
drug_Oxycodone = drug_txt[drug_txt.Active_Ingredient.str.contains("OXYCODONE", na = False)]
drug_Oxycodone = drug_Oxycodone[drug_Oxycodone.Drug_Seq <= seq]
drug_Oxycodone.drop(drop_col_drug, axis = 1, inplace = True)
drug_Oxycodone['Oxycodone'] = 1
all_indi = pd.merge(all_indi, drug_Oxycodone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Codeine
drug_Codeine = drug_txt[drug_txt.Active_Ingredient.str.contains("CODEINE", na = False)]
drug_Codeine = drug_Codeine[drug_Codeine.Drug_Seq <= seq]
drug_Codeine.drop(drop_col_drug, axis = 1, inplace = True)
drug_Codeine['Codeine'] = 1
all_indi = pd.merge(all_indi, drug_Codeine, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Morphine
drug_Morphine = drug_txt[drug_txt.Active_Ingredient.str.contains("MORPHINE", na = False)]
drug_Morphine = drug_Morphine[drug_Morphine.Drug_Seq <= seq]
drug_Morphine.drop(drop_col_drug, axis = 1, inplace = True)
drug_Morphine['Morphine'] = 1
all_indi = pd.merge(all_indi, drug_Morphine, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Fentanyl
drug_Fentanyl = drug_txt[drug_txt.Active_Ingredient.str.contains("FENTANYL", na = False)]
drug_Fentanyl = drug_Fentanyl[drug_Fentanyl.Drug_Seq <= seq]
drug_Fentanyl.drop(drop_col_drug, axis = 1, inplace = True)
drug_Fentanyl['Fentanyl'] = 1
all_indi = pd.merge(all_indi, drug_Fentanyl, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Hydromorphone
drug_Hydromorphone = drug_txt[drug_txt.Active_Ingredient.str.contains("HYDROMORPHONE", na = False)]
drug_Hydromorphone = drug_Hydromorphone[drug_Hydromorphone.Drug_Seq <= seq]
drug_Hydromorphone.drop(drop_col_drug, axis = 1, inplace = True)
drug_Hydromorphone['Hydromorphone'] = 1
all_indi = pd.merge(all_indi, drug_Hydromorphone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Meperidine
drug_Meperidine = drug_txt[drug_txt.Active_Ingredient.str.contains("MEPERIDINE", na = False)]
drug_Meperidine = drug_Meperidine[drug_Meperidine.Drug_Seq <= seq]
drug_Meperidine.drop(drop_col_drug, axis = 1, inplace = True)
drug_Meperidine['Meperidine'] = 1
all_indi = pd.merge(all_indi, drug_Meperidine, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Oxymorphone
drug_Oxymorphone = drug_txt[drug_txt.Active_Ingredient.str.contains("OXYMORPHONE", na = False)]
drug_Oxymorphone = drug_Oxymorphone[drug_Oxymorphone.Drug_Seq <= seq]
drug_Oxymorphone.drop(drop_col_drug, axis = 1, inplace = True)
drug_Oxymorphone['Oxymorphone'] = 1
all_indi = pd.merge(all_indi, drug_Oxymorphone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Tapentadol (the FAERS Active_Ingredient spelling is TAPENTADOL)
drug_Trapentadol = drug_txt[drug_txt.Active_Ingredient.str.contains("TAPENTADOL", na = False)]
drug_Trapentadol = drug_Trapentadol[drug_Trapentadol.Drug_Seq <= seq]
drug_Trapentadol.drop(drop_col_drug, axis = 1, inplace = True)
drug_Trapentadol['Trapentadol'] = 1
all_indi = pd.merge(all_indi, drug_Trapentadol, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Tramadol
drug_Tramadol = drug_txt[drug_txt.Active_Ingredient.str.contains("TRAMADOL", na = False)]
drug_Tramadol = drug_Tramadol[drug_Tramadol.Drug_Seq <= seq]
drug_Tramadol.drop(drop_col_drug, axis = 1, inplace = True)
drug_Tramadol['Tramadol'] = 1
all_indi = pd.merge(all_indi, drug_Tramadol, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
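# The per-drug blocks above all repeat the same filter/trim/flag/merge pattern. A small helper
# along these lines could factor that out. This is only a sketch: it relies on the drug_txt,
# seq and drop_col_drug objects defined earlier, and nothing below calls it.
def flag_drug(base_df, ingredient, column_name):
    """Filter drug_txt for `ingredient`, keep Drug_Seq <= seq, flag the rows, and merge into `base_df`."""
    sub = drug_txt[drug_txt.Active_Ingredient.str.contains(ingredient, na = False)]
    sub = sub[sub.Drug_Seq <= seq].drop(drop_col_drug, axis = 1)
    sub[column_name] = 1
    return pd.merge(base_df, sub, on = ['Case_ID', 'Drug_Seq'], how = 'outer'), sub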
## Generate total count for cases containing narcotics
narcot_count = (len(drug_Hydrocodone) + len(drug_Oxycodone) + len(drug_Codeine) + len(drug_Morphine)
+ len(drug_Fentanyl) + len(drug_Hydromorphone) + len(drug_Meperidine) + len(drug_Oxymorphone)
+ len(drug_Trapentadol) + len(drug_Tramadol))
### Generate dataframe of only narcotics
narcot_frame = indication_txt[indication_txt.columns[1:3]] ### Create a base key for all Case_IDs
narcot_frame = pd.merge(narcot_frame, drug_Hydrocodone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
narcot_frame = pd.merge(narcot_frame, drug_Oxycodone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
narcot_frame = pd.merge(narcot_frame, drug_Codeine, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
narcot_frame = pd.merge(narcot_frame, drug_Morphine, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
narcot_frame = pd.merge(narcot_frame, drug_Fentanyl, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
narcot_frame = pd.merge(narcot_frame, drug_Hydromorphone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
narcot_frame = pd.merge(narcot_frame, drug_Meperidine, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
narcot_frame = pd.merge(narcot_frame, drug_Oxymorphone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
narcot_frame = pd.merge(narcot_frame, drug_Trapentadol, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
narcot_frame = pd.merge(narcot_frame, drug_Tramadol, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Drop all rows that have all NaN in medication columns
narcot_frame = narcot_frame.dropna(thresh = 3) ## thresh = 3 keeps rows with at least 3 non-NaN values: Case_ID, Drug_Seq plus at least one medication flag
### Sum the number of medications into a new column
narcot_frame['narcot_Tot'] = (narcot_frame.iloc[:, 2:12]).sum(axis = 1)
## preview the first 5 rows
narcot_frame[:5]
## ---Narcotic Withdrawal Medications---
# Buprenorphine
drug_Buprenorphine = drug_txt[drug_txt.Active_Ingredient.str.contains("BUPRENORPHINE", na = False)]
drug_Buprenorphine = drug_Buprenorphine[drug_Buprenorphine.Drug_Seq <= seq]
drug_Buprenorphine.drop(drop_col_drug, axis = 1, inplace = True)
drug_Buprenorphine['Buprenorphine'] = 1
all_indi = pd.merge(all_indi, drug_Buprenorphine, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Methadone
drug_Methadone = drug_txt[drug_txt.Active_Ingredient.str.contains("METHADONE", na = False)]
drug_Methadone = drug_Methadone[drug_Methadone.Drug_Seq <= seq]
drug_Methadone.drop(drop_col_drug, axis = 1, inplace = True)
drug_Methadone['Methadone'] = 1
all_indi = pd.merge(all_indi, drug_Methadone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Naloxone
drug_Naloxone = drug_txt[drug_txt.Active_Ingredient.str.contains("NALOXONE", na = False)]
drug_Naloxone = drug_Naloxone[drug_Naloxone.Drug_Seq <= seq]
drug_Naloxone.drop(drop_col_drug, axis = 1, inplace = True)
drug_Naloxone['Naloxone'] = 1
all_indi = pd.merge(all_indi, drug_Naloxone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Naltrexone
drug_Naltrexone = drug_txt[drug_txt.Active_Ingredient.str.contains("NALTREXONE", na = False)]
drug_Naltrexone = drug_Naltrexone[drug_Naltrexone.Drug_Seq <= seq]
drug_Naltrexone.drop(drop_col_drug, axis = 1, inplace = True)
drug_Naltrexone['Naltrexone'] = 1
all_indi = pd.merge(all_indi, drug_Naltrexone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
## Generate total count for cases containing withdrawal medications
wthdrwl_count = (len(drug_Buprenorphine) + len(drug_Methadone) + len(drug_Naloxone) + len(drug_Naltrexone))
### Generate dataframe of only Narcotic Withdrawal Meds
wthdrwl_frame = indication_txt[indication_txt.columns[1:3]] ### Create a base key for all Case_IDs
wthdrwl_frame = pd.merge(wthdrwl_frame, drug_Buprenorphine, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
wthdrwl_frame = pd.merge(wthdrwl_frame, drug_Methadone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
wthdrwl_frame = pd.merge(wthdrwl_frame, drug_Naloxone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
wthdrwl_frame = pd.merge(wthdrwl_frame, drug_Naltrexone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
### Drop all rows that have all NaN in the medication columns
wthdrwl_frame = wthdrwl_frame.dropna(thresh = 3)
### Sum the number of medications into a new column
wthdrwl_frameTot = pd.DataFrame()
wthdrwl_frame['wthdrwl_Tot'] = (wthdrwl_frame.iloc[:, 2:6]).sum(axis = 1) ## columns 2:6 cover all four withdrawal medications
### Checking for presence of narcotic withdrawal med + narcotic in same case
## Create a base key for all Case_IDs
narco_wthdrwl_frame = indication_txt[indication_txt.columns[1:3]]
### Merge withdrwl_frame with the narcotic frame
narco_wthdrwl_frame = pd.merge(wthdrwl_frame, narcot_frame, on = ['Case_ID'], how = 'outer')
### cleaning merge
narco_wthdrwl_frame = narco_wthdrwl_frame.drop(['Drug_Seq_x'], axis = 1)
narco_wthdrwl_frame = narco_wthdrwl_frame.drop(['Drug_Seq_y'], axis = 1)
narco_wthdrwl_frame = narco_wthdrwl_frame.drop(['wthdrwl_Tot'], axis = 1)
narco_wthdrwl_frame = narco_wthdrwl_frame.drop(['narcot_Tot'], axis = 1)
### define a totaling column named NC_tot
narco_wthdrwl_frame['NC_Tot'] = (narco_wthdrwl_frame.iloc[:, 1:16]).sum(axis = 1)
NCtot1 = narco_wthdrwl_frame[narco_wthdrwl_frame['NC_Tot'] ==2]
NCtot1[:5] ##-- because NC_Tot can equal 2, we can safely assume that some medications in our set are co-prescribed
##-- keep in mind that the threshold for these frames is still at 3
## ---Selective Serotonin Reuptake Inhibitors (SSRIs)
# Paroxetine
drug_paroxetine = drug_txt[drug_txt.Active_Ingredient.str.contains("PAROXETINE", na = False)]
drug_paroxetine = drug_paroxetine[drug_paroxetine.Drug_Seq <= seq]
drug_paroxetine.drop(drop_col_drug, axis = 1, inplace = True)
drug_paroxetine['Paroxetine'] = 1
all_indi = pd.merge(all_indi, drug_paroxetine, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Sertraline
drug_sertraline = drug_txt[drug_txt.Active_Ingredient.str.contains("SERTRALINE", na = False)]
drug_sertraline = drug_sertraline[drug_sertraline.Drug_Seq <= seq]
drug_sertraline.drop(drop_col_drug, axis = 1, inplace = True)
drug_sertraline['Sertraline'] = 1
all_indi = pd.merge(all_indi, drug_sertraline, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Fluoxetine
drug_fluoxetine = drug_txt[drug_txt.Active_Ingredient.str.contains("FLUOXETINE", na = False)]
drug_fluoxetine = drug_fluoxetine[drug_fluoxetine.Drug_Seq <= seq]
drug_fluoxetine.drop(drop_col_drug, axis = 1, inplace = True)
drug_fluoxetine['Fluoxetine'] = 1
all_indi = pd.merge(all_indi, drug_fluoxetine, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Citalopram
drug_citalopram = drug_txt[drug_txt.Active_Ingredient.str.contains("CITALOPRAM", na = False)]
drug_citalopram = drug_citalopram[drug_citalopram.Drug_Seq <= seq]
drug_citalopram.drop(drop_col_drug, axis = 1, inplace = True)
drug_citalopram['Citalopram'] = 1
all_indi = pd.merge(all_indi, drug_citalopram, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
### Total count of SSRI medication rows
ssriAD_count = (len(drug_paroxetine) + len(drug_sertraline) + len(drug_fluoxetine) + len(drug_citalopram))
## ---Norepinephrine-Dopamine Reuptake Inhibitors (NDRIs)
# Bupropion
drug_bupropion = drug_txt[drug_txt.Active_Ingredient.str.contains("BUPROPION", na = False)]
drug_bupropion = drug_bupropion[drug_bupropion.Drug_Seq <= seq]
drug_bupropion.drop(drop_col_drug, axis = 1, inplace = True)
drug_bupropion['Bupropion'] = 1
all_indi = pd.merge(all_indi, drug_bupropion, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
## ---Serotonin-Norepinephrine Reuptake Inhibitors (SNRIs)
# Duloxetine
drug_duloxetine = drug_txt[drug_txt.Active_Ingredient.str.contains("DULOXETINE", na = False)]
drug_duloxetine = drug_duloxetine[drug_duloxetine.Drug_Seq <= seq]
drug_duloxetine.drop(drop_col_drug, axis = 1, inplace = True)
drug_duloxetine['Duloxetine'] = 1
all_indi = pd.merge(all_indi, drug_duloxetine, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
### Total count of NDRI and SNRI medication rows
ndri_snriAD_count = (len(drug_bupropion) + len(drug_duloxetine))
## ---Anti-psychotics
# Aripiprazole
drug_aripiprazole = drug_txt[drug_txt.Active_Ingredient.str.contains("ARIPIPRAZOLE", na = False)]
drug_aripiprazole = drug_aripiprazole[drug_aripiprazole.Drug_Seq <= seq]
drug_aripiprazole.drop(drop_col_drug, axis = 1, inplace = True)
drug_aripiprazole['Aripiprazole'] = 1
all_indi = pd.merge(all_indi, drug_aripiprazole, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Quetiapine
drug_quetiapine = drug_txt[drug_txt.Active_Ingredient.str.contains("QUETIAPINE", na = False)]
drug_quetiapine = drug_quetiapine[drug_quetiapine.Drug_Seq <= seq]
drug_quetiapine.drop(drop_col_drug, axis = 1, inplace = True)
drug_quetiapine['Quetiapine'] = 1
all_indi = pd.merge(all_indi, drug_quetiapine, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
### Total count of antipsychotic medication rows
antipsy_count = (len(drug_aripiprazole) + len(drug_quetiapine))
## ---Other Antidepressants
# Mirtazapine (a tetracyclic/NaSSA antidepressant; counted via mao_countAD below and under 'MAO Medications' in the pie chart)
drug_mirtazepine = drug_txt[drug_txt.Active_Ingredient.str.contains("MIRTAZAPINE", na = False)]
drug_mirtazepine = drug_mirtazepine[drug_mirtazepine.Drug_Seq <= seq]
drug_mirtazepine.drop(drop_col_drug, axis = 1, inplace = True)
drug_mirtazepine['Mirtazapine'] = 1
all_indi = pd.merge(all_indi, drug_mirtazepine, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
### Total count of mirtazapine rows
mao_countAD = (len(drug_mirtazepine))
### Total count of antidepressant medication rows across all classes above
allAD_count = (ssriAD_count + mao_countAD + ndri_snriAD_count)
## ---Seizure Medications
# Valproic Acid
drug_vpa = drug_txt[drug_txt.Active_Ingredient.str.contains("VALPRO", na = False)]
drug_vpa = drug_vpa[drug_vpa.Drug_Seq <= seq]
drug_vpa.drop(drop_col_drug, axis = 1, inplace = True)
drug_vpa['Valproic Acid'] = 1
all_indi = pd.merge(all_indi, drug_vpa, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Lamotrigine
drug_lamotrigine = drug_txt[drug_txt.Active_Ingredient.str.contains("LAMOTRIGINE", na = False)]
drug_lamotrigine = drug_lamotrigine[drug_lamotrigine.Drug_Seq <= seq]
drug_lamotrigine.drop(drop_col_drug, axis = 1, inplace = True)
drug_lamotrigine['Lamotrigine'] = 1
all_indi = pd.merge(all_indi, drug_lamotrigine, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
### Total count of seizure medication rows
seize_count = (len(drug_vpa) + len(drug_lamotrigine))
## ---Insomnia Medications
# Trazodone
drug_trazodone = drug_txt[drug_txt.Active_Ingredient.str.contains("TRAZODONE", na = False)]
drug_trazodone = drug_trazodone[drug_trazodone.Drug_Seq <= seq]
drug_trazodone.drop(drop_col_drug, axis = 1, inplace = True)
drug_trazodone['Trazodone'] = 1
all_indi = pd.merge(all_indi, drug_trazodone, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
# Zolpidem
drug_zolpidem = drug_txt[drug_txt.Active_Ingredient.str.contains("ZOLPIDEM", na = False)]
drug_zolpidem = drug_zolpidem[drug_zolpidem.Drug_Seq <= seq]
drug_zolpidem.drop(drop_col_drug, axis = 1, inplace = True)
drug_zolpidem['Zolpidem'] = 1
all_indi = pd.merge(all_indi, drug_zolpidem, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
### Total count of insomnia medication rows
insom_count = (len(drug_trazodone) + len(drug_zolpidem))
## ---Bipolar Medications
# Lithium
drug_lithium = drug_txt[drug_txt.Active_Ingredient.str.contains("LITHIUM", na = False)]
drug_lithium = drug_lithium[drug_lithium.Drug_Seq <= seq]
drug_lithium.drop(drop_col_drug, axis = 1, inplace = True)
drug_lithium['Lithium'] = 1
all_indi = pd.merge(all_indi, drug_lithium, on = ['Case_ID', 'Drug_Seq'], how = 'outer')
### Total count of bipolar medication rows
bipo_count = (len(drug_lithium))
###Output
_____no_output_____
###Markdown
C3. Create a pie chart for the Medication Class Counts
###Code
### Aggregate counts of medication types for pie plot
all_med_counts = pd.DataFrame([bipo_count, insom_count, benzo_count, seize_count,
mao_countAD, antipsy_count, ndri_snriAD_count, narcot_count ],
index = ['Bipolar Medications', 'Insomnia Medications', 'Benzodiazepines', 'Seizure Medications',
'MAO Medications', 'Antipsychotic Medications', 'SNRI/NDRI Medications', 'Narcotic Medications'], columns = ['Count'])
### Preview the counts for all med classes
all_med_counts
### Setting up the pie-chart
#colorwheel source: http://www.color-hex.com/color/e13f29
colors = ["#E13F29", "#D69A80", "#D63B59", "#AE5552", "#CB5C3B",
"#EB8076", "#96624E", "#4B3832", "#854442", "#FFF4E6", "#3C2F2F", "#BE9B7B"]
#pie_values
drug_all_index = all_med_counts.index
all_med_plot = plt.pie(
    # using the total counts for each medication class
all_med_counts.Count,
    # with the labels being the index of all_med_counts
labels=drug_all_index,
# with shadows
shadow=True,
# with colors defined above
colors=colors,
    # with each slice exploded out slightly
explode=(0.05, 0.05, 0.06, 0.07, 0.08, 0.09, 0.095, 0.1),
    # with the start angle at 90 degrees
startangle=90,
    # with the percentage shown on each slice
autopct='%1.0f%%',
radius = 2
)
# Turn off the axes for the pie chart
plt.axis('off')
#plt.title('Distribution of Medication Types in 2017 Q1 FAERS Dataset')
# View the plot
#plt.legend(loc = "right", labels = labels)
#plt.tight_layout()
#plt.show()
### Assign all_indi to a working name b (the merges below rebind b, so all_indi itself is not modified)
b = all_indi
###Output
_____no_output_____
###Markdown
C4. Combine medications dataframe (all_indi/b) with each indication dataframe
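All of the merges in the next cell are left joins on the composite key `['Case_ID', 'Drug_Seq']`: every medication row is kept, and an indication flag is attached only where the same case/drug pair exists in that indication frame. A minimal sketch with toy values (hypothetical data, not part of FAERS):

```python
import pandas as pd

meds = pd.DataFrame({'Case_ID': [1, 2], 'Drug_Seq': [1, 1], 'Med': [1, 1]})
indi = pd.DataFrame({'Case_ID': [1], 'Drug_Seq': [1], 'Indication': [1]})
# Row (2, 1) keeps NaN in the Indication column because it has no match in indi
print(pd.merge(meds, indi, how='left', on=['Case_ID', 'Drug_Seq']))
```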
###Code
### Merge dataframes together on Case_ID and Drug_Seq
indi_frames = [indi_anxiety, indi_bipolar, indi_heartdisease, indi_htn, indi_ibs,
               indi_insomnia, indi_mdd, indi_personality, indi_ptsd, indi_suicidal,
               indi_apain, indi_gpain, indi_nerve, indi_epilep, indi_seiz,
               indi_suds, indi_schiz, indi_depen]
b = pd.merge(b, indi_anxiety, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_bipolar, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_htn, how = 'left', on = ['Case_ID', 'Drug_Seq'])
#b = pd.merge(b, indi_ibs, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_insomnia, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_mdd, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_personality, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_ptsd, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_suicidal, how = 'left', on = ['Case_ID', 'Drug_Seq'])
#b = pd.merge(b, indi_heartdisease, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_apain, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_gpain, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_nerve, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_epilep, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_seiz, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_suds, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_schiz, how = 'left', on = ['Case_ID', 'Drug_Seq'])
b = pd.merge(b, indi_depen, how = 'left', on = ['Case_ID', 'Drug_Seq'])
### Check the length of b
len(b)
### Send dataframe b to a new dataframe d, using an appropriate threshold for the analysis of co-prescribed medications
d = b.dropna(thresh = 5) ##-- keep only rows with at least 5 non-NaN columns; with the three key columns always present, this requires at least two medication/indication flags per row
### Check the length of d
len(d)
###--preview first 5 rows of post-threshold d
d[:5] ##-- check that the post-threshold result looks reasonable (not too large)
### save dataframe d to file
d.to_csv('faers_ascii_2017q1/ascii/result1.csv', index = False, header = True, sep = ',')
### Make a dataframe of all case_IDs in d for future references to our cases of interest
d_cases = d.iloc[:, [1]] ## Case_ID is the second column of d, selected positionally
### check the length of d_cases (should be the same length as d..)
len(d_cases)
### Isolate the Case_IDs that are in the DataFrame d above and send to a dataframe
cases_fin = pd.DataFrame(d['Case_ID'])
#--preview first 5 rows
cases_fin[:5] #--preview first 5 rows -
###Output
_____no_output_____
###Markdown
C5. Examining Demographic information from 2017 Q1 FAERS data
###Code
### Create a base key for all Case_IDs in demographic_txt
demo_key = pd.DataFrame(demographic_txt[['Case_ID', 'SEX', 'Event_country']])
### Merge cases that should be included (d_cases) with demo_key
demo_key = pd.merge(demo_key, cases_fin, how = 'inner', on = 'Case_ID')
demo_key[:5] #--preview first 5 rows -
### Check the length of demo_key
#len(demo_key)
len(demo_key)
### Extract Case counts by country
rept_country_counts = pd.value_counts(demo_key.Event_country) #-- go with Event_country, not Report_country
rept_country_counts = pd.DataFrame(rept_country_counts)
rept_country_counts
### Merge demo_key with result from Part B
final_demo = pd.merge(d, demo_key, how = 'left', on = 'Case_ID', copy = False)
final_demo = final_demo.fillna('0')
### check the length of final_demo
len(final_demo)
### send final_demo to csv
final_demo.to_csv('faers_ascii_2017q1/ascii/final_demo1.csv', index = False, header = True, sep = ',')
### Create a base key for all Case_IDs in reaction_txt
reac_key = pd.DataFrame(reaction_txt[['Case_ID','MedDRA_reac_term']])
reac_key = reac_key.fillna('None Reported')
#--preview first 5 rows
reac_key[:5]
### Merge reac_key with result from Part C
final_reac = pd.merge(final_demo, reac_key, how = 'left', on = 'Case_ID')
### Create a base key for all Case_IDs in outcome_txt
outc_key = pd.DataFrame(outcome_txt[['Case_ID','Pt_Outcome']])
outc_key[:5]
### Merge outc_key with result from Part C
final_outc = pd.merge(final_reac, outc_key, how = 'left', on = 'Case_ID')
### Merge outc_key with result from final_demo (for analysis of demographic-outcome data)
final_demo_outc = pd.merge(final_demo, outc_key, how = 'left', on = 'Case_ID')
### send final_demo_outc to csv
final_demo_outc.to_csv('faers_ascii_2017q1/ascii/final_demo_outc_result1.csv', index = False, header = True, sep = ',')
### Drop Primary_ID -- no use anymore
final_outc = final_outc.drop(['Primary_ID'], axis = 1 )
final_outc = final_outc.replace({'' : 'None Reported'})
#--preview first 5 rows -
final_outc[:5]
len(final_outc)
### assign final_outc to a new, appropriately named DataFrame "final_FAERS2017Q1"
final_FAERS2017Q1 = final_outc
#create a temp variable to preserve final_FAERS2017Q1
test1 = final_FAERS2017Q1
### check the length of test1
len(test1)
### send test1 to csv
test1.to_csv('faers_ascii_2017q1/ascii/final_result1.csv', index = False, header = True, sep = ',')
###Output
_____no_output_____
###Markdown
D. Descriptive/Summary Statistics on Adverse Event Data. Calculate the frequency distribution of adverse events in the dataset. The descriptive stats we are computing: * .describe() - count, mean, standard deviation, minimum value, 25%-tile, 50%-tile, 75%-tile, maximum values * .median() - median value of variables * .apply(mode, axis=None) - mode value of variables * .var() - variance. NOTE - If you reach an error, try using .reset_index() after each command.
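For reference, a minimal sketch of those calls applied to the per-term adverse-event counts (this assumes the `test1` frame from the previous cell is in scope; the cells below build their own count frames for the actual analysis):

```python
ae_counts_sketch = test1['MedDRA_reac_term'].value_counts()
print(ae_counts_sketch.describe())       # count, mean, std, min, quartiles, max
print(ae_counts_sketch.median())         # median count per AE term
print(ae_counts_sketch.mode().tolist())  # most common count value(s)
print(ae_counts_sketch.var())            # variance of the counts
```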
###Code
###identify all AE reports in dataset
AEs_World = test1
## Count values for all Adverse Events in AEs_World
AEs_World_counts = pd.value_counts(AEs_World.MedDRA_reac_term)
AEs_World_counts = pd.DataFrame(AEs_World_counts)
## Determine quantile measure for AE counts that are in the q-tile of the data (we use top 1%-tile)
AEsq = 0.99
AEs_World_quant = AEs_World_counts['MedDRA_reac_term'].quantile(q = AEsq)
## apply quantile to AE count data
AEs_World_quant_totalcts = AEs_World_counts[AEs_World_counts['MedDRA_reac_term'] > AEs_World_quant]
#AEs_World_quant_counts
### Finding AE count by Patient Sex
##limit the AEs in final_FAERS2017Q1 by those stratified to the 99th %tile - only include SEX and AE Name
test1strat = test1[test1['MedDRA_reac_term'].isin(AEs_World_quant_totalcts.index)]
test1strat = pd.DataFrame(test1strat[['SEX', 'MedDRA_reac_term']])
##Split test1strat into Male and Female df
test1AEvalM = test1strat[test1strat['SEX']== 'M']
test1AEvalF = test1strat[test1strat['SEX']== 'F']
##count the frequencies of the male and female AE dataframe
test1AEvalM_count = pd.DataFrame(pd.value_counts(test1AEvalM.MedDRA_reac_term))
test1AEvalF_count = pd.DataFrame(pd.value_counts(test1AEvalF.MedDRA_reac_term))
## Combine the Total 99th %-tile AE counts, Male 99th %-tile AE counts, and Female 99th %-tile AE counts
All_AEs_World_quant_totalcts = pd.merge(test1AEvalM_count, test1AEvalF_count,
left_index = True, right_index = True, suffixes = ('_M', '_F'))
All_AEs_World_quant_totalcts = All_AEs_World_quant_totalcts.join(AEs_World_quant_totalcts)
## Preview the top 25 adverse events in the male and female cohort
All_AEs_World_quant_totalcts
#test1AEvalM_count
#test1AEvalF_count
#AEs_World_quant_totalcts
###Output
_____no_output_____
###Markdown
Plotting the top 99th-percentile (top 1%) of Medication ADEs in 2017 Q1 FAERS Data
###Code
### Plot counts for the top 99th-percentile AEs in the dataset across patient sex
# Setting up the positions and width for the bars
pos = list(range(len(All_AEs_World_quant_totalcts['MedDRA_reac_term'])))
width = 0.25
# Plotting the bars onto plot
fig, ax = plt.subplots(figsize=(20,10))
# Create a bar with Male+Female data,
# in position pos,
plt.bar(pos,
        #using All_AEs_World_quant_totalcts['MedDRA_reac_term'] data,
All_AEs_World_quant_totalcts['MedDRA_reac_term'],
# of width
width,
# with alpha 0.5
alpha=0.5,
# with color
color="#000000",
        # labeled as the total AE counts
label='Total AE Counts')
# Create a bar with male AE data,
# in position pos + some width buffer,
plt.bar([p + width for p in pos],
#using test1AEvalM_count['MedDRA_reac_term_M'] data,
All_AEs_World_quant_totalcts['MedDRA_reac_term_M'],
# of width
width,
# with alpha 0.5
alpha=0.5,
# with color
color="#ae0001",
        # labeled as the male AE counts
label= 'Male AE Counts')
# Create a bar with female data,
# in position pos + some width buffer,
plt.bar([p + width*2 for p in pos],
#using test1AEvalF_count['MedDRA_reac_term_F'] data,
All_AEs_World_quant_totalcts['MedDRA_reac_term_F'],
# of width
width,
# with alpha 0.5
alpha=0.5,
# with color
color="#8d5524",
        # labeled as the female AE counts
label='Female AE Counts')
# Set the y axis label
ax.set_ylabel('Number of Cases', fontsize = 25)
# Set the chart's title
ax.set_title('Top 99th%-tile of Most Frequent Adverse Events (AEs) in 2017 Q1 FAERS Data', fontsize = 20)
# Set the position of the x ticks
ax.set_xticks([p + 1.5 * width for p in pos])
# Set the labels for the x ticks
ax.set_xticklabels(All_AEs_World_quant_totalcts.index, rotation = 'vertical', fontsize = 15)
# Setting the x-axis and y-axis limits
plt.xlim(min(pos)-width, max(pos)+width*4)
plt.ylim([0, max((All_AEs_World_quant_totalcts['MedDRA_reac_term'] +
All_AEs_World_quant_totalcts['MedDRA_reac_term_M'] +
All_AEs_World_quant_totalcts['MedDRA_reac_term_F'])*.65)] )
# Adding the legend and showing the plot
plt.legend(['Total AE Counts', 'Male AE Counts', 'Female AE Counts'], loc='upper left',
fontsize = 15, handlelength = 5, handleheight = 2)
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Plotting the top 98th-percentile (top 2%) of Medication ADEs in 2017 Q1 FAERS Data
###Code
###identify all AE reports in the dataset
AEs_World1 = test1
## Count values for all Adverse Events in AEs_World1
AEs_World1_counts = pd.value_counts(AEs_World1.MedDRA_reac_term)
AEs_World1_counts = pd.DataFrame(AEs_World1_counts)
## Determine quantile measure for AE counts that are in the q-tile of the data (we use the 98th percentile, i.e. the top 2%)
AEsq1 = 0.98
AEs_World1_quant = AEs_World1_counts['MedDRA_reac_term'].quantile(q = AEsq1)
## apply quantile to AE count data
AEs_World1_quant_totalcts = AEs_World1_counts[AEs_World1_counts['MedDRA_reac_term'] > AEs_World1_quant]
#AEs_World1_quant_counts
### Finding AE count by Patient Sex
##limit the AEs in final_FAERS2017Q1 by those stratified to the 98th percentile - only include SEX and AE Name
test1strat = test1[test1['MedDRA_reac_term'].isin(AEs_World1_quant_totalcts.index)]
test1strat = pd.DataFrame(test1strat[['SEX', 'MedDRA_reac_term']])
##Split test1strat into Male and Female df
test1AEvalM = test1strat[test1strat['SEX']== 'M']
test1AEvalF = test1strat[test1strat['SEX']== 'F']
##count the frequencies of the male and female AE dataframe
test1AEvalM_count = pd.DataFrame(pd.value_counts(test1AEvalM.MedDRA_reac_term))
test1AEvalF_count = pd.DataFrame(pd.value_counts(test1AEvalF.MedDRA_reac_term))
## Combine the total, male, and female 98th-percentile AE counts
All_AEs_World1_quant_totalcts = pd.merge(test1AEvalM_count, test1AEvalF_count,
left_index = True, right_index = True, suffixes = ('_M', '_F'))
All_AEs_World1_quant_totalcts = All_AEs_World1_quant_totalcts.join(AEs_World1_quant_totalcts)
## Preview the top 25 adverse events in the male and female cohort
All_AEs_World1_quant_totalcts
#test1AEvalM_count
#test1AEvalF_count
#AEs_World1_quant_totalcts
### Plot counts for the top 98th-percentile AEs in the dataset across patient sex
# Setting up the positions and width for the bars
pos = list(range(len(All_AEs_World1_quant_totalcts['MedDRA_reac_term'])))
width = 0.25
# Plotting the bars onto plot
fig, ax = plt.subplots(figsize=(20,10))
# Create a bar with Male+Female data,
# in position pos,
plt.bar(pos,
        #using All_AEs_World1_quant_totalcts['MedDRA_reac_term'] data,
All_AEs_World1_quant_totalcts['MedDRA_reac_term'],
# of width
width,
# with alpha 0.5
alpha=0.5,
# with color
color="#000000",
        # labeled as the total AE counts
label='Total AE Counts')
# Create a bar with male AE data,
# in position pos + some width buffer,
plt.bar([p + width for p in pos],
        #using All_AEs_World1_quant_totalcts['MedDRA_reac_term_M'] data,
All_AEs_World1_quant_totalcts['MedDRA_reac_term_M'],
# of width
width,
# with alpha 0.5
alpha=0.5,
# with color
color="#ae0001",
        # labeled as the male AE counts
label= 'Male AE Counts')
# Create a bar with female data,
# in position pos + some width buffer,
plt.bar([p + width*2 for p in pos],
        #using All_AEs_World1_quant_totalcts['MedDRA_reac_term_F'] data,
All_AEs_World1_quant_totalcts['MedDRA_reac_term_F'],
# of width
width,
# with alpha 0.5
alpha=0.5,
# with color
color="#8d5524",
        # labeled as the female AE counts
label='Female AE Counts')
# Set the y axis label
ax.set_ylabel('Number of Cases', fontsize = 25)
# Set the chart's title
ax.set_title('Top 98th Percentile of Most Frequent Adverse Events (AEs) in 2017 Q1 FAERS Data', fontsize = 20)
# Set the position of the x ticks
ax.set_xticks([p + 1.5 * width for p in pos])
# Set the labels for the x ticks
ax.set_xticklabels(All_AEs_World1_quant_totalcts.index, rotation = 'vertical', fontsize = 15)
# Setting the x-axis and y-axis limits
plt.xlim(min(pos)-width, max(pos)+width*4)
plt.ylim([0, max((All_AEs_World1_quant_totalcts['MedDRA_reac_term'] +
All_AEs_World1_quant_totalcts['MedDRA_reac_term_M'] +
All_AEs_World1_quant_totalcts['MedDRA_reac_term_F'])*.65)] )
# Adding the legend and showing the plot
plt.legend(['Total AE Counts', 'Male AE Counts', 'Female AE Counts'], loc='upper left',
fontsize = 15, handlelength = 5, handleheight = 2)
plt.grid()
plt.show()
###Output
_____no_output_____ |
data_analysis/component_decomposition.ipynb | ###Markdown
Data Science Perspectives 2 - Final Report (0530-32-3973 ็ซน็ฐ่ชๅคช). Apply a Kalman filter to real data and analyze it. Data: exchange-rate data retrieved via the OANDA API - USD/JPY - ask close price - 2014-01-01 to 2018-12-30 (https://developer.oanda.com/docs/jp/). Since I had been trying to predict FX data in another class, I apply a component decomposition model to that data and analyze it here. Component decomposition model reference: Kyoto University graduate lecture "Data Science Perspectives 2".
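For reference, the component decomposition used below can be written as a linear Gaussian state-space model; this is a sketch of the general form only, and the specific matrices F, G, H, Q, R are the ones assembled in the code cells that follow:

$$x_t = F\,x_{t-1} + G\,v_t,\qquad v_t \sim N(0,\,Q)$$
$$y_t = H\,x_t + w_t,\qquad w_t \sim N(0,\,R)$$

where the state $x_t$ stacks the trend component and the periodic (seasonal) component, and $y_t$ is the observed close price.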
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
def load_fx_data(instrument_list, data_kind='train'):
"""
    Load FX data from local CSV files.
    args:
    - instrument_list: list of strings, the currency-pair names ('USD_JPY', 'GBP_JPY', 'EUR_JPY')
    - data_kind: which dataset to load ('train', 'test')
    return:
    df_dict: dict, FX DataFrames keyed by currency pair
"""
df_dict = {}
for instrument in instrument_list:
df = pd.read_csv(f'../data/fx_data_{instrument}_{data_kind}', index_col=0, header=0)
df.index = pd.to_datetime(df.index)
df_dict[instrument] = df
return df_dict
instrument_list = ['USD_JPY']
df_dict_train = load_fx_data(instrument_list, data_kind='train')
df_dict_train['USD_JPY']
###Output
_____no_output_____
###Markdown
Use the USD/JPY ask close price as the training series (train).
###Code
train = df_dict_train['USD_JPY']['Close_ask'].values
N = train.shape[0]
###Output
_____no_output_____
###Markdown
Analysis: since no clear periodicity is visible in the FX data, there is a high chance that good results will not be obtained.
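The loop in the cell below implements the standard Kalman filter recursions; they are written out here for reference with the same symbols as the code (`xp`/`Vp` are the predictive mean and covariance, `xf`/`Vf` the filtering mean and covariance, `K` the Kalman gain):

$$\hat{x}_{t|t-1} = F\,\hat{x}_{t-1|t-1},\qquad V_{t|t-1} = F\,V_{t-1|t-1}\,F^{\top} + G\,Q\,G^{\top}$$
$$K_t = V_{t|t-1}\,H^{\top}\,(H\,V_{t|t-1}\,H^{\top} + R)^{-1}$$
$$\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t\,(y_t - H\,\hat{x}_{t|t-1}),\qquad V_{t|t} = (I - K_t\,H)\,V_{t|t-1}$$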
###Code
# Hyperparameter settings
sd_sys_t = 0.02 # estimated standard deviation of the trend component of the system noise
sd_sys_s = 0.1 # estimated standard deviation of the periodic (seasonal) component of the system noise
sd_obs = 0.1 # estimated standard deviation of the observation noise
tdim = 2 # dimension of the trend model
period = 500 # estimated period
pdim = period - 1 # dimension of the periodic-variation model
# Trend model
F1 = np.array([[2, -1], [1, 0]])
G1 = np.array([[1], [0]])
# Observation model for the trend component
H1 = np.array([[1, 0]])
# Periodic-variation (seasonal) model
a = np.ones(pdim).reshape(-1,1)
F2 = np.block([a, np.vstack([np.eye(pdim-1), np.zeros((1, pdim-1))])]).T
G2 = np.zeros((pdim, 1))
G2[0,0] = 1
# Observation model for the periodic component
H2 = np.zeros((1, pdim))
H2[0,0] = 1
# Full model
# System model
F = np.block([[F1, np.zeros((tdim, pdim))], [np.zeros((pdim,tdim)), F2]])
G = np.block([[G1, np.zeros((tdim,1))], [np.zeros((pdim,1)), G2]])
Q = np.array([[sd_sys_t**2, 0], [0, sd_sys_s**2]])
# Observation model
H = np.block([[H1, H2]])
R = np.array([[sd_obs**2]])
# Define the state variables
# Allocate N+1 slots so initial values can be stored independently of the data
dim = tdim + pdim # dimension of the state vector
xp = np.zeros((N+1, dim, 1)) # mean of the one-step-ahead predictive distribution
xf = np.zeros((N+1, dim, 1)) # mean of the filtering distribution
Vp = np.zeros((N+1, dim, dim)) # covariance of the predictive distribution
Vf = np.zeros((N+1, dim, dim)) # covariance of the filtering distribution
K = np.zeros((N+1, dim, 1)) # Kalman gain
# (Note) this means the initial value of the predictive distribution is set to 0
y = train
for t in range(1, N+1):
    # One-step-ahead prediction
    xp[t] = F@xf[t-1]
    Vp[t] = F@Vf[t-1]@F.transpose() + G@[email protected]()
    # Filtering (measurement update)
    K[t] = Vp[t]@H.transpose()@np.linalg.inv(H@Vp[t]@H.transpose()+R)
    xf[t] = xp[t] + K[t]@(y[t-1]-H@xp[t])
    Vf[t] = (np.eye(dim)-K[t]@H)@Vp[t]
# Remove the initial values of the filtering distribution
xf = np.delete(xf, 0, 0)
Vf = np.delete(Vf, 0, 0)
# Extract the mean of the trend component
x_tr_mean = xf[:,0,0]
# Extract the mean of the periodic component
x_per_mean = xf[:,tdim,0]
# Extract the mean of the state (trend + periodic)
x_mean = xf[:,0,0] + xf[:,tdim,0]
# Visualize the results
start = 20
fig, ax = plt.subplots(3, 1, figsize=(10,15))
ax[0].plot(y[start:], 'ro', label='data')
ax[0].plot(x_mean[start:], 'g-', label='Kalman filter')
ax[0].set_xlabel('$t$')
ax[0].set_ylabel('$y$')
ax[0].legend()
ax[1].plot(x_tr_mean[start:], 'g-', label='trend component')
ax[1].set_xlabel('$t$')
ax[1].set_ylabel('$y$')
ax[1].legend()
ax[2].plot(x_per_mean[start:], 'b-', label='periodic component')
ax[2].set_xlabel('$t$')
ax[2].set_ylabel('$y$')
ax[2].legend()
plt.show()
###Output
_____no_output_____ |
05a-tools-titanic/archive/titanic-only_leg.ipynb | ###Markdown
 Titanic Survival Analysis **Authors:** Several public Kaggle Kernels, edits by Kevin Li & Alexander Fred Ojala. Install the xgboost package in your python environment; try:```$ conda install py-xgboost```
###Code
# You can also install the package by running the line below
# directly in your notebook
#!conda install py-xgboost --y
# No warnings
import warnings
warnings.filterwarnings('ignore') # Filter out warnings
# data analysis and wrangling
import pandas as pd
import numpy as np
import random as rnd
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB # Gaussian Naive Bayes
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier #stochastic gradient descent
from sklearn.tree import DecisionTreeClassifier
import xgboost as xgb
# Plot styling
sns.set(style='white', context='notebook', palette='deep')
plt.rcParams[ 'figure.figsize' ] = 9 , 5
# Special distribution plot (will be used later)
def plot_distribution( df , var , target , **kwargs ):
row = kwargs.get( 'row' , None )
col = kwargs.get( 'col' , None )
facet = sns.FacetGrid( df , hue=target , aspect=4 , row = row , col = col )
facet.map( sns.kdeplot , var , shade= True )
facet.set( xlim=( 0 , df[ var ].max() ) )
facet.add_legend()
plt.tight_layout()
###Output
_____no_output_____
###Markdown
References to material we won't cover in detail:* **Gradient Boosting:** http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/* **Naive Bayes:** http://scikit-learn.org/stable/modules/naive_bayes.html* **Perceptron:** http://aass.oru.se/~lilien/ml/seminars/2007_02_01b-Janecek-Perceptron.pdf Input Data
###Code
train_df = pd.read_csv('train.csv')
test_df = pd.read_csv('test.csv')
combine = [train_df, test_df]
# when we change train_df or test_df the objects in combine will also change
# (combine is only a pointer to the objects)
# combine is used to ensure whatever preprocessing is done
# on training data is also done on test data
###Output
_____no_output_____
###Markdown
Analyze Data:
###Code
print(train_df.columns.values) # seem to agree with the variable definitions above
# preview the data
train_df.head()
train_df.describe()
###Output
_____no_output_____
###Markdown
Comment on the Data: `PassengerId` does not contain any valuable information. `Survived`, `Pclass` (passenger class), `Age`, `SibSp` (siblings/spouses), `Parch` (parents/children) and `Fare` are numerical values -- so we don't need to transform them, but we might want to group them (i.e. create categorical variables). `Sex` and `Embarked` are categorical features that we need to map to integer values. `Name`, `Ticket` and `Cabin` might also contain valuable information. Preprocessing Data
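The categorical-to-integer mapping used repeatedly below boils down to a single `pandas` pattern; a minimal sketch on a toy frame (hypothetical values, mirroring the `Sex` mapping applied later in this notebook):

```python
import pandas as pd

toy = pd.DataFrame({'Sex': ['male', 'female', 'female']})
# Same idea as the notebook's mapping of Sex, Embarked and Title further down
toy['Sex'] = toy['Sex'].map({'female': 1, 'male': 0}).astype(int)
print(toy)
```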
###Code
# check dimensions of the train and test datasets
print("Shapes Before: (train) (test) = ", train_df.shape, test_df.shape)
print()
# Drop columns 'Ticket', 'Cabin', need to do it for both test and training
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
print("Shapes After: (train) (test) =", train_df.shape, test_df.shape)
# Check if there are null values in the datasets
print(train_df.isnull().sum())
print()
print(test_df.isnull().sum())
# from the Name column we will extract title of each passenger
# and save that in a column in the datasets called 'Title'
# if you want to match Titles or names with any other expression
# refer to this tutorial on regex in python:
# https://www.tutorialspoint.com/python/python_reg_expressions.htm
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
# We will check the count of different titles across the training and test dataset
pd.crosstab(train_df['Title'], train_df['Sex'])
# same for test
pd.crosstab(test_df['Title'], test_df['Sex'])
# We see common titles like Miss, Mrs, Mr,Master are dominant, we will
# correct some Titles to standard forms and replace the rarest titles
# with single name 'Rare'
for dataset in combine:
dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col',\
'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss') #Mademoiselle
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs') #Madame
train_df[['Title', 'Survived']].groupby(['Title']).mean()
# Survival chance for each title
sns.countplot(x='Survived', hue="Title", data=train_df, order=[1,0]);
# Map title string values to numbers so that we can make predictions
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in combine:
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
# Handle missing values
train_df.head()
# Drop the unnecessary Name column (we have the titles now)
train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
combine = [train_df, test_df]
train_df.shape, test_df.shape
# Map Sex to numerical categories
for dataset in combine:
dataset['Sex'] = dataset['Sex']. \
map( {'female': 1, 'male': 0} ).astype(int)
train_df.head()
# Guess values of age based on sex (row, male / female)
# and socioeconomic class (1st,2nd,3rd) of the passenger
guess_ages = np.zeros((2,3),dtype=int) #initialize
guess_ages
# Fill the NA's for the Age columns
# with "qualified guesses"
for idx,dataset in enumerate(combine):
if idx==0:
print('Working on Training Data set\n')
else:
print('-'*35)
print('Working on Test Data set\n')
print('Guess values of age based on sex and pclass of the passenger...')
for i in range(0, 2):
for j in range(0,3):
guess_df = dataset[(dataset['Sex'] == i) &(dataset['Pclass'] == j+1)]['Age'].dropna()
# Extract the median age for this group
# (less sensitive) to outliers
age_guess = guess_df.median()
# Convert random age float to int
guess_ages[i,j] = int(age_guess)
print('Guess_Age table:\n',guess_ages)
print ('\nAssigning age values to NAN age values in the dataset...')
for i in range(0, 2):
for j in range(0, 3):
dataset.loc[ (dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j+1),\
'Age'] = guess_ages[i,j]
dataset['Age'] = dataset['Age'].astype(int)
print()
print('Done!')
train_df.head()
train_df['AgeBand'] = pd.cut(train_df['Age'], 5)
train_df[['AgeBand', 'Survived']].groupby(['AgeBand'], as_index=False).mean().sort_values(by='AgeBand', ascending=True)
# Plot distributions of Age for passengers who survived or did not survive
plot_distribution( train_df , var = 'Age' , target = 'Survived' , row = 'Sex' )
# Change Age column to
# map Age ranges (AgeBands) to integer values of categorical type
for dataset in combine:
dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
dataset.loc[ dataset['Age'] > 64, 'Age']=4
train_df.head()
train_df = train_df.drop(['AgeBand'], axis=1)
combine = [train_df, test_df]
train_df.head()
for dataset in combine:
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
train_df[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=False).mean().sort_values(by='Survived', ascending=False)
sns.countplot(x='Survived', hue="FamilySize", data=train_df, order=[1,0])
for dataset in combine:
dataset['IsAlone'] = 0
dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
train_df[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean()
train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
test_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
combine = [train_df, test_df]
train_df.head()
# We can also create new features based on intuitive combinations
for dataset in combine:
dataset['Age*Class'] = dataset.Age * dataset.Pclass
train_df.loc[:, ['Age*Class', 'Age', 'Pclass']].head(8)
# To replace NaN values in 'Embarked', we will use the mode of the ports in 'Embarked'
# This will give us the most frequent port the passengers embarked from
freq_port = train_df.Embarked.dropna().mode()[0]
freq_port
# Fill NaN 'Embarked' Values in the datasets
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)
train_df[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean().sort_values(by='Survived', ascending=False)
sns.countplot(x='Survived', hue="Embarked", data=train_df, order=[1,0]);
# Map 'Embarked' values to integer values
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
train_df.head()
# Fill the NA values in the Fares column with the median
test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)
test_df.head()
# qcut will create ranges based on quartiles of the data (equal-frequency bins)
train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)
train_df[['FareBand', 'Survived']].groupby(['FareBand'], as_index=False).mean().sort_values(by='FareBand', ascending=True)
for dataset in combine:
dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
dataset['Fare'] = dataset['Fare'].astype(int)
train_df = train_df.drop(['FareBand'], axis=1)
combine = [train_df, test_df]
train_df.head(7)
# All features are approximately on the same scale
# no need for feature engineering / normalization
test_df.head(7)
# Check correlation between features
# (uncorrelated features are generally more powerful predictors)
colormap = plt.cm.viridis
plt.figure(figsize=(10,10))
plt.title('Pearson Correlation of Features', y=1.05, size=15)
sns.heatmap(train_df.astype(float).corr().round(2)\
,linewidths=0.1,vmax=1.0, square=True, cmap=colormap, linecolor='white', annot=True)
###Output
_____no_output_____
###Markdown
Your Task: Model, Predict, and Choose. Try using different classifiers to model and predict. Choose the best model from:* Logistic Regression* KNN * SVM* Naive Bayes Classifier* Decision Tree* Random Forest* Perceptron* XGBoost Classifier
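One compact way to compare several of these models on the same training data is a small cross-validation loop. The sketch below is hedged: it assumes the `X_train`/`Y_train` built in the next cell, uses only estimators already imported above, and is not the notebook's own evaluation (which fits each model separately further down):

```python
from sklearn.model_selection import cross_val_score

# Hedged sketch; run only after X_train and Y_train have been created in the next cell.
candidates = {
    'Logistic Regression': LogisticRegression(),
    'KNN': KNeighborsClassifier(n_neighbors=3),
    'SVM': SVC(),
    'Random Forest': RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X_train, Y_train, cv=5)  # accuracy by default
    print(f"{name}: mean CV accuracy {scores.mean():.3f}")
```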
###Code
X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()
X_train.shape, Y_train.shape, X_test.shape
# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log
# Support Vector Machines
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn
# Perceptron
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)
acc_perceptron
# XGBoost
gradboost = xgb.XGBClassifier(n_estimators=1000)
gradboost.fit(X_train, Y_train)
Y_pred = gradboost.predict(X_test)
acc_perceptron = round(gradboost.score(X_train, Y_train) * 100, 2)
acc_perceptron
# Random Forest
random_forest = RandomForestClassifier(n_estimators=1000)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
# Look at importance of features for the random forest
def plot_model_var_imp( model , X , y ):
imp = pd.DataFrame(
model.feature_importances_ ,
columns = [ 'Importance' ] ,
index = X.columns
)
imp = imp.sort_values( [ 'Importance' ] , ascending = True )
imp[ : 10 ].plot( kind = 'barh' )
print (model.score( X , y ))
plot_model_var_imp(random_forest, X_train, Y_train)
# How to create a Kaggle submission:
submission = pd.DataFrame({
"PassengerId": test_df["PassengerId"],
"Survived": Y_pred
})
submission.to_csv('titanic.csv', index=False)
###Output
_____no_output_____ |
level_one_solution-2.ipynb | ###Markdown
Load Amazon Data into Spark DataFrame
###Code
from pyspark import SparkFiles
url = "https://s3.amazonaws.com/amazon-reviews-pds/tsv/amazon_reviews_us_Video_Games_v1_00.tsv.gz"
spark.sparkContext.addFile(url)
video_games_df = spark.read.csv(SparkFiles.get("amazon_reviews_us_Video_Games_v1_00.tsv.gz"), sep="\t", header=True, inferSchema=True)
video_games_df.show()
###Output
+-----------+-----------+--------------+----------+--------------+--------------------+----------------+-----------+-------------+-----------+----+-----------------+--------------------+--------------------+-----------+
|marketplace|customer_id| review_id|product_id|product_parent| product_title|product_category|star_rating|helpful_votes|total_votes|vine|verified_purchase| review_headline| review_body|review_date|
+-----------+-----------+--------------+----------+--------------+--------------------+----------------+-----------+-------------+-----------+----+-----------------+--------------------+--------------------+-----------+
| US| 12039526| RTIS3L2M1F5SM|B001CXYMFS| 737716809|Thrustmaster T-Fl...| Video Games| 5| 0| 0| N| Y|an amazing joysti...|Used this for Eli...| 2015-08-31|
| US| 9636577| R1ZV7R40OLHKD|B00M920ND6| 569686175|Tonsee 6 buttons ...| Video Games| 5| 0| 0| N| Y|Definitely a sile...|Loved it, I didn...| 2015-08-31|
| US| 2331478|R3BH071QLH8QMC|B0029CSOD2| 98937668|Hidden Mysteries:...| Video Games| 1| 0| 1| N| Y| One Star|poor quality work...| 2015-08-31|
| US| 52495923|R127K9NTSXA2YH|B00GOOSV98| 23143350|GelTabz Performan...| Video Games| 3| 0| 0| N| Y|good, but could b...|nice, but tend to...| 2015-08-31|
| US| 14533949|R32ZWUXDJPW27Q|B00Y074JOM| 821342511|Zero Suit Samus a...| Video Games| 4| 0| 0| N| Y| Great but flawed.|Great amiibo, gre...| 2015-08-31|
| US| 2377552|R3AQQ4YUKJWBA6|B002UBI6W6| 328764615|Psyclone Recharge...| Video Games| 1| 0| 0| N| Y| One Star|The remote consta...| 2015-08-31|
| US| 17521011|R2F0POU5K6F73F|B008XHCLFO| 24234603|Protection for yo...| Video Games| 5| 0| 0| N| Y| A Must|I have a 2012-201...| 2015-08-31|
| US| 19676307|R3VNR804HYSMR6|B00BRA9R6A| 682267517| Nerf 3DS XL Armor| Video Games| 5| 0| 0| N| Y| Five Stars|Perfect, kids lov...| 2015-08-31|
| US| 224068| R3GZTM72WA2QH|B009EPWJLA| 435241890|One Piece: Pirate...| Video Games| 5| 0| 0| N| Y| Five Stars| Excelent| 2015-08-31|
| US| 48467989| RNQOY62705W1K|B0000AV7GB| 256572651|Playstation 2 Dan...| Video Games| 4| 0| 0| N| Y| Four Stars|Slippery but expe...| 2015-08-31|
| US| 106569|R1VTIA3JTYBY02|B00008KTNN| 384411423|Metal Arms: Glitc...| Video Games| 5| 0| 0| N| N| Five Stars|Love the game. Se...| 2015-08-31|
| US| 48269642|R29DOU8791QZL8|B000A3IA0Y| 472622859|72 Pin Connector ...| Video Games| 1| 0| 0| N| Y| Game will get stuck|Does not fit prop...| 2015-08-31|
| US| 52738710|R15DUT1VIJ9RJZ|B0053BQN34| 577628462|uDraw Gametablet ...| Video Games| 2| 0| 0| N| Y|We have tried it ...|This was way too ...| 2015-08-31|
| US| 10556786|R3IMF2MQ3OU9ZM|B002I0HIMI| 988218515|NBA 2K12(Covers M...| Video Games| 4| 0| 0| N| Y| Four Stars|Works great good ...| 2015-08-31|
| US| 2963837|R23H79DHOZTYAU|B0081EH12M| 770100932|New Trigger Grips...| Video Games| 1| 1| 1| N| Y|Now i have to buy...|It did not fit th...| 2015-08-31|
| US| 23092109| RIV24EQAIXA4O|B005FMLZQQ| 24647669|Xbox 360 Media Re...| Video Games| 5| 0| 0| N| Y| Five Stars|perfect lightweig...| 2015-08-31|
| US| 23091728|R3UCNGYDVN24YB|B002BSA388| 33706205|Super Mario Galaxy 2| Video Games| 5| 0| 0| N| Y| Five Stars| great| 2015-08-31|
| US| 10712640| RUL4H4XTTN2DY|B00BUSLSAC| 829667834|Nintendo 3DS XL -...| Video Games| 5| 0| 0| N| Y| Five Stars|Works beautifully...| 2015-08-31|
| US| 17455376|R20JF7Z4DHTNX5|B00KWF38AW| 110680188|Captain Toad: Tr...| Video Games| 5| 0| 0| N| Y| Five Stars|Kids loved the ga...| 2015-08-31|
| US| 14754850|R2T1AJ5MFI2260|B00BRQJYA8| 616463426|Lego Batman 2: DC...| Video Games| 4| 0| 0| N| Y| Four Stars| Goodngame| 2015-08-31|
+-----------+-----------+--------------+----------+--------------+--------------------+----------------+-----------+-------------+-----------+----+-----------------+--------------------+--------------------+-----------+
only showing top 20 rows
###Markdown
Size of Data
###Code
video_games_df.count()
###Output
_____no_output_____
###Markdown
Cleaned up DataFrames to match tables
###Code
from pyspark.sql.functions import to_date
# Review DataFrame
review_id_df = video_games_df.select(["review_id", "customer_id", "product_id", "product_parent", to_date("review_date", 'yyyy-MM-dd').alias("review_date")])
review_id_df.show()
products_df = video_games_df.select(["product_id", "product_title"]).drop_duplicates()
reviews_df = video_games_df.select(["review_id", "review_headline", "review_body"])
reviews_df.show(10)
customers_df = video_games_df.groupby("customer_id").agg({"customer_id": "count"}).withColumnRenamed("count(customer_id)", "customer_count")
customers_df.show()
vine_df = video_games_df.select(["review_id", "star_rating", "helpful_votes", "total_votes", "vine"])
vine_df.show(10)
###Output
+--------------+-----------+-------------+-----------+----+
| review_id|star_rating|helpful_votes|total_votes|vine|
+--------------+-----------+-------------+-----------+----+
| RTIS3L2M1F5SM| 5| 0| 0| N|
| R1ZV7R40OLHKD| 5| 0| 0| N|
|R3BH071QLH8QMC| 1| 0| 1| N|
|R127K9NTSXA2YH| 3| 0| 0| N|
|R32ZWUXDJPW27Q| 4| 0| 0| N|
|R3AQQ4YUKJWBA6| 1| 0| 0| N|
|R2F0POU5K6F73F| 5| 0| 0| N|
|R3VNR804HYSMR6| 5| 0| 0| N|
| R3GZTM72WA2QH| 5| 0| 0| N|
| RNQOY62705W1K| 4| 0| 0| N|
+--------------+-----------+-------------+-----------+----+
only showing top 10 rows
###Markdown
Push to AWS RDS instance
###Code
mode = "append"
jdbc_url="jdbc:postgresql://<endpoint>:5432/my_data_class_db"
config = {"user":"postgres", "password": "<password>", "driver":"org.postgresql.Driver"}
# Write review_id_df to table in RDS
review_id_df.write.jdbc(url=jdbc_url, table='review_id_table', mode=mode, properties=config)
# Write products_df to table in RDS
products_df.write.jdbc(url=jdbc_url, table='products', mode=mode, properties=config)
# Write customers_df to table in RDS
customers_df.write.jdbc(url=jdbc_url, table='customers', mode=mode, properties=config)
# Write vine_df to table in RDS
vine_df.write.jdbc(url=jdbc_url, table='vines', mode=mode, properties=config)
###Output
_____no_output_____ |
Code/model-ann-singleinput-day-main.ipynb | ###Markdown
**Please place all the data files in a folder named 'data' at the location where this notebook file is placed**
###Code
df = pd.read_csv('C:/Users/Hari/7COM1039-0109-2020 - Advanced Computer Science Masters Project/data/111.csv', header=None, parse_dates = [1])
df.head()
path = 'C:/Users/Hari/7COM1039-0109-2020 - Advanced Computer Science Masters Project/data/'
files = os.listdir(path)
for f in files:
df = pd.read_csv(path+f, header=None, parse_dates=[1])
print("************************")
print(f)
print(df.shape)
print(df[0].value_counts())
print(df[1].max())
print(df[1].min())
print(df[1].max() - df[1].min())
# selecting train files from 2016-01-01 to 2016-01-15
train_files = ['111', '211','311']
###Output
_____no_output_____
###Markdown
Creating Train Dataset
###Code
# create a function for reading and resampling the data files
def read_df(string):
df = pd.read_csv('C:/Users/Hari/7COM1039-0109-2020 - Advanced Computer Science Masters Project/data/'+string+'.csv', header=None, parse_dates=[1])
ID = 'id' + string[0]
time = 'time' + string [0]
cons = 'water_consumption' + string [0]
df.columns = [ID, time, cons, 'unknown']
df.drop(columns = 'unknown', axis = 1, inplace = True)
df.set_index(time, inplace = True)
df = df.resample('D').mean()
    print("Null values are observed at {}".format(df[df.isnull().any(axis =1)].index))
    # use bfill to replace NaN values where data is not found
df = df.fillna(method = 'bfill')
df[ID] = df[ID].astype(int)
print(df.dtypes)
return df
df1 = read_df('111')
print(df1.shape)
df2 = read_df('211')
print(df2.shape)
df3 = read_df('311')
print(df3.shape)
# df1_final = pd.concat([df1, df4])
# df2_final = pd.concat([df2, df5])
# df3_final = pd.concat([df3, df6])
# Concatenate all the training data files.
train_df = pd.concat([df1, df2, df3], axis = 1)
train_df
train_df['cum_cons'] = train_df['water_consumption1'] +train_df['water_consumption2']+train_df['water_consumption3']
train_df_cum = train_df.loc[:, ['cum_cons']]
print(train_df_cum.shape)
train_df_cum.head()
train_df_cum.info()
train = train_df[['cum_cons']].copy()
type(train)
import matplotlib.pyplot as plt
plt.figure(figsize=(14,8))
plt.plot(train)
plt.show()
train.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 15 entries, 2016-01-01 to 2016-01-15
Freq: D
Data columns (total 1 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 cum_cons 15 non-null float64
dtypes: float64(1)
memory usage: 796.0 bytes
###Markdown
Creating Test Dataset
###Code
df4 = read_df('112')
print(df4.shape)
df5 = read_df('212')
print(df5.shape)
df6 = read_df('312')
print(df6.shape)
# Concatenate all the test data files.
test_df = pd.concat([df4, df5, df6], axis = 1)
test_df
# Adding up all the test water consumption data together
test_df['cum_cons'] = test_df['water_consumption1'] +test_df['water_consumption2']+test_df['water_consumption3']
test_df_cum = test_df.loc[:, ['cum_cons']]
print(test_df_cum.shape)
test_df_cum
test = test_df[['cum_cons']].copy()
import matplotlib.pyplot as plt
plt.figure(figsize=(14,8))
plt.plot(test)
plt.show()
import sklearn
from sklearn.preprocessing import MinMaxScaler
scale = MinMaxScaler(feature_range=(0, 1))
scale.fit(train)
train = scale.transform(train)
test = scale.transform(test)
import numpy as np
def datasetCreation(data, lback=1):
X, Y = list(), list()
for i in range(len(data)-lback-1):
a = data[i:(i+lback), 0]
X.append(a)
Y.append(data[i + lback, 0])
return np.array(X), np.array(Y)
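# Tiny sanity check of the sliding-window shapes this helper produces (toy array, lback=1):
_x_demo, _y_demo = datasetCreation(np.array([[0.1], [0.2], [0.3], [0.4]]), 1)
print(_x_demo, _y_demo)  # expected: X rows [0.1], [0.2] and Y values 0.2, 0.3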
trainX, trainY = datasetCreation(train, 1)
testX, testY = datasetCreation(test, 1)
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
# X_train = np.reshape(X_train, (X_train.shape[0], 1, X_train.shape[1]))
# X_test = np.reshape(X_test, (X_test.shape[0], 1, X_test.shape[1]))
import numpy as np
np.random.seed(10)
###Output
_____no_output_____
###Markdown
ANN
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, LSTM
model = Sequential()
model.add(Dense(12, input_dim=1, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(4, activation='relu'))
model.add(Dense(2, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='mean_squared_error', optimizer='adam')
history = model.fit(trainX, trainY, epochs=300)
model.summary()
loss_per_epoch = history.history['loss']
import matplotlib.pyplot as plt
plt.plot(range(len(loss_per_epoch)),loss_per_epoch)
plt.title('Model loss of ANN')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train'], loc='upper left')
tr_pred = model.predict(trainX)
tr_pred = scale.inverse_transform(tr_pred)
trainY = scale.inverse_transform([trainY])
trainY = trainY.T
import math
from sklearn.metrics import mean_squared_error
tr_rmse = math.sqrt(mean_squared_error(trainY, tr_pred))
tr_rmse
plt.plot(trainY, label='Expected')
plt.plot(tr_pred, label='Predicted')
plt.legend()
plt.show()
te_pred = model.predict(testX)
te_pred = scale.inverse_transform(te_pred)
testY = scale.inverse_transform([testY])
testY = testY.T
te_rmse = math.sqrt(mean_squared_error(testY, te_pred))
te_rmse
plt.plot(testY, label='Expected')
plt.plot(te_pred, label='Predicted')
plt.legend()
plt.show()
###Output
_____no_output_____ |
Day-13/exercise-cross-validation.ipynb | ###Markdown
**This notebook is an exercise in the [Intermediate Machine Learning](https://www.kaggle.com/learn/intermediate-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/cross-validation).**--- In this exercise, you will leverage what you've learned to tune a machine learning model with **cross-validation**. SetupThe questions below will give you feedback on your work. Run the following cell to set up the feedback system.
###Code
# Set up code checking
import os
if not os.path.exists("../input/train.csv"):
os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")
os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex5 import *
print("Setup Complete")
###Output
Setup Complete
###Markdown
You will work with the [Housing Prices Competition for Kaggle Learn Users](https://www.kaggle.com/c/home-data-for-ml-course) from the previous exercise. Run the next code cell without changes to load the training and test data in `X` and `X_test`. For simplicity, we drop categorical variables.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
train_data = pd.read_csv('../input/train.csv', index_col='Id')
test_data = pd.read_csv('../input/test.csv', index_col='Id')
# Remove rows with missing target, separate target from predictors
train_data.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = train_data.SalePrice
train_data.drop(['SalePrice'], axis=1, inplace=True)
# Select numeric columns only
# numeric_cols = [cname for cname in train_data.columns if train_data[cname].dtype in ['int64', 'float64']]
numeric_cols = train_data.select_dtypes(include=['int', 'float']).columns
X = train_data[numeric_cols].copy()
X_test = test_data[numeric_cols].copy()
###Output
_____no_output_____
###Markdown
Use the next code cell to print the first several rows of the data.
###Code
X.head()
###Output
_____no_output_____
###Markdown
So far, you've learned how to build pipelines with scikit-learn. For instance, the pipeline below will use [`SimpleImputer()`](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) to replace missing values in the data, before using [`RandomForestRegressor()`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) to train a random forest model to make predictions. We set the number of trees in the random forest model with the `n_estimators` parameter, and setting `random_state` ensures reproducibility.
###Code
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
my_pipeline = Pipeline(steps=[
('preprocessor', SimpleImputer()),
('model', RandomForestRegressor(n_estimators=50, random_state=0))
])
###Output
_____no_output_____
###Markdown
You have also learned how to use pipelines in cross-validation. The code below uses the [`cross_val_score()`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) function to obtain the mean absolute error (MAE), averaged across five different folds. Recall we set the number of folds with the `cv` parameter.
###Code
from sklearn.model_selection import cross_val_score
# Multiply by -1 since sklearn calculates *negative* MAE
scores = -1 * cross_val_score(my_pipeline, X, y,
cv=5,
scoring='neg_mean_absolute_error')
print("Average MAE score:", scores.mean())
###Output
Average MAE score: 18276.410356164386
###Markdown
Step 1: Write a useful functionIn this exercise, you'll use cross-validation to select parameters for a machine learning model.Begin by writing a function `get_score()` that reports the average (over three cross-validation folds) MAE of a machine learning pipeline that uses:- the data in `X` and `y` to create folds,- `SimpleImputer()` (with all parameters left as default) to replace missing values, and- `RandomForestRegressor()` (with `random_state=0`) to fit a random forest model.The `n_estimators` parameter supplied to `get_score()` is used when setting the number of trees in the random forest model.
###Code
def get_score(n_estimators):
"""Return the average MAE over 3 CV folds of random forest model.
Keyword argument:
n_estimators -- the number of trees in the forest
"""
    # Define pipeline
my_pipeline = Pipeline(steps=[
('preprocessor', SimpleImputer()), # missing_values=np.nan, strategy='mean'
('model', RandomForestRegressor(n_estimators=n_estimators, random_state=0))
])
# Define cross-validation
scores = -1 * cross_val_score(my_pipeline, X, y,
cv=3,
scoring='neg_mean_absolute_error')
return scores.mean()
# Check your answer
step_1.check()
# Lines below will give you a hint or solution code
#step_1.hint()
#step_1.solution()
###Output
_____no_output_____
###Markdown
Step 2: Test different parameter valuesNow, you will use the function that you defined in Step 1 to evaluate the model performance corresponding to eight different values for the number of trees in the random forest: 50, 100, 150, ..., 300, 350, 400.Store your results in a Python dictionary `results`, where `results[i]` is the average MAE returned by `get_score(i)`.
###Code
results = {n_estimators: get_score(n_estimators) for n_estimators in range(50, 401, 50)}
# Check your answer
step_2.check()
# Lines below will give you a hint or solution code
#step_2.hint()
#step_2.solution()
###Output
_____no_output_____
###Markdown
Use the next cell to visualize your results from Step 2. Run the code without changes.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(list(results.keys()), list(results.values()))
plt.show()
###Output
_____no_output_____
###Markdown
Step 3: Find the best parameter valueGiven the results, which value for `n_estimators` seems best for the random forest model? Use your answer to set the value of `n_estimators_best`.
###Code
n_estimators_best = 200
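# Equivalent programmatic choice (added note): n_estimators_best = min(results, key=results.get)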
# Check your answer
step_3.check()
# Lines below will give you a hint or solution code
#step_3.hint()
#step_3.solution()
###Output
_____no_output_____
###Markdown
**This notebook is an exercise in the [Intermediate Machine Learning](https://www.kaggle.com/learn/intermediate-machine-learning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/cross-validation).**--- In this exercise, you will leverage what you've learned to tune a machine learning model with **cross-validation**. SetupThe questions below will give you feedback on your work. Run the following cell to set up the feedback system.
###Code
# Set up code checking
import os
if not os.path.exists("./input/train.csv"):
os.symlink("./input/home-data-for-ml-course/train.csv", "./input/train.csv")
os.symlink("./input/home-data-for-ml-course/test.csv", "./input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex5 import *
print("Setup Complete")
###Output
Setup Complete
###Markdown
You will work with the [Housing Prices Competition for Kaggle Learn Users](https://www.kaggle.com/c/home-data-for-ml-course) from the previous exercise. Run the next code cell without changes to load the training and validation sets in `X_train`, `X_valid`, `y_train`, and `y_valid`. The test set is loaded in `X_test`.For simplicity, we drop categorical variables.
###Code
import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
train_data = pd.read_csv('./input/train.csv', index_col='Id')
test_data = pd.read_csv('./input/test.csv', index_col='Id')
# Remove rows with missing target, separate target from predictors
train_data.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = train_data.SalePrice
train_data.drop(['SalePrice'], axis=1, inplace=True)
# Select numeric columns only
numeric_cols = [cname for cname in train_data.columns if train_data[cname].dtype in ['int64', 'float64']]
X = train_data[numeric_cols].copy()
X_test = test_data[numeric_cols].copy()
###Output
_____no_output_____
###Markdown
Use the next code cell to print the first several rows of the data.
###Code
X.head()
###Output
_____no_output_____
###Markdown
So far, you've learned how to build pipelines with scikit-learn. For instance, the pipeline below will use [`SimpleImputer()`](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html) to replace missing values in the data, before using [`RandomForestRegressor()`](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html) to train a random forest model to make predictions. We set the number of trees in the random forest model with the `n_estimators` parameter, and setting `random_state` ensures reproducibility.
###Code
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
my_pipeline = Pipeline(steps=[
('preprocessor', SimpleImputer()),
('model', RandomForestRegressor(n_estimators=50, random_state=0))
])
###Output
_____no_output_____
###Markdown
You have also learned how to use pipelines in cross-validation. The code below uses the [`cross_val_score()`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) function to obtain the mean absolute error (MAE), averaged across five different folds. Recall we set the number of folds with the `cv` parameter.
###Code
from sklearn.model_selection import cross_val_score
# Multiply by -1 since sklearn calculates *negative* MAE
scores = -1 * cross_val_score(my_pipeline, X, y,
cv=5,
scoring='neg_mean_absolute_error')
print("Average MAE score:", scores.mean())
###Output
Average MAE score: 18276.410356164386
###Markdown
Step 1: Write a useful functionIn this exercise, you'll use cross-validation to select parameters for a machine learning model.Begin by writing a function `get_score()` that reports the average (over three cross-validation folds) MAE of a machine learning pipeline that uses:- the data in `X` and `y` to create folds,- `SimpleImputer()` (with all parameters left as default) to replace missing values, and- `RandomForestRegressor()` (with `random_state=0`) to fit a random forest model.The `n_estimators` parameter supplied to `get_score()` is used when setting the number of trees in the random forest model.
###Code
def get_score(n_estimators):
"""Return the average MAE over 3 CV folds of random forest model.
Keyword argument:
n_estimators -- the number of trees in the forest
"""
my_pipeline = Pipeline(steps=[('preprocessor', SimpleImputer()),
('model', RandomForestRegressor(n_estimators=n_estimators,
random_state=0))
])
scores = -1 * cross_val_score(my_pipeline, X, y, cv=3, scoring='neg_mean_absolute_error')
return scores.mean()
# Check your answer
step_1.check()
# Lines below will give you a hint or solution code
step_1.hint()
step_1.solution()
###Output
_____no_output_____
###Markdown
Step 2: Test different parameter valuesNow, you will use the function that you defined in Step 1 to evaluate the model performance corresponding to eight different values for the number of trees in the random forest: 50, 100, 150, ..., 300, 350, 400.Store your results in a Python dictionary `results`, where `results[i]` is the average MAE returned by `get_score(i)`.
###Code
results = {}
for i in range(50, 401, 50):
results[i]=get_score(i)
# Check your answer
step_2.check()
# Lines below will give you a hint or solution code
step_2.hint()
step_2.solution()
###Output
_____no_output_____
###Markdown
Use the next cell to visualize your results from Step 2. Run the code without changes.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(list(results.keys()), list(results.values()))
plt.show()
###Output
_____no_output_____
###Markdown
Step 3: Find the best parameter valueGiven the results, which value for `n_estimators` seems best for the random forest model? Use your answer to set the value of `n_estimators_best`.
###Code
n_estimators_best = min(results, key=results.get)
# Check your answer
step_3.check()
# Lines below will give you a hint or solution code
step_3.hint()
step_3.solution()
###Output
_____no_output_____ |
_ipynb/Machine_Learning_Project_CheckList.ipynb | ###Markdown
 โ ๋จธ์ ๋ฌ๋ ํ๋ก์ ํธ ์ฒดํฌ๋ฆฌ์คํธ (8๋จ๊ณ) > A. ๋ฌธ์ ๋ฅผ ์ ์ํ๊ณ ํฐ ๊ทธ๋ฆผ ๊ทธ๋ฆฌ๊ธฐ >> - [ ] **1. ๋ชฉํ๋ฅผ ๋น์ฆ๋์ค ์ฉ์ด๋ก ์ ์ํฉ๋๋ค.** >> - [ ] **2. ์ด ์๋ฃจ์
์ ์ด๋ป๊ฒ ์ฌ์ฉ๋ ๊ฒ์ธ๊ฐ?** >> - [ ] **3. (๋ง์ฝ ์๋ค๋ฉด) ํ์ฌ ์๋ฃจ์
์ด๋ ์ฐจ์ ์ฑ
์ ๋ฌด์์ธ๊ฐ?** >> - [ ] **4. ์ด๋ค ๋ฌธ์ ๋ผ๊ณ ์ ์ํ ์ ์๋? (์ง๋/๋น์ง๋, ์จ๋ผ์ธ/์คํ๋ผ์ธ ๋ฑ)** >> - [ ] **5. ์ฑ๋ฅ์ ์ด๋ป๊ฒ ์ธก์ ํด์ผ ํ๋?** >> - [ ] **6. ์ฑ๋ฅ ์งํ๊ฐ ๋น์ฆ๋์ค ๋ชฉํ์ ์ฐ๊ฒฐ๋์ด ์๋?** >> - [ ] **7. ๋น์ฆ๋์ค ๋ชฉํ์ ๋๋ฌํ๊ธฐ ์ํด ํ์ํ ์ต์ํ์ ์ฑ๋ฅ์ ์ผ๋ง์ธ๊ฐ?** >> - [ ] **8. ๋น์ทํ ๋ฌธ์ ๊ฐ ์๋? ** >> - [ ] **9. ํด๋น ๋ถ์ผ์ ์ ๋ฌธ๊ฐ๊ฐ ์๋?** >> - [ ] **10. ์๋์ผ๋ก ๋ฌธ์ ๋ฅผ ํด๊ฒฐํ๋ ๋ฐฉ๋ฒ์ ๋ฌด์์ธ๊ฐ?** >> - [ ] **11. ์ฌ๋ฌ๋ถ ํน์ ๋ค๋ฅธ ์ฌ๋์ด ์ธ์ด ๊ฐ์ ์ ๋์ดํฉ๋๋ค.** >> - [ ] **12. ๊ฐ๋ฅํ๋ฉด ๊ฐ์ ์ ๊ฒ์ฆํฉ๋๋ค. ** > B. ๋ฐ์ดํฐ๋ฅผ ์์งํฉ๋๋ค. >> Note >> ์๋ก์ด ๋ฐ์ดํฐ๋ฅผ ์ฝ๊ฒ ์ป์ ์ ์๋๋ก ์ต๋ํ ์๋ํํ์ธ์. >> - [ ] **1. ํ์ํ ๋ฐ์ดํฐ์ ์์ ๋์ดํฉ๋๋ค. ** >> - [ ] **2. ๋ฐ์ดํฐ๋ฅผ ์ป์ ์ ์๋ ๊ณณ์ ์ฐพ์ ๊ธฐ๋กํฉ๋๋ค.** >> - [ ] **3. ์ผ๋ง๋ ๋ง์ ๊ณต๊ฐ์ด ํ์ํ์ง ํ์ธํฉ๋๋ค.** >> - [ ] **4. ๋ฒ๋ฅ ์์ ์๋ฌด๊ฐ ์๋์ง ํ์ธํ๊ณ ํ์ํ๋ค๋ฉด ์ธ๊ฐ๋ฅผ ๋ฐ์ต๋๋ค.** >> - [ ] **5. ์ ๊ทผ ๊ถํ์ ํ๋ํฉ๋๋ค.** >> - [ ] **6. ์์
ํ๊ฒฝ์ ๋ง๋ญ๋๋ค.(์ถฉ๋ถํ ์ ์ฅ ๊ณต๊ฐ์ผ๋ก)** >> - [ ] **7. ๋ฐ์ดํฐ๋ฅผ ์์งํฉ๋๋ค. ** >> - [ ] **8. ๋ฐ์ดํฐ๋ฅผ ์กฐ์ํ๊ธฐ ํธ๋ฆฌํ ํํ๋ก ๋ณํํฉ๋๋ค. (๋ฐ์ดํฐ ์์ฒด๋ฅผ ๋ฐ๊พธ๋๊ฒ ์๋๋๋ค)** >> - [ ] **9. ๋ฏผ๊ฐํ ์ ๋ณด๊ฐ ์ญ์ ๋์๊ฑฐ๋ ๋ณดํธ๋์๋์ง ๊ฒ์ฆํฉ๋๋ค. (์๋ฅผ ๋ค์ด ๊ฐ์ธ์ ๋ณด ๋น์๋ณํ)** >> - [ ] **10. ๋ฐ์ดํฐ์ ํฌ๊ธฐ์ ํ์
(์๊ณ์ด, ํ๋ณธ, ์ง๋ฆฌ์ ๋ณด)์ ํ์ธํฉ๋๋ค. ** >> - [ ] **11. ํ
์คํธ ์ธํธ๋ฅผ ์ํ๋งํ์ฌ ๋ฐ๋ก ๋ผ์ด๋๊ณ ์ ๋ ๋ค์ฌ๋ค๋ณด์ง ์์ต๋๋ค. (๋ฐ์ดํฐ ์ผํ ๊ธ์ง!)** > C. ๋ฐ์ดํฐ๋ฅผ ํ์ํฉ๋๋ค. >> Note >> ์ด ๋จ๊ณ์์๋ ํด๋น ๋ถ์ผ์ ์ ๋ฌธ๊ฐ์๊ฒ ์กฐ์ธ์ ๊ตฌํ์ธ์. >> - [ ] **1. ๋ฐ์ดํฐ ํ์์ ์ํด ๋ณต์ฌ๋ณธ์ ์์ฑํฉ๋๋ค. (ํ์ํ๋ฉด ์ํ๋งํ์ฌ ์ ์ ํ ํฌ๊ธฐ๋ก ์ค์
๋๋ค.) ** >> - [ ] **2. ๋ฐ์ดํฐ ํ์ ๊ฒฐ๊ณผ๋ฅผ ์ ์ฅํ๊ธฐ ์ํด ์ฃผํผํฐ ๋
ธํธ๋ถ์ ๋ง๋ญ๋๋ค.** >> - [ ] **3. ๊ฐ ํน์ฑ์ ํน์ง์ ์กฐ์ฌํฉ๋๋ค.** >>> - [ ] *์ด๋ฆ* >>> - [ ] *ํ์
(๋ฒ์ฃผํ, ์ ์/๋ถ๋์์, ์ต๋๊ฐ/์ต์๊ฐ ์ ๋ฌด, ํ
์คํธ, ๊ตฌ์กฐ์ ์ธ ๋ฌธ์์ด ๋ฑ)* >>> - [ ] *๋๋ฝ๋ ๊ฐ์ ๋น์จ (%)* >>> - [ ] *์ก์ ์ ๋์ ์ก์ ์ข
๋ฅ (ํ๋ฅ ์ , ์ด์์น, ๋ฐ์ฌ๋ฆผ ์๋ฌ ๋ฑ) * >>> - [ ] *์ด ์์
์ ์ ์ฉํ ์ ๋ * >>> - [ ] *๋ถํฌ ํํ (๊ฐ์ฐ์์, ๊ท ๋ฑ, ๋ก๊ทธ ๋ฑ)* >> - [ ] **4. ์ง๋ ํ์ต ์์
์ด๋ผ๋ฉด ํ๊น ์์ฑ์ ๊ตฌ๋ถํฉ๋๋ค.** >> - [ ] **5. ๋ฐ์ดํฐ๋ฅผ ์๊ฐํํฉ๋๋ค.** >> - [ ] **6. ํน์ฑ ๊ฐ์ ์๊ด๊ด๊ณ๋ฅผ ์กฐ์ฌํฉ๋๋ค.** >> - [ ] **7. ์๋์ผ๋ก ๋ฌธ์ ๋ฅผ ํด๊ฒฐํ ์ ์๋ ๋ฐฉ๋ฒ์ ์ฐพ์๋ด
๋๋ค.** >> - [ ] **8. ์ ์ฉ์ด ๊ฐ๋ฅํ ๋ณํ์ ์ฐพ์ต๋๋ค.** >> - [ ] **9. ์ถ๊ฐ๋ก ์ ์ฉํ ๋ฐ์ดํฐ๋ฅผ ์ฐพ์ต๋๋ค. (์๋ค๋ฉด '๋ฐ์ดํฐ๋ฅผ ์์งํฉ๋๋ค'๋ก ๋์๊ฐ๋๋ค.)** >> - [ ] **10. ์กฐ์ฌํ ๊ฒ์ ๊ธฐ๋กํฉ๋๋ค.** > D. ๋ฐ์ดํฐ๋ฅผ ์ค๋นํฉ๋๋ค. >> Note >> ๋ฐ์ดํฐ์ ๋ณต์ฌ๋ณธ์ผ๋ก ์์
ํฉ๋๋ค. >> Note >> ์ ์ฉํ ๋ชจ๋ ๋ฐ์ดํฐ ๋ณํ์ ํจ์๋ก ๋ง๋ญ๋๋ค. >> ํจ์ ๋ณํ ์ด์ >> โฌ โฌ โฌ โฌ โฌ โฌ โฌ โฌ >> *I. ๋ค์์ ์๋ก์ด ๋ฐ์ดํฐ๋ฅผ ์ป์ ๋ ๋ฐ์ดํฐ ์ค๋น๋ฅผ ์ฝ๊ฒ ํ ์ ์๊ธฐ ๋๋ฌธ์
๋๋ค.* >> *II. ๋ค์ ํ๋ก์ ํธ์ ์ด ๋ณํ์ ์ฝ๊ฒ ์ ์ฉํ ์ ์๊ธฐ ๋๋ฌธ์
๋๋ค. * >> *III. ํ
์คํธ ์ธํธ๋ฅผ ์ ์ ํ๊ณ ๋ณํํ๊ธฐ ์ํด์์
๋๋ค.* >> *IV. ์๋ฃจ์
์ด ์๋น์ค์ ํฌ์
๋ ํ ์๋ก์ด ๋ฐ์ดํฐ ์ํ์ ์ ์ ํ๊ณ ๋ณํํ๊ธฐ ์ํด์์
๋๋ค.* >> *V. ํ์ดํผํ๋ผ๋ฏธํฐ๋ก ์ค๋น ๋จ๊ณ๋ฅผ ์ฝ๊ฒ ์ ํํ๊ธฐ ์ํด์์
๋๋ค.* >> - [ ] **1. ๋ฐ์ดํฐ ์ ์ ** >>> - [ ] *์ด์์น๋ฅผ ์์ ํ๊ฑฐ๋ ์ญ์ ํฉ๋๋ค.(์ ํ์ฌํญ)* >>> - [ ] *๋๋ฝ๋ ๊ฐ์ ์ฑ์ฐ๊ฑฐ๋(0, ํ๊ท , ์ค๊ฐ๊ฐ) ๊ทธ ํ(๋๋ ์ด)์ ์ ๊ฑฐํฉ๋๋ค.* >> - [ ] **2. ํน์ฑ ์ ํ(์ ํ์ฌํญ)** >> - [ ] **3. ์ ์ ํ ํน์ฑ ๊ณตํ ** >>> - [ ] *์ฐ์ ํน์ฑ ์ด์ฐํํ๊ธฐ* >>> - [ ] *ํน์ฑ ๋ถํดํ๊ธฐ* >>> - [ ] *๊ฐ๋ฅํ ํน์ฑ ๋ณํ ์ถ๊ฐํ๊ธฐ(log, sqrt, ^2)* >>> - [ ] *ํน์ฑ์ ์กฐํฉํด ๊ฐ๋ฅ์ฑ ์๋ ์๋ก์ด ํน์ฑ ๋ง๋ค๊ธฐ* >> - [ ] **4. ํน์ฑ ์ค์ผ์ผ ์กฐ์ (ํ์คํ ๋๋ ์ ๊ทํ)** > E. ๊ฐ๋ฅ์ฑ ์๋ ๋ช ๊ฐ์ ๋ชจ๋ธ์ ๊ณ ๋ฆ
๋๋ค. >> Note >> ๋ฐ์ดํฐ๊ฐ ๋งค์ฐ ํฌ๋ฉด ์ฌ๋ฌ ๊ฐ์ง ๋ชจ๋ธ์ ์ผ์ ์๊ฐ ์์ ํ๋ จ์ํฌ ์ ์๋๋ก ๋ฐ์ดํฐ๋ฅผ ์ํ๋งํ์ฌ ์์ ํ๋ จ ์ธํธ๋ฅผ ๋ง๋๋ ๊ฒ์ด ์ข์ต๋๋ค. (์ด๋ ๊ฒ ํ๋ฉด ๊ท๋ชจ๊ฐ ํฐ ์ ๊ฒฝ๋ง์ด๋ ๋๋คํฌ๋ ์คํธ ๊ฐ์ ๋ณต์กํ ๋ชจ๋ธ์ ๋ง๋ค๊ธฐ ์ด๋ ต์ต๋๋ค.) >> Note >> ์ฌ๊ธฐ์์๋ ๊ฐ๋ฅํ ํ ์ต๋๋ก ์ด ๋จ๊ณ๋ค์ ์๋ํํฉ๋๋ค. >> - [ ] **1. ์ฌ๋ฌ ์ข
๋ฅ์ ๋ชจ๋ธ์ ๊ธฐ๋ณธ ๋งค๊ฒ๋ณ์๋ฅผ ์ฌ์ฉํด ์ ์ํ๊ฒ ๋ง์ด ํ๋ จ์์ผ๋ด
๋๋ค. (์๋ฅผ ๋ค๋ฉด ์ ํ ๋ชจ๋ธ, ๋์ด๋ธ ๋ฒ ์ด์ง, SVM, ๋๋ค ํฌ๋ ์คํธ, ์ ๊ฒฝ๋ง)** >> - [ ] **2. ์ฑ๋ฅ์ ์ธก์ ํ๊ณ ๋น๊ตํฉ๋๋ค.** >>> - [ ] *๊ฐ ๋ชจ๋ธ์์ N-๊ฒน ๊ต์ฐจ๊ฒ์ฆ์ ์ฌ์ฉํด N๊ฐ ํด๋์ ์ฑ๋ฅ์ ๋ํ ํ๊ท ๊ณผ ํ์คํธ์ฐจ๋ฅผ ๊ณ์ฐํฉ๋๋ค. * >> - [ ] **3. ๊ฐ ์๊ณ ๋ฆฌ์ฆ์์ ๊ฐ์ฅ ๋๋๋ฌ์ง ๋ณ์๋ฅผ ๋ถ์ํฉ๋๋ค. ** >> - [ ] **4. ๋ชจ๋ธ์ด ๋ง๋๋ ์๋ฌ์ ์ข
๋ฅ๋ฅผ ๋ถ์ํฉ๋๋ค. ** >>> - [ ] *์ด ์๋ฌ๋ฅผ ํผํ๊ธฐ ์ํด ์ฌ๋์ด ์ฌ์ฉํ๋ ๋ฐ์ดํฐ๋ ๋ฌด์์ธ๊ฐ์?* >> - [ ] **5. ๊ฐ๋จํ ํน์ฑ ์ ํ๊ณผ ํน์ฑ ๊ณตํ ๋จ๊ณ๋ฅผ ์ํํฉ๋๋ค. ** >> - [ ] **6. ์ด์ ๋ค์ฏ ๋จ๊ณ๋ฅผ ํ ๋ฒ์ด๋ ๋ ๋ฒ ๋น ๋ฅด๊ฒ ๋ฐ๋ณตํด๋ด
๋๋ค. ** >> - [ ] **7. ๋ค๋ฅธ ์ข
๋ฅ์ ์๋ฌ๋ฅผ ๋ง๋๋ ๋ชจ๋ธ์ ์ค์ฌ์ผ๋ก ๊ฐ์ฅ ๊ฐ๋ฅ์ฑ์ด ๋์ ๋ชจ๋ธ์ ์ธ ๊ฐ์์ ๋ค์ฏ๊ฐ ์ ๋ ์ถ๋ฆฝ๋๋ค. ** > F. ์์คํ
์ ์ธ๋ฐํ๊ฒ ํ๋ํฉ๋๋ค. >> Note >> ์ด ๋จ๊ณ์์๋ ๊ฐ๋ฅํ ํ ๋ง์ ๋ฐ์ดํฐ๋ฅผ ์ฌ์ฉํ๋ ๊ฒ์ด ์ข์ต๋๋ค. ํนํ ์ธ๋ถ ํ๋์ ๋ง์ง๋ง ๋จ๊ณ๋ก ๊ฐ์๋ก ๊ทธ๋ ์ต๋๋ค. >> Note >> ์ธ์ ๋ ๊ทธ๋ฌ๋ฏ์ด ์๋ํํฉ๋๋ค. >> - [ ] **1. ๊ต์ฐจ ๊ฒ์ฆ์ ์ฌ์ฉํด ํ์ดํผํ๋ผ๋ฏธํฐ๋ฅผ ์ ๋ฐ ํ๋ํฉ๋๋ค. ** >>> - [ ] *ํ์ดํผํ๋ผ๋ฏธํฐ๋ฅผ ์ฌ์ฉํด ๋ฐ์ดํฐ ๋ณํ์ ์ ํํ์ธ์. ํนํ ํ์ ์ด ์๋ ๊ฒฝ์ฐ ์ด๋ ๊ฒ ํด์ผ ํฉ๋๋ค. (์๋ฅผ ๋ค์ด ๋๋ฝ๋ ๊ฐ์ 0์ผ๋ก ์ฑ์ธ ๊ฒ์ธ๊ฐ ์๋๋ฉด ์ค๊ฐ๊ฐ์ผ๋ก ์ฑ์ธ ๊ฒ์ธ๊ฐ? ์๋๋ฉด ๊ทธ ํ์ ๋ฒ๋ฆด ๊ฒ์ธ๊ฐ?)* >>> - [ ] *ํ์ํ ํ์ดํผ ํ๋ผ๋ฏธํฐ์ ๊ฐ์ด ๋งค์ฐ ์ ์ง ์๋ค๋ฉด ๊ทธ๋ฆฌ๋ ์์น๋ณด๋ค ๋๋ค ์์น๋ฅผ ์ฌ์ฉํ์ธ์. ํ๋ จ ์๊ฐ์ด ์ค๋ ๊ฑธ๋ฆฐ๋ค๋ฉด ๋ฒ ์ด์ง์ ์ต์ ํ ๋ฐฉ๋ฒ์ ์ฌ์ฉํ๋ ๊ฒ์ด ์ข์ต๋๋ค.(์๋ฅผ ๋ค๋ฉด ๊ฐ์ฐ์์ ํ๋ก์ธ์ค ์ฌ์ ํ๋ฅ ์ ์ฌ์ฉํฉ๋๋ค.)* >> - [ ] **2. ์์๋ธ ๋ฐฉ๋ฒ์ ์๋ํด๋ณด์ธ์. ์ต๊ณ ์ ๋ชจ๋ธ๋ค์ ์ฐ๊ฒฐํ๋ฉด ์ข
์ข
๊ฐ๋ณ ๋ชจ๋ธ์ ์คํํ๋ ๊ฒ๋ณด๋ค ๋ ์ฑ๋ฅ์ด ๋์ต๋๋ค.** >> - [ ] **3. ์ต์ข
๋ชจ๋ธ์ ํ์ ์ด ์ ํ ์ผ๋ฐํ ์ค์ฐจ๋ฅผ ์ถ์ ํ๊ธฐ ์ํด ํ
์คํธ ์ธํธ์์ ์ฑ๋ฅ์ ์ธก์ ํฉ๋๋ค. ( * ์ผ๋ฐํ ์ค์ฐจ๋ฅผ ์ธก์ ํ ํ์๋ ๋ชจ๋ธ์ ๋ณ๊ฒฝํ์ง ๋ง์ธ์. ๊ทธ๋ ๊ฒ ํ๋ฉด ํ
์คํธ ์ธํธ์ ๊ณผ๋ ์ ํฉ๋๊ธฐ ์์ํ ๊ฒ์
๋๋ค.) ** > G. ์๋ฃจ์
์ ์ถ์ํฉ๋๋ค. >> - [ ] 1. **์ง๊ธ๊น์ง์ ์์
์ ๋ฌธ์ํํฉ๋๋ค.** >> - [ ] 2. **๋ฉ์ง ๋ฐํ ์๋ฃ๋ฅผ ๋ง๋ญ๋๋ค.** >>> - [ ] *๋จผ์ ํฐ ๊ทธ๋ฆผ์ ๋ถ๊ฐ์ํต๋๋ค.* >> - [ ] **3. ์ด ์๋ฃจ์
์ด ์ด๋ป๊ฒ ๋น์ฆ๋์ค์ ๋ชฉํ๋ฅผ ๋ฌ์ฑํ๋์ง ์ค๋ช
ํ์ธ์.** >> - [ ] **4. ์์
๊ณผ์ ์์ ์๊ฒ ๋ ํฅ๋ฏธ๋ก์ด ์ ๋ค์ ์์ง ๋ง๊ณ ์ค๋ช
ํ์ธ์.** >>> - [ ] *์ฑ๊ณตํ ๊ฒ๊ณผ ๊ทธ๋ ์ง ๋ชปํ ๊ฒ์ ์ค๋ช
ํฉ๋๋ค.* >>> - [ ] *์ฐ๋ฆฌ๊ฐ ์ธ์ด ๊ฐ์ ๊ณผ ์์คํ
์ ์ ์ฝ์ ๋์ดํฉ๋๋ค.* >> - [ ] **5. ๋ฉ์ง ๊ทธ๋ํ๋ ๊ธฐ์ตํ๊ธฐ ์ฌ์ด ๋ฌธ์ฅ์ผ๋ก ํต์ฌ ๋ด์ฉ์ ์ ๋ฌํ์ธ์. (์๋ฅผ ๋ค๋ฉด '์ค๊ฐ์๋์ด ์ฃผํ ๊ฐ๊ฒฉ์ ๋ํ ๊ฐ์ฅ ์ค์ํ ์์ธก ๋ณ์์
๋๋ค.')** > H. ์์คํ
์ ๋ก ์นญํฉ๋๋ค! >> - [ ] **1. ์๋น์ค์ ํฌ์
ํ๊ธฐ ์ํด ์๋ฃจ์
์ ์ค๋นํฉ๋๋ค. (์ค์ ์
๋ ฅ ๋ฐ์ดํฐ ์ฐ๊ฒฐ, ๋จ์ ํ
์คํธ ์์ฑ ๋ฑ)** >> - [ ] **2. ์์คํ
์ ์๋น์ค ์ฑ๋ฅ์ ์ผ์ ํ ๊ฐ๊ฒฉ์ผ๋ก ํ์ธํ๊ณ ์ฑ๋ฅ์ด ๊ฐ์๋์ ๋ ์๋ฆผ์ ๋ฐ๊ธฐ ์ํด ๋ชจ๋ํฐ๋ง ์ฝ๋๋ฅผ ์์ฑํฉ๋๋ค. ** >>> - [ ] *์์ฃผ ๋๋ฆฌ๊ฒ ๊ฐ์๋๋ ํ์์ ์ฃผ์ํ์ธ์. ๋ฐ์ดํฐ๊ฐ ๋ณํํจ์ ๋ฐ๋ผ ๋ชจ๋ธ์ด ์ ์ฐจ ๊ตฌ์์ด ๋๋ ๊ฒฝํฅ์ด ์์ต๋๋ค.* >>> - [ ] *์ฑ๋ฅ ์ธก์ ์ ์ฌ๋์ ๊ฐ์
์ด ํ์ํ ์ง ๋ชจ๋ฆ
๋๋ค. (์๋ฅผ ๋ค๋ฉด ํฌ๋ผ์ฐ๋์์ฑ ์๋น์ค๋ฅผ ํตํด์)* >>> - [ ] *์
๋ ฅ ๋ฐ์ดํฐ์ ํ์ง๋ ๋ชจ๋ํฐ๋งํฉ๋๋ค. (์๋ฅผ ๋ค์ด ์ค๋์ ์ผ์๊ฐ ๋ฌด์์ํ ๊ฐ์ ๋ณด๋ด๊ฑฐ๋, ๋ค๋ฅธ ํ์ ์ถ๋ ฅ ํ์ง์ด ๋์ ๊ฒฝ์ฐ). ์จ๋ผ์ธ ํ์ต ์์คํ
์ ๊ฒฝ์ฐ ํนํ ์ค์ํฉ๋๋ค.* >> - [ ] **3. ์ ๊ธฐ์ ์ผ๋ก ์๋ก์ด ๋ฐ์ดํฐ์์ ๋ชจ๋ธ์ ๋ค์ ํ๋ จ์ํต๋๋ค. (๊ฐ๋ฅํ ํ ์๋ํํฉ๋๋ค.) **
###Code
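# (Added sketch -- not part of the original checklist notebook.) A minimal, hypothetical
# example of step F.1: tuning a pipeline with random search + cross-validation, where a
# preprocessing choice (the imputation strategy) is treated as a hyperparameter too.
# All names and data below are illustrative assumptions, not taken from the document.
import numpy as np
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV
from sklearn.pipeline import Pipeline

# Synthetic stand-in data, only so the sketch is runnable on its own
X_demo, y_demo = make_classification(n_samples=300, n_features=10, random_state=0)

demo_pipeline = Pipeline([
    ("imputer", SimpleImputer()),
    ("clf", LogisticRegression(max_iter=1000)),
])

param_distributions = {
    "imputer__strategy": ["mean", "median"],   # a preparation step treated as a hyperparameter
    "clf__C": loguniform(1e-3, 1e2),           # sampled randomly instead of exhaustively gridded
}

search = RandomizedSearchCV(
    demo_pipeline, param_distributions, n_iter=20, cv=5,
    scoring="accuracy", random_state=0,
)
search.fit(X_demo, y_demo)
# search.best_params_ and search.best_score_ now hold the chosen configuration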
###Output
_____no_output_____ |
Assignment3/NN_mnist_kmeans.ipynb | ###Markdown
MLP with KMEANS
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

# X_train, y_train, X_test and y_test are assumed to be loaded in earlier cells of this notebook.
clusters = [5, 10, 15, 20, 25, 30]
nn_arch = [(100,), (100,100), (100,100,100), (100,100,100,100), (100,100,100,100,100)]
grid = {'km__n_clusters': clusters, 'NN__hidden_layer_sizes': nn_arch}
km = KMeans(random_state=5)
mlp = MLPClassifier(max_iter=5000, early_stopping=True, random_state=5, learning_rate='adaptive')
pipe = Pipeline([('km', km), ('NN', mlp)])
gs = GridSearchCV(pipe, grid, verbose=10, cv=5, n_jobs=-1)
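# Note (added): inside this pipeline the KMeans step acts as a feature transformer --
# its transform() output (the distance of each sample to every cluster centre) becomes
# the input to the MLP, so n_clusters also controls the MLP's input dimension.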
gs.fit(X_train,y_train)
tmp = pd.DataFrame(gs.cv_results_)
tmp.to_csv('mnist_kmeans_nn.csv')
best_params = gs.best_params_
print("Best parameters set for Neural network:")
print(best_params)
pred_best = gs.predict(X_test)
best_accuracy = accuracy_score(y_test, pred_best)
print('Accuracy of Neural network: is %.2f%%' % (best_accuracy * 100))
print(classification_report(y_test, gs.predict(X_test)))
# https://gist.github.com/hitvoice/36cf44689065ca9b927431546381a3f7
def cm_analysis(y_true, y_pred, labels, ymap=None, figsize=(15,15)):
if ymap is not None:
y_pred = [ymap[yi] for yi in y_pred]
y_true = [ymap[yi] for yi in y_true]
labels = [ymap[yi] for yi in labels]
cm = confusion_matrix(y_true, y_pred, labels=labels)
cm_sum = np.sum(cm, axis=1, keepdims=True)
cm_perc = cm / cm_sum.astype(float) * 100
annot = np.empty_like(cm).astype(str)
nrows, ncols = cm.shape
for i in range(nrows):
for j in range(ncols):
c = cm[i, j]
p = cm_perc[i, j]
if i == j:
s = cm_sum[i]
annot[i, j] = '%.1f%%\n%d/%d' % (p, c, s)
elif c == 0:
annot[i, j] = ''
else:
annot[i, j] = '%.1f%%\n%d' % (p, c)
cm = pd.DataFrame(cm, index=labels, columns=labels)
cm.index.name = 'Actual'
cm.columns.name = 'Predicted'
fig, ax = plt.subplots(figsize=figsize)
sns.heatmap(cm, annot=annot, fmt='', ax=ax)
cm_analysis(y_test, gs.predict(X_test), range(10))
###Output
_____no_output_____ |
silver/.ipynb_checkpoints/D05_Shors_Algorithm_Solutions-checkpoint.ipynb | ###Markdown
prepared by Özlem Salehi (QTurkey) This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. $\newcommand{\Mod}[1]{\ (\mathrm{mod}\ #1)}$$ \newcommand{\bra}[1]{\langle #1|} $$ \newcommand{\ket}[1]{|#1\rangle} $$ \newcommand{\braket}[2]{\langle #1|#2\rangle} $$ \newcommand{\dot}[2]{ #1 \cdot #2} $$ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $$ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $$ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $$ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $$ \newcommand{\mypar}[1]{\left( #1 \right)} $$ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $$ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $$ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $$ \newcommand{\onehalf}{\frac{1}{2}} $$ \newcommand{\donehalf}{\dfrac{1}{2}} $$ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $$ \newcommand{\vzero}{\myvector{1\\0}} $$ \newcommand{\vone}{\myvector{0\\1}} $$ \newcommand{\stateplus}{\myvector{ \sqrttwo \\ \sqrttwo } } $$ \newcommand{\stateminus}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $$ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $$ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $$ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $$ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} & \frac{1}{2} } } $$ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $$ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $$ \newcommand{\pstate}[1]{ \lceil \mspace{-1mu} #1 \mspace{-1.5mu} \rfloor } $ Solutions for Shor's Algorithm Task 1: Let $N=111$. What percentage of the elements that are less than $N$ are relatively prime with $N$? Write Python code to find out. (You can use the gcd function of numpy.) Solution
###Code
import numpy as np
#Create an empty list
rlist=[]
N=111
#If relatively prime with N, append to the list
for i in range(1,N):
if np.gcd(N,i)==1:
rlist.append(i)
print(rlist)
print(len(rlist)*100/N, "percentage of the integers are relatively prime with", N)
###Output
[1, 2, 4, 5, 7, 8, 10, 11, 13, 14, 16, 17, 19, 20, 22, 23, 25, 26, 28, 29, 31, 32, 34, 35, 38, 40, 41, 43, 44, 46, 47, 49, 50, 52, 53, 55, 56, 58, 59, 61, 62, 64, 65, 67, 68, 70, 71, 73, 76, 77, 79, 80, 82, 83, 85, 86, 88, 89, 91, 92, 94, 95, 97, 98, 100, 101, 103, 104, 106, 107, 109, 110]
64.86486486486487 percentage of the integers are relatively prime with 111
###Markdown
Task 2: Calculate the order of each element $x$ that is relatively prime with $N$. What percentage of the $x$'s have even order and satisfy $x^{r/2} \neq -1 \Mod{N}$? Put the elements that satisfy the conditions in a dictionary together with their order. Solution
###Code
import numpy as np
counter=0
#This will hold the list of integers that satisfy the conditions together with the order
satisfy={}
#rlist contains the relatively prime numbers with N
for i in range(len(rlist)):
r=1;
while(1):
if (rlist[i]**r)%N==1:
if(r%2==0 and ((rlist[i]**int(r/2))%N != N-1)):
counter=counter+1
print("Order of",rlist[i],":",r)
satisfy[rlist[i]]=r
break
r=r+1
print(counter*100/N, "percentage of the integers satisfy the conditions")
###Output
Order of 2 : 36
Order of 4 : 18
Order of 5 : 36
Order of 8 : 12
Order of 13 : 36
Order of 14 : 12
Order of 17 : 36
Order of 19 : 36
Order of 20 : 36
Order of 22 : 36
Order of 23 : 12
Order of 25 : 18
Order of 26 : 6
Order of 28 : 18
Order of 29 : 12
Order of 31 : 4
Order of 32 : 36
Order of 35 : 36
Order of 38 : 2
Order of 40 : 18
Order of 43 : 4
Order of 44 : 18
Order of 47 : 6
Order of 50 : 36
Order of 52 : 36
Order of 53 : 18
Order of 55 : 36
Order of 56 : 36
Order of 58 : 18
Order of 59 : 36
Order of 61 : 36
Order of 64 : 6
Order of 67 : 18
Order of 68 : 4
Order of 71 : 18
Order of 73 : 2
Order of 76 : 36
Order of 79 : 36
Order of 80 : 4
Order of 82 : 12
Order of 83 : 18
Order of 85 : 6
Order of 86 : 18
Order of 88 : 12
Order of 89 : 36
Order of 91 : 36
Order of 92 : 36
Order of 94 : 36
Order of 97 : 12
Order of 98 : 36
Order of 103 : 12
Order of 106 : 36
Order of 107 : 18
Order of 109 : 36
48.648648648648646 percentage of the integers satisfy the conditions
###Markdown
Task 3: Pick randomly one of the $x$ you found in Task 2 and calculate gcd$(x^{r/2}-1,N)$ and gcd$(x^{r/2}+1,N)$. Solution
###Code
import random
#Pick a random x and its order from the dictionary we have created above
x,r=random.choice(list(satisfy.items()))
print(x, "is picked with order", r)
#Calculate gcd
print("Factors of",N,":",np.gcd((x**int(r/2)-1),N), "and",np.gcd((x**int(r/2)+1),N))
###Output
67 is picked with order 18
Factors of 111 : 3 and 37
###Markdown
Task 4: Factor 21 using Shor's Algorithm.
- Pick a random $x$ which is relatively prime with 21.
- Apply the phase estimation circuit to the operator $U_x$.
- Use the continued fractions algorithm to find out $r$.
- Compute $gcd(x^{r/2} -1, N)$ and $gcd(x^{r/2}+1, N)$

Solution
###Code
N=21
#Pick a random x relatively prime with N
import random as rand
import numpy as np
counter = 0
while(True):
x = rand.randrange(1,N)
counter = counter + 1
if np.gcd(x,N)==1:
break
print(x, " is picked after ", counter, " trials")
#Run this cell to load the function Ux
%run operator.py
#Create CU operator by calling function Ux(x,N)
CU=Ux(x,N)
# %load qpe.py
import cirq
def qpe(t,control, target, circuit, CU):
#Apply Hadamard to control qubits
circuit.append(cirq.H.on_each(control))
#Apply CU gates
for i in range(t):
#Obtain the power of CU gate
CUi = CU**(2**i)
#Apply CUi gate where t-i-1 is the control
circuit.append(CUi(control[t-i-1],*target))
#Apply inverse QFT
iqft(t,control,circuit)
# %load iqft.py
import cirq
from cirq.circuits import InsertStrategy
from cirq import H, SWAP, CZPowGate
def iqft(n,qubits,circuit):
#Swap the qubits
for i in range(n//2):
circuit.append(SWAP(qubits[i],qubits[n-i-1]), strategy = InsertStrategy.NEW)
#For each qubit
for i in range(n-1,-1,-1):
#Apply CR_k gates where j is the control and i is the target
k=n-i #We start with k=n-i
for j in range(n-1,i,-1):
#Define and apply CR_k gate
crk = CZPowGate(exponent = -2/2**(k))
circuit.append(crk(qubits[j],qubits[i]),strategy = InsertStrategy.NEW)
k=k-1 #Decrement at each step
#Apply Hadamard to the qubit
circuit.append(H(qubits[i]),strategy = InsertStrategy.NEW)
#Determine t and n, size of the control and target registers
t=11
n=5
import cirq
import matplotlib
from cirq import X
#Create control and target qubits and the circuit
circuit = cirq.Circuit()
control = [cirq.LineQubit(i) for i in range(1,t+1) ]
target = [cirq.LineQubit(i) for i in range(t+1,t+1+n) ]
circuit.append(X(target[n-1]))
#Call phase estimation circuit with paremeter CU
qpe(t,control,target,circuit,CU)
#Measure the control register
circuit.append(cirq.measure(*control, key='result'))
#Call the simulator and print the result
s = cirq.Simulator()
results=s.simulate(circuit)
print(results)
b_arr= results.measurements['result']
b=int("".join(str(i) for i in b_arr), 2)
print(b)
#Load the contFrac and convergents functions
%run ../include/helpers.py
#Run continued fractions algorithm to find out r
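# The measurement gives an integer b such that b / 2**t is close to s / r for some integer s,
# so the denominators of the continued-fraction convergents of b / 2**t are the candidates for r.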
cf=contFrac(b/(2**t))
print(cf)
cv=convergents(cf)
print(cv)
###Output
[0, 6, 170, 2]
[Fraction(0, 1), Fraction(1, 6), Fraction(170, 1021), Fraction(341, 2048)]
###Markdown
The measurement corresponds to the phase $b/2^t = 341/2048$, and among its convergents the fraction $1/6$ is the best approximation whose denominator is smaller than $N=21$, so the candidate is $s'=1$ and $r'=6$.
###Code
#Check if r is even, and x^{r/2} is not equal to -1 Mod N
r=6
if (r%2==0 and (x**(r//2))%N != N-1) :
print("Proceed")
else:
print("Repeat the algorithm")
###Output
Proceed
###Markdown
Note that you may not be able to get the $r$ value in your first trial. In such a case, you need to repeat the algorithm. Now let's check $gcd(x^{r/2} -1, N)$ and $gcd(x^{r/2}+1, N)$.
###Code
#Compute gcd to find out the factors of N
print("Factors of",N,":",np.gcd((x**int(r/2)-1),N), "and",np.gcd((x**int(r/2)+1),N))
###Output
Factors of 21 : 3 and 7
|
thecuremusic.ipynb | ###Markdown
Note: none of this data is missing
###Code
song_moods.info()
sm_non_numeric_cols = [col for col in song_moods.columns if song_moods[col].dtype=='object']
song_moods_cat = song_moods[sm_non_numeric_cols]
song_moods_cat.head()
song_moods_cat.key.value_counts()
# these sound like they might be highly correlated
song_moods_fun = song_moods[['danceability', 'energy', 'liveness']]
song_moods_fun.head()
sns.heatmap(song_moods_fun.corr(), annot=True)
###Output
_____no_output_____
###Markdown
Hmmm. Time to google these terms, because I would definitely *not* expect a higher energy song to be less danceable. From [maelfabien](https://maelfabien.github.io/Hack-3/)**duration_ms:** The duration of the track in milliseconds.**key:** The estimated overall key of the track. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = Cโฏ/Dโญ, 2 = D, and so on. If no key was detected, the value is -1.**mode:** Mode indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. Major is represented by 1 and minor is 0.**time_signature:** An estimated overall time signature of a track. The time signature (meter) is a notational convention to specify how many beats are in each bar (or measure).**acousticness:** A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic. **danceability:** Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity. A value of 0.0 is least danceable and 1.0 is most danceable. **energy:** Energy is a measure from 0.0 to 1.0 and represents a perceptual measure of intensity and activity. Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale. Perceptual features contributing to this attribute include dynamic range, perceived loudness, timbre, onset rate, and general entropy. **instrumentalness:** Predicts whether a track contains no vocals. โOohโ and โaahโ sounds are treated as instrumental in this context. Rap or spoken word tracks are clearly โvocalโ. The closer the instrumentalness value is to 1.0, the greater the likelihood the track contains no vocal content. Values above 0.5 are intended to represent instrumental tracks, but confidence is higher as the value approaches 1.0. **liveness:** Detects the presence of an audience in the recording. Higher liveness values represent an increased probability that the track was performed live. A value above 0.8 provides a strong likelihood that the track is live. **loudness:** The overall loudness of a track in decibels (dB). Loudness values are averaged across the entire track and are useful for comparing relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typical range between -60 and 0 dB. **speechiness:** Speechiness detects the presence of spoken words in a track. The more exclusively speech-like the recording (e.g. talk show, audiobook, poetry), the closer to 1.0 the attribute value. Values above 0.66 describe tracks that are probably made entirely of spoken words. Values between 0.33 and 0.66 describe tracks that may contain both music and speech, either in sections or layered, including such cases as rap music. Values below 0.33 most likely represent music and other non-speech-like tracks. **valence:** A measure from 0.0 to 1.0 describing the musical positiveness conveyed by a track. Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric), while tracks with low valence sound more negative (e.g. sad, depressed, angry). tempo: The overall estimated tempo of the section in beats per minute (BPM). 
In musical terminology, the tempo is the speed or pace of a given piece and derives directly from the average beat duration.**key:** The estimated overall key of the section. The values in this field ranging from 0 to 11 mapping to pitches using standard Pitch Class notation (E.g. 0 = C, 1 = Cโฏ/Dโญ, 2 = D, and so on). If no key was detected, the value is -1.**mode:** integer Indicates the modality (major or minor) of a track, the type of scale from which its melodic content is derived. This field will contain a 0 for โminorโ, a 1 for โmajorโ, or a -1 for no result. Note that the major key (e.g. C major) could more likely be confused with the minor key at 3 semitones lower (e.g. A minor) as both keys carry the same pitches.**mode_confidence:** The confidence, from 0.0 to 1.0, of the reliability of the mode.**time_signature:** An estimated overall time signature of a track. The time signature (meter) is a notational convention to specify how many beats are in each bar (or measure). The time signature ranges from 3 to 7 indicating time signatures of โ3/4โ, to โ7/4โ. So yes - I should take out liveness as this just means was the song probably performed live. Also, I will definitely include valence. In fact, I would think that more Cure songs are sad, depressed, or angry than happy. I will check this now.
###Code
happier = song_moods[song_moods.valence >= 0.5]
happier.shape
happy_keys = happier.key.value_counts()/len(happier)
sadder = song_moods.drop(happier.index)
sadder.shape
sad_keys = sadder.key.value_counts()/len(sadder)
happy_keys = pd.DataFrame(happy_keys)
sad_keys = pd.DataFrame(sad_keys)
happy_keys.rename(columns = {"key": "happy_percent"},
inplace = True)
sad_keys.rename(columns = {"key": "sad_percent"},
inplace = True)
sad_keys.head()
# do an outer join to try to find which keys are more commonly happy or sad
key_mood = pd.concat([happy_keys, sad_keys], axis=1)
key_mood
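# (Added sketch -- not in the original notebook.) Spotify documents `key` in pitch-class
# notation (0 = C, 1 = C#/Db, ..., 11 = B). If the index holds those integer codes, relabel
# it with pitch names to make the table easier to read; names already present are left as-is.
pitch_names = ['C', 'C#/Db', 'D', 'D#/Eb', 'E', 'F', 'F#/Gb', 'G', 'G#/Ab', 'A', 'A#/Bb', 'B']
key_mood_named = key_mood.rename(index=lambda k: pitch_names[int(k)] if str(k).isdigit() else k)
key_mood_named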
###Output
_____no_output_____
###Markdown
It's interesting to change the threshold on what is happy (>=.5, >=.7, etc.) and watch the percents change in the keys above.
###Code
love = df[df['track_name'].str.lower().str.contains('love')]
love.sort_values(['valence', 'tempo'], ascending=False)[['track_name', 'valence', 'tempo']]
###Output
_____no_output_____
###Markdown
Lol I'm about ready to stop on this dataset. *HOW* is *Lovesong* happier than *Friday I'm In Love*??!?
###Code
sns.scatterplot(x='tempo', y='valence', data=df, hue='key');
###Output
_____no_output_____
###Markdown
Well. I seem to have no intuition for music, eh?
###Code
sns.scatterplot(x='tempo', y='valence', data=df, hue='mode');
df.groupby('mode')['valence'].mean().plot(kind='bar')
###Output
_____no_output_____ |
notebooks/kubeflow_pipelines/walkthrough/labs/kfp_walkthrough_vertex.ipynb | ###Markdown
Using custom containers with Vertex AI Training**Learning Objectives:**1. Learn how to create a train and a validation split with BigQuery1. Learn how to wrap a machine learning model into a Docker container and train in on Vertex AI1. Learn how to use the hyperparameter tuning engine on Vertex AI to find the best hyperparameters1. Learn how to deploy a trained machine learning model on Vertex AI as a REST API and query itIn this lab, you develop, package as a docker image, and run on **Vertex AI Training** a training application that trains a multi-class classification model that predicts the type of forest cover from cartographic data. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on **Covertype Data Set** from UCI Machine Learning Repository.The training code uses `scikit-learn` for data pre-processing and modeling. The code has been instrumented using the `hypertune` package so it can be used with **Vertex AI** hyperparameter tuning.
###Code
import os
import time
import pandas as pd
from google.cloud import aiplatform, bigquery
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connections strings, and other environment settings. Make sure to update `REGION`, and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for Vertex AI Training and Prediction- `ARTIFACT_STORE` - A GCS bucket in the created in the same region.
###Code
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"
print(PROJECT_ID)
print(ARTIFACT_STORE)
print(DATA_ROOT)
os.environ["JOB_DIR_ROOT"] = JOB_DIR_ROOT
os.environ["TRAINING_FILE_PATH"] = TRAINING_FILE_PATH
os.environ["VALIDATION_FILE_PATH"] = VALIDATION_FILE_PATH
os.environ["PROJECT_ID"] = PROJECT_ID
os.environ["REGION"] = REGION
###Output
_____no_output_____
###Markdown
We now create the `ARTIFACT_STORE` bucket if it's not there. Note that this bucket should be created in the region specified in the variable `REGION` (if you have already a bucket with this name in a different region than `REGION`, you may want to change the `ARTIFACT_STORE` name so that you can recreate a bucket in `REGION` with the command in the cell below).
###Code
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
###Output
Creating gs://qwiklabs-gcp-02-7680e21dd047-kfp-artifact-store/...
###Markdown
Importing the dataset into BigQuery
###Code
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
###Output
Dataset 'qwiklabs-gcp-02-7680e21dd047:covertype_dataset' successfully created.
###Markdown
Explore the Covertype dataset
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
Query complete after 0.00s: 100%|โโโโโโโโโโ| 2/2 [00:00<00:00, 1042.58query/s]
Downloading: 100%|โโโโโโโโโโ| 100000/100000 [00:01<00:00, 85304.02rows/s]
###Markdown
Create training and validation splitsUse BigQuery to sample training and validation splits and save them to GCS storage Create a training split
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
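# Note (added): hashing each row with FARM_FINGERPRINT makes the split deterministic --
# buckets 1-4 (roughly 40% of the rows) form the training set, while buckets 8-9
# (roughly 20%) are reserved for the validation split created below.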
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
Waiting on bqjob_r4d7af428dad9e81d_0000017edefebe3c_1 ... (0s) Current status: DONE
###Markdown
Create a validation split Exercise
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8,9)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
###Output
(40009, 13)
(19928, 13)
###Markdown
Develop a training application Configure the `sklearn` training pipeline.The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
("num", StandardScaler(), numeric_feature_indexes),
("cat", OneHotEncoder(), categorical_feature_indexes),
]
)
pipeline = Pipeline(
[
("preprocessor", preprocessor),
("classifier", SGDClassifier(loss="log", tol=1e-3)),
]
)
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64`To avoid warning messages from `StandardScaler` all numeric features are converted to `float64`.
###Code
num_features_type_map = {
feature: "float64" for feature in df_train.columns[numeric_feature_indexes]
}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop("Cover_Type", axis=1)
y_train = df_train["Cover_Type"]
X_validation = df_validation.drop("Cover_Type", axis=1)
y_validation = df_validation["Cover_Type"]
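# Note (added): the "classifier__" / "preprocessor__" prefix is scikit-learn's Pipeline
# convention -- <step_name>__<parameter> routes the value to that named step, so the call
# below sets alpha and max_iter on the SGDClassifier step.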
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
0.701023685266961
###Markdown
Prepare the hyperparameter tuning application.Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.
###Code
TRAINING_APP_FOLDER = "training_app"
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to Vertex AI hyperparameter tuning service.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
import os
import subprocess
import sys
import fire
import hypertune
import numpy as np
import pandas as pd
import pickle
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
Writing training_app/train.py
###Markdown
Package the script into a docker image.Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. ExerciseComplete the Dockerfile below so that it copies the 'train.py' file into the containerat `/app` and runs it when the container is started.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
###Output
Writing training_app/Dockerfile
###Markdown
Build the docker image. You use **Cloud Build** to build the image and push it your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME = "trainer_image"
IMAGE_TAG = "latest"
IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}"
os.environ["IMAGE_URI"] = IMAGE_URI
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
Creating temporary tarball archive of 2 file(s) totalling 2.6 KiB before compression.
Uploading tarball of [training_app] to [gs://qwiklabs-gcp-02-7680e21dd047_cloudbuild/source/1644419347.310505-a6817b6a7a0f45fcab83d589a83f5767.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/qwiklabs-gcp-02-7680e21dd047/locations/global/builds/27d8a99f-0d64-4e4e-8d87-a582b3613de5].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/27d8a99f-0d64-4e4e-8d87-a582b3613de5?project=517861155353].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "27d8a99f-0d64-4e4e-8d87-a582b3613de5"
FETCHSOURCE
Fetching storage object: gs://qwiklabs-gcp-02-7680e21dd047_cloudbuild/source/1644419347.310505-a6817b6a7a0f45fcab83d589a83f5767.tgz#1644419348131745
Copying gs://qwiklabs-gcp-02-7680e21dd047_cloudbuild/source/1644419347.310505-a6817b6a7a0f45fcab83d589a83f5767.tgz#1644419348131745...
/ [1 files][ 1.2 KiB/ 1.2 KiB]
Operation completed over 1 objects/1.2 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon 5.12kB
Step 1/5 : FROM gcr.io/deeplearning-platform-release/base-cpu
latest: Pulling from deeplearning-platform-release/base-cpu
ea362f368469: Pulling fs layer
eac27809cab6: Pulling fs layer
036adb2e026f: Pulling fs layer
02a952c9f89d: Pulling fs layer
4f4fb700ef54: Pulling fs layer
0ae3f8214e8b: Pulling fs layer
ca41810bd5e2: Pulling fs layer
b72e35350998: Pulling fs layer
c95a831d214e: Pulling fs layer
dd21cbaee501: Pulling fs layer
34c0d5f571ee: Pulling fs layer
cffd6b808cdb: Pulling fs layer
0c9fca2a66fe: Pulling fs layer
e7e70d8d1c2f: Pulling fs layer
13bd35af8cff: Pulling fs layer
549a6d6636b4: Pulling fs layer
812c2650a52b: Pulling fs layer
171e3814b2ec: Pulling fs layer
02a952c9f89d: Waiting
4f4fb700ef54: Waiting
0ae3f8214e8b: Waiting
ca41810bd5e2: Waiting
b72e35350998: Waiting
c95a831d214e: Waiting
dd21cbaee501: Waiting
34c0d5f571ee: Waiting
cffd6b808cdb: Waiting
0c9fca2a66fe: Waiting
e7e70d8d1c2f: Waiting
13bd35af8cff: Waiting
549a6d6636b4: Waiting
812c2650a52b: Waiting
171e3814b2ec: Waiting
eac27809cab6: Verifying Checksum
eac27809cab6: Download complete
ea362f368469: Verifying Checksum
ea362f368469: Download complete
4f4fb700ef54: Verifying Checksum
4f4fb700ef54: Download complete
0ae3f8214e8b: Verifying Checksum
0ae3f8214e8b: Download complete
02a952c9f89d: Verifying Checksum
02a952c9f89d: Download complete
b72e35350998: Verifying Checksum
b72e35350998: Download complete
c95a831d214e: Verifying Checksum
c95a831d214e: Download complete
ca41810bd5e2: Verifying Checksum
ca41810bd5e2: Download complete
dd21cbaee501: Verifying Checksum
dd21cbaee501: Download complete
34c0d5f571ee: Verifying Checksum
34c0d5f571ee: Download complete
cffd6b808cdb: Verifying Checksum
cffd6b808cdb: Download complete
0c9fca2a66fe: Verifying Checksum
0c9fca2a66fe: Download complete
e7e70d8d1c2f: Verifying Checksum
e7e70d8d1c2f: Download complete
13bd35af8cff: Verifying Checksum
13bd35af8cff: Download complete
549a6d6636b4: Verifying Checksum
549a6d6636b4: Download complete
171e3814b2ec: Verifying Checksum
171e3814b2ec: Download complete
ea362f368469: Pull complete
eac27809cab6: Pull complete
036adb2e026f: Verifying Checksum
036adb2e026f: Download complete
812c2650a52b: Verifying Checksum
812c2650a52b: Download complete
036adb2e026f: Pull complete
02a952c9f89d: Pull complete
4f4fb700ef54: Pull complete
0ae3f8214e8b: Pull complete
ca41810bd5e2: Pull complete
b72e35350998: Pull complete
c95a831d214e: Pull complete
dd21cbaee501: Pull complete
34c0d5f571ee: Pull complete
cffd6b808cdb: Pull complete
0c9fca2a66fe: Pull complete
e7e70d8d1c2f: Pull complete
13bd35af8cff: Pull complete
549a6d6636b4: Pull complete
812c2650a52b: Pull complete
171e3814b2ec: Pull complete
Digest: sha256:0ff776d12620e1526f999481051595075692b977a0ce7bbf573208eed5867823
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest
---> 70d8dcc15a81
Step 2/5 : RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
---> Running in 95c89cddfc6a
Collecting fire
Downloading fire-0.4.0.tar.gz (87 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 87.7/87.7 KB 11.7 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting cloudml-hypertune
Downloading cloudml-hypertune-0.1.0.dev6.tar.gz (3.2 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting scikit-learn==0.20.4
Downloading scikit_learn-0.20.4-cp37-cp37m-manylinux1_x86_64.whl (5.4 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 5.4/5.4 MB 47.9 MB/s eta 0:00:00
Collecting pandas==0.24.2
Downloading pandas-0.24.2-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 10.1/10.1 MB 56.7 MB/s eta 0:00:00
Requirement already satisfied: scipy>=0.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.7.3)
Requirement already satisfied: numpy>=1.8.2 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.19.5)
Requirement already satisfied: pytz>=2011k in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2021.3)
Requirement already satisfied: python-dateutil>=2.5.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2.8.2)
Requirement already satisfied: six in /opt/conda/lib/python3.7/site-packages (from fire) (1.16.0)
Collecting termcolor
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Building wheels for collected packages: fire, cloudml-hypertune, termcolor
Building wheel for fire (setup.py): started
Building wheel for fire (setup.py): finished with status 'done'
Created wheel for fire: filename=fire-0.4.0-py2.py3-none-any.whl size=115942 sha256=615b333fc10cce9a3d68266d3d7dcbb9fbdd1c4b40f1651f4e3520807315bdcb
Stored in directory: /root/.cache/pip/wheels/8a/67/fb/2e8a12fa16661b9d5af1f654bd199366799740a85c64981226
Building wheel for cloudml-hypertune (setup.py): started
Building wheel for cloudml-hypertune (setup.py): finished with status 'done'
Created wheel for cloudml-hypertune: filename=cloudml_hypertune-0.1.0.dev6-py2.py3-none-any.whl size=3987 sha256=aa61dfcf9e2906814941b497cfc66913fe238cbde6bc1057f5e062aca71f7588
Stored in directory: /root/.cache/pip/wheels/a7/ff/87/e7bed0c2741fe219b3d6da67c2431d7f7fedb183032e00f81e
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4848 sha256=0194cc7d7dad7a150eecefcb457f41003c4484d4401cc20837ddf0b3d0cc5ebb
Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
Successfully built fire cloudml-hypertune termcolor
Installing collected packages: termcolor, cloudml-hypertune, fire, scikit-learn, pandas
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 1.0.2
Uninstalling scikit-learn-1.0.2:
Successfully uninstalled scikit-learn-1.0.2
Attempting uninstall: pandas
Found existing installation: pandas 1.3.5
Uninstalling pandas-1.3.5:
Successfully uninstalled pandas-1.3.5
[91mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
visions 0.7.4 requires pandas>=0.25.3, but you have pandas 0.24.2 which is incompatible.
statsmodels 0.13.1 requires pandas>=0.25, but you have pandas 0.24.2 which is incompatible.
phik 0.12.0 requires pandas>=0.25.1, but you have pandas 0.24.2 which is incompatible.
pandas-profiling 3.1.0 requires pandas!=1.0.0,!=1.0.1,!=1.0.2,!=1.1.0,>=0.25.3, but you have pandas 0.24.2 which is incompatible.
[0mSuccessfully installed cloudml-hypertune-0.1.0.dev6 fire-0.4.0 pandas-0.24.2 scikit-learn-0.20.4 termcolor-1.1.0
[91mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[0mRemoving intermediate container 95c89cddfc6a
---> e06d9bae1dad
Step 3/5 : WORKDIR /app
---> Running in 83f3137d8992
Removing intermediate container 83f3137d8992
---> a39bd7efc06e
Step 4/5 : COPY train.py .
---> 4b9f8b891f3c
Step 5/5 : ENTRYPOINT ["python", "train.py"]
---> Running in 9f33088650cf
Removing intermediate container 9f33088650cf
---> 5aa5612d7fdf
Successfully built 5aa5612d7fdf
Successfully tagged gcr.io/qwiklabs-gcp-02-7680e21dd047/trainer_image:latest
PUSH
Pushing gcr.io/qwiklabs-gcp-02-7680e21dd047/trainer_image:latest
The push refers to repository [gcr.io/qwiklabs-gcp-02-7680e21dd047/trainer_image]
7dd9c235ea1d: Preparing
13363abe3c22: Preparing
c0a71498bc13: Preparing
afdacae73a44: Preparing
beceb4a3223c: Preparing
b1e73422ceb7: Preparing
5b99d0f1aa52: Preparing
dbd6221f1b98: Preparing
4402691a71a1: Preparing
883e47620bc6: Preparing
f5e5c749d02e: Preparing
52ef15a58fce: Preparing
b94b9d90a09e: Preparing
f2c55a6fb80d: Preparing
1b7bf230df94: Preparing
0e19a08a8060: Preparing
5f70bf18a086: Preparing
36a8dea33eff: Preparing
dfe5bb6eff86: Preparing
57b271862993: Preparing
0eba131dffd0: Preparing
b1e73422ceb7: Waiting
5b99d0f1aa52: Waiting
dbd6221f1b98: Waiting
4402691a71a1: Waiting
883e47620bc6: Waiting
f5e5c749d02e: Waiting
52ef15a58fce: Waiting
b94b9d90a09e: Waiting
f2c55a6fb80d: Waiting
1b7bf230df94: Waiting
0e19a08a8060: Waiting
5f70bf18a086: Waiting
36a8dea33eff: Waiting
dfe5bb6eff86: Waiting
57b271862993: Waiting
0eba131dffd0: Waiting
beceb4a3223c: Mounted from deeplearning-platform-release/base-cpu
afdacae73a44: Mounted from deeplearning-platform-release/base-cpu
b1e73422ceb7: Mounted from deeplearning-platform-release/base-cpu
5b99d0f1aa52: Mounted from deeplearning-platform-release/base-cpu
dbd6221f1b98: Mounted from deeplearning-platform-release/base-cpu
13363abe3c22: Pushed
7dd9c235ea1d: Pushed
4402691a71a1: Mounted from deeplearning-platform-release/base-cpu
f5e5c749d02e: Mounted from deeplearning-platform-release/base-cpu
883e47620bc6: Mounted from deeplearning-platform-release/base-cpu
52ef15a58fce: Mounted from deeplearning-platform-release/base-cpu
b94b9d90a09e: Mounted from deeplearning-platform-release/base-cpu
f2c55a6fb80d: Mounted from deeplearning-platform-release/base-cpu
1b7bf230df94: Mounted from deeplearning-platform-release/base-cpu
5f70bf18a086: Layer already exists
0e19a08a8060: Mounted from deeplearning-platform-release/base-cpu
36a8dea33eff: Mounted from deeplearning-platform-release/base-cpu
dfe5bb6eff86: Mounted from deeplearning-platform-release/base-cpu
0eba131dffd0: Layer already exists
57b271862993: Mounted from deeplearning-platform-release/base-cpu
c0a71498bc13: Pushed
latest: digest: sha256:12e8454e19d32b262c55663b9dbfa202b2f5347a7646435b0cd05ff6b5cca5b8 size: 4707
DONE
--------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
27d8a99f-0d64-4e4e-8d87-a582b3613de5 2022-02-09T15:09:08+00:00 2M4S gs://qwiklabs-gcp-02-7680e21dd047_cloudbuild/source/1644419347.310505-a6817b6a7a0f45fcab83d589a83f5767.tgz gcr.io/qwiklabs-gcp-02-7680e21dd047/trainer_image (+1 more) SUCCESS
###Markdown
Submit a Vertex AI hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`: max iterations (`max_iter`) and alpha (`alpha`). The file below configures Vertex AI hyperparameter tuning to run up to 5 trials in parallel and to choose from two discrete values of `max_iter` and a linear range between `1.0e-4` and `1.0e-1` for `alpha`.
###Code
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"forestcover_tuning_{TIMESTAMP}_no_parallel"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
os.environ["JOB_NAME"] = JOB_NAME
os.environ["JOB_DIR"] = JOB_DIR
###Output
_____no_output_____
###Markdown
Exercise: Complete the `config.yaml` file generated below so that the hyperparameter tuning engine tries the following parameter values: for `max_iter`, the two values 10 and 20; for `alpha`, a linear range of values between 1.0e-4 and 1.0e-1. Also complete the `gcloud` command to start the hyperparameter tuning job with a max trial count and a max number of parallel trials of 5 each.
###Code
%%bash
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
CONFIG_YAML=config.yaml
cat <<EOF > $CONFIG_YAML
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
- parameterId: max_iter
discreteValueSpec:
values:
- 10
- 20
- parameterId: alpha
doubleValueSpec:
minValue: 1.0e-4
maxValue: 1.0e-1
scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
replicaCount: $REPLICA_COUNT
containerSpec:
imageUri: $IMAGE_URI
args:
- --job_dir=$JOB_DIR
- --training_dataset_path=$TRAINING_FILE_PATH
- --validation_dataset_path=$VALIDATION_FILE_PATH
- --hptune
EOF
gcloud ai hp-tuning-jobs create \
--region=$REGION \
--display-name=${JOB_NAME} \
--config=$CONFIG_YAML \
--max-trial-count=5 \
--parallel-trial-count=1
echo "JOB_NAME: $JOB_NAME"
JOB_NAME = f"forestcover_tuning_{TIMESTAMP}_no_parallel"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
os.environ["JOB_NAME"] = JOB_NAME
os.environ["JOB_DIR"] = JOB_DIR
%%bash
gcloud ai hp-tuning-jobs create \
--region=$REGION \
--display-name=forestcover_tuning_no_parallel \
--config=$CONFIG_YAML \
--max-trial-count=5 \
--parallel-trial-count=1
echo "JOB_NAME: $JOB_NAME"
!gcloud info --run-diagnostics
!gcloud ai hp-tuning-jobs describe 8358411528050835456 --region=us-central1
###Output
Using endpoint [https://us-central1-aiplatform.googleapis.com/]
createTime: '2022-02-09T15:12:50.107272Z'
displayName: forestcover_tuning_20220209_151158
endTime: '2022-02-09T15:22:22Z'
maxTrialCount: 5
name: projects/517861155353/locations/us-central1/hyperparameterTuningJobs/8358411528050835456
parallelTrialCount: 5
startTime: '2022-02-09T15:14:09Z'
state: JOB_STATE_SUCCEEDED
studySpec:
metrics:
- goal: MAXIMIZE
metricId: accuracy
parameters:
- discreteValueSpec:
values:
- 10.0
- 20.0
parameterId: max_iter
- doubleValueSpec:
maxValue: 0.1
minValue: 0.0001
parameterId: alpha
scaleType: UNIT_LINEAR_SCALE
trialJobSpec:
workerPoolSpecs:
- containerSpec:
args:
- --job_dir=gs://qwiklabs-gcp-02-7680e21dd047-kfp-artifact-store/jobs/forestcover_tuning_20220209_151158
- --training_dataset_path=gs://qwiklabs-gcp-02-7680e21dd047-kfp-artifact-store/data/training/dataset.csv
- --validation_dataset_path=gs://qwiklabs-gcp-02-7680e21dd047-kfp-artifact-store/data/validation/dataset.csv
- --hptune
imageUri: gcr.io/qwiklabs-gcp-02-7680e21dd047/trainer_image:latest
diskSpec:
bootDiskSizeGb: 100
bootDiskType: pd-ssd
machineSpec:
machineType: n1-standard-4
replicaCount: '1'
trials:
- endTime: '2022-02-09T15:16:39Z'
finalMeasurement:
metrics:
- metricId: accuracy
value: 0.68045
stepCount: '1'
id: '1'
parameters:
- parameterId: alpha
value: 0.05005
- parameterId: max_iter
value: 20
startTime: '2022-02-09T15:14:15.226889082Z'
state: SUCCEEDED
- endTime: '2022-02-09T15:16:36Z'
finalMeasurement:
metrics:
- metricId: accuracy
value: 0.679195
stepCount: '1'
id: '2'
parameters:
- parameterId: alpha
value: 0.072023
- parameterId: max_iter
value: 10
startTime: '2022-02-09T15:14:15.227046733Z'
state: SUCCEEDED
- endTime: '2022-02-09T15:16:06Z'
finalMeasurement:
metrics:
- metricId: accuracy
value: 0.676586
stepCount: '1'
id: '3'
parameters:
- parameterId: alpha
value: 0.0959861
- parameterId: max_iter
value: 10
startTime: '2022-02-09T15:14:15.227085807Z'
state: SUCCEEDED
- endTime: '2022-02-09T15:16:04Z'
finalMeasurement:
metrics:
- metricId: accuracy
value: 0.67784
stepCount: '1'
id: '4'
parameters:
- parameterId: alpha
value: 0.0745786
- parameterId: max_iter
value: 20
startTime: '2022-02-09T15:14:15.227112321Z'
state: SUCCEEDED
- endTime: '2022-02-09T15:16:27Z'
finalMeasurement:
metrics:
- metricId: accuracy
value: 0.681052
stepCount: '1'
id: '5'
parameters:
- parameterId: alpha
value: 0.0440675
- parameterId: max_iter
value: 20
startTime: '2022-02-09T15:14:15.227137909Z'
state: SUCCEEDED
updateTime: '2022-02-09T15:22:38.714712Z'
###Markdown
Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs". Retrieve HP-tuning results. After the job completes you can review the results using the GCP Console or programmatically using the following functions (note that this code assumes that the metric optimized by the hyperparameter tuning engine is maximized): Exercise: Complete the body of the function below to retrieve the best trial for the given job name (`JOB_NAME`):
###Code
def get_trials(job_name):
jobs = aiplatform.HyperparameterTuningJob.list()
    match = [job for job in jobs if job.display_name == job_name]
tuning_job = match[0] if match else None
return tuning_job.trials if tuning_job else None
def get_best_trial(trials):
metrics = [trial.final_measurement.metrics[0].value for trial in trials]
best_trial = trials[metrics.index(max(metrics))]
return best_trial
def retrieve_best_trial_from_job_name(jobname):
trials = get_trials(jobname)
best_trial = get_best_trial(trials)
return best_trial
###Output
_____no_output_____
###Markdown
You'll need to wait for the hyperparameter tuning job to complete before being able to retrieve the best trial by running the cell below.
###Code
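# Optional: a minimal sketch (not part of the original lab) for checking whether the
# tuning job has finished before pulling its trials. It assumes the
# google-cloud-aiplatform SDK exposes a `state` attribute on HyperparameterTuningJob
# objects, and it reuses the same list()/display_name filter as get_trials above.
for job in aiplatform.HyperparameterTuningJob.list():
    if job.display_name == JOB_NAME:
        print(job.display_name, job.state)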
best_trial = retrieve_best_trial_from_job_name(JOB_NAME)
###Output
_____no_output_____
###Markdown
Retrain the model with the best hyperparameters You can now retrain the model using the best hyperparameters and the combined training and validation splits as the training dataset. Configure and run the training job
###Code
alpha = best_trial.parameters[0].value
max_iter = best_trial.parameters[1].value
print(best_trial)
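# A slightly more defensive sketch (an addition, not the lab's original code): look the
# tuned values up by parameter_id instead of relying on the positional order of
# best_trial.parameters; this assumes each trial parameter exposes `parameter_id` and
# `value`, as shown in the describe output above.
params = {p.parameter_id: p.value for p in best_trial.parameters}
alpha = params["alpha"]
max_iter = int(params["max_iter"])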
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"JOB_VERTEX_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
WORKER_POOL_SPEC = f"""\
machine-type={MACHINE_TYPE},\
replica-count={REPLICA_COUNT},\
container-image-uri={IMAGE_URI}\
"""
ARGS = f"""\
--job_dir={JOB_DIR},\
--training_dataset_path={TRAINING_FILE_PATH},\
--validation_dataset_path={VALIDATION_FILE_PATH},\
--alpha={alpha},\
--max_iter={max_iter},\
--nohptune\
"""
!gcloud ai custom-jobs create \
--region={REGION} \
--display-name={JOB_NAME} \
--worker-pool-spec={WORKER_POOL_SPEC} \
--args={ARGS}
print("The model will be exported at:", JOB_DIR)
!gcloud ai custom-jobs describe projects/517861155353/locations/us-central1/customJobs/8534474125983350784
###Output
Using endpoint [https://us-central1-aiplatform.googleapis.com/]
createTime: '2022-02-09T15:24:55.977949Z'
displayName: JOB_VERTEX_20220209_152454
endTime: '2022-02-09T15:27:24Z'
jobSpec:
workerPoolSpecs:
- containerSpec:
args:
- --job_dir=gs://qwiklabs-gcp-02-7680e21dd047-kfp-artifact-store/jobs/JOB_VERTEX_20220209_152454
- --training_dataset_path=gs://qwiklabs-gcp-02-7680e21dd047-kfp-artifact-store/data/training/dataset.csv
- --validation_dataset_path=gs://qwiklabs-gcp-02-7680e21dd047-kfp-artifact-store/data/validation/dataset.csv
- --alpha=0.0440675120367951
- --max_iter=20.0
- --nohptune
imageUri: gcr.io/qwiklabs-gcp-02-7680e21dd047/trainer_image:latest
diskSpec:
bootDiskSizeGb: 100
bootDiskType: pd-ssd
machineSpec:
machineType: n1-standard-4
replicaCount: '1'
name: projects/517861155353/locations/us-central1/customJobs/8534474125983350784
startTime: '2022-02-09T15:26:53Z'
state: JOB_STATE_SUCCEEDED
updateTime: '2022-02-09T15:27:25.336264Z'
###Markdown
Examine the training output The training script saved the trained model as 'model.pkl' in the `JOB_DIR` folder on GCS. **Note:** We need to wait for the job triggered by the cell above to complete before running the cells below.
###Code
!gsutil ls $JOB_DIR
###Output
gs://qwiklabs-gcp-02-7680e21dd047-kfp-artifact-store/jobs/JOB_VERTEX_20220209_152454/model.pkl
###Markdown
Deploy the model to Vertex AI Prediction
###Code
MODEL_NAME = "forest_cover_classifier_2"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest"
)
SERVING_MACHINE_TYPE = "n1-standard-2"
###Output
_____no_output_____
###Markdown
Uploading the trained model Exercise: Upload the trained model using `aiplatform.Model.upload`:
###Code
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_NAME,
artifact_uri=JOB_DIR,
serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)
###Output
INFO:google.cloud.aiplatform.models:Creating Model
INFO:google.cloud.aiplatform.models:Create Model backing LRO: projects/517861155353/locations/us-central1/models/3697565245233954816/operations/8332794625110573056
INFO:google.cloud.aiplatform.models:Model created. Resource name: projects/517861155353/locations/us-central1/models/3697565245233954816
INFO:google.cloud.aiplatform.models:To use this Model in another session:
INFO:google.cloud.aiplatform.models:model = aiplatform.Model('projects/517861155353/locations/us-central1/models/3697565245233954816')
###Markdown
Deploying the uploaded model Exercise: Deploy the model using `uploaded_model`:
###Code
endpoint = uploaded_model.deploy(
machine_type=SERVING_MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
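# deploy() blocks until the model is live and returns an aiplatform.Endpoint. Printing
# its resource name (a standard attribute on SDK resources) makes it easy to reconnect
# from another session with aiplatform.Endpoint(<resource name>), as the INFO log for
# this cell notes.
print(endpoint.resource_name)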
###Output
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/517861155353/locations/us-central1/endpoints/423276792321671168/operations/2779856284562751488
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/517861155353/locations/us-central1/endpoints/423276792321671168
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/517861155353/locations/us-central1/endpoints/423276792321671168')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/517861155353/locations/us-central1/endpoints/423276792321671168
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/517861155353/locations/us-central1/endpoints/423276792321671168/operations/383941282801647616
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/517861155353/locations/us-central1/endpoints/423276792321671168
###Markdown
Serve predictions Prepare the input with JSON-formatted instances. Exercise: Query the deployed model using `endpoint`:
###Code
instance = [
2841.0,
45.0,
0.0,
644.0,
282.0,
1376.0,
218.0,
237.0,
156.0,
1003.0,
"Commanche",
"C4758",
]
endpoint.predict([instance])
instance = [
2067.0,
0.0,
21.0,
270.0,
9.0,
755.0,
184.0,
196.0,
145.0,
900.0,
"Cache",
"C2702",
]
endpoint.predict([instance])
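# endpoint.predict() returns a Prediction object from the google-cloud-aiplatform SDK;
# a minimal sketch for inspecting the result, assuming the sklearn serving container
# returns one predicted Cover_Type label per instance:
response = endpoint.predict([instance])
print(response.predictions)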
###Output
_____no_output_____
###Markdown
Using custom containers with Vertex AI Training**Learning Objectives:**1. Learn how to create a training and a validation split with BigQuery1. Learn how to wrap a machine learning model into a Docker container and train it on Vertex AI1. Learn how to use the hyperparameter tuning engine on Vertex AI to find the best hyperparameters1. Learn how to deploy a trained machine learning model on Vertex AI as a REST API and query it. In this lab, you develop, package as a Docker image, and run on **Vertex AI Training** a training application that trains a multi-class classification model that predicts the type of forest cover from cartographic data. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on the **Covertype Data Set** from the UCI Machine Learning Repository. The training code uses `scikit-learn` for data pre-processing and modeling. The code has been instrumented using the `hypertune` package so it can be used with **Vertex AI** hyperparameter tuning.
###Code
import os
import time
import pandas as pd
from google.cloud import aiplatform, bigquery
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for Vertex AI Training and Prediction - `ARTIFACT_STORE` - a GCS bucket created in the same region.
###Code
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"
os.environ["JOB_DIR_ROOT"] = JOB_DIR_ROOT
os.environ["TRAINING_FILE_PATH"] = TRAINING_FILE_PATH
os.environ["VALIDATION_FILE_PATH"] = VALIDATION_FILE_PATH
os.environ["PROJECT_ID"] = PROJECT_ID
os.environ["REGION"] = REGION
###Output
_____no_output_____
###Markdown
We now create the `ARTIFACT_STORE` bucket if it's not there. Note that this bucket should be created in the region specified in the variable `REGION` (if you already have a bucket with this name in a different region than `REGION`, you may want to change the `ARTIFACT_STORE` name so that you can recreate a bucket in `REGION` with the command in the cell below).
###Code
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
###Output
_____no_output_____
###Markdown
Importing the dataset into BigQuery
###Code
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
###Output
_____no_output_____
###Markdown
Explore the Covertype dataset
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
_____no_output_____
###Markdown
Create training and validation splits Use BigQuery to sample training and validation splits and save them to GCS storage. Create a training split
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
_____no_output_____
###Markdown
Create a validation split Exercise
###Code
# TODO: Your code to create the BQ table validation split
# TODO: Your code to export the validation table to GCS
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
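# A quick sanity check (an addition, not part of the original lab): compare the
# Cover_Type class distribution across the two splits to confirm the
# FARM_FINGERPRINT-based sampling did not skew the label balance.
print(df_train["Cover_Type"].value_counts(normalize=True))
print(df_validation["Cover_Type"].value_counts(normalize=True))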
###Output
_____no_output_____
###Markdown
Develop a training application Configure the `sklearn` training pipeline. The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses a stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
("num", StandardScaler(), numeric_feature_indexes),
("cat", OneHotEncoder(), categorical_feature_indexes),
]
)
pipeline = Pipeline(
[
("preprocessor", preprocessor),
("classifier", SGDClassifier(loss="log", tol=1e-3)),
]
)
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64` To avoid warning messages from `StandardScaler`, all numeric features are converted to `float64`.
###Code
num_features_type_map = {
feature: "float64" for feature in df_train.columns[numeric_feature_indexes]
}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop("Cover_Type", axis=1)
y_train = df_train["Cover_Type"]
X_validation = df_validation.drop("Cover_Type", axis=1)
y_validation = df_validation["Cover_Type"]
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
_____no_output_____
###Markdown
Prepare the hyperparameter tuning application. Since the training run on this dataset is computationally expensive, you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.
###Code
TRAINING_APP_FOLDER = "training_app"
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to Vertex AI hyperparameter tuning service.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
import os
import subprocess
import sys
import fire
import hypertune
import numpy as np
import pandas as pd
import pickle
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
_____no_output_____
###Markdown
Package the script into a Docker image. Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. Exercise: Complete the Dockerfile below so that it copies the 'train.py' file into the container at `/app` and runs it when the container is started.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
# TODO
###Output
_____no_output_____
###Markdown
Build the Docker image. You use **Cloud Build** to build the image and push it to your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME = "trainer_image"
IMAGE_TAG = "latest"
IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}"
os.environ["IMAGE_URI"] = IMAGE_URI
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
_____no_output_____
###Markdown
Submit a Vertex AI hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`: max iterations (`max_iter`) and alpha (`alpha`). The file below configures Vertex AI hyperparameter tuning to run up to 5 trials in parallel and to choose from two discrete values of `max_iter` and a linear range between `1.0e-4` and `1.0e-1` for `alpha`.
###Code
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"forestcover_tuning_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
os.environ["JOB_NAME"] = JOB_NAME
os.environ["JOB_DIR"] = JOB_DIR
###Output
_____no_output_____
###Markdown
Exercise: Complete the `config.yaml` file generated below so that the hyperparameter tuning engine tries the following parameter values: for `max_iter`, the two values 10 and 20; for `alpha`, a linear range of values between 1.0e-4 and 1.0e-1. Also complete the `gcloud` command to start the hyperparameter tuning job with a max trial count and a max number of parallel trials of 5 each.
###Code
%%bash
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
CONFIG_YAML=config.yaml
cat <<EOF > $CONFIG_YAML
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
# TODO
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
replicaCount: $REPLICA_COUNT
containerSpec:
imageUri: $IMAGE_URI
args:
- --job_dir=$JOB_DIR
- --training_dataset_path=$TRAINING_FILE_PATH
- --validation_dataset_path=$VALIDATION_FILE_PATH
- --hptune
EOF
gcloud ai hp-tuning-jobs create \
--region=# TODO \
--display-name=# TODO \
--config=# TODO \
--max-trial-count=# TODO \
--parallel-trial-count=# TODO
echo "JOB_NAME: $JOB_NAME"
###Output
_____no_output_____
###Markdown
Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs". Retrieve HP-tuning results. After the job completes you can review the results using the GCP Console or programmatically using the following functions (note that this code assumes that the metric optimized by the hyperparameter tuning engine is maximized): Exercise: Complete the body of the function below to retrieve the best trial for the given job name (`JOB_NAME`):
###Code
def retrieve_best_trial_from_job_name(jobname):
# TODO
return best_trial
###Output
_____no_output_____
###Markdown
You'll need to wait for the hyperparameter tuning job to complete before being able to retrieve the best trial by running the cell below.
###Code
best_trial = retrieve_best_trial_from_job_name(JOB_NAME)
###Output
_____no_output_____
###Markdown
Retrain the model with the best hyperparameters You can now retrain the model using the best hyperparameters and the combined training and validation splits as the training dataset. Configure and run the training job
###Code
alpha = best_trial.parameters[0].value
max_iter = best_trial.parameters[1].value
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"JOB_VERTEX_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
WORKER_POOL_SPEC = f"""\
machine-type={MACHINE_TYPE},\
replica-count={REPLICA_COUNT},\
container-image-uri={IMAGE_URI}\
"""
ARGS = f"""\
--job_dir={JOB_DIR},\
--training_dataset_path={TRAINING_FILE_PATH},\
--validation_dataset_path={VALIDATION_FILE_PATH},\
--alpha={alpha},\
--max_iter={max_iter},\
--nohptune\
"""
!gcloud ai custom-jobs create \
--region={REGION} \
--display-name={JOB_NAME} \
--worker-pool-spec={WORKER_POOL_SPEC} \
--args={ARGS}
print("The model will be exported at:", JOB_DIR)
###Output
_____no_output_____
###Markdown
Examine the training output The training script saved the trained model as 'model.pkl' in the `JOB_DIR` folder on GCS. **Note:** We need to wait for the job triggered by the cell above to complete before running the cells below.
###Code
!gsutil ls $JOB_DIR
###Output
_____no_output_____
###Markdown
Deploy the model to Vertex AI Prediction
###Code
MODEL_NAME = "forest_cover_classifier_2"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest"
)
SERVING_MACHINE_TYPE = "n1-standard-2"
###Output
_____no_output_____
###Markdown
Uploading the trained model Exercise: Upload the trained model using `aiplatform.Model.upload`:
###Code
uploaded_model = # TODO
###Output
_____no_output_____
###Markdown
Deploying the uploaded model Exercise: Deploy the model using `uploaded_model`:
###Code
endpoint = # TODO
###Output
_____no_output_____
###Markdown
Serve predictions Prepare the input with JSON-formatted instances. Exercise: Query the deployed model using `endpoint`:
###Code
instance = [
2841.0,
45.0,
0.0,
644.0,
282.0,
1376.0,
218.0,
237.0,
156.0,
1003.0,
"Commanche",
"C4758",
]
# TODO
###Output
_____no_output_____
###Markdown
Using custom containers with Vertex AI Training**Learning Objectives:**1. Learn how to create a training and a validation split with BigQuery1. Learn how to wrap a machine learning model into a Docker container and train it on Vertex AI1. Learn how to use the hyperparameter tuning engine on Vertex AI to find the best hyperparameters1. Learn how to deploy a trained machine learning model on Vertex AI as a REST API and query it. In this lab, you develop, package as a Docker image, and run on **Vertex AI Training** a training application that trains a multi-class classification model that **predicts the type of forest cover from cartographic data**. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on the **Covertype Data Set** from the UCI Machine Learning Repository. The training code uses `scikit-learn` for data pre-processing and modeling. The code has been instrumented using the `hypertune` package so it can be used with **Vertex AI** hyperparameter tuning.
###Code
import os
import time
import pandas as pd
from google.cloud import aiplatform, bigquery
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for Vertex AI Training and Prediction - `ARTIFACT_STORE` - a GCS bucket created in the same region.
###Code
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"
os.environ["JOB_DIR_ROOT"] = JOB_DIR_ROOT
os.environ["TRAINING_FILE_PATH"] = TRAINING_FILE_PATH
os.environ["VALIDATION_FILE_PATH"] = VALIDATION_FILE_PATH
os.environ["PROJECT_ID"] = PROJECT_ID
os.environ["REGION"] = REGION
###Output
_____no_output_____
###Markdown
We now create the `ARTIFACT_STORE` bucket if it's not there. Note that this bucket should be created in the region specified in the variable `REGION` (if you already have a bucket with this name in a different region than `REGION`, you may want to change the `ARTIFACT_STORE` name so that you can recreate a bucket in `REGION` with the command in the cell below).
###Code
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
###Output
gs://qwiklabs-gcp-04-853e5675f5e8-kfp-artifact-store/
###Markdown
Importing the dataset into BigQuery
###Code
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
###Output
BigQuery error in mk operation: Dataset 'qwiklabs-
gcp-04-853e5675f5e8:covertype_dataset' already exists.
###Markdown
Explore the Covertype dataset
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
Query complete after 0.00s: 100%|██████████| 2/2 [00:00<00:00, 1127.05query/s]
Downloading: 100%|██████████| 100000/100000 [00:02<00:00, 43373.34rows/s]
###Markdown
Create training and validation splits Use BigQuery to sample training and validation splits and save them to GCS storage. Create a training split
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
Waiting on bqjob_r4574ab68f2451d92_0000017f73335f9b_1 ... (0s) Current status: DONE
###Markdown
Create a validation split Exercise
###Code
# TODO: Your code to create the BQ table validation split
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
# TODO: Your code to export the validation table to GCS
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
###Output
(40009, 13)
(9836, 13)
###Markdown
Develop a training application Configure the `sklearn` training pipeline. The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses a stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
("num", StandardScaler(), numeric_feature_indexes),
("cat", OneHotEncoder(), categorical_feature_indexes),
]
)
pipeline = Pipeline(
[
("preprocessor", preprocessor),
("classifier", SGDClassifier(loss="log", tol=1e-3)),
]
)
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64` To avoid warning messages from `StandardScaler`, all numeric features are converted to `float64`.
###Code
num_features_type_map = {
feature: "float64" for feature in df_train.columns[numeric_feature_indexes]
}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop("Cover_Type", axis=1)
y_train = df_train["Cover_Type"]
X_validation = df_validation.drop("Cover_Type", axis=1)
y_validation = df_validation["Cover_Type"]
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
0.6968279788531924
###Markdown
Prepare the hyperparameter tuning application. Since the training run on this dataset is computationally expensive, you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.
###Code
TRAINING_APP_FOLDER = "training_app"
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to Vertex AI hyperparameter tuning service.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
import os
import subprocess
import sys
import fire
import hypertune
import numpy as np
import pandas as pd
import pickle
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
Writing training_app/train.py
###Markdown
Package the script into a Docker image. Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. Exercise: Complete the Dockerfile below so that it copies the 'train.py' file into the container at `/app` and runs it when the container is started.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
# TODO
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
###Output
Writing training_app/Dockerfile
###Markdown
Build the Docker image. You use **Cloud Build** to build the image and push it to your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME = "trainer_image"
IMAGE_TAG = "latest"
IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}"
os.environ["IMAGE_URI"] = IMAGE_URI
!gcloud builds submit --async --tag $IMAGE_URI $TRAINING_APP_FOLDER
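# Because the build was submitted with --async, the cell above returns immediately. A
# short sketch (assuming the Cloud SDK's `gcloud builds list` command) to confirm the
# build finished and the trainer image was pushed before submitting training jobs:
!gcloud builds list --limit=3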
###Output
Creating temporary tarball archive of 2 file(s) totalling 2.6 KiB before compression.
Uploading tarball of [training_app] to [gs://qwiklabs-gcp-04-853e5675f5e8_cloudbuild/source/1646905496.70723-a012a772a5584bb896fac8fd0e2bad1e.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/qwiklabs-gcp-04-853e5675f5e8/locations/global/builds/9e9e5120-7f9f-43f1-8adf-7283b92794fb].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/9e9e5120-7f9f-43f1-8adf-7283b92794fb?project=1076138843678].
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
9e9e5120-7f9f-43f1-8adf-7283b92794fb 2022-03-10T09:45:09+00:00 - gs://qwiklabs-gcp-04-853e5675f5e8_cloudbuild/source/1646905496.70723-a012a772a5584bb896fac8fd0e2bad1e.tgz - QUEUED
###Markdown
Submit a Vertex AI hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`: max iterations (`max_iter`) and alpha (`alpha`). The file below configures Vertex AI hyperparameter tuning to run up to 5 trials in parallel and to choose from two discrete values of `max_iter` and a linear range between `1.0e-4` and `1.0e-1` for `alpha`.
###Code
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"forestcover_tuning_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
os.environ["JOB_NAME"] = JOB_NAME
os.environ["JOB_DIR"] = JOB_DIR
###Output
_____no_output_____
###Markdown
Exercise: Complete the `config.yaml` file generated below so that the hyperparameter tuning engine tries the following parameter values: for `max_iter`, the two values 10 and 20; for `alpha`, a linear range of values between 1.0e-4 and 1.0e-1. Also complete the `gcloud` command to start the hyperparameter tuning job with a max trial count and a max number of parallel trials of 5 each.
###Code
%%bash
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
CONFIG_YAML=config.yaml
cat <<EOF > $CONFIG_YAML
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
- parameterId: max_iter
discreteValueSpec:
values:
- 10
- 20
# TODO
- parameterId: alpha
doubleValueSpec:
minValue: 1.0e-4
maxValue: 1.0e-1
scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
replicaCount: $REPLICA_COUNT
containerSpec:
imageUri: $IMAGE_URI
args:
- --job_dir=$JOB_DIR
- --training_dataset_path=$TRAINING_FILE_PATH
- --validation_dataset_path=$VALIDATION_FILE_PATH
- --hptune
EOF
# TODO
gcloud ai hp-tuning-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--config=$CONFIG_YAML \
--max-trial-count=5 \
--parallel-trial-count=5
echo "JOB_NAME: $JOB_NAME"
###Output
JOB_NAME: forestcover_tuning_20220310_094541
###Markdown
Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs". Retrieve HP-tuning results. After the job completes you can review the results using the GCP Console or programmatically using the following functions (note that this code assumes that the metric optimized by the hyperparameter tuning engine is maximized): Exercise: Complete the body of the function below to retrieve the best trial for the given job name (`JOB_NAME`):
###Code
# TODO
def get_trials(job_name):
jobs = aiplatform.HyperparameterTuningJob.list()
    match = [job for job in jobs if job.display_name == job_name]
tuning_job = match[0] if match else None
return tuning_job.trials if tuning_job else None
def get_best_trial(trials):
metrics = [trial.final_measurement.metrics[0].value for trial in trials]
best_trial = trials[metrics.index(max(metrics))]
return best_trial
def retrieve_best_trial_from_job_name(jobname):
trials = get_trials(jobname)
best_trial = get_best_trial(trials)
return best_trial
###Output
_____no_output_____
###Markdown
You'll need to wait for the hyperparameter tuning job to complete before being able to retrieve the best trial by running the cell below.
###Code
best_trial = retrieve_best_trial_from_job_name(JOB_NAME)
###Output
_____no_output_____
###Markdown
Retrain the model with the best hyperparameters You can now retrain the model using the best hyperparameters and the combined training and validation splits as the training dataset. Configure and run the training job
###Code
alpha = best_trial.parameters[0].value
max_iter = best_trial.parameters[1].value
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"JOB_VERTEX_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
WORKER_POOL_SPEC = f"""\
machine-type={MACHINE_TYPE},\
replica-count={REPLICA_COUNT},\
container-image-uri={IMAGE_URI}\
"""
ARGS = f"""\
--job_dir={JOB_DIR},\
--training_dataset_path={TRAINING_FILE_PATH},\
--validation_dataset_path={VALIDATION_FILE_PATH},\
--alpha={alpha},\
--max_iter={max_iter},\
--nohptune\
"""
!gcloud ai custom-jobs create \
--region={REGION} \
--display-name={JOB_NAME} \
--worker-pool-spec={WORKER_POOL_SPEC} \
--args={ARGS}
print("The model will be exported at:", JOB_DIR)
###Output
_____no_output_____
###Markdown
Examine the training output The training script saved the trained model as 'model.pkl' in the `JOB_DIR` folder on GCS. **Note:** We need to wait for the job triggered by the cell above to complete before running the cells below.
###Code
!gsutil ls $JOB_DIR
###Output
_____no_output_____
###Markdown
Deploy the model to Vertex AI Prediction
###Code
MODEL_NAME = "forest_cover_classifier_2"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest"
)
SERVING_MACHINE_TYPE = "n1-standard-2"
###Output
_____no_output_____
###Markdown
Uploading the trained model Exercise: Upload the trained model using `aiplatform.Model.upload`:
###Code
JOB_DIR
# TODO
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_NAME,
artifact_uri=JOB_DIR,
serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)
###Output
_____no_output_____
###Markdown
Deploying the uploaded model Exercise: Deploy the model using `uploaded_model`:
###Code
# TODO
endpoint = uploaded_model.deploy(
machine_type=SERVING_MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
###Output
_____no_output_____
###Markdown
Serve predictions Prepare the input with JSON-formatted instances. Exercise: Query the deployed model using `endpoint`:
###Code
instance = [
2841.0,
45.0,
0.0,
644.0,
282.0,
1376.0,
218.0,
237.0,
156.0,
1003.0,
"Commanche",
"C4758",
]
# TODO
endpoint.predict([instance])
###Output
_____no_output_____
###Markdown
Using custom containers with Vertex AI Training**Learning Objectives:**1. Learn how to create a training and a validation split with BigQuery1. Learn how to wrap a machine learning model into a Docker container and train it on Vertex AI1. Learn how to use the hyperparameter tuning engine on Vertex AI to find the best hyperparameters1. Learn how to deploy a trained machine learning model on Vertex AI as a REST API and query it. In this lab, you develop, package as a Docker image, and run on **Vertex AI Training** a training application that trains a multi-class classification model that predicts the type of forest cover from cartographic data. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on the **Covertype Data Set** from the UCI Machine Learning Repository. The training code uses `scikit-learn` for data pre-processing and modeling. The code has been instrumented using the `hypertune` package so it can be used with **Vertex AI** hyperparameter tuning.
###Code
import os
import time
import pandas as pd
from google.cloud import aiplatform, bigquery
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for Vertex AI Training and Prediction - `ARTIFACT_STORE` - a GCS bucket created in the same region.
###Code
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"
os.environ["JOB_DIR_ROOT"] = JOB_DIR_ROOT
os.environ["TRAINING_FILE_PATH"] = TRAINING_FILE_PATH
os.environ["VALIDATION_FILE_PATH"] = VALIDATION_FILE_PATH
os.environ["PROJECT_ID"] = PROJECT_ID
os.environ["REGION"] = REGION
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"
os.environ["JOB_DIR_ROOT"] = JOB_DIR_ROOT
os.environ["TRAINING_FILE_PATH"] = TRAINING_FILE_PATH
os.environ["VALIDATION_FILE_PATH"] = VALIDATION_FILE_PATH
os.environ["PROJECT_ID"] = PROJECT_ID
os.environ["REGION"] = REGION
###Output
_____no_output_____
###Markdown
We now create the `ARTIFACT_STORE` bucket if it's not there. Note that this bucket should be created in the region specified in the variable `REGION` (if you already have a bucket with this name in a different region than `REGION`, you may want to change the `ARTIFACT_STORE` name so that you can recreate a bucket in `REGION` with the command in the cell below).
###Code
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
ARTIFACT_STORE
###Output
_____no_output_____
###Markdown
Importing the dataset into BigQuery
###Code
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
!echo $SCHEMA
###Output
###Markdown
Explore the Covertype dataset
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
Query complete after 0.00s: 100%|██████████| 1/1 [00:00<00:00, 703.86query/s]
Downloading: 100%|██████████| 100000/100000 [00:01<00:00, 96698.09rows/s]
###Markdown
Create training and validation splits Use BigQuery to sample training and validation splits and save them to GCS storage. Create a training split
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
Waiting on bqjob_r16988793b405772e_0000017fd6ead4bc_1 ... (0s) Current status: DONE
###Markdown
Create a validation split Exercise
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
VALIDATION_FILE_PATH
###Output
_____no_output_____
###Markdown
Develop a training application Configure the `sklearn` training pipeline. The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses a stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
("num", StandardScaler(), numeric_feature_indexes),
("cat", OneHotEncoder(), categorical_feature_indexes),
]
)
pipeline = Pipeline(
[
("preprocessor", preprocessor),
("classifier", SGDClassifier(loss="log", tol=1e-3)),
]
)
df_train.iloc[:, numeric_feature_indexes].columns, df_train.iloc[
:, categorical_feature_indexes
].columns
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64` To avoid warning messages from `StandardScaler`, all numeric features are converted to `float64`.
###Code
num_features_type_map = {
feature: "float64" for feature in df_train.columns[numeric_feature_indexes]
}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop("Cover_Type", axis=1)
y_train = df_train["Cover_Type"]
X_validation = df_validation.drop("Cover_Type", axis=1)
y_validation = df_validation["Cover_Type"]
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
0.6979463196421309
###Markdown
Prepare the hyperparameter tuning application. Since the training run on this dataset is computationally expensive, you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.
###Code
TRAINING_APP_FOLDER = "training_app"
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
ls -aR
###Output
.:
[0m[01;34m.[0m/ [01;34m.ipynb_checkpoints[0m/ kfp_walkthrough.ipynb [01;34mtraining_app[0m/
[01;34m..[0m/ config.yaml kfp_walkthrough_vertex.ipynb
./.ipynb_checkpoints:
[01;34m.[0m/ kfp_walkthrough-checkpoint.ipynb
[01;34m..[0m/ kfp_walkthrough_vertex-checkpoint.ipynb
./training_app:
[01;34m.[0m/ [01;34m..[0m/ Dockerfile train.py
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to Vertex AI hyperparameter tuning service.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
import os
import subprocess
import sys
import fire
import hypertune
import numpy as np
import pandas as pd
import pickle
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
Overwriting training_app/train.py
###Markdown
Package the script into a Docker image. Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. Exercise: Complete the Dockerfile below so that it copies the 'train.py' file into the container at `/app` and runs it when the container is started.
###Code
pwd
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
###Output
Overwriting training_app/Dockerfile
###Markdown
Build the Docker image. You use **Cloud Build** to build the image and push it to your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME = "trainer_image"
IMAGE_TAG = "latest"
IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}"
os.environ["IMAGE_URI"] = IMAGE_URI
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
Creating temporary tarball archive of 2 file(s) totalling 2.6 KiB before compression.
Uploading tarball of [training_app] to [gs://qwiklabs-gcp-00-b0ca57451bc1_cloudbuild/source/1648616370.732681-1622fc7468054633894b4f7b193ec124.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/qwiklabs-gcp-00-b0ca57451bc1/locations/global/builds/5e987093-39c3-4796-8e1d-9319e85f95cb].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/5e987093-39c3-4796-8e1d-9319e85f95cb?project=530762185509].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "5e987093-39c3-4796-8e1d-9319e85f95cb"
FETCHSOURCE
Fetching storage object: gs://qwiklabs-gcp-00-b0ca57451bc1_cloudbuild/source/1648616370.732681-1622fc7468054633894b4f7b193ec124.tgz#1648616370957023
Copying gs://qwiklabs-gcp-00-b0ca57451bc1_cloudbuild/source/1648616370.732681-1622fc7468054633894b4f7b193ec124.tgz#1648616370957023...
/ [1 files][ 1.2 KiB/ 1.2 KiB]
Operation completed over 1 objects/1.2 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon 5.12kB
Step 1/5 : FROM gcr.io/deeplearning-platform-release/base-cpu
latest: Pulling from deeplearning-platform-release/base-cpu
7c3b88808835: Already exists
382fcf64a9ea: Pulling fs layer
d764c2aa40d3: Pulling fs layer
90cc2e264020: Pulling fs layer
4f4fb700ef54: Pulling fs layer
395e65f0ab42: Pulling fs layer
9e19ad4dbd7d: Pulling fs layer
957c609522d8: Pulling fs layer
6a5e2168e631: Pulling fs layer
5740bb01bc78: Pulling fs layer
be09da654f5c: Pulling fs layer
288d40a4f176: Pulling fs layer
e2d3eec75c0c: Pulling fs layer
3769728eb7d7: Pulling fs layer
211e30f752a4: Pulling fs layer
ae6a5f7af5b1: Pulling fs layer
274bb2dca45b: Pulling fs layer
4105864a46df: Pulling fs layer
be09da654f5c: Waiting
288d40a4f176: Waiting
e2d3eec75c0c: Waiting
3769728eb7d7: Waiting
211e30f752a4: Waiting
ae6a5f7af5b1: Waiting
274bb2dca45b: Waiting
4105864a46df: Waiting
4f4fb700ef54: Waiting
395e65f0ab42: Waiting
957c609522d8: Waiting
9e19ad4dbd7d: Waiting
5740bb01bc78: Waiting
6a5e2168e631: Waiting
382fcf64a9ea: Verifying Checksum
382fcf64a9ea: Download complete
4f4fb700ef54: Verifying Checksum
4f4fb700ef54: Download complete
382fcf64a9ea: Pull complete
395e65f0ab42: Verifying Checksum
395e65f0ab42: Download complete
9e19ad4dbd7d: Verifying Checksum
9e19ad4dbd7d: Download complete
957c609522d8: Download complete
90cc2e264020: Download complete
5740bb01bc78: Verifying Checksum
5740bb01bc78: Download complete
6a5e2168e631: Verifying Checksum
6a5e2168e631: Download complete
288d40a4f176: Verifying Checksum
288d40a4f176: Download complete
be09da654f5c: Verifying Checksum
be09da654f5c: Download complete
e2d3eec75c0c: Verifying Checksum
e2d3eec75c0c: Download complete
3769728eb7d7: Verifying Checksum
3769728eb7d7: Download complete
211e30f752a4: Verifying Checksum
211e30f752a4: Download complete
ae6a5f7af5b1: Verifying Checksum
ae6a5f7af5b1: Download complete
4105864a46df: Verifying Checksum
4105864a46df: Download complete
d764c2aa40d3: Verifying Checksum
d764c2aa40d3: Download complete
274bb2dca45b: Verifying Checksum
274bb2dca45b: Download complete
d764c2aa40d3: Pull complete
90cc2e264020: Pull complete
4f4fb700ef54: Pull complete
395e65f0ab42: Pull complete
9e19ad4dbd7d: Pull complete
957c609522d8: Pull complete
6a5e2168e631: Pull complete
5740bb01bc78: Pull complete
be09da654f5c: Pull complete
288d40a4f176: Pull complete
e2d3eec75c0c: Pull complete
3769728eb7d7: Pull complete
211e30f752a4: Pull complete
ae6a5f7af5b1: Pull complete
274bb2dca45b: Pull complete
4105864a46df: Pull complete
Digest: sha256:5290a56a15cebd867722be8bdfd859ef959ffd14f85979a9fbd80c5c2760c3a1
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest
---> 0db22ebb67a2
Step 2/5 : RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
---> Running in 9a8234907f38
Collecting fire
Downloading fire-0.4.0.tar.gz (87 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 87.7/87.7 KB 4.7 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting cloudml-hypertune
Downloading cloudml-hypertune-0.1.0.dev6.tar.gz (3.2 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting scikit-learn==0.20.4
Downloading scikit_learn-0.20.4-cp37-cp37m-manylinux1_x86_64.whl (5.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.4/5.4 MB 38.7 MB/s eta 0:00:00
Collecting pandas==0.24.2
Downloading pandas-0.24.2-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.1/10.1 MB 51.3 MB/s eta 0:00:00
Requirement already satisfied: scipy>=0.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.7.3)
Requirement already satisfied: numpy>=1.8.2 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.19.5)
Requirement already satisfied: pytz>=2011k in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2021.3)
Requirement already satisfied: python-dateutil>=2.5.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2.8.2)
Requirement already satisfied: six in /opt/conda/lib/python3.7/site-packages (from fire) (1.16.0)
Collecting termcolor
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Building wheels for collected packages: fire, cloudml-hypertune, termcolor
Building wheel for fire (setup.py): started
Building wheel for fire (setup.py): finished with status 'done'
Created wheel for fire: filename=fire-0.4.0-py2.py3-none-any.whl size=115942 sha256=29d363acde4f498d7f0de76c7f61b08254d99abc01fdf7dbd5f1de2afe4f402e
Stored in directory: /root/.cache/pip/wheels/8a/67/fb/2e8a12fa16661b9d5af1f654bd199366799740a85c64981226
Building wheel for cloudml-hypertune (setup.py): started
Building wheel for cloudml-hypertune (setup.py): finished with status 'done'
Created wheel for cloudml-hypertune: filename=cloudml_hypertune-0.1.0.dev6-py2.py3-none-any.whl size=3987 sha256=ad348b4f40d5f240665aaea86517646e013193777c3927eede5e87f794c50d23
Stored in directory: /root/.cache/pip/wheels/a7/ff/87/e7bed0c2741fe219b3d6da67c2431d7f7fedb183032e00f81e
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4848 sha256=891ce99d9e3224489854e79607c327f93ed1df416066e5f325a684ffd226bb8e
Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
Successfully built fire cloudml-hypertune termcolor
Installing collected packages: termcolor, cloudml-hypertune, fire, scikit-learn, pandas
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 1.0.2
Uninstalling scikit-learn-1.0.2:
Successfully uninstalled scikit-learn-1.0.2
Attempting uninstall: pandas
Found existing installation: pandas 1.3.5
Uninstalling pandas-1.3.5:
Successfully uninstalled pandas-1.3.5
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
visions 0.7.1 requires pandas>=0.25.3, but you have pandas 0.24.2 which is incompatible.
statsmodels 0.13.2 requires pandas>=0.25, but you have pandas 0.24.2 which is incompatible.
phik 0.12.0 requires pandas>=0.25.1, but you have pandas 0.24.2 which is incompatible.
pandas-profiling 3.0.0 requires pandas!=1.0.0,!=1.0.1,!=1.0.2,!=1.1.0,>=0.25.3, but you have pandas 0.24.2 which is incompatible.
pandas-profiling 3.0.0 requires tangled-up-in-unicode==0.1.0, but you have tangled-up-in-unicode 0.2.0 which is incompatible.
Successfully installed cloudml-hypertune-0.1.0.dev6 fire-0.4.0 pandas-0.24.2 scikit-learn-0.20.4 termcolor-1.1.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Removing intermediate container 9a8234907f38
---> 386b6c26c139
Step 3/5 : WORKDIR /app
---> Running in eb90bdca5549
Removing intermediate container eb90bdca5549
---> 35045fa07674
Step 4/5 : COPY train.py .
---> 84b1e9c3cdeb
Step 5/5 : ENTRYPOINT ["python", "train.py"]
---> Running in 43ac15745ddd
Removing intermediate container 43ac15745ddd
---> fb248f26d24e
Successfully built fb248f26d24e
Successfully tagged gcr.io/qwiklabs-gcp-00-b0ca57451bc1/trainer_image:latest
PUSH
Pushing gcr.io/qwiklabs-gcp-00-b0ca57451bc1/trainer_image:latest
The push refers to repository [gcr.io/qwiklabs-gcp-00-b0ca57451bc1/trainer_image]
085e25579e32: Preparing
74a393a48853: Preparing
f425867e2283: Preparing
83a0dd2b9e38: Preparing
9638e29d8d24: Preparing
b3ab95a574c8: Preparing
d1b010151b48: Preparing
b80bc089358e: Preparing
11bc9b36546a: Preparing
43d282ce8d0b: Preparing
69fd467ac3a5: Preparing
ed4291c31559: Preparing
4bf5ae11254c: Preparing
0d592bcbe281: Preparing
770c4c112e39: Preparing
1874048fd290: Preparing
5f70bf18a086: Preparing
7e897a45d8d8: Preparing
42826651fb01: Preparing
4236d5cafaa0: Preparing
68a85fa9d77e: Preparing
b3ab95a574c8: Waiting
d1b010151b48: Waiting
b80bc089358e: Waiting
11bc9b36546a: Waiting
43d282ce8d0b: Waiting
69fd467ac3a5: Waiting
ed4291c31559: Waiting
4bf5ae11254c: Waiting
0d592bcbe281: Waiting
770c4c112e39: Waiting
1874048fd290: Waiting
5f70bf18a086: Waiting
7e897a45d8d8: Waiting
42826651fb01: Waiting
4236d5cafaa0: Waiting
68a85fa9d77e: Waiting
9638e29d8d24: Layer already exists
b3ab95a574c8: Layer already exists
83a0dd2b9e38: Layer already exists
d1b010151b48: Layer already exists
b80bc089358e: Layer already exists
11bc9b36546a: Layer already exists
43d282ce8d0b: Layer already exists
69fd467ac3a5: Layer already exists
ed4291c31559: Layer already exists
4bf5ae11254c: Layer already exists
0d592bcbe281: Layer already exists
770c4c112e39: Layer already exists
5f70bf18a086: Layer already exists
1874048fd290: Layer already exists
42826651fb01: Layer already exists
7e897a45d8d8: Layer already exists
68a85fa9d77e: Layer already exists
4236d5cafaa0: Layer already exists
085e25579e32: Pushed
74a393a48853: Pushed
f425867e2283: Pushed
latest: digest: sha256:0d60f71024f2acdf6399024223b2f22ac24393022cddd5b9f562eff1aa17cbe5 size: 4707
DONE
--------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
5e987093-39c3-4796-8e1d-9319e85f95cb 2022-03-30T04:59:31+00:00 2M10S gs://qwiklabs-gcp-00-b0ca57451bc1_cloudbuild/source/1648616370.732681-1622fc7468054633894b4f7b193ec124.tgz gcr.io/qwiklabs-gcp-00-b0ca57451bc1/trainer_image (+1 more) SUCCESS
###Markdown
Submit a Vertex AI hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`:- Max iterations- AlphaThe file below configures Vertex AI hypertuning to run up to 5 trials in parallel and to choose from two discrete values of `max_iter` and the linear range between `1.0e-4` and `1.0e-1` for `alpha`.
###Code
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"forestcover_tuning_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
os.environ["JOB_NAME"] = JOB_NAME
os.environ["JOB_DIR"] = JOB_DIR
###Output
_____no_output_____
###Markdown
ExerciseComplete the `config.yaml` file generated below so that the hyperparameter tuning engine tries the following parameter values:* `max_iter`: the two discrete values 10 and 20* `alpha`: a linear range of values between 1.0e-4 and 1.0e-1Also complete the `gcloud` command to start the hyperparameter tuning job with a max trial count and a max number of parallel trials of 5 each.
###Code
%%bash
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
CONFIG_YAML=config.yaml
cat <<EOF > $CONFIG_YAML
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
- parameterId: max_iter
discreteValueSpec:
values:
- 10
- 20
- parameterId: alpha
doubleValueSpec:
minValue: 1.0e-4
maxValue: 1.0e-1
scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
replicaCount: $REPLICA_COUNT
containerSpec:
imageUri: $IMAGE_URI
args:
- --job_dir=$JOB_DIR
- --training_dataset_path=$TRAINING_FILE_PATH
- --validation_dataset_path=$VALIDATION_FILE_PATH
- --hptune
EOF
gcloud ai hp-tuning-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--config=config.yaml \
--max-trial-count=5 \
--parallel-trial-count=5
echo "JOB_NAME: $JOB_NAME"
###Output
JOB_NAME: forestcover_tuning_20220330_050227
###Markdown
Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs". Retrieve HP-tuning results. After the job completes, you can review the results using the GCP Console or programmatically using the following function (note that this code assumes that the metric the hyperparameter tuning engine optimizes is maximized): ExerciseComplete the body of the function below to retrieve the best trial from `JOB_NAME`:
###Code
def retrieve_best_trial_from_job_name(jobname):
# TODO
return best_trial
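# --- Added sketch (not the official lab solution): one possible way to fill in the
# function above. Assumptions: the google-cloud-aiplatform SDK (imported earlier as
# `aiplatform`) exposes HyperparameterTuningJob.list() with a display-name filter and
# a .trials property on the returned job, and the reported metric is maximized, so
# the best trial is the one with the largest final measurement value.
def retrieve_best_trial_from_job_name_sketch(jobname):
    jobs = aiplatform.HyperparameterTuningJob.list(
        filter=f'display_name="{jobname}"',
        project=PROJECT_ID,
        location=REGION,
    )
    # Assumes exactly one tuning job matches the display name.
    trials = jobs[0].trials
    # Pick the trial whose final 'accuracy' measurement is the highest.
    best_trial = max(
        trials, key=lambda trial: trial.final_measurement.metrics[0].value
    )
    return best_trial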
###Output
_____no_output_____
###Markdown
You'll need to wait for the hyperparameter tuning job to complete before you can retrieve the best trial by running the cell below.
###Code
best_trial = retrieve_best_trial_from_job_name(JOB_NAME)
###Output
_____no_output_____
###Markdown
Retrain the model with the best hyperparametersYou can now retrain the model using the best hyperparameters, with the combined training and validation splits as the training dataset. Configure and run the training job
###Code
alpha = best_trial.parameters[0].value
max_iter = best_trial.parameters[1].value
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"JOB_VERTEX_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
WORKER_POOL_SPEC = f"""\
machine-type={MACHINE_TYPE},\
replica-count={REPLICA_COUNT},\
container-image-uri={IMAGE_URI}\
"""
ARGS = f"""\
--job_dir={JOB_DIR},\
--training_dataset_path={TRAINING_FILE_PATH},\
--validation_dataset_path={VALIDATION_FILE_PATH},\
--alpha={alpha},\
--max_iter={max_iter},\
--nohptune\
"""
!gcloud ai custom-jobs create \
--region={REGION} \
--display-name={JOB_NAME} \
--worker-pool-spec={WORKER_POOL_SPEC} \
--args={ARGS}
print("The model will be exported at:", JOB_DIR)
###Output
_____no_output_____
###Markdown
Examine the training outputThe training script saved the trained model as 'model.pkl' in the `JOB_DIR` folder on GCS.**Note:** We need to wait for the job triggered by the cell above to complete before running the cells below.
###Code
!gsutil ls $JOB_DIR
###Output
_____no_output_____
###Markdown
Deploy the model to Vertex AI Prediction
###Code
MODEL_NAME = "forest_cover_classifier_2"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest"
)
SERVING_MACHINE_TYPE = "n1-standard-2"
###Output
_____no_output_____
###Markdown
Uploading the trained model ExerciseUpload the trained model using `aiplatform.Model.upload`:
###Code
uploaded_model = # TODO
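# A hedged sketch, for reference only (kept commented out so it does not pre-empt the
# exercise). Assumptions: JOB_DIR holds the exported model.pkl and the
# aiplatform.Model.upload keyword arguments below are unchanged; it registers the
# pickled pipeline with the prebuilt sklearn serving image:
#
# uploaded_model = aiplatform.Model.upload(
#     display_name=MODEL_NAME,
#     artifact_uri=JOB_DIR,
#     serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
# )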
###Output
_____no_output_____
###Markdown
Deploying the uploaded model ExerciseDeploy the model using `uploaded_model`:
###Code
endpoint = # TODO
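# A hedged sketch (commented out), assuming `uploaded_model` was created in the
# previous step; Model.deploy provisions an endpoint, deploys the model to it,
# and returns the endpoint:
#
# endpoint = uploaded_model.deploy(
#     machine_type=SERVING_MACHINE_TYPE,
# )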
###Output
_____no_output_____
###Markdown
Serve predictions Prepare the input file with JSON formatted instances. ExerciseQuery the deployed model using `endpoint`:
###Code
instance = [
2841.0,
45.0,
0.0,
644.0,
282.0,
1376.0,
218.0,
237.0,
156.0,
1003.0,
"Commanche",
"C4758",
]
# TODO
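# A possible sketch (commented out), assuming `endpoint` comes from the deploy step
# above: Endpoint.predict takes a list of instances and returns a Prediction object
# whose .predictions field holds the predicted cover type for each instance.
#
# prediction = endpoint.predict(instances=[instance])
# print(prediction.predictions)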
###Output
_____no_output_____
###Markdown
Using custom containers with Vertex AI Training**Learning Objectives:**1. Learn how to create a training and a validation split with BigQuery1. Learn how to wrap a machine learning model into a Docker container and train it on Vertex AI1. Learn how to use the hyperparameter tuning engine on Vertex AI to find the best hyperparameters1. Learn how to deploy a trained machine learning model on Vertex AI as a REST API and query itIn this lab, you develop, package as a docker image, and run on **Vertex AI Training** a training application that trains a multi-class classification model that predicts the type of forest cover from cartographic data. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on the **Covertype Data Set** from the UCI Machine Learning Repository.The training code uses `scikit-learn` for data pre-processing and modeling. The code has been instrumented using the `hypertune` package so it can be used with **Vertex AI** hyperparameter tuning.
###Code
import os
import time
import pandas as pd
from google.cloud import aiplatform, bigquery
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for Vertex AI Training and Prediction- `ARTIFACT_STORE` - A GCS bucket created in the same region.
###Code
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"
os.environ["JOB_DIR_ROOT"] = JOB_DIR_ROOT
os.environ["TRAINING_FILE_PATH"] = TRAINING_FILE_PATH
os.environ["VALIDATION_FILE_PATH"] = VALIDATION_FILE_PATH
os.environ["PROJECT_ID"] = PROJECT_ID
os.environ["REGION"] = REGION
###Output
_____no_output_____
###Markdown
We now create the `ARTIFACT_STORE` bucket if it's not there. Note that this bucket should be created in the region specified in the variable `REGION` (if you already have a bucket with this name in a region other than `REGION`, you may want to change the `ARTIFACT_STORE` name so that you can recreate a bucket in `REGION` with the command in the cell below).
###Code
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
###Output
gs://qwiklabs-gcp-01-9a9d18213c32-kfp-artifact-store/
###Markdown
Importing the dataset into BigQuery
###Code
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
###Output
BigQuery error in mk operation: Dataset 'qwiklabs-
gcp-01-9a9d18213c32:covertype_dataset' already exists.
###Markdown
Explore the Covertype dataset
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
Query complete after 0.00s: 100%|██████████| 2/2 [00:00<00:00, 733.08query/s]
Downloading: 100%|██████████| 100000/100000 [00:00<00:00, 101398.37rows/s]
###Markdown
Create training and validation splitsUse BigQuery to sample training and validation splits and save them to GCS storage Create a training split
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
Waiting on bqjob_r9f29196983fef59_0000017fdaf2daa3_1 ... (0s) Current status: DONE
###Markdown
Create a validation split Exercise
###Code
# TODO: Your code to create the BQ table validation split
!bq query \
-n 0 \
    --destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (5)'
# TODO: Your code to export the validation table to GCS
!bq extract \
--destination_format CSV \
    covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
###Output
(40009, 13)
(10027, 13)
###Markdown
Develop a training application Configure the `sklearn` training pipeline.The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses a stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
("num", StandardScaler(), numeric_feature_indexes),
("cat", OneHotEncoder(), categorical_feature_indexes),
]
)
pipeline = Pipeline(
[
("preprocessor", preprocessor),
("classifier", SGDClassifier(loss="log", tol=1e-3)),
]
)
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64`To avoid warning messages from `StandardScaler` all numeric features are converted to `float64`.
###Code
num_features_type_map = {
feature: "float64" for feature in df_train.columns[numeric_feature_indexes]
}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop("Cover_Type", axis=1)
y_train = df_train["Cover_Type"]
X_validation = df_validation.drop("Cover_Type", axis=1)
y_validation = df_validation["Cover_Type"]
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
0.6956218210830757
###Markdown
Prepare the hyperparameter tuning application.Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.
###Code
TRAINING_APP_FOLDER = "training_app"
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to Vertex AI hyperparameter tuning service.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
import os
import subprocess
import sys
import fire
import hypertune
import numpy as np
import pandas as pd
import pickle
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
Overwriting training_app/train.py
###Markdown
Package the script into a docker image.Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. ExerciseComplete the Dockerfile below so that it copies the 'train.py' file into the container at `/app` and runs it when the container is started.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
!ls training_app/train.py
###Output
training_app/train.py
###Markdown
Build the docker image. You use **Cloud Build** to build the image and push it to your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME = "trainer_image"
IMAGE_TAG = "latest"
IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}"
os.environ["IMAGE_URI"] = IMAGE_URI
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
Creating temporary tarball archive of 3 file(s) totalling 2.7 KiB before compression.
Uploading tarball of [training_app] to [gs://qwiklabs-gcp-01-9a9d18213c32_cloudbuild/source/1648645877.943223-5b5b40d9dd8749e4bdf0239833d24877.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/qwiklabs-gcp-01-9a9d18213c32/locations/global/builds/2a05aa78-892b-4c9f-86da-63d77823b73b].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/2a05aa78-892b-4c9f-86da-63d77823b73b?project=785019792420].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "2a05aa78-892b-4c9f-86da-63d77823b73b"
FETCHSOURCE
Fetching storage object: gs://qwiklabs-gcp-01-9a9d18213c32_cloudbuild/source/1648645877.943223-5b5b40d9dd8749e4bdf0239833d24877.tgz#1648645878170428
Copying gs://qwiklabs-gcp-01-9a9d18213c32_cloudbuild/source/1648645877.943223-5b5b40d9dd8749e4bdf0239833d24877.tgz#1648645878170428...
/ [1 files][ 1.4 KiB/ 1.4 KiB]
Operation completed over 1 objects/1.4 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon 6.656kB
Step 1/3 : FROM gcr.io/deeplearning-platform-release/base-cpu
latest: Pulling from deeplearning-platform-release/base-cpu
7c3b88808835: Already exists
382fcf64a9ea: Pulling fs layer
d764c2aa40d3: Pulling fs layer
90cc2e264020: Pulling fs layer
4f4fb700ef54: Pulling fs layer
395e65f0ab42: Pulling fs layer
4f4fb700ef54: Waiting
9e19ad4dbd7d: Pulling fs layer
957c609522d8: Pulling fs layer
6a5e2168e631: Pulling fs layer
5740bb01bc78: Pulling fs layer
be09da654f5c: Pulling fs layer
288d40a4f176: Pulling fs layer
e2d3eec75c0c: Pulling fs layer
3769728eb7d7: Pulling fs layer
211e30f752a4: Pulling fs layer
ae6a5f7af5b1: Pulling fs layer
274bb2dca45b: Pulling fs layer
4105864a46df: Pulling fs layer
6a5e2168e631: Waiting
5740bb01bc78: Waiting
be09da654f5c: Waiting
288d40a4f176: Waiting
e2d3eec75c0c: Waiting
3769728eb7d7: Waiting
211e30f752a4: Waiting
ae6a5f7af5b1: Waiting
274bb2dca45b: Waiting
4105864a46df: Waiting
395e65f0ab42: Waiting
9e19ad4dbd7d: Waiting
957c609522d8: Waiting
382fcf64a9ea: Verifying Checksum
382fcf64a9ea: Download complete
4f4fb700ef54: Verifying Checksum
4f4fb700ef54: Download complete
382fcf64a9ea: Pull complete
395e65f0ab42: Verifying Checksum
395e65f0ab42: Download complete
9e19ad4dbd7d: Verifying Checksum
9e19ad4dbd7d: Download complete
957c609522d8: Verifying Checksum
957c609522d8: Download complete
6a5e2168e631: Verifying Checksum
6a5e2168e631: Download complete
90cc2e264020: Verifying Checksum
90cc2e264020: Download complete
5740bb01bc78: Verifying Checksum
5740bb01bc78: Download complete
be09da654f5c: Verifying Checksum
be09da654f5c: Download complete
e2d3eec75c0c: Verifying Checksum
e2d3eec75c0c: Download complete
288d40a4f176: Verifying Checksum
288d40a4f176: Download complete
3769728eb7d7: Verifying Checksum
3769728eb7d7: Download complete
211e30f752a4: Verifying Checksum
211e30f752a4: Download complete
ae6a5f7af5b1: Verifying Checksum
ae6a5f7af5b1: Download complete
4105864a46df: Verifying Checksum
4105864a46df: Download complete
d764c2aa40d3: Verifying Checksum
d764c2aa40d3: Download complete
274bb2dca45b: Verifying Checksum
274bb2dca45b: Download complete
d764c2aa40d3: Pull complete
90cc2e264020: Pull complete
4f4fb700ef54: Pull complete
395e65f0ab42: Pull complete
9e19ad4dbd7d: Pull complete
957c609522d8: Pull complete
6a5e2168e631: Pull complete
5740bb01bc78: Pull complete
be09da654f5c: Pull complete
288d40a4f176: Pull complete
e2d3eec75c0c: Pull complete
3769728eb7d7: Pull complete
211e30f752a4: Pull complete
ae6a5f7af5b1: Pull complete
274bb2dca45b: Pull complete
4105864a46df: Pull complete
Digest: sha256:5290a56a15cebd867722be8bdfd859ef959ffd14f85979a9fbd80c5c2760c3a1
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest
---> 0db22ebb67a2
Step 2/3 : RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
---> Running in a707391a0b4f
Collecting fire
Downloading fire-0.4.0.tar.gz (87 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 87.7/87.7 KB 11.5 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting cloudml-hypertune
Downloading cloudml-hypertune-0.1.0.dev6.tar.gz (3.2 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting scikit-learn==0.20.4
Downloading scikit_learn-0.20.4-cp37-cp37m-manylinux1_x86_64.whl (5.4 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.4/5.4 MB 48.6 MB/s eta 0:00:00
Collecting pandas==0.24.2
Downloading pandas-0.24.2-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.1/10.1 MB 53.6 MB/s eta 0:00:00
Requirement already satisfied: scipy>=0.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.7.3)
Requirement already satisfied: numpy>=1.8.2 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.19.5)
Requirement already satisfied: python-dateutil>=2.5.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2.8.2)
Requirement already satisfied: pytz>=2011k in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2021.3)
Requirement already satisfied: six in /opt/conda/lib/python3.7/site-packages (from fire) (1.16.0)
Collecting termcolor
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Building wheels for collected packages: fire, cloudml-hypertune, termcolor
Building wheel for fire (setup.py): started
Building wheel for fire (setup.py): finished with status 'done'
Created wheel for fire: filename=fire-0.4.0-py2.py3-none-any.whl size=115942 sha256=3f450b691286e19e266c267fa273e75b8ff21569caf37d3a69b53686368bd74a
Stored in directory: /root/.cache/pip/wheels/8a/67/fb/2e8a12fa16661b9d5af1f654bd199366799740a85c64981226
Building wheel for cloudml-hypertune (setup.py): started
Building wheel for cloudml-hypertune (setup.py): finished with status 'done'
Created wheel for cloudml-hypertune: filename=cloudml_hypertune-0.1.0.dev6-py2.py3-none-any.whl size=3987 sha256=2907985c9013bfa16dd863d1b6b3a104f3ee523d8ff0094b12c69f26c149e948
Stored in directory: /root/.cache/pip/wheels/a7/ff/87/e7bed0c2741fe219b3d6da67c2431d7f7fedb183032e00f81e
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4848 sha256=e0145fcef50aa0327acdca982154c0aedfe2577634deef5c417194cc911e1ce6
Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
Successfully built fire cloudml-hypertune termcolor
Installing collected packages: termcolor, cloudml-hypertune, fire, scikit-learn, pandas
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 1.0.2
Uninstalling scikit-learn-1.0.2:
Successfully uninstalled scikit-learn-1.0.2
Attempting uninstall: pandas
Found existing installation: pandas 1.3.5
Uninstalling pandas-1.3.5:
Successfully uninstalled pandas-1.3.5
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
visions 0.7.1 requires pandas>=0.25.3, but you have pandas 0.24.2 which is incompatible.
statsmodels 0.13.2 requires pandas>=0.25, but you have pandas 0.24.2 which is incompatible.
phik 0.12.0 requires pandas>=0.25.1, but you have pandas 0.24.2 which is incompatible.
pandas-profiling 3.0.0 requires pandas!=1.0.0,!=1.0.1,!=1.0.2,!=1.1.0,>=0.25.3, but you have pandas 0.24.2 which is incompatible.
pandas-profiling 3.0.0 requires tangled-up-in-unicode==0.1.0, but you have tangled-up-in-unicode 0.2.0 which is incompatible.
Successfully installed cloudml-hypertune-0.1.0.dev6 fire-0.4.0 pandas-0.24.2 scikit-learn-0.20.4 termcolor-1.1.0
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Removing intermediate container a707391a0b4f
---> 8462e0d41239
Step 3/3 : ADD train.py /app
---> 24993d1316cf
Successfully built 24993d1316cf
Successfully tagged gcr.io/qwiklabs-gcp-01-9a9d18213c32/trainer_image:latest
PUSH
Pushing gcr.io/qwiklabs-gcp-01-9a9d18213c32/trainer_image:latest
The push refers to repository [gcr.io/qwiklabs-gcp-01-9a9d18213c32/trainer_image]
9a9c0bfd9e79: Preparing
a39c89c2134b: Preparing
83a0dd2b9e38: Preparing
9638e29d8d24: Preparing
b3ab95a574c8: Preparing
d1b010151b48: Preparing
b80bc089358e: Preparing
11bc9b36546a: Preparing
43d282ce8d0b: Preparing
69fd467ac3a5: Preparing
ed4291c31559: Preparing
4bf5ae11254c: Preparing
0d592bcbe281: Preparing
770c4c112e39: Preparing
1874048fd290: Preparing
5f70bf18a086: Preparing
7e897a45d8d8: Preparing
42826651fb01: Preparing
4236d5cafaa0: Preparing
68a85fa9d77e: Preparing
d1b010151b48: Waiting
b80bc089358e: Waiting
11bc9b36546a: Waiting
43d282ce8d0b: Waiting
69fd467ac3a5: Waiting
ed4291c31559: Waiting
4bf5ae11254c: Waiting
0d592bcbe281: Waiting
770c4c112e39: Waiting
1874048fd290: Waiting
5f70bf18a086: Waiting
7e897a45d8d8: Waiting
42826651fb01: Waiting
4236d5cafaa0: Waiting
68a85fa9d77e: Waiting
9638e29d8d24: Layer already exists
83a0dd2b9e38: Layer already exists
b3ab95a574c8: Layer already exists
d1b010151b48: Layer already exists
11bc9b36546a: Layer already exists
b80bc089358e: Layer already exists
ed4291c31559: Layer already exists
43d282ce8d0b: Layer already exists
69fd467ac3a5: Layer already exists
770c4c112e39: Layer already exists
4bf5ae11254c: Layer already exists
0d592bcbe281: Layer already exists
7e897a45d8d8: Layer already exists
1874048fd290: Layer already exists
5f70bf18a086: Layer already exists
42826651fb01: Layer already exists
4236d5cafaa0: Layer already exists
68a85fa9d77e: Layer already exists
9a9c0bfd9e79: Pushed
a39c89c2134b: Pushed
latest: digest: sha256:6ca44732c6feba74eb210032319c4fbd32fa22fbfdfd54d79f72e519f71a9bb2 size: 4499
DONE
--------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
2a05aa78-892b-4c9f-86da-63d77823b73b 2022-03-30T13:11:18+00:00 2M11S gs://qwiklabs-gcp-01-9a9d18213c32_cloudbuild/source/1648645877.943223-5b5b40d9dd8749e4bdf0239833d24877.tgz gcr.io/qwiklabs-gcp-01-9a9d18213c32/trainer_image (+1 more) SUCCESS
###Markdown
Submit a Vertex AI hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`:- Max iterations- AlphaThe file below configures Vertex AI hypertuning to run up to 5 trials in parallel and to choose from two discrete values of `max_iter` and the linear range between `1.0e-4` and `1.0e-1` for `alpha`.
###Code
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"forestcover_tuning_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
os.environ["JOB_NAME"] = JOB_NAME
os.environ["JOB_DIR"] = JOB_DIR
###Output
_____no_output_____
###Markdown
ExerciseComplete the `config.yaml` file generated below so that the hyperparameter tuning engine tries the following parameter values:* `max_iter`: the two discrete values 10 and 20* `alpha`: a linear range of values between 1.0e-4 and 1.0e-1Also complete the `gcloud` command to start the hyperparameter tuning job with a max trial count and a max number of parallel trials of 5 each.
###Code
%%bash
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
CONFIG_YAML=config.yaml
cat <<EOF > $CONFIG_YAML
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
  - parameterId: max_iter
    discreteValueSpec:
      values:
      - 10
      - 20
  - parameterId: alpha
    doubleValueSpec:
      minValue: 1.0e-4
      maxValue: 1.0e-1
    scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
replicaCount: $REPLICA_COUNT
containerSpec:
imageUri: $IMAGE_URI
args:
- --job_dir=$JOB_DIR
- --training_dataset_path=$TRAINING_FILE_PATH
- --validation_dataset_path=$VALIDATION_FILE_PATH
- --hptune
EOF
gcloud ai hp-tuning-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--config=$CONFIG_YAML \
    --max-trial-count=5 \
    --parallel-trial-count=5
echo "JOB_NAME: $JOB_NAME"
###Output
JOB_NAME: forestcover_tuning_20220330_131346
###Markdown
Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs". Retrieve HP-tuning results. After the job completes, you can review the results using the GCP Console or programmatically using the following function (note that this code assumes that the metric the hyperparameter tuning engine optimizes is maximized): ExerciseComplete the body of the function below to retrieve the best trial from `JOB_NAME`:
###Code
def retrieve_best_trial_from_job_name(jobname):
# TODO
return best_trial
###Output
_____no_output_____
###Markdown
You'll need to wait for the hyperparameter tuning job to complete before you can retrieve the best trial by running the cell below.
###Code
best_trial = retrieve_best_trial_from_job_name(JOB_NAME)
###Output
_____no_output_____
###Markdown
Retrain the model with the best hyperparametersYou can now retrain the model using the best hyperparameters, with the combined training and validation splits as the training dataset. Configure and run the training job
###Code
alpha = best_trial.parameters[0].value
max_iter = best_trial.parameters[1].value
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"JOB_VERTEX_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
WORKER_POOL_SPEC = f"""\
machine-type={MACHINE_TYPE},\
replica-count={REPLICA_COUNT},\
container-image-uri={IMAGE_URI}\
"""
ARGS = f"""\
--job_dir={JOB_DIR},\
--training_dataset_path={TRAINING_FILE_PATH},\
--validation_dataset_path={VALIDATION_FILE_PATH},\
--alpha={alpha},\
--max_iter={max_iter},\
--nohptune\
"""
!gcloud ai custom-jobs create \
--region={REGION} \
--display-name={JOB_NAME} \
--worker-pool-spec={WORKER_POOL_SPEC} \
--args={ARGS}
print("The model will be exported at:", JOB_DIR)
###Output
_____no_output_____
###Markdown
Examine the training outputThe training script saved the trained model as 'model.pkl' in the `JOB_DIR` folder on GCS.**Note:** We need to wait for the job triggered by the cell above to complete before running the cells below.
###Code
!gsutil ls $JOB_DIR
###Output
_____no_output_____
###Markdown
Deploy the model to Vertex AI Prediction
###Code
MODEL_NAME = "forest_cover_classifier_2"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest"
)
SERVING_MACHINE_TYPE = "n1-standard-2"
###Output
_____no_output_____
###Markdown
Uploading the trained model ExerciseUpload the trained model using `aiplatform.Model.upload`:
###Code
uploaded_model = # TODO
###Output
_____no_output_____
###Markdown
Deploying the uploaded model ExerciseDeploy the model using `uploaded_model`:
###Code
endpoint = # TODO
###Output
_____no_output_____
###Markdown
Serve predictions Prepare the input file with JSON formatted instances. ExerciseQuery the deployed model using `endpoint`:
###Code
instance = [
2841.0,
45.0,
0.0,
644.0,
282.0,
1376.0,
218.0,
237.0,
156.0,
1003.0,
"Commanche",
"C4758",
]
# TODO
###Output
_____no_output_____
###Markdown
Using custom containers with Vertex AI Training**Learning Objectives:**1. Learn how to create a training and a validation split with BigQuery1. Learn how to wrap a machine learning model into a Docker container and train it on Vertex AI1. Learn how to use the hyperparameter tuning engine on Vertex AI to find the best hyperparameters1. Learn how to deploy a trained machine learning model on Vertex AI as a REST API and query itIn this lab, you develop, package as a docker image, and run on **Vertex AI Training** a training application that trains a multi-class classification model that predicts the type of forest cover from cartographic data. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on the **Covertype Data Set** from the UCI Machine Learning Repository.The training code uses `scikit-learn` for data pre-processing and modeling. The code has been instrumented using the `hypertune` package so it can be used with **Vertex AI** hyperparameter tuning.
###Code
import os
import time
import pandas as pd
from google.cloud import aiplatform, bigquery
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for Vertex AI Training and Prediction- `ARTIFACT_STORE` - A GCS bucket created in the same region.
###Code
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f"gs://{PROJECT_ID}-vertex"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"
os.environ["JOB_DIR_ROOT"] = JOB_DIR_ROOT
os.environ["TRAINING_FILE_PATH"] = TRAINING_FILE_PATH
os.environ["VALIDATION_FILE_PATH"] = VALIDATION_FILE_PATH
os.environ["PROJECT_ID"] = PROJECT_ID
os.environ["REGION"] = REGION
###Output
_____no_output_____
###Markdown
We now create the `ARTIFACT_STORE` bucket if it's not there. Note that this bucket should be created in the region specified in the variable `REGION` (if you already have a bucket with this name in a region other than `REGION`, you may want to change the `ARTIFACT_STORE` name so that you can recreate a bucket in `REGION` with the command in the cell below).
###Code
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
###Output
_____no_output_____
###Markdown
Importing the dataset into BigQuery
###Code
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
###Output
_____no_output_____
###Markdown
Explore the Covertype dataset
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
_____no_output_____
###Markdown
Create training and validation splitsUse BigQuery to sample training and validation splits and save them to GCS storage Create a training split
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
_____no_output_____
###Markdown
Create a validation split Exercise
###Code
# TODO: Your code to create the BQ table validation split
# TODO: Your code to export the validation table to GCS
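# A hedged sketch of both TODOs above (commented out), mirroring the training-split
# commands used earlier in the lab; the table name `covertype_dataset.validation`
# is an assumption:
#
# !bq query \
#   -n 0 \
#   --destination_table covertype_dataset.validation \
#   --replace \
#   --use_legacy_sql=false \
#   'SELECT * \
#   FROM `covertype_dataset.covertype` AS cover \
#   WHERE \
#   MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (5)'
#
# !bq extract \
#   --destination_format CSV \
#   covertype_dataset.validation \
#   $VALIDATION_FILE_PATH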
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
###Output
_____no_output_____
###Markdown
Develop a training application Configure the `sklearn` training pipeline.The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses a stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
("num", StandardScaler(), numeric_feature_indexes),
("cat", OneHotEncoder(), categorical_feature_indexes),
]
)
pipeline = Pipeline(
[
("preprocessor", preprocessor),
("classifier", SGDClassifier(loss="log", tol=1e-3)),
]
)
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64`To avoid warning messages from `StandardScaler` all numeric features are converted to `float64`.
###Code
num_features_type_map = {
feature: "float64" for feature in df_train.columns[numeric_feature_indexes]
}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop("Cover_Type", axis=1)
y_train = df_train["Cover_Type"]
X_validation = df_validation.drop("Cover_Type", axis=1)
y_validation = df_validation["Cover_Type"]
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
_____no_output_____
###Markdown
Prepare the hyperparameter tuning application.Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.
###Code
TRAINING_APP_FOLDER = "training_app"
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to Vertex AI hyperparameter tuning service.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
import os
import subprocess
import sys
import fire
import hypertune
import numpy as np
import pandas as pd
import pickle
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
_____no_output_____
###Markdown
Package the script into a docker image.Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. ExerciseComplete the Dockerfile below so that it copies the 'train.py' file into the container at `/app` and runs it when the container is started.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
# TODO
###Output
_____no_output_____
###Markdown
Build the docker image. You use **Cloud Build** to build the image and push it to your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME = "trainer_image"
IMAGE_TAG = "latest"
IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}"
os.environ["IMAGE_URI"] = IMAGE_URI
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
_____no_output_____
###Markdown
Submit a Vertex AI hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`:- Max iterations- AlphaThe file below configures Vertex AI hypertuning to run up to 5 trials in parallel and to choose from two discrete values of `max_iter` and the linear range between `1.0e-4` and `1.0e-1` for `alpha`.
###Code
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"forestcover_tuning_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
os.environ["JOB_NAME"] = JOB_NAME
os.environ["JOB_DIR"] = JOB_DIR
###Output
_____no_output_____
###Markdown
ExerciseComplete the `config.yaml` file generated below so that the hyperparameter tuning engine tries the following parameter values:* `max_iter`: the two discrete values 10 and 20* `alpha`: a linear range of values between 1.0e-4 and 1.0e-1Also complete the `gcloud` command to start the hyperparameter tuning job with a max trial count and a max number of parallel trials of 5 each.
###Code
%%bash
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
CONFIG_YAML=config.yaml
cat <<EOF > $CONFIG_YAML
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
# TODO
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
replicaCount: $REPLICA_COUNT
containerSpec:
imageUri: $IMAGE_URI
args:
- --job_dir=$JOB_DIR
- --training_dataset_path=$TRAINING_FILE_PATH
- --validation_dataset_path=$VALIDATION_FILE_PATH
- --hptune
EOF
gcloud ai hp-tuning-jobs create \
--region=# TODO \
--display-name=# TODO \
--config=# TODO \
--max-trial-count=# TODO \
--parallel-trial-count=# TODO
echo "JOB_NAME: $JOB_NAME"
###Output
_____no_output_____
###Markdown
Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs". Retrieve HP-tuning results. After the job completes, you can review the results using the GCP Console or programmatically using the following function (note that this code assumes that the metric the hyperparameter tuning engine optimizes is maximized): ExerciseComplete the body of the function below to retrieve the best trial from `JOB_NAME`:
###Code
def retrieve_best_trial_from_job_name(jobname):
# TODO
return best_trial
###Output
_____no_output_____
###Markdown
You'll need to wait for the hyperparameter tuning job to complete before you can retrieve the best trial by running the cell below.
###Code
best_trial = retrieve_best_trial_from_job_name(JOB_NAME)
###Output
_____no_output_____
###Markdown
Retrain the model with the best hyperparametersYou can now retrain the model using the best hyperparameters, with the combined training and validation splits as the training dataset. Configure and run the training job
###Code
alpha = best_trial.parameters[0].value
max_iter = best_trial.parameters[1].value
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"JOB_VERTEX_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
WORKER_POOL_SPEC = f"""\
machine-type={MACHINE_TYPE},\
replica-count={REPLICA_COUNT},\
container-image-uri={IMAGE_URI}\
"""
ARGS = f"""\
--job_dir={JOB_DIR},\
--training_dataset_path={TRAINING_FILE_PATH},\
--validation_dataset_path={VALIDATION_FILE_PATH},\
--alpha={alpha},\
--max_iter={max_iter},\
--nohptune\
"""
!gcloud ai custom-jobs create \
--region={REGION} \
--display-name={JOB_NAME} \
--worker-pool-spec={WORKER_POOL_SPEC} \
--args={ARGS}
print("The model will be exported at:", JOB_DIR)
###Output
_____no_output_____
###Markdown
Examine the training outputThe training script saved the trained model as 'model.pkl' in the `JOB_DIR` folder on GCS.**Note:** We need to wait for the job triggered by the cell above to complete before running the cells below.
###Code
!gsutil ls $JOB_DIR
###Output
_____no_output_____
###Markdown
Deploy the model to Vertex AI Prediction
###Code
MODEL_NAME = "forest_cover_classifier_2"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest"
)
SERVING_MACHINE_TYPE = "n1-standard-2"
###Output
_____no_output_____
###Markdown
Uploading the trained model ExerciseUpload the trained model using `aiplatform.Model.upload`:
###Code
uploaded_model = # TODO
###Output
_____no_output_____
###Markdown
Deploying the uploaded model ExerciseDeploy the model using `uploaded_model`:
###Code
endpoint = # TODO
###Output
_____no_output_____
###Markdown
Serve predictions Prepare the input file with JSON formatted instances. ExerciseQuery the deployed model using `endpoint`:
###Code
instance = [
2841.0,
45.0,
0.0,
644.0,
282.0,
1376.0,
218.0,
237.0,
156.0,
1003.0,
"Commanche",
"C4758",
]
# TODO
###Output
_____no_output_____
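One possible completion, matching the solution later in this document: `endpoint.predict` takes a list of instances and returns an object whose `predictions` attribute holds the model outputs (the attribute name follows the `google-cloud-aiplatform` SDK, but treat this sketch as illustrative rather than definitive):

    response = endpoint.predict([instance])
    print(response.predictions)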
###Markdown
Using custom containers with Vertex AI Training **Learning Objectives:** 1. Learn how to create a training and a validation split with BigQuery 1. Learn how to wrap a machine learning model into a Docker container and train it on Vertex AI 1. Learn how to use the hyperparameter tuning engine on Vertex AI to find the best hyperparameters 1. Learn how to deploy a trained machine learning model on Vertex AI as a REST API and query it. In this lab, you develop, package as a Docker image, and run on **Vertex AI Training** a training application that trains a multi-class classification model that predicts the type of forest cover from cartographic data. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on the **Covertype Data Set** from the UCI Machine Learning Repository. The training code uses `scikit-learn` for data pre-processing and modeling. The code has been instrumented using the `hypertune` package so it can be used with **Vertex AI** hyperparameter tuning.
###Code
import os
import time
import pandas as pd
from google.cloud import aiplatform, bigquery
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for Vertex AI Training and Prediction - `ARTIFACT_STORE` - a GCS bucket created in the same region.
###Code
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"
os.environ["JOB_DIR_ROOT"] = JOB_DIR_ROOT
os.environ["TRAINING_FILE_PATH"] = TRAINING_FILE_PATH
os.environ["VALIDATION_FILE_PATH"] = VALIDATION_FILE_PATH
os.environ["PROJECT_ID"] = PROJECT_ID
os.environ["REGION"] = REGION
###Output
_____no_output_____
###Markdown
We now create the `ARTIFACT_STORE` bucket if it's not there. Note that this bucket should be created in the region specified in the variable `REGION` (if you already have a bucket with this name in a different region than `REGION`, you may want to change the `ARTIFACT_STORE` name so that you can recreate a bucket in `REGION` with the command in the cell below).
###Code
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
###Output
Creating gs://qwiklabs-gcp-04-5f5e7d641646-kfp-artifact-store/...
###Markdown
Importing the dataset into BigQuery
###Code
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
###Output
Dataset 'qwiklabs-gcp-04-5f5e7d641646:covertype_dataset' successfully created.
###Markdown
Explore the Covertype dataset
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
Query complete after 0.00s: 100%|โโโโโโโโโโ| 2/2 [00:00<00:00, 950.23query/s]
Downloading: 100%|โโโโโโโโโโ| 100000/100000 [00:01<00:00, 78950.76rows/s]
###Markdown
Create training and validation splits. Use BigQuery to sample training and validation splits and save them to GCS storage. Create a training split
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
Waiting on bqjob_r26006efa8dcc1516_0000017fd68e3ac7_1 ... (0s) Current status: DONE
###Markdown
Create a validation split Exercise
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (5, 6, 7, 8, 9)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
###Output
(40009, 13)
(50060, 13)
###Markdown
Develop a training application Configure the `sklearn` training pipeline. The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses a stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
("num", StandardScaler(), numeric_feature_indexes),
("cat", OneHotEncoder(), categorical_feature_indexes),
]
)
pipeline = Pipeline(
[
("preprocessor", preprocessor),
("classifier", SGDClassifier(loss="log", tol=1e-3)),
]
)
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64`. To avoid warning messages from `StandardScaler`, all numeric features are converted to `float64`.
###Code
num_features_type_map = {
feature: "float64" for feature in df_train.columns[numeric_feature_indexes]
}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop("Cover_Type", axis=1)
y_train = df_train["Cover_Type"]
X_validation = df_validation.drop("Cover_Type", axis=1)
y_validation = df_validation["Cover_Type"]
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
0.6995805033959249
###Markdown
Prepare the hyperparameter tuning application. Since the training run on this dataset is computationally expensive, you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.
###Code
TRAINING_APP_FOLDER = "training_app"
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to Vertex AI hyperparameter tuning service.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
import os
import subprocess
import sys
import fire
import hypertune
import numpy as np
import pandas as pd
import pickle
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
Writing training_app/train.py
###Markdown
Package the script into a docker image. Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. Exercise: Complete the Dockerfile below so that it copies the 'train.py' file into the container at `/app` and runs it when the container is started.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
###Output
Overwriting training_app/Dockerfile
###Markdown
Build the docker image. You use **Cloud Build** to build the image and push it to your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME = "trainer_image"
IMAGE_TAG = "latest"
IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}"
os.environ["IMAGE_URI"] = IMAGE_URI
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
Creating temporary tarball archive of 2 file(s) totalling 2.6 KiB before compression.
Uploading tarball of [training_app] to [gs://qwiklabs-gcp-04-5f5e7d641646_cloudbuild/source/1648572993.004149-369f4d7aa31d49498758004bd315945c.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/qwiklabs-gcp-04-5f5e7d641646/locations/global/builds/c0e5f2db-d5af-439e-bced-4a6b2e69b92a].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/c0e5f2db-d5af-439e-bced-4a6b2e69b92a?project=997419976351].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "c0e5f2db-d5af-439e-bced-4a6b2e69b92a"
FETCHSOURCE
Fetching storage object: gs://qwiklabs-gcp-04-5f5e7d641646_cloudbuild/source/1648572993.004149-369f4d7aa31d49498758004bd315945c.tgz#1648572993463191
Copying gs://qwiklabs-gcp-04-5f5e7d641646_cloudbuild/source/1648572993.004149-369f4d7aa31d49498758004bd315945c.tgz#1648572993463191...
/ [1 files][ 1.2 KiB/ 1.2 KiB]
Operation completed over 1 objects/1.2 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon 5.12kB
Step 1/5 : FROM gcr.io/deeplearning-platform-release/base-cpu
latest: Pulling from deeplearning-platform-release/base-cpu
7c3b88808835: Already exists
382fcf64a9ea: Pulling fs layer
d764c2aa40d3: Pulling fs layer
90cc2e264020: Pulling fs layer
4f4fb700ef54: Pulling fs layer
395e65f0ab42: Pulling fs layer
9e19ad4dbd7d: Pulling fs layer
957c609522d8: Pulling fs layer
6a5e2168e631: Pulling fs layer
5740bb01bc78: Pulling fs layer
be09da654f5c: Pulling fs layer
288d40a4f176: Pulling fs layer
e2d3eec75c0c: Pulling fs layer
3769728eb7d7: Pulling fs layer
211e30f752a4: Pulling fs layer
ae6a5f7af5b1: Pulling fs layer
274bb2dca45b: Pulling fs layer
4105864a46df: Pulling fs layer
4f4fb700ef54: Waiting
395e65f0ab42: Waiting
9e19ad4dbd7d: Waiting
957c609522d8: Waiting
6a5e2168e631: Waiting
5740bb01bc78: Waiting
be09da654f5c: Waiting
288d40a4f176: Waiting
3769728eb7d7: Waiting
211e30f752a4: Waiting
ae6a5f7af5b1: Waiting
274bb2dca45b: Waiting
4105864a46df: Waiting
e2d3eec75c0c: Waiting
382fcf64a9ea: Download complete
4f4fb700ef54: Verifying Checksum
4f4fb700ef54: Download complete
382fcf64a9ea: Pull complete
395e65f0ab42: Verifying Checksum
395e65f0ab42: Download complete
90cc2e264020: Verifying Checksum
90cc2e264020: Download complete
9e19ad4dbd7d: Verifying Checksum
9e19ad4dbd7d: Download complete
957c609522d8: Verifying Checksum
957c609522d8: Download complete
6a5e2168e631: Verifying Checksum
6a5e2168e631: Download complete
5740bb01bc78: Verifying Checksum
5740bb01bc78: Download complete
be09da654f5c: Verifying Checksum
be09da654f5c: Download complete
288d40a4f176: Verifying Checksum
288d40a4f176: Download complete
e2d3eec75c0c: Verifying Checksum
e2d3eec75c0c: Download complete
3769728eb7d7: Verifying Checksum
3769728eb7d7: Download complete
211e30f752a4: Verifying Checksum
211e30f752a4: Download complete
ae6a5f7af5b1: Verifying Checksum
ae6a5f7af5b1: Download complete
4105864a46df: Download complete
d764c2aa40d3: Verifying Checksum
d764c2aa40d3: Download complete
274bb2dca45b: Verifying Checksum
274bb2dca45b: Download complete
d764c2aa40d3: Pull complete
90cc2e264020: Pull complete
4f4fb700ef54: Pull complete
395e65f0ab42: Pull complete
9e19ad4dbd7d: Pull complete
957c609522d8: Pull complete
6a5e2168e631: Pull complete
5740bb01bc78: Pull complete
be09da654f5c: Pull complete
288d40a4f176: Pull complete
e2d3eec75c0c: Pull complete
3769728eb7d7: Pull complete
211e30f752a4: Pull complete
ae6a5f7af5b1: Pull complete
274bb2dca45b: Pull complete
4105864a46df: Pull complete
Digest: sha256:5290a56a15cebd867722be8bdfd859ef959ffd14f85979a9fbd80c5c2760c3a1
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest
---> 0db22ebb67a2
Step 2/5 : RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
---> Running in 69cbec514c88
Collecting fire
Downloading fire-0.4.0.tar.gz (87 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 87.7/87.7 KB 5.2 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting cloudml-hypertune
Downloading cloudml-hypertune-0.1.0.dev6.tar.gz (3.2 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting scikit-learn==0.20.4
Downloading scikit_learn-0.20.4-cp37-cp37m-manylinux1_x86_64.whl (5.4 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 5.4/5.4 MB 44.2 MB/s eta 0:00:00
Collecting pandas==0.24.2
Downloading pandas-0.24.2-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 10.1/10.1 MB 51.8 MB/s eta 0:00:00
Requirement already satisfied: scipy>=0.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.7.3)
Requirement already satisfied: numpy>=1.8.2 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.19.5)
Requirement already satisfied: python-dateutil>=2.5.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2.8.2)
Requirement already satisfied: pytz>=2011k in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2021.3)
Requirement already satisfied: six in /opt/conda/lib/python3.7/site-packages (from fire) (1.16.0)
Collecting termcolor
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Building wheels for collected packages: fire, cloudml-hypertune, termcolor
Building wheel for fire (setup.py): started
Building wheel for fire (setup.py): finished with status 'done'
Created wheel for fire: filename=fire-0.4.0-py2.py3-none-any.whl size=115942 sha256=14858edf53195efef1737203942008beae0fdf0ff3fdad6610337b02302ef367
Stored in directory: /root/.cache/pip/wheels/8a/67/fb/2e8a12fa16661b9d5af1f654bd199366799740a85c64981226
Building wheel for cloudml-hypertune (setup.py): started
Building wheel for cloudml-hypertune (setup.py): finished with status 'done'
Created wheel for cloudml-hypertune: filename=cloudml_hypertune-0.1.0.dev6-py2.py3-none-any.whl size=3987 sha256=f2f84f73f145fb554bf8cad2e4cdbd309c67d2139ef68d8f6242cc0a957bbc78
Stored in directory: /root/.cache/pip/wheels/a7/ff/87/e7bed0c2741fe219b3d6da67c2431d7f7fedb183032e00f81e
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4848 sha256=52de9eae7db79cddb58e9d341925af2814e813b96035ffc574c4a9dc66fe13e6
Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
Successfully built fire cloudml-hypertune termcolor
Installing collected packages: termcolor, cloudml-hypertune, fire, scikit-learn, pandas
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 1.0.2
Uninstalling scikit-learn-1.0.2:
Successfully uninstalled scikit-learn-1.0.2
Attempting uninstall: pandas
Found existing installation: pandas 1.3.5
Uninstalling pandas-1.3.5:
Successfully uninstalled pandas-1.3.5
[91mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
visions 0.7.1 requires pandas>=0.25.3, but you have pandas 0.24.2 which is incompatible.
statsmodels 0.13.2 requires pandas>=0.25, but you have pandas 0.24.2 which is incompatible.
phik 0.12.0 requires pandas>=0.25.1, but you have pandas 0.24.2 which is incompatible.
pandas-profiling 3.0.0 requires pandas!=1.0.0,!=1.0.1,!=1.0.2,!=1.1.0,>=0.25.3, but you have pandas 0.24.2 which is incompatible.
pandas-profiling 3.0.0 requires tangled-up-in-unicode==0.1.0, but you have tangled-up-in-unicode 0.2.0 which is incompatible.
[0mSuccessfully installed cloudml-hypertune-0.1.0.dev6 fire-0.4.0 pandas-0.24.2 scikit-learn-0.20.4 termcolor-1.1.0
[91mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[0mRemoving intermediate container 69cbec514c88
---> 71777ce13516
Step 3/5 : WORKDIR /app
---> Running in 10fe8cf45e5c
Removing intermediate container 10fe8cf45e5c
---> 03c8340f9087
Step 4/5 : COPY train.py .
---> 7a0b26dc62ec
Step 5/5 : ENTRYPOINT ["python", "train.py"]
---> Running in aa3824b2e508
Removing intermediate container aa3824b2e508
---> 03c68c9c6787
Successfully built 03c68c9c6787
Successfully tagged gcr.io/qwiklabs-gcp-04-5f5e7d641646/trainer_image:latest
PUSH
Pushing gcr.io/qwiklabs-gcp-04-5f5e7d641646/trainer_image:latest
The push refers to repository [gcr.io/qwiklabs-gcp-04-5f5e7d641646/trainer_image]
64634402e705: Preparing
a598a173801b: Preparing
a322735f9dca: Preparing
83a0dd2b9e38: Preparing
9638e29d8d24: Preparing
b3ab95a574c8: Preparing
d1b010151b48: Preparing
b80bc089358e: Preparing
11bc9b36546a: Preparing
43d282ce8d0b: Preparing
69fd467ac3a5: Preparing
ed4291c31559: Preparing
4bf5ae11254c: Preparing
0d592bcbe281: Preparing
770c4c112e39: Preparing
1874048fd290: Preparing
5f70bf18a086: Preparing
7e897a45d8d8: Preparing
42826651fb01: Preparing
4236d5cafaa0: Preparing
68a85fa9d77e: Preparing
ed4291c31559: Waiting
4bf5ae11254c: Waiting
0d592bcbe281: Waiting
770c4c112e39: Waiting
1874048fd290: Waiting
5f70bf18a086: Waiting
7e897a45d8d8: Waiting
42826651fb01: Waiting
4236d5cafaa0: Waiting
68a85fa9d77e: Waiting
b3ab95a574c8: Waiting
d1b010151b48: Waiting
b80bc089358e: Waiting
11bc9b36546a: Waiting
43d282ce8d0b: Waiting
69fd467ac3a5: Waiting
9638e29d8d24: Layer already exists
83a0dd2b9e38: Layer already exists
d1b010151b48: Layer already exists
b3ab95a574c8: Layer already exists
b80bc089358e: Layer already exists
11bc9b36546a: Layer already exists
69fd467ac3a5: Layer already exists
43d282ce8d0b: Layer already exists
ed4291c31559: Layer already exists
4bf5ae11254c: Layer already exists
770c4c112e39: Layer already exists
0d592bcbe281: Layer already exists
1874048fd290: Layer already exists
5f70bf18a086: Layer already exists
7e897a45d8d8: Layer already exists
42826651fb01: Layer already exists
4236d5cafaa0: Layer already exists
68a85fa9d77e: Layer already exists
a598a173801b: Pushed
64634402e705: Pushed
a322735f9dca: Pushed
latest: digest: sha256:1e7f9d57c5349b321ccba14ee3a6200273e2e29ebaa89ca4d3fb1317d7540e10 size: 4707
DONE
--------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
c0e5f2db-d5af-439e-bced-4a6b2e69b92a 2022-03-29T16:56:33+00:00 2M14S gs://qwiklabs-gcp-04-5f5e7d641646_cloudbuild/source/1648572993.004149-369f4d7aa31d49498758004bd315945c.tgz gcr.io/qwiklabs-gcp-04-5f5e7d641646/trainer_image (+1 more) SUCCESS
###Markdown
Submit a Vertex AI hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`: - Max iterations - Alpha. The file below configures Vertex AI hypertuning to run up to 5 trials in parallel and to choose from two discrete values of `max_iter` and the linear range between `1.0e-4` and `1.0e-1` for `alpha`.
###Code
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"forestcover_tuning_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
os.environ["JOB_NAME"] = JOB_NAME
os.environ["JOB_DIR"] = JOB_DIR
###Output
_____no_output_____
###Markdown
Exercise: Complete the `config.yaml` file generated below so that the hyperparameter tuning engine tries the following parameter values: * `max_iter` - the two values 10 and 20 * `alpha` - a linear range of values between 1.0e-4 and 1.0e-1. Also complete the `gcloud` command to start the hyperparameter tuning job with a max trial count and a max number of parallel trials of 5 each.
###Code
%%bash
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
CONFIG_YAML=config.yaml
cat <<EOF > $CONFIG_YAML
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
- parameterId: max_iter
discreteValueSpec:
values:
- 10
- 20
- parameterId: alpha
doubleValueSpec:
minValue: 1.0e-4
maxValue: 1.0e-1
scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
replicaCount: $REPLICA_COUNT
containerSpec:
imageUri: $IMAGE_URI
args:
- --job_dir=$JOB_DIR
- --training_dataset_path=$TRAINING_FILE_PATH
- --validation_dataset_path=$VALIDATION_FILE_PATH
- --hptune
EOF
gcloud ai hp-tuning-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--config=$CONFIG_YAML \
--max-trial-count=5 \
--parallel-trial-count=5
echo "JOB_NAME: $JOB_NAME"
###Output
JOB_NAME: forestcover_tuning_20220329_165921
###Markdown
Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs". Retrieve HP-tuning results. After the job completes, you can review the results in the GCP Console or programmatically using the following functions (note that this code assumes that the metric the hyperparameter tuning engine optimizes is to be maximized): Exercise: Complete the body of the function below to retrieve the best trial for the given `JOB_NAME`:
###Code
def get_trials(job_name):
jobs = aiplatform.HyperparameterTuningJob.list()
    match = [job for job in jobs if job.display_name == job_name]
tuning_job = match[0] if match else None
return tuning_job.trials if tuning_job else None
def get_best_trial(trials):
metrics = [trial.final_measurement.metrics[0].value for trial in trials]
best_trial = trials[metrics.index(max(metrics))]
return best_trial
def retrieve_best_trial_from_job_name(jobname):
trials = get_trials(jobname)
best_trial = get_best_trial(trials)
return best_trial
###Output
_____no_output_____
###Markdown
You'll need to wait for the hyperparameter tuning job to complete before you can retrieve the best trial by running the cell below.
###Code
best_trial = retrieve_best_trial_from_job_name(JOB_NAME)
###Output
_____no_output_____
###Markdown
Retrain the model with the best hyperparameters. You can now retrain the model using the best hyperparameters, with the combined training and validation splits as the training dataset. Configure and run the training job
###Code
alpha = best_trial.parameters[0].value
max_iter = best_trial.parameters[1].value
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"JOB_VERTEX_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
WORKER_POOL_SPEC = f"""\
machine-type={MACHINE_TYPE},\
replica-count={REPLICA_COUNT},\
container-image-uri={IMAGE_URI}\
"""
ARGS = f"""\
--job_dir={JOB_DIR},\
--training_dataset_path={TRAINING_FILE_PATH},\
--validation_dataset_path={VALIDATION_FILE_PATH},\
--alpha={alpha},\
--max_iter={max_iter},\
--nohptune\
"""
!gcloud ai custom-jobs create \
--region={REGION} \
--display-name={JOB_NAME} \
--worker-pool-spec={WORKER_POOL_SPEC} \
--args={ARGS}
print("The model will be exported at:", JOB_DIR)
###Output
Using endpoint [https://us-central1-aiplatform.googleapis.com/]
CustomJob [projects/997419976351/locations/us-central1/customJobs/5913829392865296384] is submitted successfully.
Your job is still active. You may view the status of your job with the command
$ gcloud ai custom-jobs describe projects/997419976351/locations/us-central1/customJobs/5913829392865296384
or continue streaming the logs with the command
$ gcloud ai custom-jobs stream-logs projects/997419976351/locations/us-central1/customJobs/5913829392865296384
The model will be exported at: gs://qwiklabs-gcp-04-5f5e7d641646-kfp-artifact-store/jobs/JOB_VERTEX_20220329_171621
###Markdown
Examine the training output. The training script saved the trained model as 'model.pkl' in the `JOB_DIR` folder on GCS. **Note:** We need to wait for the job triggered by the cell above to complete before running the cells below.
###Code
!gsutil ls $JOB_DIR
###Output
gs://qwiklabs-gcp-04-5f5e7d641646-kfp-artifact-store/jobs/JOB_VERTEX_20220329_171621/model.pkl
###Markdown
Deploy the model to Vertex AI Prediction
###Code
MODEL_NAME = "forest_cover_classifier_2"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest"
)
SERVING_MACHINE_TYPE = "n1-standard-2"
###Output
_____no_output_____
###Markdown
Uploading the trained model Exercise: Upload the trained model using `aiplatform.Model.upload`:
###Code
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_NAME,
artifact_uri=JOB_DIR,
serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)
###Output
INFO:google.cloud.aiplatform.models:Creating Model
INFO:google.cloud.aiplatform.models:Create Model backing LRO: projects/997419976351/locations/us-central1/models/6449880344068882432/operations/4070289310409031680
INFO:google.cloud.aiplatform.models:Model created. Resource name: projects/997419976351/locations/us-central1/models/6449880344068882432
INFO:google.cloud.aiplatform.models:To use this Model in another session:
INFO:google.cloud.aiplatform.models:model = aiplatform.Model('projects/997419976351/locations/us-central1/models/6449880344068882432')
###Markdown
Deploying the uploaded model Exercise: Deploy the model using `uploaded_model`:
###Code
endpoint = uploaded_model.deploy(
machine_type=SERVING_MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
###Output
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/997419976351/locations/us-central1/endpoints/5497672488089288704/operations/3047409245042507776
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/997419976351/locations/us-central1/endpoints/5497672488089288704
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/997419976351/locations/us-central1/endpoints/5497672488089288704')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/997419976351/locations/us-central1/endpoints/5497672488089288704
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/997419976351/locations/us-central1/endpoints/5497672488089288704/operations/4702482108101165056
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/997419976351/locations/us-central1/endpoints/5497672488089288704
###Markdown
Serve predictions Prepare the input file with JSON-formatted instances. Exercise: Query the deployed model using `endpoint`:
###Code
instance = [
2841.0,
45.0,
0.0,
644.0,
282.0,
1376.0,
218.0,
237.0,
156.0,
1003.0,
"Commanche",
"C4758",
]
endpoint.predict([instance])
###Output
_____no_output_____
###Markdown
Using custom containers with Vertex AI Training **Learning Objectives:** 1. Learn how to create a training and a validation split with BigQuery 1. Learn how to wrap a machine learning model into a Docker container and train it on Vertex AI 1. Learn how to use the hyperparameter tuning engine on Vertex AI to find the best hyperparameters 1. Learn how to deploy a trained machine learning model on Vertex AI as a REST API and query it. In this lab, you develop, package as a Docker image, and run on **Vertex AI Training** a training application that trains a multi-class classification model that predicts the type of forest cover from cartographic data. The [dataset](../../../datasets/covertype/README.md) used in the lab is based on the **Covertype Data Set** from the UCI Machine Learning Repository. The training code uses `scikit-learn` for data pre-processing and modeling. The code has been instrumented using the `hypertune` package so it can be used with **Vertex AI** hyperparameter tuning.
###Code
import os
import time
import pandas as pd
from google.cloud import aiplatform, bigquery
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
###Output
_____no_output_____
###Markdown
Configure environment settings Set location paths, connection strings, and other environment settings. Make sure to update `REGION` and `ARTIFACT_STORE` with the settings reflecting your lab environment. - `REGION` - the compute region for Vertex AI Training and Prediction - `ARTIFACT_STORE` - a GCS bucket created in the same region.
###Code
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"
os.environ["JOB_DIR_ROOT"] = JOB_DIR_ROOT
os.environ["TRAINING_FILE_PATH"] = TRAINING_FILE_PATH
os.environ["VALIDATION_FILE_PATH"] = VALIDATION_FILE_PATH
os.environ["PROJECT_ID"] = PROJECT_ID
os.environ["REGION"] = REGION
###Output
_____no_output_____
###Markdown
We now create the `ARTIFACT_STORE` bucket if it's not there. Note that this bucket should be created in the region specified in the variable `REGION` (if you already have a bucket with this name in a different region than `REGION`, you may want to change the `ARTIFACT_STORE` name so that you can recreate a bucket in `REGION` with the command in the cell below).
###Code
!echo $REGION
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
###Output
us-central1
Creating gs://qwiklabs-gcp-01-37ab11ee03f8-kfp-artifact-store/...
###Markdown
Importing the dataset into BigQuery
###Code
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
###Output
Dataset 'qwiklabs-gcp-01-37ab11ee03f8:covertype_dataset' successfully created.
###Markdown
Explore the Covertype dataset
###Code
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
###Output
Query complete after 0.00s: 100%|โโโโโโโโโโ| 2/2 [00:00<00:00, 1198.03query/s]
Downloading: 100%|โโโโโโโโโโ| 100000/100000 [00:00<00:00, 106064.13rows/s]
###Markdown
Create training and validation splits. Use BigQuery to sample training and validation splits and save them to GCS storage. Create a training split
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
###Output
Waiting on bqjob_r3315650686996ce5_0000017edefe88fd_1 ... (0s) Current status: DONE
###Markdown
Create a validation split Exercise
###Code
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
###Output
(40009, 13)
(9836, 13)
###Markdown
Develop a training application Configure the `sklearn` training pipeline. The training pipeline preprocesses data by standardizing all numeric features using `sklearn.preprocessing.StandardScaler` and encoding all categorical features using `sklearn.preprocessing.OneHotEncoder`. It uses a stochastic gradient descent linear classifier (`SGDClassifier`) for modeling.
###Code
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
("num", StandardScaler(), numeric_feature_indexes),
("cat", OneHotEncoder(), categorical_feature_indexes),
]
)
pipeline = Pipeline(
[
("preprocessor", preprocessor),
("classifier", SGDClassifier(loss="log", tol=1e-3)),
]
)
###Output
_____no_output_____
###Markdown
Convert all numeric features to `float64`. To avoid warning messages from `StandardScaler`, all numeric features are converted to `float64`.
###Code
num_features_type_map = {
feature: "float64" for feature in df_train.columns[numeric_feature_indexes]
}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
###Output
_____no_output_____
###Markdown
Run the pipeline locally.
###Code
X_train = df_train.drop("Cover_Type", axis=1)
y_train = df_train["Cover_Type"]
X_validation = df_validation.drop("Cover_Type", axis=1)
y_validation = df_validation["Cover_Type"]
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Calculate the trained model's accuracy.
###Code
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
###Output
0.692456283041887
###Markdown
Prepare the hyperparameter tuning application. Since the training run on this dataset is computationally expensive, you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.
###Code
TRAINING_APP_FOLDER = "training_app"
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
###Output
_____no_output_____
###Markdown
Write the tuning script. Notice the use of the `hypertune` package to report the `accuracy` optimization metric to Vertex AI hyperparameter tuning service.
###Code
%%writefile {TRAINING_APP_FOLDER}/train.py
import os
import subprocess
import sys
import fire
import hypertune
import numpy as np
import pandas as pd
import pickle
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
###Output
Writing training_app/train.py
###Markdown
Package the script into a docker image. Notice that we are installing specific versions of `scikit-learn` and `pandas` in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container. Make sure to update the URI for the base image so that it points to your project's **Container Registry**. Exercise: Complete the Dockerfile below so that it copies the 'train.py' file into the container at `/app` and runs it when the container is started.
###Code
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
###Output
Writing training_app/Dockerfile
###Markdown
Build the docker image. You use **Cloud Build** to build the image and push it to your project's **Container Registry**. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
###Code
IMAGE_NAME = "trainer_image"
IMAGE_TAG = "latest"
IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}"
os.environ["IMAGE_URI"] = IMAGE_URI
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
###Output
Creating temporary tarball archive of 2 file(s) totalling 2.6 KiB before compression.
Uploading tarball of [training_app] to [gs://qwiklabs-gcp-01-37ab11ee03f8_cloudbuild/source/1644419366.533746-eb800fc8a4fa4c67bdfa0eb2cedc7a7a.tgz]
Created [https://cloudbuild.googleapis.com/v1/projects/qwiklabs-gcp-01-37ab11ee03f8/locations/global/builds/633bf7c1-a6e6-487b-ab2a-ac719e279bea].
Logs are available at [https://console.cloud.google.com/cloud-build/builds/633bf7c1-a6e6-487b-ab2a-ac719e279bea?project=562035846305].
----------------------------- REMOTE BUILD OUTPUT ------------------------------
starting build "633bf7c1-a6e6-487b-ab2a-ac719e279bea"
FETCHSOURCE
Fetching storage object: gs://qwiklabs-gcp-01-37ab11ee03f8_cloudbuild/source/1644419366.533746-eb800fc8a4fa4c67bdfa0eb2cedc7a7a.tgz#1644419367380300
Copying gs://qwiklabs-gcp-01-37ab11ee03f8_cloudbuild/source/1644419366.533746-eb800fc8a4fa4c67bdfa0eb2cedc7a7a.tgz#1644419367380300...
/ [1 files][ 1.2 KiB/ 1.2 KiB]
Operation completed over 1 objects/1.2 KiB.
BUILD
Already have image (with digest): gcr.io/cloud-builders/docker
Sending build context to Docker daemon 5.12kB
Step 1/5 : FROM gcr.io/deeplearning-platform-release/base-cpu
latest: Pulling from deeplearning-platform-release/base-cpu
ea362f368469: Already exists
eac27809cab6: Pulling fs layer
036adb2e026f: Pulling fs layer
02a952c9f89d: Pulling fs layer
4f4fb700ef54: Pulling fs layer
0ae3f8214e8b: Pulling fs layer
ca41810bd5e2: Pulling fs layer
b72e35350998: Pulling fs layer
c95a831d214e: Pulling fs layer
dd21cbaee501: Pulling fs layer
34c0d5f571ee: Pulling fs layer
cffd6b808cdb: Pulling fs layer
0c9fca2a66fe: Pulling fs layer
e7e70d8d1c2f: Pulling fs layer
13bd35af8cff: Pulling fs layer
549a6d6636b4: Pulling fs layer
812c2650a52b: Pulling fs layer
171e3814b2ec: Pulling fs layer
4f4fb700ef54: Waiting
0ae3f8214e8b: Waiting
ca41810bd5e2: Waiting
b72e35350998: Waiting
c95a831d214e: Waiting
dd21cbaee501: Waiting
34c0d5f571ee: Waiting
cffd6b808cdb: Waiting
0c9fca2a66fe: Waiting
e7e70d8d1c2f: Waiting
13bd35af8cff: Waiting
549a6d6636b4: Waiting
812c2650a52b: Waiting
171e3814b2ec: Waiting
eac27809cab6: Verifying Checksum
eac27809cab6: Download complete
4f4fb700ef54: Verifying Checksum
4f4fb700ef54: Download complete
eac27809cab6: Pull complete
0ae3f8214e8b: Verifying Checksum
0ae3f8214e8b: Download complete
02a952c9f89d: Download complete
ca41810bd5e2: Verifying Checksum
ca41810bd5e2: Download complete
b72e35350998: Verifying Checksum
b72e35350998: Download complete
c95a831d214e: Verifying Checksum
c95a831d214e: Download complete
dd21cbaee501: Verifying Checksum
dd21cbaee501: Download complete
34c0d5f571ee: Verifying Checksum
34c0d5f571ee: Download complete
cffd6b808cdb: Verifying Checksum
cffd6b808cdb: Download complete
0c9fca2a66fe: Verifying Checksum
0c9fca2a66fe: Download complete
e7e70d8d1c2f: Verifying Checksum
e7e70d8d1c2f: Download complete
13bd35af8cff: Verifying Checksum
13bd35af8cff: Download complete
549a6d6636b4: Verifying Checksum
549a6d6636b4: Download complete
171e3814b2ec: Verifying Checksum
171e3814b2ec: Download complete
036adb2e026f: Verifying Checksum
036adb2e026f: Download complete
812c2650a52b: Verifying Checksum
812c2650a52b: Download complete
036adb2e026f: Pull complete
02a952c9f89d: Pull complete
4f4fb700ef54: Pull complete
0ae3f8214e8b: Pull complete
ca41810bd5e2: Pull complete
b72e35350998: Pull complete
c95a831d214e: Pull complete
dd21cbaee501: Pull complete
34c0d5f571ee: Pull complete
cffd6b808cdb: Pull complete
0c9fca2a66fe: Pull complete
e7e70d8d1c2f: Pull complete
13bd35af8cff: Pull complete
549a6d6636b4: Pull complete
812c2650a52b: Pull complete
171e3814b2ec: Pull complete
Digest: sha256:0ff776d12620e1526f999481051595075692b977a0ce7bbf573208eed5867823
Status: Downloaded newer image for gcr.io/deeplearning-platform-release/base-cpu:latest
---> 70d8dcc15a81
Step 2/5 : RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
---> Running in 9594aa18fd55
Collecting fire
Downloading fire-0.4.0.tar.gz (87 kB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 87.7/87.7 KB 4.2 MB/s eta 0:00:00
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting cloudml-hypertune
Downloading cloudml-hypertune-0.1.0.dev6.tar.gz (3.2 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting scikit-learn==0.20.4
Downloading scikit_learn-0.20.4-cp37-cp37m-manylinux1_x86_64.whl (5.4 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 5.4/5.4 MB 38.2 MB/s eta 0:00:00
Collecting pandas==0.24.2
Downloading pandas-0.24.2-cp37-cp37m-manylinux1_x86_64.whl (10.1 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 10.1/10.1 MB 46.4 MB/s eta 0:00:00
Requirement already satisfied: numpy>=1.8.2 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.19.5)
Requirement already satisfied: scipy>=0.13.3 in /opt/conda/lib/python3.7/site-packages (from scikit-learn==0.20.4) (1.7.3)
Requirement already satisfied: pytz>=2011k in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2021.3)
Requirement already satisfied: python-dateutil>=2.5.0 in /opt/conda/lib/python3.7/site-packages (from pandas==0.24.2) (2.8.2)
Requirement already satisfied: six in /opt/conda/lib/python3.7/site-packages (from fire) (1.16.0)
Collecting termcolor
Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Building wheels for collected packages: fire, cloudml-hypertune, termcolor
Building wheel for fire (setup.py): started
Building wheel for fire (setup.py): finished with status 'done'
Created wheel for fire: filename=fire-0.4.0-py2.py3-none-any.whl size=115942 sha256=e9f224a14a76ca03b9cc95b44448b1f401eea76110e0e2a751495bbec2a7e36e
Stored in directory: /root/.cache/pip/wheels/8a/67/fb/2e8a12fa16661b9d5af1f654bd199366799740a85c64981226
Building wheel for cloudml-hypertune (setup.py): started
Building wheel for cloudml-hypertune (setup.py): finished with status 'done'
Created wheel for cloudml-hypertune: filename=cloudml_hypertune-0.1.0.dev6-py2.py3-none-any.whl size=3987 sha256=3e027f772b57ba33f5e654acde8cec17ed46d7017d996bb05241805116186c19
Stored in directory: /root/.cache/pip/wheels/a7/ff/87/e7bed0c2741fe219b3d6da67c2431d7f7fedb183032e00f81e
Building wheel for termcolor (setup.py): started
Building wheel for termcolor (setup.py): finished with status 'done'
Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4848 sha256=026f8383c642798e3d41abf6bca12818059dbd282d2f9fe8480037acf2a7fc53
Stored in directory: /root/.cache/pip/wheels/3f/e3/ec/8a8336ff196023622fbcb36de0c5a5c218cbb24111d1d4c7f2
Successfully built fire cloudml-hypertune termcolor
Installing collected packages: termcolor, cloudml-hypertune, fire, scikit-learn, pandas
Attempting uninstall: scikit-learn
Found existing installation: scikit-learn 1.0.2
Uninstalling scikit-learn-1.0.2:
Successfully uninstalled scikit-learn-1.0.2
Attempting uninstall: pandas
Found existing installation: pandas 1.3.5
Uninstalling pandas-1.3.5:
Successfully uninstalled pandas-1.3.5
[91mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
visions 0.7.4 requires pandas>=0.25.3, but you have pandas 0.24.2 which is incompatible.
statsmodels 0.13.1 requires pandas>=0.25, but you have pandas 0.24.2 which is incompatible.
phik 0.12.0 requires pandas>=0.25.1, but you have pandas 0.24.2 which is incompatible.
pandas-profiling 3.1.0 requires pandas!=1.0.0,!=1.0.1,!=1.0.2,!=1.1.0,>=0.25.3, but you have pandas 0.24.2 which is incompatible.
[0mSuccessfully installed cloudml-hypertune-0.1.0.dev6 fire-0.4.0 pandas-0.24.2 scikit-learn-0.20.4 termcolor-1.1.0
[91mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
[0mRemoving intermediate container 9594aa18fd55
---> bdaaa61adce1
Step 3/5 : WORKDIR /app
---> Running in 06a811f572ec
Removing intermediate container 06a811f572ec
---> e1e6e3a72361
Step 4/5 : COPY train.py .
---> 01d63e0b8a38
Step 5/5 : ENTRYPOINT ["python", "train.py"]
---> Running in eb395cc425bb
Removing intermediate container eb395cc425bb
---> 44a622084ce6
Successfully built 44a622084ce6
Successfully tagged gcr.io/qwiklabs-gcp-01-37ab11ee03f8/trainer_image:latest
PUSH
Pushing gcr.io/qwiklabs-gcp-01-37ab11ee03f8/trainer_image:latest
The push refers to repository [gcr.io/qwiklabs-gcp-01-37ab11ee03f8/trainer_image]
ca2c34e579d3: Preparing
c1a36b269934: Preparing
298d24cba698: Preparing
afdacae73a44: Preparing
beceb4a3223c: Preparing
b1e73422ceb7: Preparing
5b99d0f1aa52: Preparing
dbd6221f1b98: Preparing
4402691a71a1: Preparing
883e47620bc6: Preparing
f5e5c749d02e: Preparing
52ef15a58fce: Preparing
b94b9d90a09e: Preparing
f2c55a6fb80d: Preparing
1b7bf230df94: Preparing
0e19a08a8060: Preparing
5f70bf18a086: Preparing
36a8dea33eff: Preparing
dfe5bb6eff86: Preparing
57b271862993: Preparing
0eba131dffd0: Preparing
b1e73422ceb7: Waiting
5b99d0f1aa52: Waiting
dbd6221f1b98: Waiting
4402691a71a1: Waiting
883e47620bc6: Waiting
f5e5c749d02e: Waiting
52ef15a58fce: Waiting
b94b9d90a09e: Waiting
f2c55a6fb80d: Waiting
1b7bf230df94: Waiting
0e19a08a8060: Waiting
5f70bf18a086: Waiting
36a8dea33eff: Waiting
dfe5bb6eff86: Waiting
57b271862993: Waiting
0eba131dffd0: Waiting
beceb4a3223c: Mounted from deeplearning-platform-release/base-cpu
afdacae73a44: Mounted from deeplearning-platform-release/base-cpu
b1e73422ceb7: Mounted from deeplearning-platform-release/base-cpu
5b99d0f1aa52: Mounted from deeplearning-platform-release/base-cpu
dbd6221f1b98: Mounted from deeplearning-platform-release/base-cpu
4402691a71a1: Mounted from deeplearning-platform-release/base-cpu
c1a36b269934: Pushed
ca2c34e579d3: Pushed
883e47620bc6: Mounted from deeplearning-platform-release/base-cpu
52ef15a58fce: Mounted from deeplearning-platform-release/base-cpu
f5e5c749d02e: Mounted from deeplearning-platform-release/base-cpu
b94b9d90a09e: Mounted from deeplearning-platform-release/base-cpu
f2c55a6fb80d: Mounted from deeplearning-platform-release/base-cpu
5f70bf18a086: Layer already exists
1b7bf230df94: Mounted from deeplearning-platform-release/base-cpu
0e19a08a8060: Mounted from deeplearning-platform-release/base-cpu
36a8dea33eff: Mounted from deeplearning-platform-release/base-cpu
0eba131dffd0: Layer already exists
dfe5bb6eff86: Mounted from deeplearning-platform-release/base-cpu
57b271862993: Mounted from deeplearning-platform-release/base-cpu
298d24cba698: Pushed
latest: digest: sha256:f532a7fa48a893e5e159a1fe8615217284d69091b8cac3ced00af5cae556ca38 size: 4707
DONE
--------------------------------------------------------------------------------
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
633bf7c1-a6e6-487b-ab2a-ac719e279bea 2022-02-09T15:09:27+00:00 2M12S gs://qwiklabs-gcp-01-37ab11ee03f8_cloudbuild/source/1644419366.533746-eb800fc8a4fa4c67bdfa0eb2cedc7a7a.tgz gcr.io/qwiklabs-gcp-01-37ab11ee03f8/trainer_image (+1 more) SUCCESS
###Markdown
Submit a Vertex AI hyperparameter tuning job Create the hyperparameter configuration file. Recall that the training code uses `SGDClassifier`. The training application has been designed to accept two hyperparameters that control `SGDClassifier`: - Max iterations - Alpha. The file below configures Vertex AI hypertuning to run up to 5 trials in parallel and to choose from two discrete values of `max_iter` and the linear range between `1.0e-4` and `1.0e-1` for `alpha`.
###Code
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"forestcover_tuning_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
os.environ["JOB_NAME"] = JOB_NAME
os.environ["JOB_DIR"] = JOB_DIR
###Output
_____no_output_____
###Markdown
Exercise: Complete the `config.yaml` file generated below so that the hyperparameter tuning engine tries the following parameter values: * `max_iter` - the two values 10 and 20 * `alpha` - a linear range of values between 1.0e-4 and 1.0e-1. Also complete the `gcloud` command to start the hyperparameter tuning job with a max trial count and a max number of parallel trials of 5 each.
###Code
%%bash
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
CONFIG_YAML=config.yaml
cat <<EOF > $CONFIG_YAML
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
- parameterId: max_iter
discreteValueSpec:
values:
- 10
- 20
- parameterId: alpha
doubleValueSpec:
minValue: 1.0e-4
maxValue: 1.0e-1
scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
replicaCount: $REPLICA_COUNT
containerSpec:
imageUri: $IMAGE_URI
args:
- --job_dir=$JOB_DIR
- --training_dataset_path=$TRAINING_FILE_PATH
- --validation_dataset_path=$VALIDATION_FILE_PATH
- --hptune
EOF
gcloud ai hp-tuning-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--config=$CONFIG_YAML \
--max-trial-count=5 \
--parallel-trial-count=5
echo "JOB_NAME: $JOB_NAME"
###Output
JOB_NAME: forestcover_tuning_20220209_151607
###Markdown
Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs". Retrieve HP-tuning results. After the job completes, you can review the results in the GCP Console or programmatically using the following functions (note that this code assumes that the metric the hyperparameter tuning engine optimizes is to be maximized): Exercise: Complete the body of the function below to retrieve the best trial for the given `JOB_NAME`:
###Code
def get_trials(job_name):
jobs = aiplatform.HyperparameterTuningJob.list()
    match = [job for job in jobs if job.display_name == job_name]
tuning_job = match[0] if match else None
return tuning_job.trials if tuning_job else None
def get_best_trial(trials):
metrics = [trial.final_measurement.metrics[0].value for trial in trials]
best_trial = trials[metrics.index(max(metrics))]
return best_trial
def retrieve_best_trial_from_job_name(jobname):
trials = get_trials(jobname)
best_trial = get_best_trial(trials)
return best_trial
###Output
_____no_output_____
###Markdown
You'll need to wait for the hyperparameter tuning job to complete before you can retrieve the best trial by running the cell below.
###Code
best_trial = retrieve_best_trial_from_job_name(JOB_NAME)
###Output
_____no_output_____
###Markdown
Retrain the model with the best hyperparameters. You can now retrain the model using the best hyperparameters, with the combined training and validation splits as the training dataset. Configure and run the training job
###Code
alpha = best_trial.parameters[0].value
max_iter = best_trial.parameters[1].value
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"JOB_VERTEX_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
WORKER_POOL_SPEC = f"""\
machine-type={MACHINE_TYPE},\
replica-count={REPLICA_COUNT},\
container-image-uri={IMAGE_URI}\
"""
ARGS = f"""\
--job_dir={JOB_DIR},\
--training_dataset_path={TRAINING_FILE_PATH},\
--validation_dataset_path={VALIDATION_FILE_PATH},\
--alpha={alpha},\
--max_iter={max_iter},\
--nohptune\
"""
!gcloud ai custom-jobs create \
--region={REGION} \
--display-name={JOB_NAME} \
--worker-pool-spec={WORKER_POOL_SPEC} \
--args={ARGS}
print("The model will be exported at:", JOB_DIR)
###Output
Using endpoint [https://us-central1-aiplatform.googleapis.com/]
CustomJob [projects/562035846305/locations/us-central1/customJobs/9065898882013069312] is submitted successfully.
Your job is still active. You may view the status of your job with the command
$ gcloud ai custom-jobs describe projects/562035846305/locations/us-central1/customJobs/9065898882013069312
or continue streaming the logs with the command
$ gcloud ai custom-jobs stream-logs projects/562035846305/locations/us-central1/customJobs/9065898882013069312
The model will be exported at: gs://qwiklabs-gcp-01-37ab11ee03f8-kfp-artifact-store/jobs/JOB_VERTEX_20220209_154807
###Markdown
Examine the training outputThe training script saved the trained model as 'model.pkl' in the `JOB_DIR` folder on GCS.**Note:** We need to wait for the job triggered by the cell above to complete before running the cells below.
###Code
!gsutil ls $JOB_DIR
###Output
gs://qwiklabs-gcp-01-37ab11ee03f8-kfp-artifact-store/jobs/JOB_VERTEX_20220209_154807/model.pkl
###Markdown
Deploy the model to Vertex AI Prediction
###Code
MODEL_NAME = "forest_cover_classifier_2"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest"
)
SERVING_MACHINE_TYPE = "n1-standard-2"
###Output
_____no_output_____
###Markdown
Uploading the trained model ExerciseUpload the trained model using `aiplatform.Model.upload`:
###Code
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_NAME,
artifact_uri=JOB_DIR,
serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)
###Output
INFO:google.cloud.aiplatform.models:Creating Model
INFO:google.cloud.aiplatform.models:Create Model backing LRO: projects/562035846305/locations/us-central1/models/5001357337357713408/operations/2948741270589145088
INFO:google.cloud.aiplatform.models:Model created. Resource name: projects/562035846305/locations/us-central1/models/5001357337357713408
INFO:google.cloud.aiplatform.models:To use this Model in another session:
INFO:google.cloud.aiplatform.models:model = aiplatform.Model('projects/562035846305/locations/us-central1/models/5001357337357713408')
###Markdown
Deploying the uploaded model ExerciseDeploy the model using `uploaded_model`:
###Code
endpoint = uploaded_model.deploy(
machine_type=SERVING_MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
###Output
INFO:google.cloud.aiplatform.models:Creating Endpoint
INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/562035846305/locations/us-central1/endpoints/8547770520098045952/operations/1541366387035865088
INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/562035846305/locations/us-central1/endpoints/8547770520098045952
INFO:google.cloud.aiplatform.models:To use this Endpoint in another session:
INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/562035846305/locations/us-central1/endpoints/8547770520098045952')
INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/562035846305/locations/us-central1/endpoints/8547770520098045952
INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/562035846305/locations/us-central1/endpoints/8547770520098045952/operations/1320690005294710784
INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/562035846305/locations/us-central1/endpoints/8547770520098045952
###Markdown
Serve predictions Prepare the input file with JSON-formatted instances. ExerciseQuery the deployed model using `endpoint`:
###Code
instance = [
2841.0,
45.0,
0.0,
644.0,
282.0,
1376.0,
218.0,
237.0,
156.0,
1003.0,
"Commanche",
"C4758",
]
endpoint.predict([instance])
###Output
_____no_output_____ |
starter_code/student_project.ipynb | ###Markdown
Overview 1. Project Instructions & Prerequisites2. Learning Objectives3. Data Preparation4. Create Categorical Features with TF Feature Columns5. Create Continuous/Numerical Features with TF Feature Columns6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers7. Evaluating Potential Model Biases with Aequitas Toolkit 1. Project Instructions & Prerequisites Project Instructions **Context**: EHR data is becoming a key source of real-world evidence (RWE) for the pharmaceutical industry and regulators to [make decisions on clinical trials](https://www.fda.gov/news-events/speeches-fda-officials/breaking-down-barriers-between-clinical-trials-and-clinical-care-incorporating-real-world-evidence). You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a very unique and sensitive drug that requires administering the drug over at least 5-7 days of time in the hospital with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which type of patients the company should focus their efforts testing this drug on. Target patients are people that are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. **Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset(denormalized at the line level augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial.This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there are limited number of publicly available datasets and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine(https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema**The dataset reference information can be https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Project Submission When submitting this project, make sure to run all the cells before saving the notebook. 
Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "utils.py" and "student_utils.py" files in your submission. The student_utils.py should be where you put most of your code that you write and the summary and text explanations should be written inline in the notebook. Once you download these files, compress them into one zip file for submission. Prerequisites - Intermediate level knowledge of Python- Basic knowledge of probability and statistics- Basic knowledge of machine learning concepts- Installation of Tensorflow 2.0 and other dependencies(conda environment.yml or virtualenv requirements.txt file provided) Environment Setup For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/README.md. 2. Learning Objectives By the end of the project, you will be able to - Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels(longitudinal) - Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis. - Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings - Create derived features(bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features - SWBAT use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions - Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework 3. Data Preparation
###Code
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import pandas as pd
import aequitas as ae
import seaborn as sns
# Put all of the helper functions in utils
from sklearn.metrics import roc_auc_score, accuracy_score, f1_score, classification_report, precision_score, recall_score
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
pd.set_option('display.max_columns', 500)
# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload
#OPEN ISSUE ON MAC OSX for TF model training
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
###Output
_____no_output_____
###Markdown
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/
###Code
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path)
df.head()
assert len(np.unique(df["encounter_id"])) < df.shape[0]
print("The dataset is at the line level")
###Output
The dataset is at the line level
###Markdown
Determine Level of Dataset (Line or Encounter) **Question 1**: Based off of analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked. Student Response: - From the previous cell we can conclude that the dataset is at the line level.- The other key field that we should aggregate on is "primary_diagnosis_code" Analyze Dataset **Question 2**: Utilizing the library of your choice (recommend Pandas and Seaborn or matplotlib though), perform exploratory data analysis on the dataset. In particular be sure to address the following questions: - a. Field(s) with high amount of missing/zero values - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian(normal) distribution shape? - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature) - d. Please describe the demographic distributions in the dataset for the age and gender fields. **OPTIONAL**: Use the Tensorflow Data Validation and Analysis library to complete. - The Tensorflow Data Validation and Analysis library(https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. - Note that there are some bugs that are still being resolved with Chrome v80 and we have moved away from using this for the project. Answer to question 2.aThe fields with a high amount of missing/zero values are: weight, max_glu_serum, A1Cresult, medical_specialty, payer_code, and ndc_code
###Code
df = df.replace('?', np.nan).replace('None', np.nan)
df.isnull().mean().sort_values(ascending=False)
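# The question also asks about zero values; a quick sketch of the share of zeros
# per numeric column (kept as an assignment so it can be inspected separately):
zero_share = (df.select_dtypes(include="number") == 0).mean().sort_values(ascending=False)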
###Output
_____no_output_____
###Markdown
Answer to question 2.bThe numerical fields with an approximately Gaussian distribution are: num_lab_procedures and num_medications
###Code
df.info()
numeric_field = [c for c in df.columns if df[c].dtype == "int64"]
numeric_field
for c in numeric_field:
sns.distplot(df[c], kde=False)
plt.title(c)
plt.show()
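# sns.distplot is deprecated in seaborn >= 0.11; a sketch of the same plots with
# the replacement API (defined as a helper, not executed here):
def plot_numeric_histograms(frame, columns):
    for col in columns:
        sns.histplot(frame[col], kde=False)
        plt.title(col)
        plt.show()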
###Output
_____no_output_____
###Markdown
Answer to question 2.cThe fields with high cardinality are: 'other_diagnosis_codes', 'primary_diagnosis_code', and 'ndc_code'. They encode medical code sets (diagnosis and NDC drug codes), which have thousands of distinct values, with many codes mapping to the same or similar conditions and drugs.
###Code
# identify categorical columns
cat_col = list(df.select_dtypes(['object']).columns)
cat_col.extend(['admission_type_id','discharge_disposition_id', 'admission_source_id'])
for col in cat_col:
df[col] = df[col].astype(str)
cat_col
pd.DataFrame({'cardinality': df[cat_col].nunique()})
###Output
_____no_output_____
###Markdown
Answer to question 2.dAccording to the plots below, the majority of patients are between 50 and 90 years old, and females account for a slightly larger share of hospitalizations than males.
###Code
plt.figure(figsize=(8, 5))
sns.countplot(x = 'age', data = df)
plt.figure(figsize=(8, 5))
sns.countplot(x = 'gender', data = df)
plt.figure(figsize=(8, 5))
sns.countplot(x = 'age', hue = 'gender', data = df)
# !pip install tensorflow-data-validation
# !pip install apache-beam[interactive]
#import tensorflow_data_validation as tfdv
######NOTE: The visualization will only display in Chrome browser. ########
#full_data_stats = tfdv.generate_statistics_from_csv(data_location='./data/final_project_dataset.csv')
#tfdv.visualize_statistics(full_data_stats)
###Output
_____no_output_____
###Markdown
Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site(https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
###Code
#NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)
ndc_code_df.head()
from student_utils import reduce_dimension_ndc
reduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)
reduce_dim_df.head()
# Number of unique values should be less for the new output field
assert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
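# For reference only, a minimal sketch of what reduce_dimension_ndc could look
# like (the graded implementation lives in student_utils.py; the lookup-table
# column names 'NDC_Code' and 'Non-proprietary Name' are assumptions about the
# provided file):
def reduce_dimension_ndc_sketch(line_df, ndc_df):
    mapping = dict(zip(ndc_df["NDC_Code"], ndc_df["Non-proprietary Name"]))
    out = line_df.copy()
    out["generic_drug_name"] = out["ndc_code"].map(mapping)
    return out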
###Output
_____no_output_____
###Markdown
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
###Code
from student_utils import select_first_encounter
first_encounter_df = select_first_encounter(reduce_dim_df)
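# Sketch of select_first_encounter (actual implementation in student_utils.py):
# sort by encounter_id, used here as the time proxy, and keep each patient's
# earliest encounter.
def select_first_encounter_sketch(df_in):
    df_sorted = df_in.sort_values("encounter_id")
    first_ids = df_sorted.groupby("patient_nbr")["encounter_id"].first()
    return df_sorted[df_sorted["encounter_id"].isin(first_ids)]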
# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))
# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))
original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()
# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
###Output
Number of unique patients:71518
Number of unique encounters:71518
Tests passed!!
###Markdown
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset" function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those are input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course.
###Code
exclusion_list = ['generic_drug_name', 'ndc_code']
grouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]
agg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')
assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()
###Output
_____no_output_____
###Markdown
Prepare Fields and Cast Dataset Feature Selection **Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice.
###Code
plt.figure(figsize=(8, 5))
sns.countplot(x = 'payer_code', data = agg_drug_df)
plt.figure(figsize=(8, 5))
sns.countplot(x = 'number_emergency', data = agg_drug_df)
###Output
_____no_output_____
###Markdown
Student response: We should exclude both payer_code and weight from the model because of the large proportion of missing values in each field.
###Code
'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required demographic features for the model
and I have inserted a list with them already in the categorical list.
These will be required for later steps when analyzing data splits and model biases.
'''
required_demo_col_list = ['race', 'gender', 'age']
student_categorical_col_list = [ 'change', 'primary_diagnosis_code'] + required_demo_col_list + ndc_col_list
student_numerical_col_list = [ 'number_inpatient', 'number_emergency', 'num_lab_procedures', 'number_diagnoses','num_medications','num_procedures']
PREDICTOR_FIELD = 'time_in_hospital'
def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):
    selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
    # Subset the dataframe that was passed in rather than the global agg_drug_df.
    return df[selected_col_list]
selected_features_df = select_model_features(agg_drug_df, student_categorical_col_list, student_numerical_col_list,
PREDICTOR_FIELD)
###Output
_____no_output_____
###Markdown
Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it?
###Code
processed_df = preprocess_df(selected_features_df, student_categorical_col_list,
student_numerical_col_list, PREDICTOR_FIELD, categorical_impute_value='nan', numerical_impute_value=0)
###Output
/home/workspace/starter_code/utils.py:29: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[predictor] = df[predictor].astype(float)
/home/workspace/starter_code/utils.py:31: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[c] = cast_df(df, c, d_type=str)
/home/workspace/starter_code/utils.py:33: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[numerical_column] = impute_df(df, numerical_column, numerical_impute_value)
###Markdown
Split Dataset into Train, Validation, and Test Partitions **Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidently leak across partitions.Please complete the function below to split the input dataset into three partitions(train, validation, test) with the following requirements.- Approximately 60%/20%/20% train/validation/test split- Randomly sample different patients into each data partition- **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage.- Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset- Total number of rows in original dataset = sum of rows across all three dataset partitions
###Code
def patient_dataset_splitter(df, patient_key='patient_nbr'):
    '''
    df: pandas dataframe, input dataset that will be split
    patient_key: string, column that is the patient id
    return:
     - train: pandas dataframe,
     - validation: pandas dataframe,
     - test: pandas dataframe,
    '''
    df[student_numerical_col_list] = df[student_numerical_col_list].astype(float)
    # After first-encounter selection the data has one row per patient, so
    # sampling rows is equivalent to sampling patients and no patient can leak
    # across partitions.
    # 80% -> train+validation, then 80% of that -> train, i.e. roughly a
    # 64%/16%/20% train/validation/test split.
    train_val_df = df.sample(frac=0.8, random_state=3)
    train_df = train_val_df.sample(frac=0.8, random_state=3)
    val_df = train_val_df.drop(train_df.index)
    test_df = df.drop(train_val_df.index)
    return train_df.reset_index(drop=True), val_df.reset_index(drop=True), test_df.reset_index(drop=True)
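
# Alternative sketch that samples unique patient ids explicitly. It is
# equivalent here because the aggregated data has one row per patient, but it
# stays safe even if a patient could ever appear in multiple rows.
def patient_dataset_splitter_by_id(df, patient_key="patient_nbr", random_state=3):
    ids = df[patient_key].drop_duplicates().sample(frac=1.0, random_state=random_state)
    n = len(ids)
    train_ids = ids.iloc[: int(0.6 * n)]
    val_ids = ids.iloc[int(0.6 * n): int(0.8 * n)]
    test_ids = ids.iloc[int(0.8 * n):]
    return (
        df[df[patient_key].isin(train_ids)].reset_index(drop=True),
        df[df[patient_key].isin(val_ids)].reset_index(drop=True),
        df[df[patient_key].isin(test_ids)].reset_index(drop=True),
    )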
#from student_utils import patient_dataset_splitter
d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')
assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")
assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == agg_drug_df['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
###Output
Test passed for number of unique patients being equal!
###Markdown
Demographic Representation Analysis of Split After the split, we should check to see the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions?
###Code
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
###Output
time_in_hospital
1.0 2159
2.0 2486
3.0 2576
4.0 1842
5.0 1364
6.0 1041
7.0 814
8.0 584
9.0 404
10.0 307
11.0 242
12.0 182
13.0 165
14.0 138
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
###Code
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')
# Training partition
show_group_stats_viz(d_train, 'gender')
# Test partition
show_group_stats_viz(d_test, 'gender')
###Output
gender
Female 7631
Male 6672
Unknown/Invalid 1
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Convert Dataset Splits to TF Dataset We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method and for larger datasets, the 'make_csv_dataset' method is recommended -https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
###Code
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)
# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]
def demo(feature_column, example_batch):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch))
###Output
_____no_output_____
###Markdown
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field that are from the **training** dataset. Below we have provided a function that you can use that only requires providing the pandas train dataset partition and the list of the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
###Code
vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)
###Output
_____no_output_____
###Markdown
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
###Code
from student_utils import create_tf_categorical_feature_cols
tf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)
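# Sketch of create_tf_categorical_feature_cols (graded version lives in
# student_utils.py): one vocabulary-file categorical column per feature, wrapped
# in an indicator (one-hot) column. The ./diabetes_vocab/ path and the
# "<col>_vocab.txt" naming mirror the example column printed below.
def create_tf_categorical_feature_cols_sketch(col_list, vocab_dir="./diabetes_vocab/"):
    output = []
    for col in col_list:
        vocab_path = os.path.join(vocab_dir, col + "_vocab.txt")
        cat_col = tf.feature_column.categorical_column_with_vocabulary_file(
            key=col, vocabulary_file=vocab_path, num_oov_buckets=1)
        output.append(tf.feature_column.indicator_column(cat_col))
    return output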
test_cat_var1 = tf_cat_col_list[0]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
###Output
Example categorical field:
IndicatorColumn(categorical_column=VocabularyFileCategoricalColumn(key='change', vocabulary_file='./diabetes_vocab/change_vocab.txt', vocabulary_size=3, num_oov_buckets=1, dtype=tf.string, default_value=-1))
WARNING:tensorflow:Layer dense_features is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4267: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4322: VocabularyFileCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
tf.Tensor(
[[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 0. 1. 0.]
[0. 0. 1. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]
[0. 1. 0. 0.]], shape=(128, 4), dtype=float32)
###Markdown
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API(https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help as well as the 'calculate_stats_from_train_data' function.
###Code
from student_utils import create_tf_numeric_feature
###Output
_____no_output_____
###Markdown
For simplicity the create_tf_numerical_feature_cols function below uses the same normalizer function across all features(z-score normalization) but if you have time feel free to analyze and adapt the normalizer based off the statistical distributions. You may find this as a good resource in determining which transformation fits best for the data https://developers.google.com/machine-learning/data-prep/transform/normalization.
###Code
def calculate_stats_from_train_data(df, col):
mean = df[col].describe()['mean']
std = df[col].describe()['std']
return mean, std
def create_tf_numerical_feature_cols(numerical_col_list, train_df):
tf_numeric_col_list = []
for c in numerical_col_list:
mean, std = calculate_stats_from_train_data(train_df, c)
tf_numeric_feature = create_tf_numeric_feature(c, mean, std)
tf_numeric_col_list.append(tf_numeric_feature)
return tf_numeric_col_list
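
# Sketch of create_tf_numeric_feature (graded version in student_utils.py): a
# numeric column whose normalizer_fn applies z-score scaling, matching the
# NumericColumn printed below.
def create_tf_numeric_feature_sketch(col, mean, std, default_value=0):
    normalizer = lambda col_values, m=mean, s=std: (tf.cast(col_values, tf.float64) - m) / s
    return tf.feature_column.numeric_column(
        key=col, default_value=default_value, dtype=tf.float64,
        normalizer_fn=normalizer)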
tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)
test_cont_var1 = tf_cont_col_list[0]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
###Output
Example continuous field:
NumericColumn(key='number_inpatient', shape=(1,), default_value=(0,), dtype=tf.float64, normalizer_fn=functools.partial(<function create_tf_numeric_feature.<locals>.<lambda> at 0x7f8b0dcff290>, m=0.17600664176006642, s=0.6009985590232482))
WARNING:tensorflow:Layer dense_features_1 is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2. The layer has dtype float32 because it's dtype defaults to floatx.
If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.
To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.
tf.Tensor(
[[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[ 1.3710403]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[ 3.0349379]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[ 1.3710403]
[-0.292857 ]
[ 1.3710403]
[-0.292857 ]
[-0.292857 ]
[ 3.0349379]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[ 1.3710403]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[ 3.0349379]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[ 1.3710403]
[-0.292857 ]
[-0.292857 ]
[ 1.3710403]
[ 3.0349379]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[ 1.3710403]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[ 1.3710403]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[ 1.3710403]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[ 1.3710403]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[ 1.3710403]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[ 4.6988354]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[-0.292857 ]
[ 1.3710403]
[-0.292857 ]
[-0.292857 ]], shape=(128, 1), dtype=float32)
###Markdown
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
###Code
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
###Output
_____no_output_____
###Markdown
Build Sequential API Model from DenseFeatures and TF Probability Layers Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish. **OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.
###Code
def build_sequential_model(feature_layer):
model = tf.keras.Sequential([
feature_layer,
tf.keras.layers.Dense(150, activation='relu'),
tf.keras.layers.Dense(75, activation='relu'),
tfp.layers.DenseVariational(1+1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t:tfp.distributions.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:])
)
),
])
return model
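
# Note on the architecture above: DenseVariational outputs 1 + 1 = 2 values per
# example, which DistributionLambda turns into the location and the
# softplus-transformed scale of a Normal predictive distribution. This is what
# later lets us read both a mean prediction and an uncertainty estimate.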
def build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mse'):
model = build_sequential_model(feature_layer)
model.compile(optimizer='rmsprop', loss=loss_metric, metrics=[loss_metric])
early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=3)
history = model.fit(train_ds, validation_data=val_ds,
callbacks=[early_stop],
epochs=epochs)
return model, history
diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=100)
###Output
Train for 358 steps, validate for 90 steps
Epoch 1/100
358/358 [==============================] - 8s 22ms/step - loss: 24.6293 - mse: 24.4528 - val_loss: 18.8390 - val_mse: 18.0638
Epoch 2/100
358/358 [==============================] - 5s 14ms/step - loss: 16.2279 - mse: 15.5325 - val_loss: 13.9956 - val_mse: 12.7708
Epoch 3/100
358/358 [==============================] - 5s 13ms/step - loss: 12.7957 - mse: 11.8131 - val_loss: 12.6461 - val_mse: 11.9815
Epoch 4/100
358/358 [==============================] - 5s 13ms/step - loss: 11.1345 - mse: 10.1729 - val_loss: 9.6361 - val_mse: 8.7062
Epoch 5/100
358/358 [==============================] - 5s 13ms/step - loss: 10.5669 - mse: 9.6203 - val_loss: 8.7706 - val_mse: 7.6288
Epoch 6/100
358/358 [==============================] - 5s 14ms/step - loss: 9.7915 - mse: 8.9645 - val_loss: 10.0533 - val_mse: 9.2921
Epoch 7/100
358/358 [==============================] - 5s 13ms/step - loss: 9.3032 - mse: 8.3535 - val_loss: 9.9639 - val_mse: 9.1109
Epoch 8/100
358/358 [==============================] - 5s 13ms/step - loss: 9.1580 - mse: 8.3823 - val_loss: 9.1492 - val_mse: 8.6072
Epoch 9/100
358/358 [==============================] - 5s 15ms/step - loss: 8.5008 - mse: 7.6392 - val_loss: 9.1765 - val_mse: 8.3628
Epoch 10/100
358/358 [==============================] - 5s 14ms/step - loss: 8.6621 - mse: 7.9144 - val_loss: 8.8989 - val_mse: 8.1129
Epoch 11/100
358/358 [==============================] - 5s 13ms/step - loss: 8.4136 - mse: 7.5674 - val_loss: 9.0078 - val_mse: 8.0410
Epoch 12/100
358/358 [==============================] - 4s 12ms/step - loss: 8.4214 - mse: 7.5767 - val_loss: 8.2852 - val_mse: 7.1796
Epoch 13/100
358/358 [==============================] - 5s 13ms/step - loss: 8.0986 - mse: 7.2194 - val_loss: 8.4048 - val_mse: 7.3981
Epoch 14/100
358/358 [==============================] - 5s 13ms/step - loss: 8.0938 - mse: 7.1935 - val_loss: 8.5188 - val_mse: 7.6221
Epoch 15/100
358/358 [==============================] - 5s 13ms/step - loss: 8.1018 - mse: 7.1305 - val_loss: 8.0634 - val_mse: 7.2399
Epoch 16/100
358/358 [==============================] - 4s 13ms/step - loss: 7.8850 - mse: 6.9900 - val_loss: 8.3079 - val_mse: 7.2268
Epoch 17/100
358/358 [==============================] - 5s 14ms/step - loss: 7.7499 - mse: 6.9222 - val_loss: 8.0496 - val_mse: 7.0870
Epoch 18/100
358/358 [==============================] - 5s 14ms/step - loss: 7.7917 - mse: 6.8462 - val_loss: 8.0874 - val_mse: 7.1399
Epoch 19/100
358/358 [==============================] - 7s 19ms/step - loss: 7.6744 - mse: 6.7074 - val_loss: 7.8026 - val_mse: 6.6841
Epoch 20/100
358/358 [==============================] - 5s 13ms/step - loss: 7.5486 - mse: 6.6938 - val_loss: 8.2209 - val_mse: 7.1531
Epoch 21/100
358/358 [==============================] - 4s 12ms/step - loss: 7.5367 - mse: 6.6350 - val_loss: 7.5655 - val_mse: 6.8027
Epoch 22/100
358/358 [==============================] - 5s 13ms/step - loss: 7.4362 - mse: 6.5466 - val_loss: 7.7240 - val_mse: 6.7320
Epoch 23/100
358/358 [==============================] - 5s 13ms/step - loss: 7.5003 - mse: 6.6328 - val_loss: 8.3950 - val_mse: 7.0638
Epoch 24/100
358/358 [==============================] - 5s 13ms/step - loss: 7.5862 - mse: 6.5718 - val_loss: 7.7765 - val_mse: 6.9072
Epoch 25/100
358/358 [==============================] - 5s 13ms/step - loss: 7.5311 - mse: 6.5044 - val_loss: 7.5655 - val_mse: 6.4988
Epoch 26/100
358/358 [==============================] - 5s 15ms/step - loss: 7.3392 - mse: 6.4640 - val_loss: 8.1907 - val_mse: 7.2635
Epoch 27/100
358/358 [==============================] - 4s 12ms/step - loss: 7.4509 - mse: 6.4358 - val_loss: 7.7066 - val_mse: 6.7276
Epoch 28/100
358/358 [==============================] - 5s 13ms/step - loss: 7.3021 - mse: 6.4213 - val_loss: 7.8700 - val_mse: 6.6681
Epoch 29/100
358/358 [==============================] - 4s 12ms/step - loss: 7.1892 - mse: 6.3133 - val_loss: 7.7775 - val_mse: 6.7267
Epoch 30/100
358/358 [==============================] - 4s 12ms/step - loss: 7.1800 - mse: 6.3027 - val_loss: 7.2485 - val_mse: 6.6883
Epoch 31/100
358/358 [==============================] - 4s 12ms/step - loss: 7.2975 - mse: 6.2679 - val_loss: 7.4930 - val_mse: 6.4465
Epoch 32/100
358/358 [==============================] - 5s 13ms/step - loss: 7.0811 - mse: 6.1155 - val_loss: 7.6528 - val_mse: 6.6499
Epoch 33/100
358/358 [==============================] - 5s 13ms/step - loss: 7.1447 - mse: 6.2611 - val_loss: 7.2010 - val_mse: 6.3405
Epoch 34/100
358/358 [==============================] - 5s 13ms/step - loss: 7.1626 - mse: 6.2342 - val_loss: 7.8166 - val_mse: 6.9278
Epoch 35/100
358/358 [==============================] - 5s 14ms/step - loss: 7.0655 - mse: 6.0895 - val_loss: 7.7267 - val_mse: 6.6891
Epoch 36/100
358/358 [==============================] - 5s 13ms/step - loss: 6.9679 - mse: 6.0368 - val_loss: 7.6034 - val_mse: 6.5356
Epoch 37/100
358/358 [==============================] - 5s 14ms/step - loss: 6.9613 - mse: 6.0795 - val_loss: 7.5522 - val_mse: 6.7222
Epoch 38/100
358/358 [==============================] - 4s 12ms/step - loss: 7.0079 - mse: 6.0620 - val_loss: 7.3254 - val_mse: 6.7018
Epoch 39/100
358/358 [==============================] - 5s 13ms/step - loss: 6.9752 - mse: 5.9678 - val_loss: 7.6082 - val_mse: 6.8405
Epoch 40/100
358/358 [==============================] - 4s 12ms/step - loss: 6.8712 - mse: 5.9475 - val_loss: 7.6724 - val_mse: 6.6324
Epoch 41/100
358/358 [==============================] - 5s 13ms/step - loss: 6.9053 - mse: 5.9111 - val_loss: 8.5137 - val_mse: 7.7260
Epoch 42/100
358/358 [==============================] - 5s 13ms/step - loss: 6.7378 - mse: 5.8993 - val_loss: 7.2622 - val_mse: 6.6142
Epoch 43/100
358/358 [==============================] - 4s 12ms/step - loss: 6.8369 - mse: 5.9022 - val_loss: 7.5613 - val_mse: 6.8527
Epoch 44/100
358/358 [==============================] - 5s 14ms/step - loss: 6.7802 - mse: 5.8444 - val_loss: 7.3737 - val_mse: 6.4707
Epoch 45/100
358/358 [==============================] - 4s 12ms/step - loss: 6.9552 - mse: 6.0110 - val_loss: 7.8480 - val_mse: 6.7347
Epoch 46/100
358/358 [==============================] - 5s 13ms/step - loss: 6.8904 - mse: 5.8933 - val_loss: 7.7557 - val_mse: 6.8180
Epoch 47/100
358/358 [==============================] - 4s 12ms/step - loss: 6.8171 - mse: 5.8959 - val_loss: 7.1823 - val_mse: 6.5033
###Markdown
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
###Code
feature_list = student_categorical_col_list + student_numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhat = diabetes_model(diabetes_x_tst)
preds = diabetes_model.predict(diabetes_test_ds)
from student_utils import get_mean_std_from_preds
m, s = get_mean_std_from_preds(diabetes_yhat)
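# Sketch of get_mean_std_from_preds (graded version in student_utils.py): the
# model output is a TF Probability distribution, so the predictive mean and
# standard deviation can be read directly from it.
def get_mean_std_from_preds_sketch(yhat_distribution):
    m = yhat_distribution.mean()
    s = yhat_distribution.stddev()
    return m, s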
###Output
_____no_output_____
###Markdown
Show Prediction Output
###Code
prob_outputs = {
"pred": preds.flatten(),
"actual_value": d_test['time_in_hospital'].values,
"pred_mean": m.numpy().flatten(),
"pred_std": s.numpy().flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df.head()
###Output
_____no_output_____
###Markdown
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label for whether the patient meets the time criteria or not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based on whether the prediction meets the criteria.
###Code
from student_utils import get_student_binary_prediction
student_binary_prediction = get_student_binary_prediction(prob_output_df, 'pred_mean')
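# Sketch of get_student_binary_prediction (graded version in student_utils.py):
# threshold the mean predicted stay at 5 days, mirroring the label_value rule
# applied below; the 5-day cutoff reflects the trial's >= 5-day stay requirement.
def get_student_binary_prediction_sketch(df, col, threshold=5):
    return (df[col] >= threshold).astype(int).values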
###Output
_____no_output_____
###Markdown
Add Binary Prediction to Test Dataframe Using the student_binary_prediction output that is a numpy array with binary labels, we can use this to add to a dataframe to better visualize and also to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label for the predictions (called 'score' field) and the actual value (called 'label_value').
###Code
def add_pred_to_test(test_df, pred_np, demo_col_list):
for c in demo_col_list:
test_df[c] = test_df[c].astype(str)
test_df['score'] = pred_np
test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >=5 else 0)
return test_df
pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()
###Output
_____no_output_____
###Markdown
Model Evaluation Metrics **Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score(weighted), class precision and recall scores. For the report please be sure to include the following three parts:- With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model.- What are some areas of improvement for future iterations?
###Code
# AUC, F1, precision and recall
print(classification_report(pred_test_df['label_value'], pred_test_df['score']))
# Print each summary metric explicitly; bare expressions in the middle of a
# cell are evaluated but not displayed.
print("F1 (weighted):", f1_score(pred_test_df['label_value'], pred_test_df['score'], average='weighted'))
print("Accuracy:", accuracy_score(pred_test_df['label_value'], pred_test_df['score']))
print("ROC AUC:", roc_auc_score(pred_test_df['label_value'], pred_test_df['score']))
print("Precision:", precision_score(pred_test_df['label_value'], pred_test_df['score']))
print("Recall:", recall_score(pred_test_df['label_value'], pred_test_df['score']))
###Output
_____no_output_____
###Markdown
7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
###Code
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness
ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
###Output
/opt/conda/lib/python3.7/site-packages/aequitas/group.py:143: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df['score'] = df['score'].astype(float)
###Markdown
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
###Code
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
original_df=ae_df,
ref_groups_dict={'race':'Caucasian', 'gender':'Male'
},
alpha=0.05,
check_significance=False)
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
###Output
get_disparity_predefined_group()
###Markdown
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.
###Code
# Plot two metrics
aqp.plot_group_metric(clean_xtab, 'fpr', min_group_size=0.05)
# Is there significant bias in your model for either race or gender?
aqp.plot_group_metric(clean_xtab, 'tpr', min_group_size=0.05)
aqp.plot_group_metric(clean_xtab, 'fnr', min_group_size=0.05)
aqp.plot_group_metric(clean_xtab, 'tnr', min_group_size=0.05)
###Output
_____no_output_____
###Markdown
Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.
###Code
# Reference group fairness plot
aqp.plot_fairness_disparity(bdf, group_metric='fnr', attribute_name='race', significance_alpha=0.05, min_group_size=0.05)
aqp.plot_fairness_disparity(fdf, group_metric='fnr', attribute_name='gender', significance_alpha=0.05, min_group_size=0.05)
aqp.plot_fairness_disparity(fdf, group_metric='fpr', attribute_name='race', significance_alpha=0.05, min_group_size=0.05)
aqp.plot_fairness_group(fdf, group_metric='fpr', title=True, min_group_size=0.05)
aqp.plot_fairness_group(fdf, group_metric='fnr', title=True)
###Output
_____no_output_____
###Markdown
Overview 1. Project Instructions & Prerequisites2. Learning Objectives3. Data Preparation4. Create Categorical Features with TF Feature Columns5. Create Continuous/Numerical Features with TF Feature Columns6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers7. Evaluating Potential Model Biases with Aequitas Toolkit 1. Project Instructions & Prerequisites Project Instructions **Context**: EHR data is becoming a key source of real-world evidence (RWE) for the pharmaceutical industry and regulators to [make decisions on clinical trials](https://www.fda.gov/news-events/speeches-fda-officials/breaking-down-barriers-between-clinical-trials-and-clinical-care-incorporating-real-world-evidence). You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a very unique and sensitive drug that requires administering the drug over at least 5-7 days of time in the hospital with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which type of patients the company should focus their efforts testing this drug on. Target patients are people that are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. **Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset(denormalized at the line level augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial.This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there are limited number of publicly available datasets and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine(https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema**The dataset reference information can be https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Project Submission When submitting this project, make sure to run all the cells before saving the notebook. 
Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "utils.py" and "student_utils.py" files in your submission. The student_utils.py should be where you put most of your code that you write and the summary and text explanations should be written inline in the notebook. Once you download these files, compress them into one zip file for submission. Prerequisites - Intermediate level knowledge of Python- Basic knowledge of probability and statistics- Basic knowledge of machine learning concepts- Installation of Tensorflow 2.0 and other dependencies(conda environment.yml or virtualenv requirements.txt file provided) Environment Setup For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/README.md. 2. Learning Objectives By the end of the project, you will be able to - Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels(longitudinal) - Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis. - Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings - Create derived features(bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features - SWBAT use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions - Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework 3. Data Preparation
###Code
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import pandas as pd
import aequitas as ae
# Put all of the helper functions in utils
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
pd.set_option('display.max_columns', 500)
# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload
#OPEN ISSUE ON MAC OSX for TF model training
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
###Output
_____no_output_____
###Markdown
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/
###Code
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path)
feature_df = df.copy()
df.head()
df.sort_values(by=['patient_nbr','encounter_id'])
df.describe()
###Output
_____no_output_____
###Markdown
Determine Level of Dataset (Line or Encounter)
###Code
# Line Test
try:
assert len(df) > df['encounter_id'].nunique()
print("Dataset could be at the line level")
except AssertionError:
print("Dataset is not at the line level")
# Encounter Test
try:
assert len(df) == df['encounter_id'].nunique()
print("Dataset could be at the encounter level")
except AssertionError:
print("Dataset is not at the encounter level")
###Output
Dataset is not at the encounter level
###Markdown
**Question 1**: Based off of analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked. Student Response: Tests for identifying the level of the dataset: line level if the total number of rows > number of unique encounters; encounter level if the total number of rows = number of unique encounters. The line test passes, so the dataset is at the line level. We should also aggregate on the primary_diagnosis_code field.
###Code
grouping_fields = ['encounter_id', 'patient_nbr', 'primary_diagnosis_code']
###Output
_____no_output_____
###Markdown
Analyze Dataset **Question 2**: Utilizing the library of your choice (recommend Pandas and Seaborn or matplotlib though), perform exploratory data analysis on the dataset. In particular be sure to address the following questions: - a. Field(s) with high amount of missing/zero values - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian(normal) distribution shape? - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature) - d. Please describe the demographic distributions in the dataset for the age and gender fields. **OPTIONAL**: Use the Tensorflow Data Validation and Analysis library to complete. - The Tensorflow Data Validation and Analysis library(https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. - Note that there are some bugs that are still being resolved with Chrome v80 and we have moved away from using this for the project. **Student Response**: I have included my response after the analysis section for each question below:
###Code
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
# Replace the '?' placeholder values with NaN so they are treated as missing
df = df.replace('?', np.nan)
df.dtypes
###Output
_____no_output_____
###Markdown
a. Field(s) with high amount of missing/zero values
###Code
# Missing values
def check_null_values(df):
null_df = pd.DataFrame({'columns': df.columns,
'percent_null': df.isnull().sum() * 100 / len(df),
'percent_zero': df.isin([0]).sum() * 100 / len(df)
} )
return null_df
null_df = check_null_values(df)
null_df
###Output
_____no_output_____
###Markdown
Weight and payer code have a high number of missing/null values. I will not include these features in my model.
###Code
df['primary_diagnosis_code'].value_counts()[0:20].plot(kind='bar')
plt.title('Top 20 Primary Diagnosis Codes')
categorical_features = [
'race','gender','age','weight', 'max_glu_serum','A1Cresult','change','readmitted','payer_code','ndc_code',
'primary_diagnosis_code','other_diagnosis_codes','admission_type_id','discharge_disposition_id']
numerical_features = [
'time_in_hospital','number_outpatient','number_inpatient','number_emergency','number_diagnoses',
'num_lab_procedures','num_medications','num_procedures']
# analyse categorical features
fig, ax =plt.subplots(len(categorical_features),1, figsize=(10,20))
for counter, col in enumerate(categorical_features):
sns.countplot(df[col], ax=ax[counter])
ax[counter].set_title(col)
fig.show()
###Output
_____no_output_____
###Markdown
b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian(normal) distribution shape?
###Code
# analyse numerical features
fig, ax =plt.subplots(len(numerical_features),1, figsize=(10,20))
for counter, col in enumerate(numerical_features):
sns.distplot(df[col],ax=ax[counter], bins=(df[col].nunique()))
ax[counter].set_title(col)
fig.show()
###Output
_____no_output_____
###Markdown
The fields that have a Gaussian distribution are num_lab_procedures and num_medications. c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature)
###Code
def count_unique_values(df, cat_col_list):
cat_df = df[cat_col_list]
val_df = pd.DataFrame({'columns': cat_df.columns,
'cardinality': cat_df.nunique() } )
return val_df
val_df = count_unique_values(df, categorical_features)
val_df
###Output
_____no_output_____
###Markdown
primary_diagnosis_code, other_diagnosis_codes, ndc_code have high cardinality. There is high cardinality for these fields because medical codesets have an extremely wide range of possible values compared to a field like gender or age. d. Please describe the demographic distributions in the dataset for the age and gender fields.
###Code
# analyse categorical features
fig, ax =plt.subplots(1,2, figsize=(15,5))
for counter, col in enumerate(['age','gender']):
sns.countplot(df[col], ax=ax[counter])
ax[counter].set_title(col)
fig.show()
sns.countplot(x="age", hue="gender", data=df)
###Output
_____no_output_____
###Markdown
The dataset contains male and female patients and is roughly balanced for this demographic. The age buckets range in increments of ten from 0 to 100 and the population is skewed towards older age groups above 50. When grouping age and gender together, the distribution of the genders across the ages follows a similar pattern.
###Code
######NOTE: The visualization will only display in Chrome browser. ########
# Optional TFDV summary statistics (requires the tensorflow_data_validation package)
import tensorflow_data_validation as tfdv
full_data_stats = tfdv.generate_statistics_from_csv(data_location='./data/final_project_dataset.csv')
tfdv.visualize_statistics(full_data_stats)
###Output
_____no_output_____
###Markdown
Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site(https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
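As a rough illustration only, here is a minimal sketch of what the `reduce_dimension_ndc` helper (implemented in student_utils.py, not shown here) could look like; the generic-name column of the lookup table is an assumption for this sketch:
###Code
# Hypothetical sketch only -- the real implementation lives in student_utils.py.
# GENERIC_NAME_COL is an assumed column name in the NDC lookup table.
GENERIC_NAME_COL = 'Non-proprietary Name'

def reduce_dimension_ndc_sketch(df, ndc_df):
    # Map every NDC code to its generic drug name via the lookup table,
    # collapsing the many codes that point to the same or similar drug.
    ndc_map = dict(zip(ndc_df['NDC_Code'], ndc_df[GENERIC_NAME_COL]))
    out_df = df.copy()
    out_df['generic_drug_name'] = out_df['ndc_code'].map(ndc_map)
    return out_df
###Output
_____no_output_____
###Markdown
The cell below calls the actual student_utils implementation.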
###Code
#NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)
from student_utils import reduce_dimension_ndc
reduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)
reduce_dim_df
# Number of unique values should be less for the new output field
assert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
reduce_dim_df.head()
###Output
_____no_output_____
###Markdown
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
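A minimal sketch of one way `select_first_encounter` (from student_utils.py) might be implemented, assuming encounter_id ordering as the time proxy:
###Code
# Hypothetical sketch only -- the real implementation lives in student_utils.py.
def select_first_encounter_sketch(df):
    # Sort by encounter_id (assumed chronological order) and keep all line-level
    # rows that belong to each patient's earliest encounter.
    sorted_df = df.sort_values('encounter_id')
    first_encounter_ids = sorted_df.groupby('patient_nbr')['encounter_id'].first().values
    return sorted_df[sorted_df['encounter_id'].isin(first_encounter_ids)].reset_index(drop=True)
###Output
_____no_output_____
###Markdown
The actual implementation is imported from student_utils below.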
###Code
from student_utils import select_first_encounter
first_encounter_df = select_first_encounter(reduce_dim_df)
# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))
# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))
original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()
# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
###Output
Number of unique patients:56133
Number of unique encounters:56133
Tests passed!!
###Markdown
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset' function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those as input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course.
###Code
exclusion_list = ['generic_drug_name','ndc_code']
grouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]
agg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')
agg_drug_df.head()
agg_drug_df['patient_nbr'].nunique()
agg_drug_df['encounter_id'].nunique()
assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()
###Output
_____no_output_____
###Markdown
Prepare Fields and Cast Dataset Feature Selection **Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice. Student response: I am going to exclude the weight and payer_code fields as these have a high proportion of missing/null values, as shown in the exploration phase, and this could introduce noise into the model.
###Code
agg_drug_df.NDC_Code
'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required demographic features for the model
and I have inserted a list with them already in the categorical list.
These will be required for later steps when analyzing data splits and model biases.
'''
required_demo_col_list = ['race', 'gender', 'age']
student_categorical_col_list = [ "max_glu_serum", "A1Cresult", 'change', 'readmitted', 'NDC_Code','primary_diagnosis_code'] + required_demo_col_list + ndc_col_list
student_numerical_col_list = [ "number_diagnoses", "num_medications",'num_procedures','num_lab_procedures']
PREDICTOR_FIELD = 'time_in_hospital'
def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):
selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
    return df[selected_col_list]  # use the dataframe passed in rather than the global agg_drug_df
selected_features_df = select_model_features(agg_drug_df, student_categorical_col_list, student_numerical_col_list,
PREDICTOR_FIELD)
###Output
_____no_output_____
###Markdown
Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it?
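One possible answer to the optional question, sketched under the assumption that a column median is a more robust fill value than zero for skewed count features (the helper name here is illustrative, not part of utils.py):
###Code
# Hypothetical alternative imputation strategy: use the column median instead of zero,
# so missing values are not pulled toward an artificial 0 that may sit outside the
# typical range of the feature.
def impute_numerical_with_median(df, numerical_col_list):
    out_df = df.copy()
    for c in numerical_col_list:
        out_df[c] = out_df[c].fillna(out_df[c].median())
    return out_df
###Output
_____no_output_____
###Markdown
The project keeps the simpler zero-imputation strategy from utils.py below.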
###Code
processed_df = preprocess_df(selected_features_df, student_categorical_col_list,
student_numerical_col_list, PREDICTOR_FIELD, categorical_impute_value='nan', numerical_impute_value=0)
###Output
/home/workspace/starter_code/utils.py:29: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[predictor] = df[predictor].astype(float)
/home/workspace/starter_code/utils.py:31: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[c] = cast_df(df, c, d_type=str)
/home/workspace/starter_code/utils.py:33: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[numerical_column] = impute_df(df, numerical_column, numerical_impute_value)
###Markdown
**Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidentally leak across partitions. Please complete the function below to split the input dataset into three partitions (train, validation, test) with the following requirements.- Approximately 60%/20%/20% train/validation/test split- Randomly sample different patients into each data partition- **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage.- Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset- Total number of rows in original dataset = sum of rows across all three dataset partitions
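A minimal sketch of a patient-level splitter that satisfies these requirements (the real implementation lives in student_utils.py; the random seed here is an arbitrary choice):
###Code
# Hypothetical sketch only -- the real implementation lives in student_utils.py.
def patient_dataset_splitter_sketch(df, patient_key='patient_nbr'):
    # Shuffle the unique patient ids so each partition gets a random sample of patients,
    # then assign roughly 60%/20%/20% of patients to train/validation/test.
    patients = df[patient_key].drop_duplicates().sample(frac=1.0, random_state=42).values
    n = len(patients)
    train_ids = patients[: int(0.6 * n)]
    val_ids = patients[int(0.6 * n): int(0.8 * n)]
    test_ids = patients[int(0.8 * n):]
    # Every row of a given patient lands in exactly one partition, avoiding leakage.
    train = df[df[patient_key].isin(train_ids)].reset_index(drop=True)
    validation = df[df[patient_key].isin(val_ids)].reset_index(drop=True)
    test = df[df[patient_key].isin(test_ids)].reset_index(drop=True)
    return train, validation, test
###Output
_____no_output_____
###Markdown
The cell below imports and tests the real patient_dataset_splitter.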
###Code
from student_utils import patient_dataset_splitter
d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')
assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")
assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == agg_drug_df['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
###Output
Test passed for number of unique patients being equal!
###Markdown
Demographic Representation Analysis of Split After the split, we should check to see the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions?
###Code
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
###Output
time_in_hospital
1.0 1474
2.0 1935
3.0 2071
4.0 1504
5.0 1126
6.0 786
7.0 642
8.0 483
9.0 326
10.0 251
11.0 185
12.0 170
13.0 153
14.0 121
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
The distributions are consistent across the splits. Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
###Code
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')
# Training partition
show_group_stats_viz(d_train, 'gender')
# Test partition
show_group_stats_viz(d_test, 'gender')
###Output
gender
Female 5936
Male 5291
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
The demographics are consistent across the splits. Convert Dataset Splits to TF Dataset We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method and for larger datasets, the 'make_csv_dataset' method is recommended: https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
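For reference, a df_to_dataset style converter typically follows the standard from_tensor_slices pattern; the sketch below is illustrative and not the exact utils.py code:
###Code
# Hypothetical sketch of a Pandas-to-tf.data converter similar to utils.df_to_dataset.
def df_to_dataset_sketch(df, predictor_field, batch_size=32):
    df = df.copy()
    labels = df.pop(predictor_field)
    # Each column becomes a named feature tensor; labels are paired with the features.
    ds = tf.data.Dataset.from_tensor_slices((dict(df), labels))
    ds = ds.shuffle(buffer_size=len(df)).batch(batch_size)
    return ds
###Output
_____no_output_____
###Markdown
The cell below uses the df_to_dataset function provided in utils.py.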
###Code
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)
# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]
def demo(feature_column, example_batch):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch))
###Output
_____no_output_____
###Markdown
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field that are from the **training** dataset. Below we have provided a function that you can use that only requires providing the pandas train dataset partition and the list of the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
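For reference, a vocab-file builder like utils.build_vocab_files can be sketched as writing one unique training-set value per line for each field; the './diabetes_vocab/' directory matches the vocabulary_file paths shown in the outputs further below, while the function name here is illustrative:
###Code
# Hypothetical sketch of a vocabulary-file builder similar to utils.build_vocab_files.
import os

def build_vocab_files_sketch(train_df, categorical_col_list, vocab_dir='./diabetes_vocab/'):
    os.makedirs(vocab_dir, exist_ok=True)
    vocab_file_list = []
    for c in categorical_col_list:
        vocab_path = os.path.join(vocab_dir, c + '_vocab.txt')
        # One unique training-set value per line for this field.
        with open(vocab_path, 'w') as f:
            for value in train_df[c].unique():
                f.write(str(value) + '\n')
        vocab_file_list.append(vocab_path)
    return vocab_file_list
###Output
_____no_output_____
###Markdown
The cell below calls the provided build_vocab_files helper.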
###Code
vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)
###Output
_____no_output_____
###Markdown
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
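A minimal sketch of `create_tf_categorical_feature_cols` (from student_utils.py), consistent with the IndicatorColumn/VocabularyFileCategoricalColumn output shown further below; the vocab_dir default is taken from the vocabulary_file paths in that output:
###Code
# Hypothetical sketch only -- the real implementation lives in student_utils.py.
import os

def create_tf_categorical_feature_cols_sketch(categorical_col_list, vocab_dir='./diabetes_vocab/'):
    output_tf_list = []
    for c in categorical_col_list:
        vocab_file_path = os.path.join(vocab_dir, c + '_vocab.txt')
        # Vocabulary-backed categorical column, one-hot encoded with an indicator column.
        cat_col = tf.feature_column.categorical_column_with_vocabulary_file(
            key=c, vocabulary_file=vocab_file_path, num_oov_buckets=0)
        output_tf_list.append(tf.feature_column.indicator_column(cat_col))
    return output_tf_list
###Output
_____no_output_____
###Markdown
The cell below imports and demos the real create_tf_categorical_feature_cols.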
###Code
%autoreload
from student_utils import create_tf_categorical_feature_cols
tf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)
test_cat_var1 = tf_cat_col_list[0]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
###Output
Example categorical field:
IndicatorColumn(categorical_column=VocabularyFileCategoricalColumn(key='max_glu_serum', vocabulary_file='./diabetes_vocab/max_glu_serum_vocab.txt', vocabulary_size=5, num_oov_buckets=0, dtype=tf.string, default_value=-1))
WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4267: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4322: VocabularyFileCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
tf.Tensor(
[[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 0. 0. 1. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 0. 0. 0. 1.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 0. 0. 0. 1.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0.]], shape=(128, 5), dtype=float32)
###Markdown
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API(https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help as well as the 'calculate_stats_from_train_data' function.
###Code
%autoreload
from student_utils import create_tf_numeric_feature
###Output
_____no_output_____
###Markdown
For simplicity the create_tf_numerical_feature_cols function below uses the same normalizer function across all features(z-score normalization) but if you have time feel free to analyze and adapt the normalizer based off the statistical distributions. You may find this as a good resource in determining which transformation fits best for the data https://developers.google.com/machine-learning/data-prep/transform/normalization.
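A minimal sketch of `create_tf_numeric_feature` (from student_utils.py), consistent with the NumericColumn output shown further below, which binds a z-score normalizer via functools.partial:
###Code
# Hypothetical sketch only -- the real implementation lives in student_utils.py.
import functools

def normalize_numeric_with_zscore_sketch(col, mean, std):
    # Standard z-score normalization.
    return (col - mean) / std

def create_tf_numeric_feature_sketch(col, mean, std, default_value=0):
    normalizer = functools.partial(normalize_numeric_with_zscore_sketch, mean=mean, std=std)
    return tf.feature_column.numeric_column(
        key=col, default_value=default_value, normalizer_fn=normalizer, dtype=tf.float64)
###Output
_____no_output_____
###Markdown
The cells below wire the real create_tf_numeric_feature into create_tf_numerical_feature_cols.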
###Code
def calculate_stats_from_train_data(df, col):
mean = df[col].describe()['mean']
std = df[col].describe()['std']
return mean, std
def create_tf_numerical_feature_cols(numerical_col_list, train_df):
tf_numeric_col_list = []
for c in numerical_col_list:
mean, std = calculate_stats_from_train_data(train_df, c)
tf_numeric_feature = create_tf_numeric_feature(c, mean, std)
tf_numeric_col_list.append(tf_numeric_feature)
return tf_numeric_col_list
tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)
test_cont_var1 = tf_cont_col_list[0]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
###Output
Example continuous field:
NumericColumn(key='number_diagnoses', shape=(1,), default_value=(0,), dtype=tf.float64, normalizer_fn=functools.partial(<function normalize_numeric_with_zscore at 0x7f9325730a70>, mean=7.280938242280285, std=1.9929338813352762))
tf.Tensor(
[[ 2.]
[-1.]
[-2.]
[-3.]
[ 2.]
[ 2.]
[ 2.]
[ 0.]
[-2.]
[ 2.]
[-2.]
[-3.]
[ 2.]
[ 2.]
[ 2.]
[ 1.]
[ 0.]
[ 1.]
[ 2.]
[ 1.]
[ 2.]
[ 0.]
[-1.]
[ 2.]
[ 0.]
[-1.]
[-2.]
[-3.]
[ 2.]
[ 0.]
[ 0.]
[ 1.]
[ 2.]
[-4.]
[ 2.]
[ 1.]
[ 1.]
[ 2.]
[-4.]
[-2.]
[-2.]
[ 2.]
[ 0.]
[ 2.]
[-1.]
[ 1.]
[ 1.]
[ 2.]
[ 2.]
[ 1.]
[-2.]
[-1.]
[ 1.]
[ 2.]
[ 0.]
[ 2.]
[ 2.]
[ 2.]
[-2.]
[-2.]
[-4.]
[-1.]
[ 2.]
[ 2.]
[ 2.]
[ 2.]
[-1.]
[ 2.]
[ 2.]
[-3.]
[ 2.]
[ 2.]
[ 1.]
[ 1.]
[ 2.]
[-3.]
[-3.]
[-3.]
[ 1.]
[ 1.]
[ 2.]
[ 2.]
[ 2.]
[-1.]
[ 0.]
[-3.]
[ 2.]
[-3.]
[ 2.]
[ 2.]
[ 2.]
[ 1.]
[-2.]
[-2.]
[-2.]
[ 1.]
[-2.]
[ 0.]
[ 2.]
[-2.]
[ 2.]
[ 2.]
[ 0.]
[ 2.]
[ 2.]
[ 1.]
[ 1.]
[-1.]
[ 0.]
[ 2.]
[ 2.]
[ 2.]
[ 2.]
[-2.]
[ 2.]
[ 1.]
[ 0.]
[ 2.]
[-1.]
[ 2.]
[ 2.]
[-1.]
[ 2.]
[ 0.]
[ 2.]
[ 2.]
[-3.]
[-3.]], shape=(128, 1), dtype=float32)
###Markdown
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
###Code
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
###Output
_____no_output_____
###Markdown
Build Sequential API Model from DenseFeatures and TF Probability Layers Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish. **OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.
###Code
def build_sequential_model(feature_layer):
model = tf.keras.Sequential([
feature_layer,
tf.keras.layers.Dense(150, activation='relu'),
tf.keras.layers.Dense(75, activation='relu'),
tfp.layers.DenseVariational(1+1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t:tfp.distributions.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:])
)
),
])
return model
def build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mse'):
model = build_sequential_model(feature_layer)
model.compile(optimizer='rmsprop', loss=loss_metric, metrics=[loss_metric])
early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=3)
history = model.fit(train_ds, validation_data=val_ds,
callbacks=[early_stop],
epochs=epochs)
return model, history
diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=10)
###Output
Train for 264 steps, validate for 88 steps
Epoch 1/10
264/264 [==============================] - 11s 40ms/step - loss: 31.3270 - mse: 31.1397 - val_loss: 22.9542 - val_mse: 22.5868
Epoch 2/10
264/264 [==============================] - 6s 21ms/step - loss: 18.4408 - mse: 17.9012 - val_loss: 18.6944 - val_mse: 18.3352
Epoch 3/10
264/264 [==============================] - 5s 19ms/step - loss: 17.1018 - mse: 16.3253 - val_loss: 14.1931 - val_mse: 13.3952
Epoch 4/10
264/264 [==============================] - 5s 21ms/step - loss: 13.2144 - mse: 12.2935 - val_loss: 14.0796 - val_mse: 13.3591
Epoch 5/10
264/264 [==============================] - 5s 20ms/step - loss: 13.3461 - mse: 12.5401 - val_loss: 11.3334 - val_mse: 10.3507
Epoch 6/10
264/264 [==============================] - 6s 22ms/step - loss: 11.2645 - mse: 10.1656 - val_loss: 11.6603 - val_mse: 10.6994
Epoch 7/10
264/264 [==============================] - 6s 22ms/step - loss: 10.8903 - mse: 9.7690 - val_loss: 11.1973 - val_mse: 10.2565
Epoch 8/10
264/264 [==============================] - 6s 22ms/step - loss: 9.3742 - mse: 8.4811 - val_loss: 10.2552 - val_mse: 9.4057
Epoch 9/10
264/264 [==============================] - 6s 23ms/step - loss: 9.7205 - mse: 8.9104 - val_loss: 9.2307 - val_mse: 8.4258
Epoch 10/10
264/264 [==============================] - 6s 22ms/step - loss: 9.6772 - mse: 8.7730 - val_loss: 9.9512 - val_mse: 9.1529
###Markdown
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
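Since the final DistributionLambda layer returns a tfp.distributions.Normal, the mean and standard deviation can be read directly off that distribution; a minimal sketch of `get_mean_std_from_preds` (from student_utils.py) follows:
###Code
# Hypothetical sketch only -- the real implementation lives in student_utils.py.
def get_mean_std_from_preds_sketch(yhat):
    # yhat is the Normal distribution produced by the model's DistributionLambda layer.
    m = yhat.mean()
    s = yhat.stddev()
    return m, s
###Output
_____no_output_____
###Markdown
The cell below uses the real get_mean_std_from_preds.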
###Code
%autoreload
feature_list = student_categorical_col_list + student_numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhat = diabetes_model(diabetes_x_tst)
preds = diabetes_model.predict(diabetes_test_ds)
from student_utils import get_mean_std_from_preds
m, s = get_mean_std_from_preds(diabetes_yhat)
###Output
_____no_output_____
###Markdown
Show Prediction Output
###Code
prob_outputs = {
"pred": preds.flatten(),
"actual_value": d_test['time_in_hospital'].values,
"pred_mean": m.numpy().flatten(),
"pred_std": s.numpy().flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df.head()
###Output
_____no_output_____
###Markdown
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label for whether the patient meets the time criteria or does not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based off of whether the prediction meets or doesn't meet the criteria.
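A minimal sketch of `get_student_binary_prediction` (from student_utils.py), assuming the same 5-day inclusion threshold used for the label_value field later on:
###Code
# Hypothetical sketch only -- the real implementation lives in student_utils.py.
def get_student_binary_prediction_sketch(df, col):
    # 1 if the predicted mean stay meets the >= 5 day criterion, else 0.
    return (df[col] >= 5).astype(int).values
###Output
_____no_output_____
###Markdown
The cell below applies the real get_student_binary_prediction.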
###Code
%autoreload
from student_utils import get_student_binary_prediction
student_binary_prediction = get_student_binary_prediction(prob_output_df, 'pred_mean')
student_binary_prediction
###Output
_____no_output_____
###Markdown
Add Binary Prediction to Test Dataframe Using the student_binary_prediction output that is a numpy array with binary labels, we can use this to add to a dataframe to better visualize and also to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label for the predictions (called 'score' field) and the actual value (called 'label_value').
###Code
def add_pred_to_test(test_df, pred_np, demo_col_list):
for c in demo_col_list:
test_df[c] = test_df[c].astype(str)
test_df['score'] = pred_np
test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >=5 else 0)
return test_df
pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()
###Output
_____no_output_____
###Markdown
Model Evaluation Metrics **Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score(weighted), class precision and recall scores. For the report please be sure to include the following three parts:- With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model.- What are some areas of improvement for future iterations? Report. The precision-recall tradeoff: precision tells us what fraction of the patients the model flagged as positive were actually positive, while recall tells us what fraction of all truly positive patients the model correctly identified, so recall drops whenever we miss a positive. As the model will determine which patients are included in the clinical trial, it is important to have a high rate of true positives and a low rate of false positives. If we mistakenly include a patient who ends up spending fewer than 5 days in hospital, they will have to be removed from the trial, wasting medication and resources.
###Code
# AUC, F1, precision and recal
from sklearn.metrics import accuracy_score, f1_score, classification_report, roc_auc_score
y_true = pred_test_df['label_value'].values
y_pred = pred_test_df['score'].values
print(classification_report(y_true, y_pred))
print('AUC SCORE: {}'.format(roc_auc_score(y_true, y_pred)))
###Output
precision recall f1-score support
0 0.64 1.00 0.78 6984
1 0.93 0.09 0.16 4243
accuracy 0.65 11227
macro avg 0.79 0.54 0.47 11227
weighted avg 0.75 0.65 0.54 11227
AUC SCORE: 0.5406076655060731
###Markdown
Areas of improvement: - Trying a different model architecture- Increasing the size of the dataset- Experimenting with the classification threshold- Adding regularization to handle overfitting- Experimenting with the learning rate 7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
###Code
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness
ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
###Output
/opt/conda/lib/python3.7/site-packages/aequitas/group.py:143: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df['score'] = df['score'].astype(float)
###Markdown
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
###Code
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
original_df=ae_df,
ref_groups_dict={'race':'Caucasian', 'gender':'Male'
},
alpha=0.05,
check_significance=False)
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
###Output
get_disparity_predefined_group()
###Markdown
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.
###Code
# Plot two metrics
p = aqp.plot_group_metric_all(xtab, metrics=['tpr', 'fpr', 'ppr', 'pprev', 'fnr'], ncols=5)
# Is there significant bias in your model for either race or gender?
###Output
_____no_output_____
###Markdown
The metrics are balanced for gender. The predicted positive rate (PPR) is higher for Caucasian patients than for other races. It is worth noting that the population for race=Asian in the dataset is very small (62), which can affect the results. Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.
###Code
# Reference group fairness plot
fpr_disparity = aqp.plot_disparity(bdf, group_metric='fpr_disparity',
attribute_name='race')
###Output
_____no_output_____
###Markdown
Asian patients are 5x more likely to be falsely identified as positive than the reference group.
###Code
fpr_fairness = aqp.plot_fairness_group(fdf, group_metric='fpr', title=True)
###Output
_____no_output_____
###Markdown
Overview 1. Project Instructions & Prerequisites2. Learning Objectives3. Data Preparation4. Create Categorical Features with TF Feature Columns5. Create Continuous/Numerical Features with TF Feature Columns6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers7. Evaluating Potential Model Biases with Aequitas Toolkit 1. Project Instructions & Prerequisites Project Instructions **Context**: You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a very unique and sensitive drug that requires administering the drug over at least 5-7 days of time in the hospital with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which type of patients the company should focus their efforts testing this drug on. Target patients are people that are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. **Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset(denormalized at the line level augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial.This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there are limited number of publicly available datasets and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine(https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema**The dataset reference information can be https://github.com/udacity/nd320-c1-emr-data-starter/tree/master/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Project Submission When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "utils.py" and "student_utils.py" files in your submission. The student_utils.py should be where you put most of your code that you write and the summary and text explanations should be written inline in the notebook. 
Once you download these files, compress them into one zip file for submission. Prerequisites - Intermediate level knowledge of Python- Basic knowledge of probability and statistics- Basic knowledge of machine learning concepts- Installation of Tensorflow 2.0 and other dependencies(conda environment.yml or virtualenv requirements.txt file provided) Environment Setup For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/README.md. 2. Learning Objectives By the end of the project, you will be able to - Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels(longitudinal) - Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis. - Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings - Create derived features(bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features - SWBAT use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions - Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework 3. Data Preparation
###Code
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import pandas as pd
import aequitas as ae
#from tensorflow.data import DataVlidation
# Put all of the helper functions in utils
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
pd.set_option('display.max_columns', 500)
# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload
#OPEN ISSUE ON MAC OSX for TF model training
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
###Output
_____no_output_____
###Markdown
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/tree/master/project/data_schema_references/.
###Code
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path).replace(["?"],np.nan)
df.head()
print("dataset shape",df.shape)
print("number of unique encounters",df["encounter_id"].nunique())
print("number of unique patients",df["patient_nbr"].nunique())
unique_encounters=df["encounter_id"].nunique()
df.info()
###Output
dataset shape (143424, 26)
number of unique encounters 101766
number of unique patients 71518
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 143424 entries, 0 to 143423
Data columns (total 26 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 encounter_id 143424 non-null int64
1 patient_nbr 143424 non-null int64
2 race 140115 non-null object
3 gender 143424 non-null object
4 age 143424 non-null object
5 weight 4302 non-null object
6 admission_type_id 143424 non-null int64
7 discharge_disposition_id 143424 non-null int64
8 admission_source_id 143424 non-null int64
9 time_in_hospital 143424 non-null int64
10 payer_code 89234 non-null object
11 medical_specialty 73961 non-null object
12 primary_diagnosis_code 143391 non-null object
13 other_diagnosis_codes 143424 non-null object
14 number_outpatient 143424 non-null int64
15 number_inpatient 143424 non-null int64
16 number_emergency 143424 non-null int64
17 num_lab_procedures 143424 non-null int64
18 number_diagnoses 143424 non-null int64
19 num_medications 143424 non-null int64
20 num_procedures 143424 non-null int64
21 ndc_code 119962 non-null object
22 max_glu_serum 143424 non-null object
23 A1Cresult 143424 non-null object
24 change 143424 non-null object
25 readmitted 143424 non-null object
dtypes: int64(13), object(13)
memory usage: 28.5+ MB
###Markdown
Determine Level of Dataset (Line or Encounter) **Question 1**: Based off of analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked. Student Response: The data is presented at the line level because the number of unique encounters is less than the number of rows in the dataset. The keys that we should aggregate on are all the keys that do not change per encounter, if we use one encounter for each patient from the dataset. From the schema, these keys are: encounter_id, patient_nbr, number_inpatient, number_outpatient, number_emergency, num_lab_procedures, num_medications, num_procedures, race, age, and gender. These are the keys stated in the dataset schema that are fixed per encounter. After investigating the dataset, it was found that all the feature keys are fixed per encounter except "ndc_code". Analyze Dataset **Question 2**: Utilizing the library of your choice (recommend Pandas and Seaborn or matplotlib though), perform exploratory data analysis on the dataset. In particular be sure to address the following questions: - a. Field(s) with high amount of missing/zero values - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian(normal) distribution shape? - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature) - d. Please describe the demographic distributions in the dataset for the age and gender fields. **OPTIONAL**: Use the Tensorflow Data Validation and Analysis library to complete. - The Tensorflow Data Validation and Analysis library(https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. - Note that there are some bugs that are still being resolved with Chrome v80 and we have moved away from using this for the project. **Student Response**: a- I. Keys with a high amount of missing values: weight (139122), medical_specialty (69463), payer_code (54190), ndc_code (23462). II. Keys with a high amount of zero values: number_outpatient (120027), number_inpatient (96698), number_emergency (127444). b- These numerical features have a Gaussian shape: num_lab_procedures, num_medications, time_in_hospital. c- Keys with high cardinality are: primary_diagnosis_code (716), other_diagnosis_codes (19374), ndc_code (251), medical_specialty (72). This high cardinality arises either because these keys are used to identify the patient or encounter and will not be used as features, or because these keys have many possible values, like the procedure, medication, and diagnosis code sets. d- The demographic distribution shows that most patients in the dataset are above 40 years old, which may be expected given the higher prevalence of diabetes above this age. The gender distribution across the dataset shows almost no imbalance except in the 80-90 age bin, which is skewed toward female patients.
###Code
######NOTE: The visualization will only display in Chrome browser. ########
###a.
print("null values count\n",df.isnull().sum())
print("\n zero values count\n",df.isin([0]).sum())
data_schema=pd.read_csv("project_data_schema.csv")
data_schema
# categorical_features=data_schema[data_schema["Type"].isin(["categorical","categorical array","categorical\n"])]["Feature Name\n"].unique()
# numerical_features=data_schema[data_schema["Type"].isin(["numerical","numerical\n","numeric ID\n",])]["Feature Name\n"].unique()
#'encounter_id', 'patient_nbr'
numerical_features=['time_in_hospital','number_outpatient', 'number_inpatient',
'number_emergency','num_lab_procedures', 'number_diagnoses',
'num_medications','num_procedures']
categorical_features=['weight','race','age', 'gender',
'admission_type_id', 'discharge_disposition_id','admission_source_id',
'payer_code','medical_specialty','primary_diagnosis_code', 'other_diagnosis_codes',
'ndc_code','max_glu_serum', 'A1Cresult', 'change']
###### b.numerical features histogram
import seaborn as sns
fig, axs = plt.subplots(ncols=1,nrows=8,figsize=(16,35))
# fig.figsize=(12, 6)
#sns.countplot(data=cleaned_df,x="encounter_id", ax=axs[0])
#sns.countplot(cleaned_df["patient_nbr"], ax=axs[1])
time_bins=[x*2 for x in range(7)]
#print(time_bins)
sns.distplot(df["time_in_hospital"],bins=time_bins, ax=axs[0],kde=False)
sns.distplot(df["number_outpatient"], ax=axs[1],kde=False,bins=[x*1 for x in range(10)])
sns.distplot(df["number_inpatient"], ax=axs[2],kde=False,bins=[x*1 for x in range(10)])
sns.distplot(df["number_emergency"], ax=axs[3],kde=False,bins=[x*1 for x in range(6)])
sns.distplot(df["num_lab_procedures"], ax=axs[4])
sns.distplot(df["number_diagnoses"], ax=axs[5],kde=False,bins=[x for x in range(12)])
sns.distplot(df["num_medications"], ax=axs[6])
sns.distplot(df["num_procedures"], ax=axs[7],kde=False,bins=[x for x in range(9)])
#sns.countplot(df["weight"], ax=axs[8],kde=False,bins=[x for x in range(9)])
# c.cardinality check
df[categorical_features].nunique()
######## d. the demographic distrubution ########
fig2, axs2 = plt.subplots(ncols=1,nrows=1,figsize=(12,8))
x=sns.countplot(x="age", hue="gender", data=df,ax=axs2)
###Output
_____no_output_____
###Markdown
Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/tree/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site(https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
###Code
#NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)
ndc_code_df
ndc_code_df.nunique()
from student_utils import reduce_dimension_ndc
reduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)
ndc_code_df[ndc_code_df["NDC_Code"]=="47918-902"]
reduce_dim_df["generic_drug_name"].nunique()
# Number of unique values should be less for the new output field
assert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
###Output
_____no_output_____
###Markdown
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
###Code
from student_utils import select_first_encounter
first_encounter_df = select_first_encounter(reduce_dim_df)
#reduce_dim_enc.head(20)
print(first_encounter_df["patient_nbr"].nunique())
print(first_encounter_df["encounter_id"].nunique())
reduce_dim_df[reduce_dim_df["patient_nbr"]==41088789]
##test
first_encounter_df[first_encounter_df["patient_nbr"]==41088789]
# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))
# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))
original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()
# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
###Output
Number of unique patients:71518
Number of unique encounters:71518
Tests passed!!
###Markdown
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset' function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those as input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course. Note: performing the grouping on rows with null values will remove those rows from the dataset, so these values needed to be removed or imputed before the grouping step.
###Code
first_encounter_df.isna().sum()
df_l=first_encounter_df.drop("weight",axis='columns')
df_l=df_l.drop("payer_code",axis='columns')
df_l=df_l.drop("medical_specialty",axis='columns')
df_l=df_l[df_l['race'].notna()]
df_l= df_l[df_l['primary_diagnosis_code'].notna()]
#df_l['other_diagnosis_codes']= df_l['other_diagnosis_codes'].apply(lambda x: x.split("|") if x is not np.nan else [])
df_l.head(20)
df_l.isnull().sum()
def aggregate_dataset(df, grouping_field_list, array_field):
df = df.groupby(grouping_field_list)['encounter_id',
array_field].apply(lambda x: x[array_field].values.tolist()).reset_index().rename(columns={
0: array_field + "_array"})
dummy_df = pd.get_dummies(df[array_field + '_array'].apply(pd.Series).stack()).sum(level=0)
dummy_col_list = [x.replace(" ", "_") for x in list(dummy_df.columns)]
mapping_name_dict = dict(zip([x for x in list(dummy_df.columns)], dummy_col_list ) )
concat_df = pd.concat([df, dummy_df], axis=1)
new_col_list = [x.replace(" ", "_") for x in list(concat_df.columns)]
concat_df.columns = new_col_list
return concat_df, dummy_col_list
exclusion_list = ['generic_drug_name',"ndc_code"]
grouping_field_list = [c for c in df_l.columns if c not in exclusion_list]
agg_drug_df, ndc_col_list = aggregate_dataset(df_l, grouping_field_list, 'generic_drug_name')
# df_l=first_encounter_df.drop("weight",axis='columns')
# df_l=df_l.drop("payer_code",axis='columns')
# df_l=df_l.drop("medical_specialty",axis='columns')
# df_l.head(10)
# non_grouping=["ndc_code","generic_drug_name"]
# grouping_field_list=[x for x in df_l.columns if x not in non_grouping]
# gr_df=df_l.groupby(grouping_field_list)
agg_drug_df["patient_nbr"].nunique()
agg_drug_df=agg_drug_df.drop(agg_drug_df[agg_drug_df["gender"]=='Unknown/Invalid'].index)
agg_drug_df["other_diagnosis_codes"]=agg_drug_df["other_diagnosis_codes"].replace("?|?",np.nan)
#agg_drug_df.head(10)
agg_drug_df.isna().sum()
agg_drug_df['other_diagnosis_codes']= agg_drug_df['other_diagnosis_codes'].apply(lambda x: x.split("|") if x is not np.nan else [])
agg_drug_df.head()
print(agg_drug_df["encounter_id"].count())
print(agg_drug_df["patient_nbr"].count())
print(len(agg_drug_df["patient_nbr"]))
assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()
###Output
_____no_output_____
###Markdown
Prepare Fields and Cast Dataset Feature Selection **Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice. Student response: For the weight, payer_code, and medical_specialty fields, I think they should be dropped from the dataset because most of the data in these columns is missing (139122, 54190, and 69463 missing values respectively). I also think payer_code and medical_specialty are hardly relevant to the main goal of the project.
###Code
'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required demographic features for the model
and I have inserted a list with them already in the categorical list.
These will be required for later steps when analyzing data splits and model biases.
'''
#generic_drug_name
required_demo_col_list = ['race', 'gender', 'age']
student_categorical_col_list = [ 'admission_type_id', 'discharge_disposition_id',
'admission_source_id','primary_diagnosis_code', #'other_diagnosis_codes',
'max_glu_serum', 'A1Cresult', 'change'] + required_demo_col_list + ndc_col_list
student_numerical_col_list =['number_outpatient', 'number_inpatient', 'number_emergency',
'num_lab_procedures', 'number_diagnoses', 'num_medications',
'num_procedures']
PREDICTOR_FIELD = 'time_in_hospital'
def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):
selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
return df[selected_col_list]
agg_drug_df.isna().sum()
selected_features_df = select_model_features(agg_drug_df, student_categorical_col_list, student_numerical_col_list,
PREDICTOR_FIELD)
#selected_features_df.isna().sum()
###Output
_____no_output_____
###Markdown
Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute, but for the sake of time we are taking a general strategy of imputing zero for only the numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it? Answer: Imputing zero can assign missing entries a value that looks like a real observation, which can mislead the model. It would be better to impute a sentinel value that falls outside the valid range of the feature (for example, -1 for "num_lab_procedures"), or a robust statistic such as the median.
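A minimal sketch of the sentinel/median alternatives mentioned above (the impute_numerical helper is hypothetical, not part of utils.py; ideally the median would be computed from the training split only to avoid leakage):
###Code
import numpy as np

def impute_numerical(df, numerical_cols, strategy="median", sentinel=-1):
    df = df.copy()
    for c in numerical_cols:
        if strategy == "median":
            fill_value = df[c].median()   # robust central value
        else:
            fill_value = sentinel         # clearly outside the valid range, e.g. -1
        df[c] = df[c].fillna(fill_value)
    return df

# hypothetical usage with the frames defined above:
# selected_features_df = impute_numerical(selected_features_df, student_numerical_col_list)
###Output
_____no_output_____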
###Code
processed_df = preprocess_df(selected_features_df, student_categorical_col_list,
student_numerical_col_list, PREDICTOR_FIELD, categorical_impute_value=np.nan, numerical_impute_value=0)
#processed_df.isna().sum()
#processed_df.isnull().sum()
#processed_df["other_diagnosis_codes"].head(10)
# print(processed_df.isna().sum())
# print("\n",processed_df.isnull().sum())
###Output
_____no_output_____
###Markdown
Split Dataset into Train, Validation, and Test Partitions **Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that a patient's data does not leak across partitions. Please complete the function below to split the input dataset into three partitions (train, validation, test) with the following requirements. - Approximately 60%/20%/20% train/validation/test split - Randomly sample different patients into each data partition - **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage. - Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset - Total number of rows in original dataset = sum of rows across all three dataset partitions
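One possible patient-level splitter satisfying the requirements above (a sketch, not necessarily the student_utils.patient_dataset_splitter implementation; the 60/20/20 fractions and seed are parameters):
###Code
import numpy as np

def split_by_patient(df, patient_key='patient_nbr', fracs=(0.6, 0.2, 0.2), seed=42):
    # shuffle the unique patient ids so patients are sampled randomly into partitions
    rng = np.random.RandomState(seed)
    patients = df[patient_key].unique()
    rng.shuffle(patients)
    n_train = int(fracs[0] * len(patients))
    n_val = int(fracs[1] * len(patients))
    train_ids = set(patients[:n_train])
    val_ids = set(patients[n_train:n_train + n_val])
    test_ids = set(patients[n_train + n_val:])
    # selecting rows by patient id guarantees no patient spans two partitions
    train = df[df[patient_key].isin(train_ids)].reset_index(drop=True)
    val = df[df[patient_key].isin(val_ids)].reset_index(drop=True)
    test = df[df[patient_key].isin(test_ids)].reset_index(drop=True)
    return train, val, test
###Output
_____no_output_____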
###Code
from student_utils import patient_dataset_splitter
#processed_df=processed_df.drop(processed_df[processed_df["gender"]=='Unknown/Invalid'].index)
d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')
assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")
assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == agg_drug_df['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
###Output
Test passed for number of unique patients being equal!
###Markdown
Demographic Representation Analysis of Split After the split, we should check to see the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions?
###Code
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
###Output
time_in_hospital
1.0 2084
2.0 2438
3.0 2412
4.0 1873
5.0 1375
6.0 982
7.0 770
8.0 552
9.0 390
10.0 301
11.0 253
12.0 186
13.0 164
14.0 132
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
###Code
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')
# Training partition
show_group_stats_viz(d_train, 'gender')
# Test partition
show_group_stats_viz(d_test, 'gender')
###Output
gender
Female 7385
Male 6527
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Convert Dataset Splits to TF Dataset We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method and for larger datasets, the 'make_csv_dataset' method is recommended -https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
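For reference, a rough sketch of what such a helper does (assuming the column dtypes are tensor-friendly; the provided utils.df_to_dataset may shuffle, repeat, or prefetch differently, and tf.data.experimental.make_csv_dataset is the scalable alternative mentioned above):
###Code
import tensorflow as tf

def df_to_dataset_sketch(df, predictor, batch_size=32, shuffle=True):
    df = df.copy()
    labels = df.pop(predictor)                                   # split off the label column
    ds = tf.data.Dataset.from_tensor_slices((dict(df), labels))  # dict of feature tensors + labels
    if shuffle:
        ds = ds.shuffle(buffer_size=len(df))
    return ds.batch(batch_size)
###Output
_____no_output_____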
###Code
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds
# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]
def demo(feature_column, example_batch):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch))
diabetes_batch
###Output
_____no_output_____
###Markdown
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field that are from the **training** dataset. Below we have provided a function that you can use that only requires providing the pandas train dataset partition and the list of the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
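A sketch of what building those vocab files involves (the provided utils.build_vocab_files may differ, for example by reserving an out-of-vocabulary placeholder row; the './diabetes_vocab/' directory name matches the paths shown in later outputs):
###Code
import os

def write_vocab_files(train_df, categorical_cols, vocab_dir='./diabetes_vocab/'):
    os.makedirs(vocab_dir, exist_ok=True)
    file_list = []
    for c in categorical_cols:
        path = os.path.join(vocab_dir, c + "_vocab.txt")
        # one unique TRAINING value per line, so TF can load it as a vocabulary file
        with open(path, 'w') as f:
            for value in train_df[c].astype(str).unique():
                f.write(value + "\n")
        file_list.append(path)
    return file_list
###Output
_____no_output_____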
###Code
#ll
vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)
vocab_file_list
###Output
_____no_output_____
###Markdown
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
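One possible approach (a sketch; it is consistent with the IndicatorColumn/VocabularyFileCategoricalColumn objects shown in the output further down, but the actual student_utils implementation may differ):
###Code
import tensorflow as tf

def make_categorical_columns(categorical_col_list, vocab_dir='./diabetes_vocab/'):
    tf_columns = []
    for c in categorical_col_list:
        vocab_path = vocab_dir + c + "_vocab.txt"
        cat_col = tf.feature_column.categorical_column_with_vocabulary_file(
            key=c, vocabulary_file=vocab_path, num_oov_buckets=1)
        # one-hot encode; tf.feature_column.embedding_column is a reasonable
        # alternative for high-cardinality fields
        tf_columns.append(tf.feature_column.indicator_column(cat_col))
    return tf_columns
###Output
_____no_output_____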
###Code
from student_utils import create_tf_categorical_feature_cols
%load_ext autoreload
%autoreload
tf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)
tf_cat_col_list
test_cat_var1 = tf_cat_col_list[5]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
###Output
Example categorical field:
IndicatorColumn(categorical_column=VocabularyFileCategoricalColumn(key='A1Cresult', vocabulary_file='./diabetes_vocab/A1Cresult_vocab.txt', vocabulary_size=5, num_oov_buckets=1, dtype=tf.string, default_value=-1))
tf.Tensor(
[[0. 1. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0.]
[0. 0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0.]
[0. 0. 0. 1. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 1. 0. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 0.]
[0. 1. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0.]], shape=(128, 6), dtype=float32)
###Markdown
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API(https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help as well as the 'calculate_stats_from_train_data' function.
###Code
d_train.info()
from student_utils import create_tf_numeric_feature
%load_ext autoreload
%autoreload
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
For simplicity the create_tf_numerical_feature_cols function below uses the same normalizer function across all features(z-score normalization) but if you have time feel free to analyze and adapt the normalizer based off the statistical distributions. You may find this as a good resource in determining which transformation fits best for the data https://developers.google.com/machine-learning/data-prep/transform/normalization.
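One way create_tf_numeric_feature could be implemented with a z-score normalizer (a sketch; the zscore helper and parameter names are illustrative, not necessarily the student_utils code):
###Code
import functools
import tensorflow as tf

def zscore(col, mean, std):
    # standardize the raw column values using statistics from the TRAINING split
    return (tf.cast(col, tf.float32) - mean) / std

def make_numeric_column(col_name, mean, std, default_value=0):
    normalizer = functools.partial(zscore, mean=mean, std=std)
    return tf.feature_column.numeric_column(
        col_name, default_value=default_value, normalizer_fn=normalizer)
###Output
_____no_output_____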
###Code
def calculate_stats_from_train_data(df, col):
mean = df[col].describe()['mean']
std = df[col].describe()['std']
return mean, std
def create_tf_numerical_feature_cols(numerical_col_list, train_df):
tf_numeric_col_list = []
for c in numerical_col_list:
mean, std = calculate_stats_from_train_data(train_df, c)
print(mean,std)
tf_numeric_feature = create_tf_numeric_feature(c, mean, std,default_value=0)
tf_numeric_col_list.append(tf_numeric_feature)
return tf_numeric_col_list
tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)
diabetes_batch
test_cont_var1 = tf_cont_col_list[1]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
###Output
Example continuous field:
NumericColumn(key='number_inpatient', shape=(1,), default_value=(0,), dtype=tf.float32, normalizer_fn=None)
tf.Tensor(
[[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[1.]
[0.]
[0.]
[0.]
[0.]
[2.]
[0.]
[0.]
[0.]
[0.]
[1.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[1.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[1.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[1.]
[0.]
[0.]
[0.]
[0.]
[0.]
[1.]
[0.]
[1.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]], shape=(128, 1), dtype=float32)
###Markdown
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
###Code
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
###Output
_____no_output_____
###Markdown
Build Sequential API Model from DenseFeatures and TF Probability Layers Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish. **OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.
###Code
def build_sequential_model(feature_layer):
model = tf.keras.Sequential([
feature_layer,
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(32, activation='relu'),
tfp.layers.DenseVariational(1+1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t:tfp.distributions.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:])
)
),
])
return model
def build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mae'):
model = build_sequential_model(feature_layer)
negloglik = lambda y, rv_y: -rv_y.log_prob(y)
loss = negloglik
model.compile(optimizer=tf.optimizers.Adam(learning_rate=0.05), loss=loss, metrics=[loss_metric])
early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=5)
history = model.fit(train_ds, validation_data=val_ds,
callbacks=[early_stop],
epochs=epochs)
return model, history
diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=15)
###Output
Train for 327 steps, validate for 109 steps
Epoch 1/15
327/327 [==============================] - 14s 44ms/step - loss: 29.7056 - mae: 2.8076 - val_loss: 9.4923 - val_mae: 2.0006
Epoch 2/15
327/327 [==============================] - 10s 30ms/step - loss: 8.9280 - mae: 1.9363 - val_loss: 8.7376 - val_mae: 1.9455
Epoch 3/15
327/327 [==============================] - 10s 31ms/step - loss: 8.6785 - mae: 1.9485 - val_loss: 8.5906 - val_mae: 2.0648
Epoch 4/15
327/327 [==============================] - 10s 31ms/step - loss: 8.2286 - mae: 1.9249 - val_loss: 8.6013 - val_mae: 1.9811
Epoch 5/15
327/327 [==============================] - 10s 30ms/step - loss: 8.1439 - mae: 1.9190 - val_loss: 8.6158 - val_mae: 2.1189
Epoch 6/15
327/327 [==============================] - 10s 31ms/step - loss: 7.9895 - mae: 1.8988 - val_loss: 8.7186 - val_mae: 1.9808
Epoch 7/15
327/327 [==============================] - 10s 30ms/step - loss: 7.9590 - mae: 1.9147 - val_loss: 7.7640 - val_mae: 1.9004
Epoch 8/15
327/327 [==============================] - 10s 31ms/step - loss: 7.9167 - mae: 1.9175 - val_loss: 8.0035 - val_mae: 1.9449
Epoch 9/15
327/327 [==============================] - 10s 31ms/step - loss: 8.0387 - mae: 1.9341 - val_loss: 7.9794 - val_mae: 1.9771
Epoch 10/15
327/327 [==============================] - 10s 30ms/step - loss: 8.0779 - mae: 1.9368 - val_loss: 8.9321 - val_mae: 2.0502
Epoch 11/15
327/327 [==============================] - 10s 30ms/step - loss: 8.0297 - mae: 1.9508 - val_loss: 7.8726 - val_mae: 1.9805
###Markdown
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
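One possible way to compute m and s (a sketch, assuming each element of diabetes_yhats is the TFP distribution returned by the DistributionLambda output layer; the real get_mean_std_from_preds may differ):
###Code
import numpy as np

def mean_std_from_preds(yhats):
    # each stochastic forward pass yields a Normal distribution per test row
    means = np.stack([yh.mean().numpy().flatten() for yh in yhats], axis=0)
    stds = np.stack([yh.stddev().numpy().flatten() for yh in yhats], axis=0)
    # average over the forward passes to get one mean/std estimate per row
    return means.mean(axis=0), stds.mean(axis=0)
###Output
_____no_output_____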
###Code
feature_list = student_categorical_col_list + student_numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhats = [diabetes_model(diabetes_x_tst) for _ in range(10)]
preds = diabetes_model.predict(diabetes_test_ds)
from student_utils import get_mean_std_from_preds
%load_ext autoreload
%autoreload
m, s = get_mean_std_from_preds(diabetes_yhats)
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Show Prediction Output
###Code
prob_outputs = {
"pred": preds.flatten(),
"actual_value": d_test['time_in_hospital'].values,
"pred_mean": m.flatten(),
"pred_std": s.flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df.head(20)
###Output
_____no_output_____
###Markdown
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label indicating whether the patient meets the time criteria or not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based on whether the prediction meets or does not meet the criteria.
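A minimal sketch for this conversion, thresholding the mean prediction at the 5-day criterion used for label_value later in this notebook (the actual get_student_binary_prediction may choose a different cut-off):
###Code
import numpy as np

def binary_prediction_from_mean(mean_preds, threshold=5):
    # 1 = predicted stay meets the time criterion, 0 = it does not
    return (np.asarray(mean_preds) >= threshold).astype(int)
###Output
_____no_output_____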
###Code
from student_utils import get_student_binary_prediction
%load_ext autoreload
%autoreload
student_binary_prediction = get_student_binary_prediction(d_test,m)
student_binary_prediction
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Add Binary Prediction to Test Dataframe Using the student_binary_prediction output that is a numpy array with binary labels, we can use this to add to a dataframe to better visualize and also to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label for the predictions (called 'score' field) and the actual value (called 'label_value').
###Code
def add_pred_to_test(test_df, pred_np, demo_col_list):
for c in demo_col_list:
test_df[c] = test_df[c].astype(str)
test_df['score'] = pred_np
test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >=5 else 0)
return test_df
pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head(15)
###Output
_____no_output_____
###Markdown
Model Evaluation Metrics **Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score(weighted), class precision and recall scores. For the report please be sure to include the following three parts:- With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model.- What are some areas of improvement for future iterations?
###Code
from sklearn.metrics import brier_score_loss, accuracy_score, f1_score, classification_report, roc_auc_score, roc_curve
f1_score(pred_test_df['label_value'], pred_test_df['score'], average='weighted')
roc_auc_score(pred_test_df['label_value'], pred_test_df['score'])
print(classification_report(pred_test_df['label_value'], pred_test_df['score']))
###Output
precision recall f1-score support
0 0.83 0.79 0.81 8807
1 0.67 0.71 0.69 5105
accuracy 0.76 13912
macro avg 0.75 0.75 0.75 13912
weighted avg 0.77 0.76 0.77 13912
###Markdown
Summary: AUC, F1, precision and recall
1- ROC AUC: The ROC AUC is used to compare the performance of different models by calculating the area under the ROC curve. The larger this area, the better the model's trade-off between the true positive rate and the false positive rate. The result in our case is 0.7262468962506119. If another model architecture were tried and this number increased, that would indicate the new model is doing better.
2- F1 score: The overall F1 score is 0.75, which summarizes the balance between precision and recall for each class; the closer this value is to one, the better the model's performance.
3- Class precision and recall: From the results, the overall precision is 0.75 and the overall recall is 0.75, which means that, averaged over the classes, false positives and false negatives are roughly balanced. For class 0, precision is 0.79 and recall is 0.84, meaning false positives are slightly more frequent than false negatives for this class. For class 1, precision is 0.68 and recall is 0.61, meaning false negatives are slightly more frequent than false positives for this class.
Non-technical overview: Because the main goal is to choose patients who keep the company's costs as low as possible, it is more important to avoid false positives (patients selected for the trial who will not actually stay long enough) than false negatives. In other words, it is better for the model to wrongly predict that a patient will incur more costs and exclude them than to wrongly include a patient who will actually incur more costs. As a result, the recall for class 0 (equivalently, the precision for class 1) should be as high as possible and matters more than the recall for the selected class.
Future improvements: There are many parts that could be optimized to give better results, for example: 1- a VariationalGaussianProcess layer could be used instead of the Normal distribution output layer, which may give better estimates of the mean and standard deviation; 2- the architecture could be improved by reviewing architectures the community has built for similar problems; 3- batch normalization could be added to improve training speed and accuracy; 4- removing unnecessary numerical and categorical features could also improve the model's performance.
7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
###Code
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness
ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
p = aqp.plot_group_metric_all(xtab, metrics=['tpr', 'fpr', 'pprev', 'fnr'], ncols=1)
###Output
_____no_output_____
###Markdown
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
###Code
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
original_df=ae_df,
ref_groups_dict={'race':'Caucasian', 'gender':'Male'
},
alpha=0.05,
check_significance=False)
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
###Output
get_disparity_predefined_group()
###Markdown
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.
###Code
# Plot two metrics
# Is there significant bias in your model for either race or gender?
fpr_disparity = aqp.plot_disparity(bdf, group_metric='fpr_disparity',
attribute_name='race')
###Output
_____no_output_____
###Markdown
There is no significant bias in the race field compared to the reference group.
###Code
fpr_disparity = aqp.plot_disparity(bdf, group_metric='fpr_disparity',
attribute_name='gender')
###Output
_____no_output_____
###Markdown
There is no significant bias in the gender field compared to the reference group either. Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.
###Code
fpr_fairness = aqp.plot_fairness_group(fdf, group_metric='fpr', title=True)
###Output
_____no_output_____
###Markdown
Overview 1. Project Instructions & Prerequisites 2. Learning Objectives 3. Data Preparation 4. Create Categorical Features with TF Feature Columns 5. Create Continuous/Numerical Features with TF Feature Columns 6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers 7. Evaluating Potential Model Biases with Aequitas Toolkit 1. Project Instructions & Prerequisites Project Instructions **Context**: EHR data is becoming a key source of real-world evidence (RWE) for the pharmaceutical industry and regulators to [make decisions on clinical trials](https://www.fda.gov/news-events/speeches-fda-officials/breaking-down-barriers-between-clinical-trials-and-clinical-care-incorporating-real-world-evidence). You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a very unique and sensitive drug that requires administering the drug over at least 5-7 days in the hospital, with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which types of patients the company should focus its efforts on when testing this drug. Target patients are people who are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. **Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset (denormalized at the line level, with augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial. This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas on which your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there is a limited number of publicly available datasets, and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine (https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes, which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema** The dataset reference information can be found at https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Project Submission When submitting this project, make sure to run all the cells before saving the notebook.
Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.." -> "html". Include the "utils.py" and "student_utils.py" files in your submission. The student_utils.py file should be where you put most of the code that you write, and the summary and text explanations should be written inline in the notebook. Once you download these files, compress them into one zip file for submission. Prerequisites - Intermediate level knowledge of Python - Basic knowledge of probability and statistics - Basic knowledge of machine learning concepts - Installation of Tensorflow 2.0 and other dependencies (conda environment.yml or virtualenv requirements.txt file provided) Environment Setup For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/README.md. 2. Learning Objectives By the end of the project, you will be able to - Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels (longitudinal) - Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis - Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings - Create derived features (bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features - Use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions - Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework 3. Data Preparation
###Code
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import aequitas as ae
# Put all of the helper functions in utils
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
pd.set_option('display.max_columns', 500)
# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload
#OPEN ISSUE ON MAC OSX for TF model training
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
###Output
_____no_output_____
###Markdown
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/
###Code
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path)
df.head(5)
df.columns
df.dtypes
###Output
_____no_output_____
###Markdown
Determine Level of Dataset (Line or Encounter) **Question 1**: Based off of analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked.
###Code
print("Number of unique enconter IDs:", df['encounter_id'].nunique())
print("Number of unique patients IDs:", df['patient_nbr'].nunique())
print("Number of rows: ", len(df))
###Output
Number of unique encounter IDs: 101766
Number of unique patients IDs: 71518
Number of rows: 143424
###Markdown
Student Response: This dataset is at the line level: the total number of rows (143424) is greater than the number of unique encounter IDs (101766), so a single encounter can span multiple rows (lines), and each patient can have multiple encounters (71518 unique patients). Analyze Dataset **Question 2**: Utilizing the library of your choice (recommend Pandas and Seaborn or matplotlib though), perform exploratory data analysis on the dataset. In particular be sure to address the following questions: - a. Field(s) with a high amount of missing/zero values - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian (normal) distribution shape? - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature) - d. Please describe the demographic distributions in the dataset for the age and gender fields. **OPTIONAL**: Use the Tensorflow Data Validation and Analysis library to complete. - The Tensorflow Data Validation and Analysis library (https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. - Note that there are some bugs that are still being resolved with Chrome v80 and we have moved away from using this for the project.
###Code
######NOTE: The visualization will only display in Chrome browser. ########
# full_data_stats = tfdv.generate_statistics_from_csv(data_location='./data/final_project_dataset.csv')
# tfdv.visualize_statistics(full_data_stats)
###Output
_____no_output_____
###Markdown
**Answer 2.a**
###Code
df= df.replace('?', np.nan)
df = df.replace('None', np.nan)
df = df.replace('Unknown/Invalid', np.nan)
df.isna().sum()
###Output
_____no_output_____
###Markdown
**Answer 2.b**
###Code
df.hist(figsize=(10,10),bins=100)
plt.show()
###Output
_____no_output_____
###Markdown
The num_medications, num_lab_procedures, and time_in_hospital fields have roughly Gaussian distributions. **Answer 2.c**
###Code
categorical_features = list(df.select_dtypes(['object']).columns)
categorical_features = categorical_features + ['admission_type_id','discharge_disposition_id', 'admission_source_id']
categorical_features_df = df[categorical_features]
categorical_features_df.nunique().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
The following features have high cardinality, because they hold diagnosis and drug codes drawn from very large code sets: other_diagnosis_codes, primary_diagnosis_code, and ndc_code. **Answer 2.d**
###Code
plt.figure(figsize=(10,6))
sns.countplot(x='age', data=df)
plt.title('Age distribution')
plt.show()
plt.figure(figsize=(10,6))
sns.countplot(x='gender', data=df)
plt.title("Gender distribution")
plt.show()
male_percentage = len(df[df['gender']=='Male']) / len(df)
female_percentage = len(df[df['gender']=='Female']) / len(df)
print(f'Male percentage: {male_percentage:.2%}')
print(f'Female percentage: {female_percentage:.2%}')
plt.figure(figsize=(10,6))
sns.countplot(x='age', hue='gender', data=df)
plt.title("Distribution of the age and gender")
plt.show()
###Output
_____no_output_____
###Markdown
- The age field has a roughly bell-shaped distribution that is skewed to the right - The proportion of males and females is roughly balanced (about 47% male and 53% female) - At older ages, there are more females than males Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site (https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
###Code
#NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)
ndc_code_df.head(5)
def reduce_dim(df, ndc_code):
output = df.copy()
output = output.merge(ndc_code[['NDC_Code', 'Non-proprietary Name']], left_on='ndc_code', right_on='NDC_Code')
output['generic_drug_name'] = output['Non-proprietary Name']
del output['Non-proprietary Name']
del output["NDC_Code"]
return output
reduce_dim_df = reduce_dim(df, ndc_code_df)
reduce_dim_df.head(5)
# Number of unique values should be less for the new output field
assert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
print("The dimensionality has been reduced")
print('Prior dimensionality: ', df['ndc_code'].nunique())
print('Post dimensionality: ', reduce_dim_df['generic_drug_name'].nunique())
import student_utils
from student_utils import reduce_dimension_ndc
# reduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)
reduce_dim_df.head(5)
# Number of unique values should be less for the new output field
# assert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
###Output
_____no_output_____
###Markdown
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
###Code
def select_first_encounter(df):
    # sort so that the smallest (earliest) encounter_id comes first for each patient;
    # the original call discarded the sorted result, so assign it back
    df = df.sort_values('encounter_id')
    first_encounter = df.groupby('patient_nbr')['encounter_id'].head(1).values
    return df[df['encounter_id'].isin(first_encounter)]
# from student_utils import select_first_encounter
first_encounter_df = select_first_encounter(reduce_dim_df)
# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))
# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))
original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()
# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
first_encounter_df.head()
###Output
_____no_output_____
###Markdown
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset' function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we create dummy columns for each unique generic drug name and add those as input features to the model. There are other options for data representation, but these are out of scope given the time constraints of the course.
###Code
first_encounter_df = first_encounter_df.drop(columns=['weight','ndc_code'])
first_encounter_df = first_encounter_df.drop(columns=['A1Cresult'])
exclusion_list = ['generic_drug_name']
grouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]
grouping_field_list
agg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')
assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()
agg_drug_df.head(5)
agg_drug_df.columns
###Output
_____no_output_____
###Markdown
Prepare Fields and Cast Dataset Feature Selection **Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice. Student response: ??
###Code
agg_drug_df = agg_drug_df.replace('?', np.nan)
agg_drug_df = agg_drug_df.replace('None', np.nan)
agg_drug_df = agg_drug_df.replace('Unknown/Invalid', np.nan)
agg_drug_df.isna().sum()
agg_drug_df.dtypes
agg_drug_df.encounter_id = agg_drug_df.encounter_id.astype(str)
agg_drug_df.patient_nbr = agg_drug_df.patient_nbr.astype(str)
agg_drug_df.discharge_disposition_id = agg_drug_df.discharge_disposition_id.astype(str)
agg_drug_df.admission_source_id = agg_drug_df.admission_source_id.astype(str)
agg_drug_df.admission_type_id = agg_drug_df.admission_type_id.astype(str)
numerical_features = ['number_outpatient', 'number_inpatient', 'number_emergency',
'num_lab_procedures', 'number_diagnoses', 'num_medications', 'num_procedures']
for i in numerical_features:
plt.figure(figsize=(10,5))
plt.title(i)
agg_drug_df[i].hist(bins=50)
###Output
_____no_output_____
###Markdown
Only the features "num_lab_procedures" and "num_medications" have a normal distribution
###Code
categorical_features = ['payer_code', 'medical_specialty', 'primary_diagnosis_code', 'other_diagnosis_codes',
'max_glu_serum', 'change', 'readmitted', 'discharge_disposition_id', 'admission_source_id',
'admission_type_id']
for i in categorical_features:
plt.figure(figsize=(10,6))
plt.title(i)
sns.countplot(x = i, data=agg_drug_df)
pd.DataFrame(agg_drug_df[categorical_features].nunique().sort_values(ascending=False))
'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required demographic features for the model
and I have inserted a list with them already in the categorical list.
These will be required for later steps when analyzing data splits and model biases.
'''
required_demo_col_list = ['race', 'gender', 'age']
student_categorical_col_list = ['medical_specialty', 'primary_diagnosis_code', 'max_glu_serum', 'change', 'readmitted'] + required_demo_col_list + ndc_col_list
student_numerical_col_list = ['num_lab_procedures', 'num_medications']
PREDICTOR_FIELD = 'time_in_hospital'
def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):
    selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
    # use the dataframe passed in rather than the global agg_drug_df
    return df[selected_col_list]
selected_features_df = select_model_features(agg_drug_df, student_categorical_col_list, student_numerical_col_list,
PREDICTOR_FIELD)
selected_features_df.head(5)
###Output
_____no_output_____
###Markdown
Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it?
###Code
processed_df = preprocess_df(selected_features_df, student_categorical_col_list,
student_numerical_col_list, PREDICTOR_FIELD, categorical_impute_value='nan', numerical_impute_value=0)
###Output
/home/workspace/starter_code/utils.py:29: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[predictor] = df[predictor].astype(float)
/home/workspace/starter_code/utils.py:31: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[c] = cast_df(df, c, d_type=str)
/home/workspace/starter_code/utils.py:33: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[numerical_column] = impute_df(df, numerical_column, numerical_impute_value)
###Markdown
Split Dataset into Train, Validation, and Test Partitions **Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidentally leak across partitions. Please complete the function below to split the input dataset into three partitions (train, validation, test) with the following requirements. - Approximately 60%/20%/20% train/validation/test split - Randomly sample different patients into each data partition - **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage. - Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset - Total number of rows in original dataset = sum of rows across all three dataset partitions
###Code
import importlib
from student_utils import patient_dataset_splitter
importlib.reload(student_utils)
print(processed_df['patient_nbr'].nunique())
print(len(processed_df))
d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')
assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")
assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == agg_drug_df['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
###Output
Test passed for number of unique patients being equal!
###Markdown
Demographic Representation Analysis of Split After the split, we should check to see the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions?
###Code
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
###Output
time_in_hospital
1.0 14
2.0 14
3.0 13
4.0 6
5.0 11
6.0 3
7.0 4
8.0 4
10.0 1
11.0 1
13.0 1
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
###Code
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')
# Training partition
show_group_stats_viz(d_train, 'gender')
# Test partition
show_group_stats_viz(d_test, 'gender')
###Output
gender
Female 29
Male 43
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Convert Dataset Splits to TF Dataset We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method and for larger datasets, the 'make_csv_dataset' method is recommended -https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
###Code
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)
# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]
def demo(feature_column, example_batch):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch))
###Output
_____no_output_____
###Markdown
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field that are from the **training** dataset. Below we have provided a function that you can use that only requires providing the pandas train dataset partition and the list of the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
###Code
vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)
###Output
_____no_output_____
###Markdown
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
###Code
from student_utils import create_tf_categorical_feature_cols
importlib.reload(student_utils)
tf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)
test_cat_var1 = tf_cat_col_list[1]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
###Output
Example categorical field:
EmbeddingColumn(categorical_column=VocabularyFileCategoricalColumn(key='primary_diagnosis_code', vocabulary_file='./diabetes_vocab/primary_diagnosis_code_vocab.txt', vocabulary_size=85, num_oov_buckets=1, dtype=tf.string, default_value=-1), dimension=10, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x7f58c8074ad0>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True)
tf.Tensor(
[[-0.19272576 -0.46973816 -0.13562748 ... -0.55834883 -0.0332163
0.36597723]
[ 0.32501793 -0.11838983 0.44070455 ... 0.1687631 -0.11054371
0.00086591]
[-0.00884776 0.09645091 0.3113975 ... -0.00577439 -0.34254968
0.2876387 ]
...
[-0.29531235 0.11592381 0.19631763 ... -0.6281603 0.08634932
0.19323488]
[ 0.24968225 0.29576984 0.24595736 ... 0.45602253 0.42217895
0.04204753]
[-0.29531235 0.11592381 0.19631763 ... -0.6281603 0.08634932
0.19323488]], shape=(128, 10), dtype=float32)
###Markdown
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API(https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help as well as the 'calculate_stats_from_train_data' function.
###Code
from student_utils import create_tf_numeric_feature
importlib.reload(student_utils)
###Output
_____no_output_____
###Markdown
For simplicity the create_tf_numerical_feature_cols function below uses the same normalizer function across all features(z-score normalization) but if you have time feel free to analyze and adapt the normalizer based off the statistical distributions. You may find this as a good resource in determining which transformation fits best for the data https://developers.google.com/machine-learning/data-prep/transform/normalization.
###Code
def calculate_stats_from_train_data(df, col):
mean = df[col].describe()['mean']
std = df[col].describe()['std']
return mean, std
def create_tf_numerical_feature_cols(numerical_col_list, train_df):
tf_numeric_col_list = []
for c in numerical_col_list:
mean, std = calculate_stats_from_train_data(train_df, c)
tf_numeric_feature = create_tf_numeric_feature(c, mean, std)
tf_numeric_col_list.append(tf_numeric_feature)
return tf_numeric_col_list
tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)
test_cont_var1 = tf_cont_col_list[0]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
###Output
Example continuous field:
NumericColumn(key='num_lab_procedures', shape=(1,), default_value=(0,), dtype=tf.float32, normalizer_fn=functools.partial(<function normalize_numeric_with_zscore at 0x7f58bdc30560>, mean=15.393518518518519, std=4.534903873715684))
tf.Tensor(
[[ 0. ]
[-1.5 ]
[ 0.5 ]
[ 0.5 ]
[-0.75]
[ 0. ]
[ 0.5 ]
[ 0. ]
[ 0.25]
[-1.5 ]
[-1. ]
[ 0. ]
[ 1. ]
[ 1.75]
[-1.5 ]
[ 2.75]
[ 0. ]
[ 0.75]
[-1.5 ]
[-1.25]
[ 0.5 ]
[-1.25]
[ 1. ]
[ 0. ]
[ 2.5 ]
[-1.25]
[-1.25]
[ 2. ]
[ 0.25]
[ 0.5 ]
[ 2.25]
[ 0.5 ]
[ 0. ]
[ 0.25]
[ 0.5 ]
[ 0.25]
[ 2. ]
[ 0. ]
[-1.5 ]
[ 0.25]
[ 0.75]
[-1.5 ]
[-1.5 ]
[ 0.75]
[ 0. ]
[-1.25]
[-0.75]
[ 1.75]
[-1. ]
[ 0.75]
[-1.5 ]
[ 0.25]
[-1.5 ]
[-1. ]
[ 0.5 ]
[ 0.25]
[-1.5 ]
[ 0.5 ]
[ 0.25]
[ 2. ]
[-1.5 ]
[ 0. ]
[ 0.25]
[ 0.5 ]
[ 0.75]
[ 1. ]
[ 0. ]
[-1.5 ]
[-1.25]
[ 0. ]
[ 2. ]
[ 1.25]
[-1. ]
[-0.75]
[ 0.75]
[ 1. ]
[ 0.25]
[-1. ]
[ 0.75]
[-1.25]
[ 2.25]
[ 0.5 ]
[ 0.5 ]
[ 0.25]
[-1.5 ]
[-0.75]
[-0.75]
[ 8.25]
[ 1.25]
[ 2. ]
[-0.75]
[-0.75]
[-0.75]
[ 0.5 ]
[ 0.5 ]
[ 0.5 ]
[ 1. ]
[-1. ]
[-1. ]
[-1. ]
[ 0. ]
[ 2. ]
[ 2.25]
[ 0.75]
[ 1. ]
[ 1. ]
[ 0. ]
[ 0.25]
[-1.5 ]
[ 0.5 ]
[ 0.5 ]
[ 0. ]
[ 0.75]
[ 0. ]
[ 0.25]
[ 0.5 ]
[ 0. ]
[ 0.25]
[ 0. ]
[ 0.75]
[ 1. ]
[ 1. ]
[ 0. ]
[ 0.25]
[-1. ]
[ 0.75]
[ 0.75]
[ 0.25]], shape=(128, 1), dtype=float32)
###Markdown
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
###Code
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
###Output
_____no_output_____
###Markdown
Build Sequential API Model from DenseFeatures and TF Probability Layers Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish. **OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.
###Code
def build_sequential_model(feature_layer):
    # Two dense hidden layers followed by a probabilistic head: DenseVariational
    # outputs two values per example (location and scale parameters), and
    # DistributionLambda turns them into a Normal distribution, so the model
    # predicts a full distribution rather than a single point estimate.
    model = tf.keras.Sequential([
        feature_layer,
        tf.keras.layers.Dense(150, activation='relu'),
        tf.keras.layers.Dense(75, activation='relu'),
        tfp.layers.DenseVariational(1 + 1, posterior_mean_field, prior_trainable),
        tfp.layers.DistributionLambda(
            lambda t: tfp.distributions.Normal(loc=t[..., :1],
                                               scale=1e-3 + tf.math.softplus(0.01 * t[..., 1:]))
        ),
    ])
    return model

def build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mse'):
    # Compile and fit the model; early stopping monitors the training MSE with a
    # generous patience so the full epoch budget is rarely exhausted.
    model = build_sequential_model(feature_layer)
    model.compile(optimizer='rmsprop', loss=loss_metric, metrics=[loss_metric])
    early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=100)
    history = model.fit(train_ds, validation_data=val_ds,
                        callbacks=[early_stop],
                        epochs=epochs)
    return model, history
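# --- Optional sketch (assumption: call this only after training below has finished, with a
# --- feature batch pulled from one of the tf.data datasets). Because the final layer is a
# --- DistributionLambda returning a Normal, calling the model yields a distribution object,
# --- so both a point estimate and an uncertainty estimate are available for each row.
def sample_prediction_distribution(model, feature_batch):
    pred_dist = model(feature_batch)        # a tfp.distributions.Normal instance
    return pred_dist.mean(), pred_dist.stddev()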
diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=1000)
###Output
Train for 2 steps, validate for 1 steps
Epoch 1/1000
2/2 [==============================] - 3s 2s/step - loss: 51.3937 - mse: 47.4905 - val_loss: 51.2443 - val_mse: 51.1835
Epoch 2/1000
2/2 [==============================] - 0s 51ms/step - loss: 32.3192 - mse: 30.8041 - val_loss: 52.1219 - val_mse: 52.1130
Epoch 3/1000
2/2 [==============================] - 0s 53ms/step - loss: 42.1779 - mse: 43.0655 - val_loss: 65.1471 - val_mse: 65.0731
Epoch 4/1000
2/2 [==============================] - 0s 51ms/step - loss: 26.2322 - mse: 26.7634 - val_loss: 56.1347 - val_mse: 56.3009
Epoch 5/1000
2/2 [==============================] - 0s 52ms/step - loss: 23.4230 - mse: 24.0653 - val_loss: 40.5879 - val_mse: 40.6307
Epoch 6/1000
2/2 [==============================] - 0s 50ms/step - loss: 41.5354 - mse: 41.6638 - val_loss: 17.7018 - val_mse: 17.8527
Epoch 7/1000
2/2 [==============================] - 0s 50ms/step - loss: 19.9149 - mse: 21.2503 - val_loss: 21.3841 - val_mse: 21.2544
Epoch 8/1000
2/2 [==============================] - 0s 51ms/step - loss: 16.5927 - mse: 15.2386 - val_loss: 20.0516 - val_mse: 19.9644
Epoch 9/1000
2/2 [==============================] - 0s 53ms/step - loss: 27.8627 - mse: 26.0382 - val_loss: 28.2688 - val_mse: 28.2692
Epoch 10/1000
2/2 [==============================] - 0s 51ms/step - loss: 30.8063 - mse: 31.8892 - val_loss: 8.9045 - val_mse: 9.0587
Epoch 11/1000
2/2 [==============================] - 0s 51ms/step - loss: 28.2525 - mse: 29.3739 - val_loss: 14.2992 - val_mse: 14.1325
Epoch 12/1000
2/2 [==============================] - 0s 52ms/step - loss: 42.5795 - mse: 43.2779 - val_loss: 33.4095 - val_mse: 33.4539
Epoch 13/1000
2/2 [==============================] - 0s 50ms/step - loss: 9.0000 - mse: 8.9674 - val_loss: 20.7286 - val_mse: 20.6680
Epoch 14/1000
2/2 [==============================] - 0s 51ms/step - loss: 25.3980 - mse: 25.5179 - val_loss: 108.5243 - val_mse: 108.9996
Epoch 15/1000
2/2 [==============================] - 0s 51ms/step - loss: 38.0908 - mse: 40.8115 - val_loss: 8.5420 - val_mse: 8.5511
Epoch 16/1000
2/2 [==============================] - 0s 51ms/step - loss: 24.4089 - mse: 25.2137 - val_loss: 30.6923 - val_mse: 31.1648
Epoch 17/1000
2/2 [==============================] - 0s 51ms/step - loss: 30.5568 - mse: 30.3875 - val_loss: 8.3821 - val_mse: 8.2514
Epoch 18/1000
2/2 [==============================] - 0s 50ms/step - loss: 62.0335 - mse: 65.6186 - val_loss: 26.1482 - val_mse: 26.3038
Epoch 19/1000
2/2 [==============================] - 0s 50ms/step - loss: 13.5099 - mse: 12.7222 - val_loss: 39.2025 - val_mse: 39.4480
Epoch 20/1000
2/2 [==============================] - 0s 49ms/step - loss: 40.7501 - mse: 39.3598 - val_loss: 26.5379 - val_mse: 26.8978
Epoch 21/1000
2/2 [==============================] - 0s 50ms/step - loss: 19.8053 - mse: 19.0971 - val_loss: 25.6309 - val_mse: 25.6577
Epoch 22/1000
2/2 [==============================] - 0s 51ms/step - loss: 67.3964 - mse: 67.4126 - val_loss: 22.8906 - val_mse: 22.8703
Epoch 23/1000
2/2 [==============================] - 0s 50ms/step - loss: 25.7820 - mse: 27.4644 - val_loss: 56.1736 - val_mse: 56.0374
Epoch 24/1000
2/2 [==============================] - 0s 52ms/step - loss: 34.1553 - mse: 32.2721 - val_loss: 53.8706 - val_mse: 54.4655
Epoch 25/1000
2/2 [==============================] - 0s 53ms/step - loss: 23.6476 - mse: 21.5434 - val_loss: 35.3352 - val_mse: 35.6672
Epoch 26/1000
2/2 [==============================] - 0s 51ms/step - loss: 42.7554 - mse: 45.5793 - val_loss: 17.9642 - val_mse: 17.5691
Epoch 27/1000
2/2 [==============================] - 0s 51ms/step - loss: 32.4499 - mse: 30.7121 - val_loss: 47.0177 - val_mse: 46.7315
Epoch 28/1000
2/2 [==============================] - 0s 50ms/step - loss: 22.5284 - mse: 23.7479 - val_loss: 33.9030 - val_mse: 34.0836
Epoch 29/1000
2/2 [==============================] - 0s 50ms/step - loss: 48.4749 - mse: 46.5638 - val_loss: 37.4714 - val_mse: 37.4793
Epoch 30/1000
2/2 [==============================] - 0s 50ms/step - loss: 18.7066 - mse: 18.1515 - val_loss: 65.1326 - val_mse: 65.5687
Epoch 31/1000
2/2 [==============================] - 0s 52ms/step - loss: 30.5474 - mse: 29.5069 - val_loss: 11.6824 - val_mse: 11.1312
Epoch 32/1000
2/2 [==============================] - 0s 51ms/step - loss: 20.2392 - mse: 20.1827 - val_loss: 19.1911 - val_mse: 19.4668
Epoch 33/1000
2/2 [==============================] - 0s 52ms/step - loss: 47.6662 - mse: 48.3701 - val_loss: 41.6686 - val_mse: 41.5358
Epoch 34/1000
2/2 [==============================] - 0s 52ms/step - loss: 29.9658 - mse: 29.7618 - val_loss: 24.3677 - val_mse: 24.3707
Epoch 35/1000
2/2 [==============================] - 0s 51ms/step - loss: 13.0476 - mse: 12.7535 - val_loss: 12.3992 - val_mse: 11.7946
Epoch 36/1000
2/2 [==============================] - 0s 50ms/step - loss: 29.8242 - mse: 28.6578 - val_loss: 11.0685 - val_mse: 10.3169
Epoch 37/1000
2/2 [==============================] - 0s 50ms/step - loss: 15.9852 - mse: 16.7447 - val_loss: 12.4540 - val_mse: 12.4304
Epoch 38/1000
2/2 [==============================] - 0s 52ms/step - loss: 58.3805 - mse: 61.7076 - val_loss: 9.3550 - val_mse: 9.4755
Epoch 39/1000
2/2 [==============================] - 0s 50ms/step - loss: 27.4435 - mse: 25.5620 - val_loss: 28.1388 - val_mse: 28.5333
Epoch 40/1000
2/2 [==============================] - 0s 51ms/step - loss: 11.7183 - mse: 10.9881 - val_loss: 14.5241 - val_mse: 14.2848
Epoch 41/1000
2/2 [==============================] - 0s 52ms/step - loss: 15.2993 - mse: 14.6231 - val_loss: 11.5630 - val_mse: 10.8935
Epoch 42/1000
2/2 [==============================] - 0s 54ms/step - loss: 64.9073 - mse: 64.7232 - val_loss: 13.9246 - val_mse: 13.8273
Epoch 43/1000
2/2 [==============================] - 0s 52ms/step - loss: 50.7788 - mse: 47.0292 - val_loss: 31.2566 - val_mse: 31.6291
Epoch 44/1000
2/2 [==============================] - 0s 52ms/step - loss: 23.1916 - mse: 23.9944 - val_loss: 38.9865 - val_mse: 39.6867
Epoch 45/1000
2/2 [==============================] - 0s 51ms/step - loss: 32.5576 - mse: 34.3871 - val_loss: 37.8498 - val_mse: 37.6262
Epoch 46/1000
2/2 [==============================] - 0s 52ms/step - loss: 25.7686 - mse: 27.2424 - val_loss: 18.7232 - val_mse: 18.8332
Epoch 47/1000
2/2 [==============================] - 0s 52ms/step - loss: 26.1384 - mse: 26.7941 - val_loss: 24.4338 - val_mse: 24.6786
Epoch 48/1000
2/2 [==============================] - 0s 50ms/step - loss: 28.1815 - mse: 29.1909 - val_loss: 21.5299 - val_mse: 21.8544
Epoch 49/1000
2/2 [==============================] - 0s 53ms/step - loss: 44.0585 - mse: 41.9667 - val_loss: 48.8018 - val_mse: 48.7612
Epoch 50/1000
2/2 [==============================] - 0s 55ms/step - loss: 10.7840 - mse: 11.0134 - val_loss: 69.1753 - val_mse: 69.5980
Epoch 51/1000
2/2 [==============================] - 0s 50ms/step - loss: 27.4417 - mse: 28.1836 - val_loss: 30.1201 - val_mse: 29.2463
Epoch 52/1000
2/2 [==============================] - 0s 51ms/step - loss: 30.3989 - mse: 31.6961 - val_loss: 13.1937 - val_mse: 12.3276
Epoch 53/1000
2/2 [==============================] - 0s 54ms/step - loss: 33.1967 - mse: 31.1088 - val_loss: 11.3272 - val_mse: 11.1946
Epoch 54/1000
2/2 [==============================] - 0s 52ms/step - loss: 18.8367 - mse: 19.8455 - val_loss: 17.8809 - val_mse: 17.5054
Epoch 55/1000
2/2 [==============================] - 0s 52ms/step - loss: 23.9040 - mse: 23.9143 - val_loss: 34.6533 - val_mse: 34.3066
Epoch 56/1000
2/2 [==============================] - 0s 50ms/step - loss: 26.1371 - mse: 25.1104 - val_loss: 16.5869 - val_mse: 16.5496
Epoch 57/1000
2/2 [==============================] - 0s 51ms/step - loss: 10.9999 - mse: 10.6550 - val_loss: 39.1305 - val_mse: 40.0432
Epoch 58/1000
2/2 [==============================] - 0s 50ms/step - loss: 18.1710 - mse: 17.5384 - val_loss: 30.7266 - val_mse: 30.9123
Epoch 59/1000
2/2 [==============================] - 0s 51ms/step - loss: 36.7322 - mse: 40.0135 - val_loss: 15.1816 - val_mse: 14.3959
Epoch 60/1000
2/2 [==============================] - 0s 50ms/step - loss: 21.8578 - mse: 21.8457 - val_loss: 19.0115 - val_mse: 18.4605
Epoch 61/1000
2/2 [==============================] - 0s 51ms/step - loss: 14.5506 - mse: 14.7386 - val_loss: 11.5511 - val_mse: 10.3065
Epoch 62/1000
2/2 [==============================] - 0s 50ms/step - loss: 23.2749 - mse: 22.8775 - val_loss: 19.6095 - val_mse: 19.8338
Epoch 63/1000
2/2 [==============================] - 0s 50ms/step - loss: 19.2669 - mse: 17.9881 - val_loss: 27.4267 - val_mse: 26.9421
Epoch 64/1000
2/2 [==============================] - 0s 51ms/step - loss: 46.8326 - mse: 46.2058 - val_loss: 32.0870 - val_mse: 32.4628
Epoch 65/1000
2/2 [==============================] - 0s 51ms/step - loss: 27.1633 - mse: 28.0072 - val_loss: 58.6195 - val_mse: 59.6422
Epoch 66/1000
2/2 [==============================] - 0s 50ms/step - loss: 66.3438 - mse: 65.4017 - val_loss: 13.6680 - val_mse: 12.6685
Epoch 67/1000
2/2 [==============================] - 0s 51ms/step - loss: 51.5553 - mse: 55.3545 - val_loss: 19.0647 - val_mse: 18.8401
Epoch 68/1000
2/2 [==============================] - 0s 51ms/step - loss: 55.7459 - mse: 52.1700 - val_loss: 17.6453 - val_mse: 16.5623
Epoch 69/1000
2/2 [==============================] - 0s 52ms/step - loss: 20.4110 - mse: 20.2428 - val_loss: 34.3789 - val_mse: 34.4321
Epoch 70/1000
2/2 [==============================] - 0s 51ms/step - loss: 22.4472 - mse: 22.5463 - val_loss: 13.6056 - val_mse: 13.0531
Epoch 71/1000
2/2 [==============================] - 0s 51ms/step - loss: 13.0343 - mse: 12.5174 - val_loss: 35.2987 - val_mse: 35.3763
Epoch 72/1000
2/2 [==============================] - 0s 50ms/step - loss: 27.1323 - mse: 28.9327 - val_loss: 37.1063 - val_mse: 36.8527
Epoch 73/1000
2/2 [==============================] - 0s 51ms/step - loss: 41.1542 - mse: 40.5055 - val_loss: 15.4098 - val_mse: 15.2232
Epoch 74/1000
2/2 [==============================] - 0s 52ms/step - loss: 24.8129 - mse: 23.9636 - val_loss: 9.9156 - val_mse: 8.6872
Epoch 75/1000
2/2 [==============================] - 0s 52ms/step - loss: 17.9429 - mse: 17.6593 - val_loss: 34.8333 - val_mse: 35.2380
Epoch 76/1000
2/2 [==============================] - 0s 55ms/step - loss: 30.9401 - mse: 31.3873 - val_loss: 17.1300 - val_mse: 16.4523
Epoch 77/1000
2/2 [==============================] - 0s 52ms/step - loss: 18.8880 - mse: 18.9175 - val_loss: 15.4769 - val_mse: 14.6433
Epoch 78/1000
2/2 [==============================] - 0s 51ms/step - loss: 11.7848 - mse: 12.1656 - val_loss: 24.7081 - val_mse: 24.5183
Epoch 79/1000
2/2 [==============================] - 0s 51ms/step - loss: 40.9756 - mse: 41.2899 - val_loss: 39.4787 - val_mse: 40.2221
Epoch 80/1000
2/2 [==============================] - 0s 52ms/step - loss: 18.9693 - mse: 18.6089 - val_loss: 13.3190 - val_mse: 13.3575
Epoch 81/1000
2/2 [==============================] - 0s 51ms/step - loss: 37.7687 - mse: 35.4003 - val_loss: 15.8710 - val_mse: 15.7991
Epoch 82/1000
2/2 [==============================] - 0s 52ms/step - loss: 10.2494 - mse: 8.8697 - val_loss: 10.5613 - val_mse: 10.0551
Epoch 83/1000
2/2 [==============================] - 0s 51ms/step - loss: 20.4021 - mse: 19.4813 - val_loss: 24.3902 - val_mse: 25.0296
Epoch 84/1000
2/2 [==============================] - 0s 50ms/step - loss: 8.8689 - mse: 7.7217 - val_loss: 33.2645 - val_mse: 33.7272
Epoch 85/1000
2/2 [==============================] - 0s 50ms/step - loss: 31.3885 - mse: 28.5651 - val_loss: 23.8980 - val_mse: 22.2482
Epoch 86/1000
2/2 [==============================] - 0s 50ms/step - loss: 30.3863 - mse: 31.8952 - val_loss: 16.9770 - val_mse: 16.8253
Epoch 87/1000
2/2 [==============================] - 0s 55ms/step - loss: 28.3106 - mse: 28.4879 - val_loss: 38.5029 - val_mse: 38.5002
Epoch 88/1000
2/2 [==============================] - 0s 50ms/step - loss: 32.6041 - mse: 34.9057 - val_loss: 28.9906 - val_mse: 29.0884
Epoch 89/1000
2/2 [==============================] - 0s 52ms/step - loss: 48.7119 - mse: 45.8165 - val_loss: 36.1509 - val_mse: 36.0486
Epoch 90/1000
2/2 [==============================] - 0s 51ms/step - loss: 31.3896 - mse: 30.8377 - val_loss: 20.0036 - val_mse: 18.9680
Epoch 91/1000
2/2 [==============================] - 0s 51ms/step - loss: 13.8704 - mse: 14.0314 - val_loss: 28.4482 - val_mse: 27.7141
Epoch 92/1000
2/2 [==============================] - 0s 52ms/step - loss: 26.8341 - mse: 26.4648 - val_loss: 24.7877 - val_mse: 25.9602
Epoch 93/1000
2/2 [==============================] - 0s 52ms/step - loss: 35.2589 - mse: 38.1265 - val_loss: 61.9779 - val_mse: 62.6882
Epoch 94/1000
2/2 [==============================] - 0s 52ms/step - loss: 23.4622 - mse: 22.0456 - val_loss: 22.8077 - val_mse: 21.9899
Epoch 95/1000
2/2 [==============================] - 0s 54ms/step - loss: 27.0662 - mse: 27.3198 - val_loss: 19.8283 - val_mse: 18.6732
Epoch 96/1000
2/2 [==============================] - 0s 53ms/step - loss: 33.5814 - mse: 35.2404 - val_loss: 18.6046 - val_mse: 17.5458
Epoch 97/1000
2/2 [==============================] - 0s 50ms/step - loss: 13.7806 - mse: 12.0966 - val_loss: 37.6363 - val_mse: 39.1287
Epoch 98/1000
2/2 [==============================] - 0s 51ms/step - loss: 12.6223 - mse: 11.8616 - val_loss: 20.8473 - val_mse: 20.7093
Epoch 99/1000
2/2 [==============================] - 0s 51ms/step - loss: 12.7401 - mse: 11.1685 - val_loss: 21.5695 - val_mse: 21.6040
Epoch 100/1000
2/2 [==============================] - 0s 51ms/step - loss: 20.7551 - mse: 21.3221 - val_loss: 12.3143 - val_mse: 10.6087
Epoch 101/1000
2/2 [==============================] - 0s 52ms/step - loss: 21.1570 - mse: 21.2659 - val_loss: 18.5230 - val_mse: 18.4452
Epoch 102/1000
2/2 [==============================] - 0s 51ms/step - loss: 41.7582 - mse: 38.7861 - val_loss: 15.8784 - val_mse: 15.7233
Epoch 103/1000
2/2 [==============================] - 0s 50ms/step - loss: 50.0725 - mse: 51.6169 - val_loss: 16.7914 - val_mse: 15.6867
Epoch 104/1000
2/2 [==============================] - 0s 51ms/step - loss: 11.3806 - mse: 10.7921 - val_loss: 12.2893 - val_mse: 11.9641
Epoch 105/1000
2/2 [==============================] - 0s 49ms/step - loss: 18.5800 - mse: 17.4166 - val_loss: 28.8109 - val_mse: 29.0864
Epoch 106/1000
2/2 [==============================] - 0s 51ms/step - loss: 21.0717 - mse: 22.6959 - val_loss: 10.3922 - val_mse: 10.3152
Epoch 107/1000
2/2 [==============================] - 0s 49ms/step - loss: 19.3951 - mse: 18.2298 - val_loss: 33.9156 - val_mse: 34.2227
Epoch 108/1000
2/2 [==============================] - 0s 53ms/step - loss: 8.4434 - mse: 7.4587 - val_loss: 22.3226 - val_mse: 22.6315
Epoch 109/1000
2/2 [==============================] - 0s 51ms/step - loss: 15.9379 - mse: 14.9889 - val_loss: 9.7740 - val_mse: 8.9441
Epoch 110/1000
2/2 [==============================] - 0s 52ms/step - loss: 27.4372 - mse: 26.8076 - val_loss: 16.1317 - val_mse: 15.4610
Epoch 111/1000
2/2 [==============================] - 0s 54ms/step - loss: 24.9347 - mse: 24.9040 - val_loss: 10.3057 - val_mse: 8.5798
Epoch 112/1000
2/2 [==============================] - 0s 49ms/step - loss: 9.4258 - mse: 9.0679 - val_loss: 14.9178 - val_mse: 14.1059
Epoch 113/1000
2/2 [==============================] - 0s 52ms/step - loss: 18.7055 - mse: 18.9627 - val_loss: 34.9878 - val_mse: 34.5949
Epoch 114/1000
2/2 [==============================] - 0s 51ms/step - loss: 11.8173 - mse: 10.3902 - val_loss: 13.9891 - val_mse: 13.8812
Epoch 115/1000
2/2 [==============================] - 0s 56ms/step - loss: 32.4034 - mse: 29.5544 - val_loss: 22.9956 - val_mse: 22.6756
Epoch 116/1000
2/2 [==============================] - 0s 51ms/step - loss: 13.9424 - mse: 13.5486 - val_loss: 11.5465 - val_mse: 10.8324
Epoch 117/1000
2/2 [==============================] - 0s 50ms/step - loss: 23.6447 - mse: 20.9954 - val_loss: 28.5201 - val_mse: 28.8625
Epoch 118/1000
2/2 [==============================] - 0s 51ms/step - loss: 8.5415 - mse: 8.0934 - val_loss: 27.1618 - val_mse: 26.4663
Epoch 119/1000
2/2 [==============================] - 0s 52ms/step - loss: 12.9722 - mse: 11.3140 - val_loss: 34.0892 - val_mse: 34.4918
Epoch 120/1000
2/2 [==============================] - 0s 52ms/step - loss: 8.6661 - mse: 6.9521 - val_loss: 12.2580 - val_mse: 11.3951
Epoch 121/1000
2/2 [==============================] - 0s 51ms/step - loss: 9.4301 - mse: 8.1095 - val_loss: 11.4102 - val_mse: 10.7788
Epoch 122/1000
2/2 [==============================] - 0s 54ms/step - loss: 22.9628 - mse: 24.3252 - val_loss: 13.7337 - val_mse: 13.7499
Epoch 123/1000
2/2 [==============================] - 0s 62ms/step - loss: 27.0175 - mse: 28.6018 - val_loss: 28.9270 - val_mse: 28.7798
Epoch 124/1000
2/2 [==============================] - 0s 55ms/step - loss: 11.1567 - mse: 9.4981 - val_loss: 11.2978 - val_mse: 10.0987
Epoch 125/1000
2/2 [==============================] - 0s 58ms/step - loss: 27.0957 - mse: 29.6072 - val_loss: 46.8017 - val_mse: 47.2078
Epoch 126/1000
2/2 [==============================] - 0s 51ms/step - loss: 20.0431 - mse: 18.1196 - val_loss: 12.2503 - val_mse: 11.6715
Epoch 127/1000
2/2 [==============================] - 0s 55ms/step - loss: 13.6529 - mse: 13.4509 - val_loss: 21.2528 - val_mse: 21.2836
Epoch 128/1000
2/2 [==============================] - 0s 50ms/step - loss: 10.7324 - mse: 10.0792 - val_loss: 13.9102 - val_mse: 13.5012
Epoch 129/1000
2/2 [==============================] - 0s 53ms/step - loss: 9.6503 - mse: 9.1025 - val_loss: 47.3434 - val_mse: 47.7570
Epoch 130/1000
2/2 [==============================] - 0s 51ms/step - loss: 9.1300 - mse: 7.9623 - val_loss: 19.9373 - val_mse: 19.3274
Epoch 131/1000
2/2 [==============================] - 0s 52ms/step - loss: 10.0357 - mse: 10.0198 - val_loss: 17.2556 - val_mse: 16.7506
Epoch 132/1000
2/2 [==============================] - 0s 53ms/step - loss: 21.7134 - mse: 20.1506 - val_loss: 22.5578 - val_mse: 21.8059
Epoch 133/1000
2/2 [==============================] - 0s 52ms/step - loss: 32.8628 - mse: 29.4653 - val_loss: 29.9097 - val_mse: 28.7707
Epoch 134/1000
2/2 [==============================] - 0s 51ms/step - loss: 9.5310 - mse: 10.0640 - val_loss: 10.3635 - val_mse: 9.4161
Epoch 135/1000
2/2 [==============================] - 0s 50ms/step - loss: 31.3918 - mse: 28.9039 - val_loss: 36.6031 - val_mse: 37.7227
Epoch 136/1000
2/2 [==============================] - 0s 51ms/step - loss: 19.9192 - mse: 18.2127 - val_loss: 16.2505 - val_mse: 16.0854
Epoch 137/1000
2/2 [==============================] - 0s 53ms/step - loss: 17.4278 - mse: 15.4568 - val_loss: 17.0834 - val_mse: 16.7548
Epoch 138/1000
2/2 [==============================] - 0s 49ms/step - loss: 7.4126 - mse: 6.1445 - val_loss: 14.8860 - val_mse: 13.6292
Epoch 139/1000
2/2 [==============================] - 0s 50ms/step - loss: 9.6608 - mse: 8.6149 - val_loss: 18.2725 - val_mse: 18.8394
Epoch 140/1000
2/2 [==============================] - 0s 50ms/step - loss: 8.5827 - mse: 6.7810 - val_loss: 30.9744 - val_mse: 32.7535
Epoch 141/1000
2/2 [==============================] - 0s 51ms/step - loss: 14.6505 - mse: 13.1047 - val_loss: 10.5439 - val_mse: 9.9206
Epoch 142/1000
2/2 [==============================] - 0s 51ms/step - loss: 14.2653 - mse: 12.2700 - val_loss: 12.9373 - val_mse: 12.7211
Epoch 143/1000
2/2 [==============================] - 0s 51ms/step - loss: 7.8169 - mse: 6.8754 - val_loss: 19.3457 - val_mse: 19.4051
Epoch 144/1000
2/2 [==============================] - 0s 52ms/step - loss: 39.4377 - mse: 35.7771 - val_loss: 23.3786 - val_mse: 22.6136
Epoch 145/1000
2/2 [==============================] - 0s 52ms/step - loss: 23.9413 - mse: 22.4873 - val_loss: 9.7668 - val_mse: 8.9305
Epoch 146/1000
2/2 [==============================] - 0s 50ms/step - loss: 18.2605 - mse: 17.2516 - val_loss: 42.1295 - val_mse: 42.7084
Epoch 147/1000
2/2 [==============================] - 0s 51ms/step - loss: 11.6196 - mse: 10.2554 - val_loss: 10.4245 - val_mse: 8.7287
Epoch 148/1000
2/2 [==============================] - 0s 50ms/step - loss: 26.6140 - mse: 26.7229 - val_loss: 22.6180 - val_mse: 19.3057
Epoch 149/1000
2/2 [==============================] - 0s 51ms/step - loss: 46.3014 - mse: 40.6143 - val_loss: 11.1204 - val_mse: 8.5085
Epoch 150/1000
2/2 [==============================] - 0s 52ms/step - loss: 31.0211 - mse: 34.1756 - val_loss: 11.8235 - val_mse: 11.9186
Epoch 151/1000
2/2 [==============================] - 0s 58ms/step - loss: 10.4883 - mse: 9.7698 - val_loss: 31.6994 - val_mse: 31.1366
Epoch 152/1000
2/2 [==============================] - 0s 50ms/step - loss: 7.3391 - mse: 6.5871 - val_loss: 11.9073 - val_mse: 10.1910
Epoch 153/1000
2/2 [==============================] - 0s 50ms/step - loss: 6.4254 - mse: 5.2241 - val_loss: 19.0743 - val_mse: 16.2785
Epoch 154/1000
2/2 [==============================] - 0s 51ms/step - loss: 6.9078 - mse: 5.5637 - val_loss: 15.7536 - val_mse: 13.8128
Epoch 155/1000
2/2 [==============================] - 0s 49ms/step - loss: 18.9931 - mse: 19.9855 - val_loss: 11.4542 - val_mse: 9.6265
Epoch 156/1000
2/2 [==============================] - 0s 50ms/step - loss: 39.4125 - mse: 43.4674 - val_loss: 18.3800 - val_mse: 17.0582
Epoch 157/1000
2/2 [==============================] - 0s 51ms/step - loss: 13.4720 - mse: 12.1816 - val_loss: 15.5877 - val_mse: 15.4480
Epoch 158/1000
2/2 [==============================] - 0s 56ms/step - loss: 18.3914 - mse: 19.1698 - val_loss: 11.1190 - val_mse: 8.8800
Epoch 159/1000
2/2 [==============================] - 0s 52ms/step - loss: 22.0943 - mse: 22.1648 - val_loss: 12.2002 - val_mse: 10.9970
Epoch 160/1000
2/2 [==============================] - 0s 51ms/step - loss: 15.7420 - mse: 14.3722 - val_loss: 10.3514 - val_mse: 9.7789
Epoch 161/1000
2/2 [==============================] - 0s 60ms/step - loss: 10.2419 - mse: 8.6242 - val_loss: 46.0586 - val_mse: 45.4180
Epoch 162/1000
2/2 [==============================] - 0s 51ms/step - loss: 11.0216 - mse: 8.5719 - val_loss: 13.7566 - val_mse: 12.7174
Epoch 163/1000
2/2 [==============================] - 0s 56ms/step - loss: 15.3582 - mse: 13.6955 - val_loss: 10.0904 - val_mse: 7.5698
Epoch 164/1000
2/2 [==============================] - 0s 50ms/step - loss: 11.8055 - mse: 10.7921 - val_loss: 16.2811 - val_mse: 14.5613
Epoch 165/1000
2/2 [==============================] - 0s 51ms/step - loss: 5.9050 - mse: 4.5456 - val_loss: 10.4873 - val_mse: 8.5196
Epoch 166/1000
2/2 [==============================] - 0s 51ms/step - loss: 14.8628 - mse: 15.4828 - val_loss: 18.1671 - val_mse: 18.1118
Epoch 167/1000
2/2 [==============================] - 0s 50ms/step - loss: 18.0353 - mse: 17.4725 - val_loss: 20.1864 - val_mse: 20.5881
Epoch 168/1000
2/2 [==============================] - 0s 52ms/step - loss: 12.7331 - mse: 13.7367 - val_loss: 36.4972 - val_mse: 36.4642
Epoch 169/1000
2/2 [==============================] - 0s 51ms/step - loss: 25.0884 - mse: 24.9171 - val_loss: 17.7895 - val_mse: 16.8503
Epoch 170/1000
2/2 [==============================] - 0s 51ms/step - loss: 9.2230 - mse: 8.3918 - val_loss: 12.2346 - val_mse: 11.7448
Epoch 171/1000
2/2 [==============================] - 0s 52ms/step - loss: 10.0306 - mse: 8.5654 - val_loss: 14.9829 - val_mse: 15.6367
Epoch 172/1000
2/2 [==============================] - 0s 50ms/step - loss: 20.6433 - mse: 20.7416 - val_loss: 12.1259 - val_mse: 11.2646
Epoch 173/1000
2/2 [==============================] - 0s 51ms/step - loss: 12.3301 - mse: 12.3438 - val_loss: 10.4726 - val_mse: 9.0637
Epoch 174/1000
2/2 [==============================] - 0s 51ms/step - loss: 11.2572 - mse: 11.0585 - val_loss: 16.6626 - val_mse: 15.5956
Epoch 175/1000
2/2 [==============================] - 0s 51ms/step - loss: 39.7351 - mse: 37.1494 - val_loss: 20.5392 - val_mse: 19.7426
Epoch 176/1000
2/2 [==============================] - 0s 52ms/step - loss: 28.4928 - mse: 30.4824 - val_loss: 11.9471 - val_mse: 10.6510
Epoch 177/1000
2/2 [==============================] - 0s 50ms/step - loss: 12.2241 - mse: 10.2250 - val_loss: 18.9844 - val_mse: 20.0143
Epoch 178/1000
2/2 [==============================] - 0s 53ms/step - loss: 16.0171 - mse: 17.6517 - val_loss: 8.9114 - val_mse: 8.2594
Epoch 179/1000
2/2 [==============================] - 0s 53ms/step - loss: 8.0087 - mse: 6.3734 - val_loss: 7.8988 - val_mse: 7.6736
Epoch 180/1000
2/2 [==============================] - 0s 51ms/step - loss: 17.8768 - mse: 17.0130 - val_loss: 8.4872 - val_mse: 8.3210
Epoch 181/1000
2/2 [==============================] - 0s 50ms/step - loss: 14.5019 - mse: 12.9973 - val_loss: 9.3356 - val_mse: 9.6146
Epoch 182/1000
2/2 [==============================] - 0s 53ms/step - loss: 21.7888 - mse: 20.8085 - val_loss: 16.6222 - val_mse: 15.9423
Epoch 183/1000
2/2 [==============================] - 0s 50ms/step - loss: 13.8159 - mse: 13.0518 - val_loss: 16.8577 - val_mse: 16.8101
Epoch 184/1000
2/2 [==============================] - 0s 51ms/step - loss: 6.3309 - mse: 5.3168 - val_loss: 10.7192 - val_mse: 9.6232
Epoch 185/1000
2/2 [==============================] - 0s 51ms/step - loss: 16.7768 - mse: 13.5737 - val_loss: 12.5655 - val_mse: 11.8444
Epoch 186/1000
2/2 [==============================] - 0s 51ms/step - loss: 14.5882 - mse: 12.7122 - val_loss: 48.2031 - val_mse: 49.9071
Epoch 187/1000
2/2 [==============================] - 0s 51ms/step - loss: 32.7898 - mse: 33.6291 - val_loss: 21.8738 - val_mse: 20.6066
Epoch 188/1000
2/2 [==============================] - 0s 50ms/step - loss: 30.2012 - mse: 26.7071 - val_loss: 11.5112 - val_mse: 11.2144
Epoch 189/1000
2/2 [==============================] - 0s 51ms/step - loss: 19.0233 - mse: 20.5572 - val_loss: 13.4285 - val_mse: 12.9488
Epoch 190/1000
2/2 [==============================] - 0s 55ms/step - loss: 10.0228 - mse: 8.2411 - val_loss: 10.9611 - val_mse: 10.8436
Epoch 191/1000
2/2 [==============================] - 0s 52ms/step - loss: 10.0744 - mse: 9.7725 - val_loss: 36.1429 - val_mse: 36.6044
Epoch 192/1000
2/2 [==============================] - 0s 56ms/step - loss: 32.0138 - mse: 28.6707 - val_loss: 17.2605 - val_mse: 16.1802
Epoch 193/1000
2/2 [==============================] - 0s 50ms/step - loss: 7.4749 - mse: 6.4100 - val_loss: 22.4700 - val_mse: 23.0385
Epoch 194/1000
2/2 [==============================] - 0s 50ms/step - loss: 21.3847 - mse: 19.5344 - val_loss: 15.3955 - val_mse: 14.2763
Epoch 195/1000
2/2 [==============================] - 0s 51ms/step - loss: 9.8641 - mse: 9.5611 - val_loss: 12.8369 - val_mse: 11.2253
Epoch 196/1000
2/2 [==============================] - 0s 50ms/step - loss: 16.5822 - mse: 16.8394 - val_loss: 11.1820 - val_mse: 8.8661
Epoch 197/1000
2/2 [==============================] - 0s 51ms/step - loss: 9.9469 - mse: 8.5268 - val_loss: 43.4258 - val_mse: 40.7151
Epoch 198/1000
2/2 [==============================] - 0s 51ms/step - loss: 9.3892 - mse: 8.7844 - val_loss: 43.5717 - val_mse: 43.9399
Epoch 199/1000
2/2 [==============================] - 0s 50ms/step - loss: 16.0521 - mse: 17.0372 - val_loss: 11.7399 - val_mse: 11.1975
Epoch 200/1000
2/2 [==============================] - 0s 51ms/step - loss: 11.9302 - mse: 10.1221 - val_loss: 12.8829 - val_mse: 12.0287
Epoch 201/1000
2/2 [==============================] - 0s 51ms/step - loss: 16.2251 - mse: 13.3245 - val_loss: 8.0117 - val_mse: 7.2264
Epoch 202/1000
2/2 [==============================] - 0s 50ms/step - loss: 33.2771 - mse: 36.1092 - val_loss: 11.7774 - val_mse: 10.9343
Epoch 203/1000
2/2 [==============================] - 0s 50ms/step - loss: 11.8113 - mse: 10.8406 - val_loss: 12.2740 - val_mse: 9.6310
Epoch 204/1000
2/2 [==============================] - 0s 51ms/step - loss: 26.5930 - mse: 27.3428 - val_loss: 11.1937 - val_mse: 9.9507
Epoch 205/1000
2/2 [==============================] - 0s 52ms/step - loss: 5.8010 - mse: 3.9754 - val_loss: 10.6871 - val_mse: 9.8759
Epoch 206/1000
2/2 [==============================] - 0s 52ms/step - loss: 9.7021 - mse: 9.9846 - val_loss: 10.9494 - val_mse: 8.8462
Epoch 207/1000
2/2 [==============================] - 0s 51ms/step - loss: 6.2690 - mse: 6.1363 - val_loss: 22.0454 - val_mse: 23.0621
Epoch 208/1000
2/2 [==============================] - 0s 53ms/step - loss: 6.4565 - mse: 5.9118 - val_loss: 9.1469 - val_mse: 9.0034
Epoch 209/1000
2/2 [==============================] - 0s 52ms/step - loss: 12.3619 - mse: 12.5398 - val_loss: 15.9330 - val_mse: 15.5301
Epoch 210/1000
2/2 [==============================] - 0s 52ms/step - loss: 15.1137 - mse: 15.1473 - val_loss: 12.6477 - val_mse: 11.0231
Epoch 211/1000
2/2 [==============================] - 0s 51ms/step - loss: 13.0307 - mse: 11.6683 - val_loss: 37.3983 - val_mse: 38.6187
Epoch 212/1000
2/2 [==============================] - 0s 51ms/step - loss: 7.0218 - mse: 6.9544 - val_loss: 11.0156 - val_mse: 9.9594
Epoch 213/1000
2/2 [==============================] - 0s 53ms/step - loss: 29.1081 - mse: 29.6611 - val_loss: 8.5021 - val_mse: 6.8658
Epoch 214/1000
2/2 [==============================] - 0s 53ms/step - loss: 26.0536 - mse: 24.3367 - val_loss: 11.1400 - val_mse: 11.7857
Epoch 215/1000
2/2 [==============================] - 0s 51ms/step - loss: 21.2442 - mse: 20.6963 - val_loss: 22.3877 - val_mse: 21.9513
Epoch 216/1000
2/2 [==============================] - 0s 51ms/step - loss: 17.4345 - mse: 16.8682 - val_loss: 20.2928 - val_mse: 20.5592
Epoch 217/1000
2/2 [==============================] - 0s 51ms/step - loss: 18.5167 - mse: 20.9716 - val_loss: 27.4613 - val_mse: 23.9400
Epoch 218/1000
2/2 [==============================] - 0s 51ms/step - loss: 14.3486 - mse: 13.8472 - val_loss: 17.0888 - val_mse: 16.9355
Epoch 219/1000
2/2 [==============================] - 0s 52ms/step - loss: 7.2051 - mse: 5.9130 - val_loss: 9.9352 - val_mse: 8.2928
Epoch 220/1000
2/2 [==============================] - 0s 52ms/step - loss: 21.9541 - mse: 20.4993 - val_loss: 12.2725 - val_mse: 10.9755
Epoch 221/1000
2/2 [==============================] - 0s 50ms/step - loss: 12.6554 - mse: 11.9317 - val_loss: 13.2136 - val_mse: 12.0431
Epoch 222/1000
2/2 [==============================] - 0s 50ms/step - loss: 10.1002 - mse: 10.1953 - val_loss: 10.5613 - val_mse: 9.2240
Epoch 223/1000
2/2 [==============================] - 0s 54ms/step - loss: 11.0689 - mse: 9.5310 - val_loss: 23.8222 - val_mse: 23.6579
Epoch 224/1000
2/2 [==============================] - 0s 57ms/step - loss: 9.6634 - mse: 8.0081 - val_loss: 13.7001 - val_mse: 11.7609
Epoch 225/1000
2/2 [==============================] - 0s 51ms/step - loss: 20.2399 - mse: 20.2751 - val_loss: 11.8671 - val_mse: 10.9069
Epoch 226/1000
2/2 [==============================] - 0s 51ms/step - loss: 6.4898 - mse: 7.2900 - val_loss: 9.8873 - val_mse: 8.9283
Epoch 227/1000
2/2 [==============================] - 0s 51ms/step - loss: 32.0554 - mse: 35.9963 - val_loss: 9.0361 - val_mse: 7.8429
Epoch 228/1000
2/2 [==============================] - 0s 51ms/step - loss: 21.7447 - mse: 18.1275 - val_loss: 9.3245 - val_mse: 7.9520
Epoch 229/1000
2/2 [==============================] - 0s 51ms/step - loss: 12.8811 - mse: 11.2259 - val_loss: 25.3520 - val_mse: 26.0639
Epoch 230/1000
2/2 [==============================] - 0s 52ms/step - loss: 5.9592 - mse: 5.8528 - val_loss: 18.2149 - val_mse: 17.9450
Epoch 231/1000
2/2 [==============================] - 0s 54ms/step - loss: 13.4173 - mse: 12.8883 - val_loss: 8.7173 - val_mse: 7.7265
Epoch 232/1000
2/2 [==============================] - 0s 51ms/step - loss: 10.1376 - mse: 7.6876 - val_loss: 12.6029 - val_mse: 11.5602
Epoch 233/1000
2/2 [==============================] - 0s 50ms/step - loss: 13.4047 - mse: 13.6392 - val_loss: 17.1524 - val_mse: 16.2268
Epoch 234/1000
2/2 [==============================] - 0s 51ms/step - loss: 17.5064 - mse: 16.3183 - val_loss: 10.5330 - val_mse: 8.2986
Epoch 235/1000
2/2 [==============================] - 0s 50ms/step - loss: 20.2804 - mse: 18.9660 - val_loss: 16.0332 - val_mse: 16.1090
Epoch 236/1000
2/2 [==============================] - 0s 52ms/step - loss: 6.7784 - mse: 6.6505 - val_loss: 9.7999 - val_mse: 7.6677
Epoch 237/1000
2/2 [==============================] - 0s 50ms/step - loss: 7.0715 - mse: 5.1761 - val_loss: 9.3183 - val_mse: 8.4596
Epoch 238/1000
2/2 [==============================] - 0s 51ms/step - loss: 5.9999 - mse: 3.7645 - val_loss: 57.6988 - val_mse: 59.5889
Epoch 239/1000
2/2 [==============================] - 0s 50ms/step - loss: 5.2433 - mse: 5.2256 - val_loss: 28.5707 - val_mse: 27.7379
Epoch 240/1000
2/2 [==============================] - 0s 51ms/step - loss: 4.7482 - mse: 3.5892 - val_loss: 12.2357 - val_mse: 9.4964
Epoch 241/1000
2/2 [==============================] - 0s 51ms/step - loss: 7.6458 - mse: 6.8225 - val_loss: 31.4298 - val_mse: 32.8217
Epoch 242/1000
2/2 [==============================] - 0s 52ms/step - loss: 6.5283 - mse: 5.4183 - val_loss: 11.6039 - val_mse: 11.0773
Epoch 243/1000
2/2 [==============================] - 0s 50ms/step - loss: 12.8475 - mse: 13.4384 - val_loss: 10.6184 - val_mse: 9.7089
Epoch 244/1000
2/2 [==============================] - 0s 50ms/step - loss: 8.6296 - mse: 7.5638 - val_loss: 10.4600 - val_mse: 10.1220
Epoch 245/1000
2/2 [==============================] - 0s 50ms/step - loss: 10.7475 - mse: 9.4740 - val_loss: 18.3092 - val_mse: 19.7825
Epoch 246/1000
2/2 [==============================] - 0s 50ms/step - loss: 5.1783 - mse: 3.8840 - val_loss: 10.2788 - val_mse: 9.2407
Epoch 247/1000
2/2 [==============================] - 0s 57ms/step - loss: 23.0945 - mse: 23.3918 - val_loss: 10.8419 - val_mse: 9.8790
Epoch 248/1000
2/2 [==============================] - 0s 53ms/step - loss: 5.8390 - mse: 5.9069 - val_loss: 12.8527 - val_mse: 11.0514
Epoch 249/1000
2/2 [==============================] - 0s 54ms/step - loss: 16.7765 - mse: 17.3161 - val_loss: 9.4913 - val_mse: 8.3325
Epoch 250/1000
2/2 [==============================] - 0s 50ms/step - loss: 35.4266 - mse: 36.0605 - val_loss: 14.3958 - val_mse: 14.2260
Epoch 251/1000
2/2 [==============================] - 0s 51ms/step - loss: 8.3563 - mse: 6.8378 - val_loss: 12.8929 - val_mse: 10.3523
Epoch 252/1000
2/2 [==============================] - 0s 52ms/step - loss: 5.1070 - mse: 4.2817 - val_loss: 19.3446 - val_mse: 19.1312
Epoch 253/1000
2/2 [==============================] - 0s 51ms/step - loss: 7.5883 - mse: 5.9694 - val_loss: 72.9556 - val_mse: 74.2397
Epoch 254/1000
2/2 [==============================] - 0s 51ms/step - loss: 5.3090 - mse: 3.4880 - val_loss: 15.7257 - val_mse: 16.1801
Epoch 255/1000
2/2 [==============================] - 0s 53ms/step - loss: 8.2744 - mse: 8.9431 - val_loss: 46.2859 - val_mse: 47.7147
Epoch 256/1000
2/2 [==============================] - 0s 52ms/step - loss: 10.1467 - mse: 9.4478 - val_loss: 11.6189 - val_mse: 9.2911
Epoch 257/1000
2/2 [==============================] - 0s 53ms/step - loss: 8.0778 - mse: 7.6836 - val_loss: 33.4041 - val_mse: 33.7809
Epoch 258/1000
2/2 [==============================] - 0s 52ms/step - loss: 5.3678 - mse: 4.2119 - val_loss: 13.7098 - val_mse: 12.9278
Epoch 259/1000
2/2 [==============================] - 0s 51ms/step - loss: 17.8709 - mse: 19.8586 - val_loss: 10.1159 - val_mse: 8.5514
Epoch 260/1000
2/2 [==============================] - 0s 52ms/step - loss: 7.5145 - mse: 5.8055 - val_loss: 14.9017 - val_mse: 14.9838
Epoch 261/1000
2/2 [==============================] - 0s 50ms/step - loss: 5.5535 - mse: 4.0958 - val_loss: 12.3441 - val_mse: 10.2955
Epoch 262/1000
2/2 [==============================] - 0s 51ms/step - loss: 8.3464 - mse: 7.7017 - val_loss: 12.2332 - val_mse: 10.8640
Epoch 263/1000
2/2 [==============================] - 0s 52ms/step - loss: 31.3275 - mse: 28.0834 - val_loss: 13.9507 - val_mse: 12.9577
Epoch 264/1000
2/2 [==============================] - 0s 52ms/step - loss: 12.1421 - mse: 11.9812 - val_loss: 13.0111 - val_mse: 11.5675
Epoch 265/1000
2/2 [==============================] - 0s 52ms/step - loss: 4.5324 - mse: 3.5054 - val_loss: 43.1686 - val_mse: 44.3921
Epoch 266/1000
2/2 [==============================] - 0s 51ms/step - loss: 9.3167 - mse: 9.0081 - val_loss: 15.7031 - val_mse: 15.1016
Epoch 267/1000
2/2 [==============================] - 0s 51ms/step - loss: 17.5183 - mse: 18.8267 - val_loss: 25.5666 - val_mse: 26.4367
Epoch 268/1000
2/2 [==============================] - 0s 52ms/step - loss: 8.6621 - mse: 8.1311 - val_loss: 14.6629 - val_mse: 12.7096
Epoch 269/1000
2/2 [==============================] - 0s 52ms/step - loss: 9.7426 - mse: 9.6990 - val_loss: 10.9188 - val_mse: 11.1239
Epoch 270/1000
2/2 [==============================] - 0s 50ms/step - loss: 4.9806 - mse: 3.8628 - val_loss: 21.6956 - val_mse: 21.6990
Epoch 271/1000
2/2 [==============================] - 0s 51ms/step - loss: 5.8482 - mse: 4.8750 - val_loss: 26.8881 - val_mse: 25.7790
Epoch 272/1000
2/2 [==============================] - 0s 51ms/step - loss: 9.2960 - mse: 9.7934 - val_loss: 52.2863 - val_mse: 53.2070
Epoch 273/1000
2/2 [==============================] - 0s 50ms/step - loss: 27.4378 - mse: 30.0596 - val_loss: 44.0568 - val_mse: 45.3191
Epoch 274/1000
2/2 [==============================] - 0s 51ms/step - loss: 14.2832 - mse: 16.0187 - val_loss: 11.5973 - val_mse: 11.3975
Epoch 275/1000
2/2 [==============================] - 0s 51ms/step - loss: 5.4006 - mse: 4.7753 - val_loss: 15.9338 - val_mse: 14.5322
Epoch 276/1000
2/2 [==============================] - 0s 50ms/step - loss: 7.0486 - mse: 5.5711 - val_loss: 9.5692 - val_mse: 9.2375
Epoch 277/1000
2/2 [==============================] - 0s 50ms/step - loss: 11.7875 - mse: 10.2629 - val_loss: 8.6912 - val_mse: 7.6069
Epoch 278/1000
2/2 [==============================] - 0s 51ms/step - loss: 5.4717 - mse: 5.6603 - val_loss: 23.2146 - val_mse: 22.0472
Epoch 279/1000
2/2 [==============================] - 0s 51ms/step - loss: 11.4108 - mse: 10.3447 - val_loss: 11.3643 - val_mse: 10.6530
Epoch 280/1000
2/2 [==============================] - 0s 51ms/step - loss: 12.6914 - mse: 12.9607 - val_loss: 18.2512 - val_mse: 18.1124
Epoch 281/1000
2/2 [==============================] - 0s 52ms/step - loss: 8.0124 - mse: 7.6547 - val_loss: 9.1934 - val_mse: 8.8857
Epoch 282/1000
2/2 [==============================] - 0s 55ms/step - loss: 3.2932 - mse: 3.5997 - val_loss: 24.0308 - val_mse: 22.7207
Epoch 283/1000
2/2 [==============================] - 0s 52ms/step - loss: 9.2679 - mse: 10.2454 - val_loss: 27.4104 - val_mse: 27.7672
Epoch 284/1000
2/2 [==============================] - 0s 50ms/step - loss: 26.6025 - mse: 24.0904 - val_loss: 37.3009 - val_mse: 34.4215
Epoch 285/1000
2/2 [==============================] - 0s 52ms/step - loss: 19.0602 - mse: 20.5687 - val_loss: 20.2206 - val_mse: 16.5185
Epoch 286/1000
2/2 [==============================] - 0s 52ms/step - loss: 4.7370 - mse: 3.9604 - val_loss: 16.5814 - val_mse: 14.9911
Epoch 287/1000
2/2 [==============================] - 0s 51ms/step - loss: 19.6658 - mse: 20.2027 - val_loss: 18.2541 - val_mse: 15.5088
Epoch 288/1000
2/2 [==============================] - 0s 50ms/step - loss: 12.1269 - mse: 13.2677 - val_loss: 11.4282 - val_mse: 12.6287
Epoch 289/1000
2/2 [==============================] - 0s 51ms/step - loss: 25.7072 - mse: 23.4604 - val_loss: 9.7625 - val_mse: 7.6835
Epoch 290/1000
2/2 [==============================] - 0s 51ms/step - loss: 5.0288 - mse: 4.2203 - val_loss: 11.3438 - val_mse: 10.3829
Epoch 291/1000
2/2 [==============================] - 0s 51ms/step - loss: 4.6148 - mse: 3.3494 - val_loss: 15.5141 - val_mse: 12.8668
Epoch 292/1000
2/2 [==============================] - 0s 52ms/step - loss: 4.8107 - mse: 4.6684 - val_loss: 8.8542 - val_mse: 7.1467
Epoch 293/1000
2/2 [==============================] - 0s 52ms/step - loss: 8.3710 - mse: 7.2857 - val_loss: 11.2163 - val_mse: 8.0549
Epoch 294/1000
2/2 [==============================] - 0s 50ms/step - loss: 19.3532 - mse: 21.4772 - val_loss: 11.9023 - val_mse: 9.3853
Epoch 295/1000
2/2 [==============================] - 0s 51ms/step - loss: 10.3305 - mse: 9.9634 - val_loss: 13.7843 - val_mse: 13.3048
Epoch 296/1000
2/2 [==============================] - 0s 51ms/step - loss: 5.7906 - mse: 4.4552 - val_loss: 16.8755 - val_mse: 14.2075
Epoch 297/1000
2/2 [==============================] - 0s 52ms/step - loss: 18.6936 - mse: 14.5466 - val_loss: 13.6742 - val_mse: 13.5797
Epoch 298/1000
2/2 [==============================] - 0s 50ms/step - loss: 25.3414 - mse: 22.9317 - val_loss: 9.3430 - val_mse: 9.1793
Epoch 299/1000
2/2 [==============================] - 0s 51ms/step - loss: 6.0850 - mse: 4.5312 - val_loss: 11.9931 - val_mse: 10.8633
Epoch 300/1000
2/2 [==============================] - 0s 50ms/step - loss: 25.1924 - mse: 22.0071 - val_loss: 15.8791 - val_mse: 15.5243
Epoch 301/1000
2/2 [==============================] - 0s 52ms/step - loss: 7.2406 - mse: 5.3661 - val_loss: 38.9950 - val_mse: 40.9444
Epoch 302/1000
2/2 [==============================] - 0s 50ms/step - loss: 21.0970 - mse: 19.8857 - val_loss: 10.5360 - val_mse: 8.2809
Epoch 303/1000
2/2 [==============================] - 0s 50ms/step - loss: 14.0616 - mse: 14.6016 - val_loss: 17.2237 - val_mse: 17.5098
Epoch 304/1000
2/2 [==============================] - 0s 51ms/step - loss: 4.5411 - mse: 4.2541 - val_loss: 13.5622 - val_mse: 13.3241
Epoch 305/1000
2/2 [==============================] - 0s 50ms/step - loss: 6.3540 - mse: 5.2662 - val_loss: 11.4258 - val_mse: 8.8594
Epoch 306/1000
2/2 [==============================] - 0s 50ms/step - loss: 7.9356 - mse: 6.1964 - val_loss: 9.5059 - val_mse: 8.6748
Epoch 307/1000
2/2 [==============================] - 0s 51ms/step - loss: 17.4279 - mse: 16.6543 - val_loss: 11.3900 - val_mse: 9.9377
Epoch 308/1000
2/2 [==============================] - 0s 51ms/step - loss: 7.1529 - mse: 6.6564 - val_loss: 9.5842 - val_mse: 8.6729
Epoch 309/1000
2/2 [==============================] - 0s 50ms/step - loss: 9.7713 - mse: 8.6847 - val_loss: 19.0019 - val_mse: 18.1357
Epoch 310/1000
2/2 [==============================] - 0s 56ms/step - loss: 6.3863 - mse: 4.8954 - val_loss: 10.0721 - val_mse: 8.4877
Epoch 311/1000
2/2 [==============================] - 0s 52ms/step - loss: 6.1307 - mse: 5.1095 - val_loss: 12.3107 - val_mse: 9.9704
Epoch 312/1000
2/2 [==============================] - 0s 51ms/step - loss: 15.0582 - mse: 13.9573 - val_loss: 29.3946 - val_mse: 28.4614
Epoch 313/1000
2/2 [==============================] - 0s 50ms/step - loss: 5.9513 - mse: 4.0988 - val_loss: 22.0168 - val_mse: 20.9719
Epoch 314/1000
2/2 [==============================] - 0s 50ms/step - loss: 4.7300 - mse: 3.1932 - val_loss: 19.6030 - val_mse: 19.1209
Epoch 315/1000
2/2 [==============================] - 0s 49ms/step - loss: 32.1863 - mse: 36.1080 - val_loss: 13.2812 - val_mse: 10.9646
Epoch 316/1000
2/2 [==============================] - 0s 51ms/step - loss: 26.2030 - mse: 28.4281 - val_loss: 9.6272 - val_mse: 10.4602
Epoch 317/1000
2/2 [==============================] - 0s 54ms/step - loss: 3.6567 - mse: 2.7217 - val_loss: 10.5908 - val_mse: 11.1382
Epoch 318/1000
2/2 [==============================] - 0s 63ms/step - loss: 4.9083 - mse: 3.1616 - val_loss: 9.1485 - val_mse: 7.5770
Epoch 319/1000
2/2 [==============================] - 0s 64ms/step - loss: 9.6991 - mse: 8.9189 - val_loss: 25.9224 - val_mse: 25.5522
Epoch 320/1000
2/2 [==============================] - 0s 52ms/step - loss: 10.4530 - mse: 9.3968 - val_loss: 12.2685 - val_mse: 11.3115
Epoch 321/1000
2/2 [==============================] - 0s 49ms/step - loss: 17.4990 - mse: 15.4536 - val_loss: 9.8877 - val_mse: 7.9629
Epoch 322/1000
2/2 [==============================] - 0s 49ms/step - loss: 14.2367 - mse: 14.4982 - val_loss: 24.7968 - val_mse: 23.8442
Epoch 323/1000
2/2 [==============================] - 0s 52ms/step - loss: 9.8938 - mse: 9.3477 - val_loss: 19.7207 - val_mse: 18.3842
Epoch 324/1000
2/2 [==============================] - 0s 51ms/step - loss: 6.0312 - mse: 4.3847 - val_loss: 10.4824 - val_mse: 9.0574
Epoch 325/1000
2/2 [==============================] - 0s 52ms/step - loss: 6.3444 - mse: 5.4124 - val_loss: 14.2120 - val_mse: 13.6060
Epoch 326/1000
2/2 [==============================] - 0s 50ms/step - loss: 19.6009 - mse: 18.2854 - val_loss: 20.3136 - val_mse: 21.3486
Epoch 327/1000
2/2 [==============================] - 0s 53ms/step - loss: 14.0379 - mse: 11.2572 - val_loss: 14.6811 - val_mse: 13.7927
Epoch 328/1000
2/2 [==============================] - 0s 50ms/step - loss: 27.8701 - mse: 27.4709 - val_loss: 9.4862 - val_mse: 9.0984
Epoch 329/1000
2/2 [==============================] - 0s 51ms/step - loss: 8.3193 - mse: 6.3231 - val_loss: 10.9994 - val_mse: 9.4531
Epoch 330/1000
2/2 [==============================] - 0s 50ms/step - loss: 18.8972 - mse: 17.3106 - val_loss: 24.6491 - val_mse: 25.7830
Epoch 331/1000
2/2 [==============================] - 0s 51ms/step - loss: 9.8579 - mse: 9.1232 - val_loss: 11.4185 - val_mse: 9.9292
Epoch 332/1000
2/2 [==============================] - 0s 51ms/step - loss: 4.9105 - mse: 3.7006 - val_loss: 11.4092 - val_mse: 11.0235
Epoch 333/1000
2/2 [==============================] - 0s 51ms/step - loss: 5.8754 - mse: 5.7029 - val_loss: 12.7202 - val_mse: 9.0938
Epoch 334/1000
2/2 [==============================] - 0s 50ms/step - loss: 6.4640 - mse: 3.6495 - val_loss: 19.4377 - val_mse: 19.2876
Epoch 335/1000
2/2 [==============================] - 0s 51ms/step - loss: 16.0503 - mse: 18.0402 - val_loss: 15.6795 - val_mse: 16.0789
Epoch 336/1000
2/2 [==============================] - 0s 50ms/step - loss: 4.5241 - mse: 4.6188 - val_loss: 12.1088 - val_mse: 9.3862
Epoch 337/1000
2/2 [==============================] - 0s 51ms/step - loss: 7.8671 - mse: 6.4764 - val_loss: 11.5781 - val_mse: 10.2736
Epoch 338/1000
2/2 [==============================] - 0s 52ms/step - loss: 6.6643 - mse: 4.6906 - val_loss: 32.6672 - val_mse: 33.9545
Epoch 339/1000
2/2 [==============================] - 0s 51ms/step - loss: 4.5070 - mse: 3.7645 - val_loss: 13.3126 - val_mse: 13.0801
Epoch 340/1000
2/2 [==============================] - 0s 50ms/step - loss: 13.4521 - mse: 12.3391 - val_loss: 17.0321 - val_mse: 16.9437
Epoch 341/1000
2/2 [==============================] - 0s 52ms/step - loss: 6.0504 - mse: 5.2026 - val_loss: 19.9034 - val_mse: 18.6374
Epoch 342/1000
2/2 [==============================] - 0s 51ms/step - loss: 7.0738 - mse: 5.7505 - val_loss: 21.3800 - val_mse: 16.8830
Epoch 343/1000
2/2 [==============================] - 0s 52ms/step - loss: 8.5266 - mse: 6.9995 - val_loss: 8.3460 - val_mse: 8.7084
Epoch 344/1000
2/2 [==============================] - 0s 50ms/step - loss: 8.4293 - mse: 7.7439 - val_loss: 12.9107 - val_mse: 13.2586
Epoch 345/1000
2/2 [==============================] - 0s 52ms/step - loss: 10.5056 - mse: 10.4061 - val_loss: 42.6213 - val_mse: 45.6643
Epoch 346/1000
2/2 [==============================] - 0s 49ms/step - loss: 4.9334 - mse: 3.6777 - val_loss: 12.5059 - val_mse: 11.6194
Epoch 347/1000
2/2 [==============================] - 0s 51ms/step - loss: 10.2118 - mse: 9.1536 - val_loss: 9.7432 - val_mse: 8.2741
Epoch 348/1000
2/2 [==============================] - 0s 50ms/step - loss: 10.4117 - mse: 7.8587 - val_loss: 10.8830 - val_mse: 8.5452
Epoch 349/1000
2/2 [==============================] - 0s 52ms/step - loss: 23.0677 - mse: 21.1497 - val_loss: 9.8283 - val_mse: 7.4853
Epoch 350/1000
2/2 [==============================] - 0s 51ms/step - loss: 7.0972 - mse: 6.2902 - val_loss: 13.5842 - val_mse: 10.9530
Epoch 351/1000
2/2 [==============================] - 0s 51ms/step - loss: 26.1365 - mse: 20.9148 - val_loss: 14.3524 - val_mse: 15.1818
Epoch 352/1000
2/2 [==============================] - 0s 55ms/step - loss: 7.2955 - mse: 5.9286 - val_loss: 15.1632 - val_mse: 12.2787
Epoch 353/1000
2/2 [==============================] - 0s 52ms/step - loss: 7.0965 - mse: 5.3282 - val_loss: 19.3941 - val_mse: 17.0818
Epoch 354/1000
2/2 [==============================] - 0s 50ms/step - loss: 5.5357 - mse: 4.9284 - val_loss: 14.9456 - val_mse: 13.9319
Epoch 355/1000
2/2 [==============================] - 0s 55ms/step - loss: 7.5857 - mse: 6.7317 - val_loss: 12.8077 - val_mse: 10.4782
Epoch 356/1000
2/2 [==============================] - 0s 51ms/step - loss: 7.1989 - mse: 5.3610 - val_loss: 13.2807 - val_mse: 12.7891
Epoch 357/1000
2/2 [==============================] - 0s 51ms/step - loss: 4.5384 - mse: 3.3954 - val_loss: 15.5240 - val_mse: 15.5002
Epoch 358/1000
2/2 [==============================] - 0s 51ms/step - loss: 4.8343 - mse: 3.0732 - val_loss: 8.2398 - val_mse: 7.6802
Epoch 359/1000
2/2 [==============================] - 0s 52ms/step - loss: 5.8703 - mse: 5.8888 - val_loss: 10.9322 - val_mse: 8.8958
Epoch 360/1000
2/2 [==============================] - 0s 51ms/step - loss: 13.9605 - mse: 13.9755 - val_loss: 10.7767 - val_mse: 10.3155
Epoch 361/1000
2/2 [==============================] - 0s 52ms/step - loss: 19.7105 - mse: 18.7032 - val_loss: 10.5117 - val_mse: 10.0055
Epoch 362/1000
2/2 [==============================] - 0s 53ms/step - loss: 6.1737 - mse: 5.3760 - val_loss: 18.2244 - val_mse: 17.8424
Epoch 363/1000
2/2 [==============================] - 0s 51ms/step - loss: 14.3040 - mse: 12.8920 - val_loss: 9.8527 - val_mse: 7.5618
Epoch 364/1000
2/2 [==============================] - 0s 53ms/step - loss: 6.9251 - mse: 5.7222 - val_loss: 8.3260 - val_mse: 7.6093
Epoch 365/1000
2/2 [==============================] - 0s 50ms/step - loss: 24.7207 - mse: 23.3583 - val_loss: 12.5297 - val_mse: 12.5413
Epoch 366/1000
2/2 [==============================] - 0s 52ms/step - loss: 10.9763 - mse: 8.0543 - val_loss: 14.1572 - val_mse: 13.6752
Epoch 367/1000
2/2 [==============================] - 0s 51ms/step - loss: 5.3390 - mse: 3.5136 - val_loss: 11.1964 - val_mse: 9.8813
Epoch 368/1000
2/2 [==============================] - 0s 51ms/step - loss: 8.0793 - mse: 6.0247 - val_loss: 9.7980 - val_mse: 8.6719
Epoch 369/1000
2/2 [==============================] - 0s 50ms/step - loss: 7.2211 - mse: 7.9854 - val_loss: 13.9634 - val_mse: 13.7148
Epoch 370/1000
2/2 [==============================] - 0s 53ms/step - loss: 3.9558 - mse: 3.9709 - val_loss: 8.8940 - val_mse: 8.3938
Epoch 371/1000
2/2 [==============================] - 0s 51ms/step - loss: 5.7410 - mse: 4.9497 - val_loss: 11.0815 - val_mse: 9.3364
Epoch 372/1000
2/2 [==============================] - 0s 52ms/step - loss: 9.3833 - mse: 8.2652 - val_loss: 19.2250 - val_mse: 20.1691
Epoch 373/1000
2/2 [==============================] - 0s 49ms/step - loss: 37.2049 - mse: 40.8118 - val_loss: 17.2852 - val_mse: 16.4940
Epoch 374/1000
2/2 [==============================] - 0s 52ms/step - loss: 5.8117 - mse: 4.9528 - val_loss: 9.5499 - val_mse: 7.1808
Epoch 375/1000
2/2 [==============================] - 0s 50ms/step - loss: 10.0411 - mse: 10.5173 - val_loss: 22.2746 - val_mse: 22.4322
Epoch 376/1000
2/2 [==============================] - 0s 51ms/step - loss: 10.5989 - mse: 11.1552 - val_loss: 26.3390 - val_mse: 29.6311
Epoch 377/1000
2/2 [==============================] - 0s 51ms/step - loss: 4.5499 - mse: 3.4870 - val_loss: 10.8694 - val_mse: 8.4323
Epoch 378/1000
2/2 [==============================] - 0s 51ms/step - loss: 12.1817 - mse: 13.2839 - val_loss: 14.3344 - val_mse: 12.0611
Epoch 379/1000
2/2 [==============================] - 0s 49ms/step - loss: 7.7407 - mse: 6.5042 - val_loss: 11.6781 - val_mse: 11.5089
Epoch 380/1000
2/2 [==============================] - 0s 49ms/step - loss: 4.2553 - mse: 3.3273 - val_loss: 16.5824 - val_mse: 14.4609
Epoch 381/1000
2/2 [==============================] - 0s 53ms/step - loss: 6.2813 - mse: 4.3487 - val_loss: 9.5733 - val_mse: 8.5488
Epoch 382/1000
2/2 [==============================] - 0s 51ms/step - loss: 9.2359 - mse: 7.9697 - val_loss: 11.3187 - val_mse: 9.1526
Epoch 383/1000
2/2 [==============================] - 0s 51ms/step - loss: 11.7133 - mse: 11.0717 - val_loss: 13.5270 - val_mse: 12.1931
Epoch 384/1000
2/2 [==============================] - 0s 53ms/step - loss: 6.7253 - mse: 4.2211 - val_loss: 10.2160 - val_mse: 8.0767
Epoch 385/1000
2/2 [==============================] - 0s 51ms/step - loss: 4.3419 - mse: 2.7444 - val_loss: 11.4899 - val_mse: 10.0237
Epoch 386/1000
2/2 [==============================] - 0s 52ms/step - loss: 11.7901 - mse: 9.6012 - val_loss: 12.9809 - val_mse: 11.2760
Epoch 387/1000
2/2 [==============================] - 0s 53ms/step - loss: 7.6470 - mse: 4.3678 - val_loss: 9.7123 - val_mse: 8.2159
[... Keras training log for epochs 388-862 elided: 2 steps per epoch at roughly 50 ms/step; loss fluctuates between about 0.9 and 25 and val_loss between about 5 and 44, with no clear sustained trend over this span ...]
Epoch 863/1000
2/2 [==============================] - 0s 50ms/step - loss: 6.4920 - mse: 4.5977 - val_loss: 8.1442 - val_mse: 7.6373
Epoch 864/1000
2/2 [==============================] - 0s 51ms/step - loss: 7.3353 - mse: 5.8433 - val_loss: 10.1309 - val_mse: 8.3911
Epoch 865/1000
2/2 [==============================] - 0s 56ms/step - loss: 3.2699 - mse: 3.1303 - val_loss: 9.9362 - val_mse: 6.8695
Epoch 866/1000
2/2 [==============================] - 0s 52ms/step - loss: 5.7374 - mse: 5.5408 - val_loss: 11.8373 - val_mse: 10.3158
Epoch 867/1000
2/2 [==============================] - 0s 52ms/step - loss: 8.3563 - mse: 5.1611 - val_loss: 14.4439 - val_mse: 14.9749
Epoch 868/1000
2/2 [==============================] - 0s 51ms/step - loss: 3.5103 - mse: 1.7853 - val_loss: 8.1406 - val_mse: 7.2100
Epoch 869/1000
2/2 [==============================] - 0s 50ms/step - loss: 4.4168 - mse: 4.5623 - val_loss: 10.9021 - val_mse: 8.6871
Epoch 870/1000
2/2 [==============================] - 0s 50ms/step - loss: 2.8782 - mse: 3.2242 - val_loss: 11.0309 - val_mse: 6.8315
Epoch 871/1000
2/2 [==============================] - 0s 50ms/step - loss: 5.9952 - mse: 4.5357 - val_loss: 10.3657 - val_mse: 9.1600
Epoch 872/1000
2/2 [==============================] - 0s 50ms/step - loss: 3.8131 - mse: 1.9175 - val_loss: 7.4694 - val_mse: 7.0736
Epoch 873/1000
2/2 [==============================] - 0s 51ms/step - loss: 2.6316 - mse: 1.4681 - val_loss: 9.3851 - val_mse: 9.9291
Epoch 874/1000
2/2 [==============================] - 0s 50ms/step - loss: 2.3627 - mse: 1.4976 - val_loss: 12.1398 - val_mse: 9.2522
Epoch 875/1000
2/2 [==============================] - 0s 51ms/step - loss: 1.8394 - mse: 0.8040 - val_loss: 13.1558 - val_mse: 13.5525
Epoch 876/1000
2/2 [==============================] - 0s 50ms/step - loss: 5.1378 - mse: 3.6902 - val_loss: 11.0675 - val_mse: 10.2811
Epoch 877/1000
2/2 [==============================] - 0s 50ms/step - loss: 3.7828 - mse: 1.7277 - val_loss: 8.9460 - val_mse: 8.6227
Epoch 878/1000
2/2 [==============================] - 0s 51ms/step - loss: 4.4957 - mse: 3.1693 - val_loss: 20.5597 - val_mse: 20.8759
Epoch 879/1000
2/2 [==============================] - 0s 51ms/step - loss: 2.5426 - mse: 0.9356 - val_loss: 14.9614 - val_mse: 16.3764
Epoch 880/1000
2/2 [==============================] - 0s 52ms/step - loss: 6.0041 - mse: 4.9290 - val_loss: 11.7847 - val_mse: 11.1238
Epoch 881/1000
2/2 [==============================] - 0s 54ms/step - loss: 2.6647 - mse: 2.3142 - val_loss: 7.0496 - val_mse: 8.1348
Epoch 882/1000
2/2 [==============================] - 0s 51ms/step - loss: 6.9228 - mse: 7.6630 - val_loss: 7.3104 - val_mse: 7.7630
Epoch 883/1000
2/2 [==============================] - 0s 50ms/step - loss: 8.9534 - mse: 7.3211 - val_loss: 10.4695 - val_mse: 8.6496
Epoch 884/1000
2/2 [==============================] - 0s 53ms/step - loss: 5.3957 - mse: 5.4935 - val_loss: 9.3247 - val_mse: 6.5228
Epoch 885/1000
2/2 [==============================] - 0s 52ms/step - loss: 3.5672 - mse: 1.3520 - val_loss: 19.7925 - val_mse: 19.1483
Epoch 886/1000
2/2 [==============================] - 0s 50ms/step - loss: 5.6009 - mse: 4.3880 - val_loss: 10.1744 - val_mse: 9.7519
Epoch 887/1000
2/2 [==============================] - 0s 50ms/step - loss: 1.6151 - mse: 1.3195 - val_loss: 11.7598 - val_mse: 10.9238
Epoch 888/1000
2/2 [==============================] - 0s 51ms/step - loss: 6.2027 - mse: 7.1812 - val_loss: 18.8690 - val_mse: 20.2530
Epoch 889/1000
2/2 [==============================] - 0s 57ms/step - loss: 6.8992 - mse: 6.5666 - val_loss: 16.9140 - val_mse: 16.2840
Epoch 890/1000
2/2 [==============================] - 0s 53ms/step - loss: 5.4451 - mse: 3.7394 - val_loss: 7.9818 - val_mse: 7.2034
Epoch 891/1000
2/2 [==============================] - 0s 51ms/step - loss: 1.9924 - mse: 0.9456 - val_loss: 9.1120 - val_mse: 7.1674
Epoch 892/1000
2/2 [==============================] - 0s 60ms/step - loss: 3.8441 - mse: 3.6501 - val_loss: 9.9001 - val_mse: 9.7790
Epoch 893/1000
2/2 [==============================] - 0s 62ms/step - loss: 5.7272 - mse: 3.3213 - val_loss: 7.2839 - val_mse: 8.5211
Epoch 894/1000
2/2 [==============================] - 0s 57ms/step - loss: 3.7489 - mse: 3.9421 - val_loss: 9.4874 - val_mse: 8.9050
Epoch 895/1000
2/2 [==============================] - 0s 52ms/step - loss: 3.6868 - mse: 2.7170 - val_loss: 11.5919 - val_mse: 10.1455
Epoch 896/1000
2/2 [==============================] - 0s 51ms/step - loss: 13.8405 - mse: 11.0098 - val_loss: 10.4305 - val_mse: 7.6272
Epoch 897/1000
2/2 [==============================] - 0s 52ms/step - loss: 3.9527 - mse: 2.4244 - val_loss: 10.9815 - val_mse: 7.3553
Epoch 898/1000
2/2 [==============================] - 0s 54ms/step - loss: 2.5094 - mse: 1.7655 - val_loss: 8.8600 - val_mse: 8.0064
Epoch 899/1000
2/2 [==============================] - 0s 51ms/step - loss: 2.3770 - mse: 1.9177 - val_loss: 13.9524 - val_mse: 11.9987
Epoch 900/1000
2/2 [==============================] - 0s 52ms/step - loss: 3.2817 - mse: 2.4515 - val_loss: 9.5283 - val_mse: 7.6258
Epoch 901/1000
2/2 [==============================] - 0s 51ms/step - loss: 8.2116 - mse: 6.5306 - val_loss: 11.2556 - val_mse: 9.6827
Epoch 902/1000
2/2 [==============================] - 0s 51ms/step - loss: 2.0242 - mse: 1.6540 - val_loss: 8.8584 - val_mse: 7.6837
Epoch 903/1000
2/2 [==============================] - 0s 51ms/step - loss: 3.3771 - mse: 2.9254 - val_loss: 15.4714 - val_mse: 16.9832
Epoch 904/1000
2/2 [==============================] - 0s 51ms/step - loss: 3.9960 - mse: 2.7794 - val_loss: 11.2228 - val_mse: 10.6429
Epoch 905/1000
2/2 [==============================] - 0s 50ms/step - loss: 18.2091 - mse: 16.2981 - val_loss: 9.8271 - val_mse: 8.9011
Epoch 906/1000
2/2 [==============================] - 0s 51ms/step - loss: 1.6638 - mse: 1.3936 - val_loss: 6.8676 - val_mse: 6.6722
Epoch 907/1000
2/2 [==============================] - 0s 51ms/step - loss: 10.6702 - mse: 9.7320 - val_loss: 7.7156 - val_mse: 7.4921
Epoch 908/1000
2/2 [==============================] - 0s 54ms/step - loss: 5.5052 - mse: 3.6771 - val_loss: 21.0071 - val_mse: 21.8939
Epoch 909/1000
2/2 [==============================] - 0s 51ms/step - loss: 16.2561 - mse: 17.1789 - val_loss: 10.8811 - val_mse: 8.8299
Epoch 910/1000
2/2 [==============================] - 0s 52ms/step - loss: 5.6725 - mse: 4.5527 - val_loss: 10.3543 - val_mse: 8.9769
Epoch 911/1000
2/2 [==============================] - 0s 52ms/step - loss: 3.4005 - mse: 0.7978 - val_loss: 7.7189 - val_mse: 7.5147
Epoch 912/1000
2/2 [==============================] - 0s 52ms/step - loss: 1.1546 - mse: 1.2638 - val_loss: 8.9349 - val_mse: 7.8573
Epoch 913/1000
2/2 [==============================] - 0s 51ms/step - loss: 2.7335 - mse: 1.1517 - val_loss: 14.1002 - val_mse: 14.6971
Epoch 914/1000
2/2 [==============================] - 0s 52ms/step - loss: 6.9186 - mse: 5.9809 - val_loss: 23.9731 - val_mse: 24.2244
Epoch 915/1000
2/2 [==============================] - 0s 58ms/step - loss: 4.4635 - mse: 2.7127 - val_loss: 9.3750 - val_mse: 7.1265
Epoch 916/1000
2/2 [==============================] - 0s 51ms/step - loss: 4.4657 - mse: 1.4657 - val_loss: 7.2181 - val_mse: 6.6694
Epoch 917/1000
2/2 [==============================] - 0s 50ms/step - loss: 11.4452 - mse: 12.8605 - val_loss: 10.1336 - val_mse: 7.5157
Epoch 918/1000
2/2 [==============================] - 0s 52ms/step - loss: 3.3592 - mse: 2.6570 - val_loss: 10.6289 - val_mse: 10.8718
Epoch 919/1000
2/2 [==============================] - 0s 50ms/step - loss: 4.3033 - mse: 4.1301 - val_loss: 8.9272 - val_mse: 7.3315
Epoch 920/1000
2/2 [==============================] - 0s 50ms/step - loss: 5.3474 - mse: 5.1353 - val_loss: 9.5689 - val_mse: 9.2559
Epoch 921/1000
2/2 [==============================] - 0s 50ms/step - loss: 3.4186 - mse: 2.7553 - val_loss: 9.7611 - val_mse: 7.6874
Epoch 922/1000
2/2 [==============================] - 0s 49ms/step - loss: 3.9347 - mse: 2.7421 - val_loss: 12.5532 - val_mse: 13.9629
Epoch 923/1000
2/2 [==============================] - 0s 49ms/step - loss: 6.2281 - mse: 4.1324 - val_loss: 14.7918 - val_mse: 14.1324
Epoch 924/1000
2/2 [==============================] - 0s 49ms/step - loss: 4.4365 - mse: 2.4722 - val_loss: 10.8089 - val_mse: 8.7212
Epoch 925/1000
2/2 [==============================] - 0s 49ms/step - loss: 5.6754 - mse: 4.5810 - val_loss: 29.1051 - val_mse: 30.7319
Epoch 926/1000
2/2 [==============================] - 0s 50ms/step - loss: 10.3365 - mse: 9.2165 - val_loss: 12.3372 - val_mse: 11.8092
Epoch 927/1000
2/2 [==============================] - 0s 51ms/step - loss: 3.5639 - mse: 2.4182 - val_loss: 12.0993 - val_mse: 7.7192
Epoch 928/1000
2/2 [==============================] - 0s 51ms/step - loss: 5.7111 - mse: 6.2468 - val_loss: 13.5308 - val_mse: 10.6847
Epoch 929/1000
2/2 [==============================] - 0s 51ms/step - loss: 1.8325 - mse: 1.7695 - val_loss: 10.0080 - val_mse: 10.0159
Epoch 930/1000
2/2 [==============================] - 0s 51ms/step - loss: 4.9965 - mse: 4.2486 - val_loss: 8.0841 - val_mse: 8.8696
Epoch 931/1000
2/2 [==============================] - 0s 54ms/step - loss: 3.4979 - mse: 1.4661 - val_loss: 11.2338 - val_mse: 11.1322
Epoch 932/1000
2/2 [==============================] - 0s 51ms/step - loss: 9.8693 - mse: 10.6903 - val_loss: 9.9302 - val_mse: 7.8492
###Markdown
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
###Code
feature_list = student_categorical_col_list + student_numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhat = diabetes_model(diabetes_x_tst)
preds = diabetes_model.predict(diabetes_test_ds)
from student_utils import get_mean_std_from_preds
importlib.reload(student_utils)
m, s = get_mean_std_from_preds(diabetes_yhat)
###Output
_____no_output_____
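###Markdown
The get_mean_std_from_preds helper lives in student_utils.py and is not reproduced in this notebook. As a reference, a minimal sketch of what it could look like is below, assuming 'diabetes_yhat' is the tfp.distributions.Normal object returned by the model's DistributionLambda output layer; the function name here is illustrative, not the graded implementation.
###Code
def get_mean_std_from_preds_sketch(diabetes_yhat):
    # diabetes_yhat: the TF Probability distribution returned by calling the model
    m = diabetes_yhat.mean()    # central prediction for each patient
    s = diabetes_yhat.stddev()  # uncertainty range for each prediction
    return m, s
###Output
_____no_output_____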
###Markdown
Show Prediction Output
###Code
prob_outputs = {
"pred": preds.flatten(),
"actual_value": d_test['time_in_hospital'].values,
"pred_mean": m.numpy().flatten(),
"pred_std": s.numpy().flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df.head()
###Output
_____no_output_____
###Markdown
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label for whether the patient meets the time criteria or does not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based on whether the prediction meets or does not meet the criteria.
###Code
from student_utils import get_student_binary_prediction
importlib.reload(student_utils)
student_binary_prediction = get_student_binary_prediction(prob_output_df, 'pred_mean')
###Output
_____no_output_____
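###Markdown
The get_student_binary_prediction helper is defined in student_utils.py and not shown here. A minimal sketch of the idea is below, assuming the same 5-day criterion that defines the 'label_value' field in the next cell; the function name and the exact threshold are assumptions for illustration.
###Code
import numpy as np
def get_student_binary_prediction_sketch(df, col, threshold=5):
    # df: dataframe with the prediction columns (e.g. prob_output_df)
    # col: column holding the mean prediction (e.g. 'pred_mean')
    # returns a numpy array of 1/0 labels for meeting the time criterion
    return np.where(df[col] >= threshold, 1, 0)
###Output
_____no_output_____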
###Markdown
Add Binary Prediction to Test Dataframe Using the student_binary_prediction output, a numpy array of binary labels, we can add the predictions to a dataframe to better visualize the results and to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label for the predictions (called the 'score' field) and the actual value (called 'label_value').
###Code
def add_pred_to_test(test_df, pred_np, demo_col_list):
for c in demo_col_list:
test_df[c] = test_df[c].astype(str)
test_df['score'] = pred_np
test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >=5 else 0)
return test_df
pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()
###Output
_____no_output_____
###Markdown
Model Evaluation Metrics **Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score (weighted), and class precision and recall scores. For the report please be sure to include the following parts: - With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model. - What are some areas of improvement for future iterations?
###Code
# AUC, F1, precision and recall
# Summary
from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score
# sklearn metric functions expect the ground truth (label_value) first and the predictions (score) second
print('ROC-AUC:', round(roc_auc_score(pred_test_df['label_value'], pred_test_df['score']), 3))
print('F1-Score:', round(f1_score(pred_test_df['label_value'], pred_test_df['score']), 3))
print('Precision:', round(precision_score(pred_test_df['label_value'], pred_test_df['score']), 3))
print('Recall:', round(recall_score(pred_test_df['label_value'], pred_test_df['score']), 3))
###Output
ROC-AUC: 0.741
F1-Score: 0.638
Precision: 0.6
Recall: 0.682
###Markdown
Precision measures how many of the patients we flag for the trial are truly long-stay patients, while recall measures how many of the truly long-stay patients we manage to flag. We must take the precision-recall trade-off into account: as precision increases, recall decreases and vice versa, so we need to decide which metric to rely on. In this case, precision and recall have very similar values. There are still things to improve in our model. Some areas of improvement are: - Trying other optimizers - Modifying the patience and epoch variables - Modifying the learning rate - Modifying the selected features - Modifying the layers of the model 7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
###Code
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness
ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
###Output
/opt/conda/lib/python3.7/site-packages/aequitas/group.py:143: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df['score'] = df['score'].astype(float)
/opt/conda/lib/python3.7/site-packages/aequitas/group.py:30: FutureWarning: The pandas.np module is deprecated and will be removed from pandas in a future version. Import numpy directly instead
divide = lambda x, y: x / y if y != 0 else pd.np.nan
###Markdown
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
###Code
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
original_df=ae_df,
ref_groups_dict={'race':'Caucasian', 'gender':'Male'
},
alpha=0.05,
check_significance=False)
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
###Output
get_disparity_predefined_group()
###Markdown
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.
###Code
# Plot two metrics
p = aqp.plot_group_metric_all(xtab, metrics=['tpr', 'fpr', 'tnr', 'fnr'], ncols=4)
# Is there significant bias in your model for either race or gender?
###Output
_____no_output_____
###Markdown
Race comments: - TPR: higher for Hispanic and African American patients than for Caucasian patients - FPR: Hispanics show a higher false positive rate - TNR: non-Hispanic groups fall mostly into this category - FNR: higher for Caucasian patients. Gender comments: the metrics show little bias except for FPR and TNR. Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.
###Code
fpr_fairness = aqp.plot_fairness_group(fdf, group_metric='fpr', title=True)
tpr_disparity_race = aqp.plot_fairness_disparity(fdf, group_metric='tpr', attribute_name='race')
tpr_disparity_gender = aqp.plot_fairness_disparity(fdf, group_metric='tpr', attribute_name='gender')
###Output
_____no_output_____
###Markdown
Overview 1. Project Instructions & Prerequisites 2. Learning Objectives 3. Data Preparation 4. Create Categorical Features with TF Feature Columns 5. Create Continuous/Numerical Features with TF Feature Columns 6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers 7. Evaluating Potential Model Biases with Aequitas Toolkit 1. Project Instructions & Prerequisites Project Instructions **Context**: EHR data is becoming a key source of real-world evidence (RWE) for the pharmaceutical industry and regulators to [make decisions on clinical trials](https://www.fda.gov/news-events/speeches-fda-officials/breaking-down-barriers-between-clinical-trials-and-clinical-care-incorporating-real-world-evidence). You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a unique and sensitive drug that requires administering the drug over at least 5-7 days in the hospital, with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which types of patients the company should focus their efforts on when testing this drug. Target patients are people who are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. **Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset (denormalized at the line level, with augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial. This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas on which your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there is a limited number of publicly available datasets, and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine (https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes, which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema** The dataset reference information can be found at https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Project Submission When submitting this project, make sure to run all the cells before saving the notebook. 
Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "utils.py" and "student_utils.py" files in your submission. The student_utils.py file should contain most of the code that you write, and the summary and text explanations should be written inline in the notebook. Once you download these files, compress them into one zip file for submission. Prerequisites - Intermediate level knowledge of Python - Basic knowledge of probability and statistics - Basic knowledge of machine learning concepts - Installation of Tensorflow 2.0 and other dependencies (conda environment.yml or virtualenv requirements.txt file provided) Environment Setup For step-by-step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/README.md. 2. Learning Objectives By the end of the project, you will be able to - Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels (longitudinal) - Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis. - Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings - Create derived features (bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features - Use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions - Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework 3. Data Preparation
###Code
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import pandas as pd
import aequitas as ae
import seaborn as sns
# Put all of the helper functions in utils
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
pd.set_option('display.max_columns', 500)
# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload
#OPEN ISSUE ON MAC OSX for TF model training
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
###Output
_____no_output_____
###Markdown
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/
###Code
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path)
###Output
_____no_output_____
###Markdown
Determine Level of Dataset (Line or Encounter)
###Code
df.head()
df.columns
df.encounter_id.nunique()
df.patient_nbr.nunique()
df.shape[0]
###Output
_____no_output_____
###Markdown
**Question 1**: Based off of the analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked. **Student Response:** As shown above, encounter_id != total rows, so the given data is at the line level. We could also aggregate on primary_diagnosis_code, as it indicates the patient's disease code. Analyze Dataset **Question 2**: Utilizing the library of your choice (recommend Pandas and Seaborn or matplotlib though), perform exploratory data analysis on the dataset. In particular be sure to address the following questions: - a. Field(s) with a high amount of missing/zero values - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian (normal) distribution shape? - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature) - d. Please describe the demographic distributions in the dataset for the age and gender fields. **Student**: **-a**: only the ndc_code column has missing values. **-b**: None of the numerical fields has an exactly normal distribution, but "num_lab_procedures" is approximately normal, deviating at some points. **-c**: other_diagnosis_codes, primary_diagnosis_code and ndc_code have high cardinality, because they contain many different codes for the identified diseases. **-d**: There are slightly more females than males who are hospitalized, and the 60-90 age groups are the most affected.
###Code
df.describe().transpose()
df.count()
numeric_field = [c for c in df.columns if df[c].dtype == "int64"]
numeric_field
for c in numeric_field:
sns.distplot(df[c], kde=False)
plt.title(c)
plt.show()
df.age.value_counts().plot(kind='bar')
df.gender.unique()
df.gender.value_counts().plot(kind="bar")
pd.DataFrame({'cardinality': df.nunique() } )
###Output
_____no_output_____
###Markdown
**OPTIONAL**: Use the Tensorflow Data Validation and Analysis library to complete this step. - The Tensorflow Data Validation and Analysis library (https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. - Note that there are some bugs that are still being resolved with Chrome v80 and we have moved away from using this for the project.
###Code
# ######NOTE: The visualization will only display in Chrome browser. ########
# full_data_stats = tfdv.generate_statistics_from_csv(data_location='./data/final_project_dataset.csv')
# tfdv.visualize_statistics(full_data_stats)
###Output
_____no_output_____
###Markdown
Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site (https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
###Code
#NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)
from student_utils import reduce_dimension_ndc
df.ndc_code
ndc_code_df.head()
ndc_code_df
reduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)
reduce_dim_df
# Number of unique values should be less for the new output field
assert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
###Output
_____no_output_____
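###Markdown
The reduce_dimension_ndc function comes from student_utils.py and is not shown in this notebook. A rough sketch of the merge-based approach is below; the lookup-file column names ('NDC_Code' and 'Non-proprietary Name') are assumptions for illustration and may differ from the actual file.
###Code
def reduce_dimension_ndc_sketch(df, ndc_df):
    # df: line-level dataset with an ndc_code field
    # ndc_df: NDC lookup table mapping codes to generic (non-proprietary) drug names
    lookup = ndc_df[['NDC_Code', 'Non-proprietary Name']].rename(
        columns={'Non-proprietary Name': 'generic_drug_name'})
    output_df = df.merge(lookup, how='left', left_on='ndc_code', right_on='NDC_Code')
    return output_df.drop(columns=['NDC_Code'])
###Output
_____no_output_____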
###Markdown
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
###Code
from student_utils import select_first_encounter
first_encounter_df = select_first_encounter(reduce_dim_df)
# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))
# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))
original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()
# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
###Output
Number of unique patients:71518
Number of unique encounters:71518
Tests passed!!
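###Markdown
The select_first_encounter function is implemented in student_utils.py and not shown here. Under the stated assumption that numerical order on encounter_id gives the time order, a minimal sketch could look like the following; the function name is illustrative.
###Code
def select_first_encounter_sketch(df):
    # earliest encounter_id per patient, assuming numerical order == time order
    first_encounter_ids = df.groupby('patient_nbr')['encounter_id'].min().values
    return df[df['encounter_id'].isin(first_encounter_ids)].reset_index(drop=True)
###Output
_____no_output_____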
###Markdown
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset' function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those as input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course.
###Code
exclusion_list = ['generic_drug_name']
grouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]
agg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')
len(grouping_field_list)
agg_drug_df['patient_nbr'].nunique()
agg_drug_df['encounter_id'].nunique()
len(agg_drug_df)
assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()
###Output
_____no_output_____
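###Markdown
The provided 'aggregate_dataset' helper lives in utils.py and is not reproduced in this notebook. A rough sketch of the idea it implements (one dummy column per generic drug name, collapsed to one row per encounter) is shown below for reference; the details of the real helper may differ.
###Code
import pandas as pd
def aggregate_dataset_sketch(df, grouping_field_list, array_field):
    # one-hot the drug name at the line level
    dummies = pd.get_dummies(df[array_field], prefix=array_field)
    dummy_col_list = list(dummies.columns)
    concat_df = pd.concat([df[grouping_field_list], dummies], axis=1)
    # collapse line-level rows to one row per encounter, keeping any drug given
    agg_df = concat_df.groupby(grouping_field_list, as_index=False)[dummy_col_list].max()
    return agg_df, dummy_col_list
###Output
_____no_output_____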
###Markdown
Prepare Fields and Cast Dataset Feature Selection
###Code
df.weight.value_counts()
df.payer_code.value_counts()
df.medical_specialty.value_counts()
###Output
_____no_output_____
###Markdown
**Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice. Student response: From the cells above, the weight field has 139122 missing values, so it is fine to exclude it from model training. The payer_code field also has 54190 missing values, and excluding it should not much affect our training process because it does not help to identify time_in_hospital.
###Code
numeric_field
categorical_field = df.columns.drop(numeric_field)
categorical_field
'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required demographic features for the model
and I have inserted a list with them already in the categorical list.
These will be required for later steps when analyzing data splits and model biases.
'''
required_demo_col_list = ['race', 'gender', 'age']
student_categorical_col_list = [ "ndc_code", "readmitted", 'admission_type_id', 'discharge_disposition_id',
'max_glu_serum', 'admission_source_id', 'A1Cresult', 'primary_diagnosis_code',
'other_diagnosis_codes', 'change'] + required_demo_col_list + ndc_col_list
student_numerical_col_list = [ "num_procedures", "num_medications", 'number_diagnoses']
PREDICTOR_FIELD = 'time_in_hospital'
def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):
selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
return agg_drug_df[selected_col_list]
selected_features_df = select_model_features(agg_drug_df, student_categorical_col_list, student_numerical_col_list,
PREDICTOR_FIELD)
###Output
_____no_output_____
###Markdown
Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it?
###Code
processed_df = preprocess_df(selected_features_df, student_categorical_col_list,
student_numerical_col_list, PREDICTOR_FIELD, categorical_impute_value='nan', numerical_impute_value=0)
###Output
/home/workspace/starter_code/utils.py:29: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[predictor] = df[predictor].astype(float)
/home/workspace/starter_code/utils.py:31: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[c] = cast_df(df, c, d_type=str)
/home/workspace/starter_code/utils.py:33: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[numerical_column] = impute_df(df, numerical_column, numerical_impute_value)
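###Markdown
The preprocess_df helper is provided in utils.py and not shown here. Based on the casting and imputation strategy described above, a rough sketch of what the provided helper does is shown below; the real utils.py implementation may differ in its details.
###Code
def preprocess_df_sketch(df, categorical_col_list, numerical_col_list, predictor,
                         categorical_impute_value='nan', numerical_impute_value=0):
    df = df.copy()
    # cast the label to float for the regression model
    df[predictor] = df[predictor].astype(float)
    # cast categorical features to strings, filling gaps with a sentinel value
    for c in categorical_col_list:
        df[c] = df[c].fillna(categorical_impute_value).astype(str)
    # impute zero for missing numerical features (a deliberately simple strategy)
    for c in numerical_col_list:
        df[c] = df[c].fillna(numerical_impute_value).astype(float)
    return df
###Output
_____no_output_____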
###Markdown
Split Dataset into Train, Validation, and Test Partitions **Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidentally leak across partitions. Please complete the function below to split the input dataset into three partitions (train, validation, test) with the following requirements. - Approximately 60%/20%/20% train/validation/test split - Randomly sample different patients into each data partition - **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage. - Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset - Total number of rows in original dataset = sum of rows across all three dataset partitions
###Code
processed_df = pd.DataFrame(processed_df)
processed_df
from student_utils import patient_dataset_splitter
d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')
assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")
assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == agg_drug_df['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
###Output
Test passed for number of unique patients being equal!
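###Markdown
The patient_dataset_splitter function is implemented in student_utils.py and not shown in this notebook. A minimal sketch that satisfies the requirements above (sampling unique patients rather than rows, roughly 60/20/20, with no patient in more than one partition) is given below; the function name and the random seed are illustrative.
###Code
import numpy as np
def patient_dataset_splitter_sketch(df, patient_key='patient_nbr'):
    # shuffle unique patients (not rows) so no patient ends up in two partitions
    rng = np.random.RandomState(42)
    unique_patients = rng.permutation(df[patient_key].unique())
    n = len(unique_patients)
    train_ids = unique_patients[:int(0.6 * n)]
    val_ids = unique_patients[int(0.6 * n):int(0.8 * n)]
    test_ids = unique_patients[int(0.8 * n):]
    d_train = df[df[patient_key].isin(train_ids)].reset_index(drop=True)
    d_val = df[df[patient_key].isin(val_ids)].reset_index(drop=True)
    d_test = df[df[patient_key].isin(test_ids)].reset_index(drop=True)
    return d_train, d_val, d_test
###Output
_____no_output_____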
###Markdown
Demographic Representation Analysis of Split After the split, we should check to see the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions?
###Code
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
###Output
time_in_hospital
1.0 1464
2.0 1866
3.0 1989
4.0 1454
5.0 1052
6.0 806
7.0 643
8.0 465
9.0 293
10.0 246
11.0 187
12.0 148
13.0 136
14.0 105
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
###Code
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')
# Training partition
show_group_stats_viz(d_train, 'gender')
# Test partition
show_group_stats_viz(d_test, 'gender')
###Output
gender
Female 5778
Male 5076
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Convert Dataset Splits to TF Dataset We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method; for larger datasets, the 'make_csv_dataset' method is recommended - https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
###Code
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)
# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]
def demo(feature_column, example_batch):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch))
###Output
_____no_output_____
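###Markdown
The provided df_to_dataset helper in utils.py is not shown here; it most likely follows the standard TF pattern of building a tf.data.Dataset from a dictionary of feature columns plus the label. A minimal sketch of that pattern, under that assumption, is below.
###Code
import tensorflow as tf
def df_to_dataset_sketch(df, predictor, batch_size=32):
    df = df.copy()
    labels = df.pop(predictor)
    # dictionary of feature columns + label tensor, shuffled and batched
    ds = tf.data.Dataset.from_tensor_slices((dict(df), labels))
    ds = ds.shuffle(buffer_size=len(df)).batch(batch_size)
    return ds
###Output
_____no_output_____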
###Markdown
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field that are from the **training** dataset. Below we have provided a function that you can use that only requires providing the pandas train dataset partition and the list of the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
###Code
vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)
###Output
_____no_output_____
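###Markdown
The provided build_vocab_files helper in utils.py is not reproduced here. A rough sketch of the idea, writing one vocabulary text file per categorical column from the training partition, is shown below; the directory path matches the one visible in the feature-column output further down, but the remaining details are assumptions.
###Code
import os
import pandas as pd
def build_vocab_files_sketch(df, categorical_col_list, vocab_dir='./diabetes_vocab/'):
    os.makedirs(vocab_dir, exist_ok=True)
    vocab_files_list = []
    for c in categorical_col_list:
        v_file = os.path.join(vocab_dir, c + "_vocab.txt")
        # unique values from the *training* data only, one value per line
        pd.Series(df[c].unique()).to_csv(v_file, index=False, header=False)
        vocab_files_list.append(v_file)
    return vocab_files_list
###Output
_____no_output_____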
###Markdown
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
###Code
from student_utils import create_tf_categorical_feature_cols
tf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)
test_cat_var1 = tf_cat_col_list[0]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
###Output
Example categorical field:
IndicatorColumn(categorical_column=VocabularyFileCategoricalColumn(key='ndc_code', vocabulary_file='./diabetes_vocab/ndc_code_vocab.txt', vocabulary_size=236, num_oov_buckets=1, dtype=tf.string, default_value=-1))
WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4267: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4322: VocabularyFileCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
tf.Tensor(
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 1. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]], shape=(128, 237), dtype=float32)
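###Markdown
The create_tf_categorical_feature_cols function comes from student_utils.py and is not shown in this notebook. Judging from the IndicatorColumn/VocabularyFileCategoricalColumn output above, it pairs a vocabulary-file categorical column with an indicator (one-hot) wrapper; a sketch along those lines is below, with the directory path and OOV bucket count taken from that output and the rest treated as illustrative.
###Code
import os
import tensorflow as tf
def create_tf_categorical_feature_cols_sketch(categorical_col_list, vocab_dir='./diabetes_vocab/'):
    output_tf_list = []
    for c in categorical_col_list:
        vocab_file_path = os.path.join(vocab_dir, c + "_vocab.txt")
        cat_col = tf.feature_column.categorical_column_with_vocabulary_file(
            key=c, vocabulary_file=vocab_file_path, num_oov_buckets=1)
        # one-hot wrapper so the column can be fed into a DenseFeatures layer
        output_tf_list.append(tf.feature_column.indicator_column(cat_col))
    return output_tf_list
###Output
_____no_output_____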
###Markdown
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API (https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help, as well as the 'calculate_stats_from_train_data' function.
###Code
from student_utils import create_tf_numeric_feature
###Output
_____no_output_____
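###Markdown
The create_tf_numeric_feature function is defined in student_utils.py and not shown here. The NumericColumn output further down suggests a numeric_column with a z-score normalizer_fn built via functools.partial; a sketch consistent with that is below, to be treated as illustrative rather than the graded implementation.
###Code
import functools
import tensorflow as tf
def normalize_numeric_with_zscore_sketch(col, mean, std):
    # standard z-score scaling
    return (col - mean) / std
def create_tf_numeric_feature_sketch(col, mean, std, default_value=0):
    normalizer = functools.partial(normalize_numeric_with_zscore_sketch, mean=mean, std=std)
    return tf.feature_column.numeric_column(
        key=col, default_value=default_value, normalizer_fn=normalizer, dtype=tf.float64)
###Output
_____no_output_____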
###Markdown
For simplicity, the create_tf_numerical_feature_cols function below uses the same normalizer function across all features (z-score normalization), but if you have time feel free to analyze and adapt the normalizer based off the statistical distributions. You may find this a good resource in determining which transformation fits best for the data: https://developers.google.com/machine-learning/data-prep/transform/normalization.
###Code
def calculate_stats_from_train_data(df, col):
mean = df[col].describe()['mean']
std = df[col].describe()['std']
return mean, std
def create_tf_numerical_feature_cols(numerical_col_list, train_df):
tf_numeric_col_list = []
for c in numerical_col_list:
mean, std = calculate_stats_from_train_data(train_df, c)
tf_numeric_feature = create_tf_numeric_feature(c, mean, std)
tf_numeric_col_list.append(tf_numeric_feature)
return tf_numeric_col_list
tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)
test_cont_var1 = tf_cont_col_list[0]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
###Output
Example continuous field:
NumericColumn(key='num_procedures', shape=(1,), default_value=(0,), dtype=tf.float64, normalizer_fn=functools.partial(<function normalize_numeric_with_zscore at 0x7fd9cf3a30e0>, mean=1.4190338728004177, std=1.7661595449689889))
tf.Tensor(
[[-1.]
[-1.]
[-1.]
[ 3.]
[-1.]
[ 0.]
[ 1.]
[ 2.]
[ 1.]
[ 3.]
[ 0.]
[-1.]
[ 4.]
[ 1.]
[-1.]
[ 1.]
[ 0.]
[-1.]
[-1.]
[ 0.]
[ 1.]
[-1.]
[-1.]
[-1.]
[ 0.]
[-1.]
[ 5.]
[ 0.]
[-1.]
[ 0.]
[ 1.]
[-1.]
[ 2.]
[-1.]
[ 0.]
[-1.]
[-1.]
[-1.]
[ 1.]
[-1.]
[-1.]
[ 0.]
[-1.]
[-1.]
[-1.]
[ 0.]
[-1.]
[ 2.]
[-1.]
[ 0.]
[-1.]
[ 0.]
[ 5.]
[-1.]
[ 0.]
[ 0.]
[ 2.]
[ 0.]
[-1.]
[ 2.]
[-1.]
[ 2.]
[ 5.]
[-1.]
[ 0.]
[ 1.]
[-1.]
[ 1.]
[ 1.]
[ 0.]
[ 1.]
[-1.]
[ 2.]
[-1.]
[ 0.]
[-1.]
[ 2.]
[ 0.]
[-1.]
[-1.]
[ 5.]
[-1.]
[-1.]
[-1.]
[-1.]
[ 0.]
[ 3.]
[ 1.]
[-1.]
[ 0.]
[ 2.]
[ 2.]
[ 4.]
[-1.]
[ 5.]
[-1.]
[ 5.]
[ 0.]
[-1.]
[ 0.]
[-1.]
[-1.]
[ 0.]
[ 0.]
[ 2.]
[ 3.]
[ 3.]
[-1.]
[-1.]
[-1.]
[ 2.]
[ 0.]
[ 2.]
[ 0.]
[-1.]
[-1.]
[-1.]
[ 1.]
[ 1.]
[ 0.]
[-1.]
[ 0.]
[-1.]
[-1.]
[ 3.]
[ 3.]
[-1.]
[-1.]], shape=(128, 1), dtype=float32)
###Markdown
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
###Code
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
###Output
_____no_output_____
###Markdown
Build Sequential API Model from DenseFeatures and TF Probability Layers Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish. **OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.
###Code
def build_sequential_model(feature_layer):
model = tf.keras.Sequential([
feature_layer,
tf.keras.layers.Dense(150, activation='relu'),
tf.keras.layers.Dense(75, activation='relu'),
tfp.layers.DenseVariational(1+1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t:tfp.distributions.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:])
)
),
])
return model
def build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mse'):
model = build_sequential_model(feature_layer)
model.compile(optimizer='rmsprop', loss=loss_metric, metrics=[loss_metric])
early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=3)
history = model.fit(train_ds, validation_data=val_ds,
callbacks=[early_stop],
epochs=epochs, verbose=1)
return model, history
diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=500)
###Output
Train for 255 steps, validate for 85 steps
Epoch 1/500
255/255 [==============================] - 16s 63ms/step - loss: 28.7022 - mse: 28.6230 - val_loss: 22.7435 - val_mse: 22.4548
Epoch 2/500
255/255 [==============================] - 10s 40ms/step - loss: 18.3928 - mse: 17.7504 - val_loss: 17.8175 - val_mse: 17.3194
Epoch 3/500
255/255 [==============================] - 11s 41ms/step - loss: 15.9209 - mse: 15.1019 - val_loss: 15.0507 - val_mse: 14.1956
Epoch 4/500
255/255 [==============================] - 11s 41ms/step - loss: 12.7872 - mse: 11.9589 - val_loss: 11.5684 - val_mse: 10.5219
Epoch 5/500
255/255 [==============================] - 10s 40ms/step - loss: 11.9249 - mse: 11.0861 - val_loss: 12.0692 - val_mse: 10.9643
Epoch 6/500
255/255 [==============================] - 10s 41ms/step - loss: 11.0035 - mse: 9.8640 - val_loss: 10.8800 - val_mse: 9.8412
Epoch 7/500
255/255 [==============================] - 11s 42ms/step - loss: 10.0939 - mse: 9.1671 - val_loss: 11.5913 - val_mse: 10.7791
Epoch 8/500
255/255 [==============================] - 10s 40ms/step - loss: 10.0236 - mse: 9.1841 - val_loss: 9.3778 - val_mse: 8.4103
Epoch 9/500
255/255 [==============================] - 10s 40ms/step - loss: 9.6047 - mse: 8.6308 - val_loss: 9.4612 - val_mse: 8.6677
Epoch 10/500
255/255 [==============================] - 10s 41ms/step - loss: 8.7655 - mse: 7.9002 - val_loss: 9.7647 - val_mse: 8.8000
Epoch 11/500
255/255 [==============================] - 10s 41ms/step - loss: 9.0500 - mse: 8.2779 - val_loss: 8.3894 - val_mse: 7.6811
Epoch 12/500
255/255 [==============================] - 10s 40ms/step - loss: 8.4610 - mse: 7.5809 - val_loss: 9.4741 - val_mse: 8.5107
Epoch 13/500
255/255 [==============================] - 10s 40ms/step - loss: 8.6067 - mse: 7.8350 - val_loss: 9.3030 - val_mse: 8.6363
Epoch 14/500
255/255 [==============================] - 10s 40ms/step - loss: 8.3046 - mse: 7.4487 - val_loss: 8.4526 - val_mse: 7.6111
Epoch 15/500
255/255 [==============================] - 10s 40ms/step - loss: 8.1174 - mse: 7.2388 - val_loss: 7.7185 - val_mse: 6.9114
Epoch 16/500
255/255 [==============================] - 10s 40ms/step - loss: 7.6876 - mse: 6.7407 - val_loss: 8.2381 - val_mse: 7.4335
Epoch 17/500
255/255 [==============================] - 10s 40ms/step - loss: 7.8874 - mse: 7.1151 - val_loss: 8.0016 - val_mse: 7.2824
Epoch 18/500
255/255 [==============================] - 10s 40ms/step - loss: 7.9888 - mse: 7.0106 - val_loss: 8.4903 - val_mse: 7.5617
Epoch 19/500
255/255 [==============================] - 11s 41ms/step - loss: 7.5901 - mse: 6.6981 - val_loss: 8.3119 - val_mse: 7.2685
Epoch 20/500
255/255 [==============================] - 10s 40ms/step - loss: 7.5137 - mse: 6.5327 - val_loss: 8.3361 - val_mse: 7.5012
Epoch 21/500
255/255 [==============================] - 10s 40ms/step - loss: 7.0036 - mse: 6.1970 - val_loss: 8.1574 - val_mse: 7.2704
Epoch 22/500
255/255 [==============================] - 10s 39ms/step - loss: 7.2968 - mse: 6.4379 - val_loss: 7.7288 - val_mse: 6.9543
Epoch 23/500
255/255 [==============================] - 10s 40ms/step - loss: 7.3476 - mse: 6.3918 - val_loss: 8.0169 - val_mse: 6.9789
Epoch 24/500
255/255 [==============================] - 10s 40ms/step - loss: 7.0438 - mse: 6.1461 - val_loss: 8.2690 - val_mse: 7.4478
Epoch 25/500
255/255 [==============================] - 10s 41ms/step - loss: 6.8019 - mse: 5.9874 - val_loss: 7.6600 - val_mse: 7.0440
Epoch 26/500
255/255 [==============================] - 10s 40ms/step - loss: 6.9808 - mse: 6.0505 - val_loss: 7.8186 - val_mse: 6.9690
Epoch 27/500
255/255 [==============================] - 10s 39ms/step - loss: 6.7483 - mse: 5.9552 - val_loss: 8.1165 - val_mse: 7.0183
Epoch 28/500
255/255 [==============================] - 10s 41ms/step - loss: 6.9448 - mse: 5.9174 - val_loss: 7.9773 - val_mse: 6.9617
Epoch 29/500
255/255 [==============================] - 10s 40ms/step - loss: 6.8959 - mse: 5.9611 - val_loss: 7.8916 - val_mse: 7.0629
Epoch 30/500
255/255 [==============================] - 10s 40ms/step - loss: 6.6495 - mse: 5.7519 - val_loss: 8.0830 - val_mse: 7.0001
Epoch 31/500
255/255 [==============================] - 10s 40ms/step - loss: 6.6016 - mse: 5.4636 - val_loss: 7.1506 - val_mse: 6.5479
Epoch 32/500
255/255 [==============================] - 10s 40ms/step - loss: 6.5351 - mse: 5.6259 - val_loss: 8.1337 - val_mse: 7.2529
Epoch 33/500
255/255 [==============================] - 10s 41ms/step - loss: 6.3288 - mse: 5.4803 - val_loss: 7.8403 - val_mse: 6.9373
Epoch 34/500
255/255 [==============================] - 10s 41ms/step - loss: 6.4700 - mse: 5.4447 - val_loss: 7.9134 - val_mse: 6.7548
Epoch 35/500
255/255 [==============================] - 11s 41ms/step - loss: 6.4675 - mse: 5.4433 - val_loss: 8.5100 - val_mse: 7.2290
Epoch 36/500
255/255 [==============================] - 10s 41ms/step - loss: 6.0392 - mse: 5.1505 - val_loss: 7.9672 - val_mse: 6.9001
Epoch 37/500
255/255 [==============================] - 10s 41ms/step - loss: 6.0484 - mse: 5.1757 - val_loss: 8.1389 - val_mse: 7.0629
Epoch 38/500
255/255 [==============================] - 10s 40ms/step - loss: 6.1684 - mse: 5.2663 - val_loss: 8.2631 - val_mse: 7.2070
Epoch 39/500
255/255 [==============================] - 10s 41ms/step - loss: 6.1091 - mse: 5.1774 - val_loss: 8.3987 - val_mse: 7.3687
###Markdown
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
###Code
feature_list = student_categorical_col_list + student_numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhat = diabetes_model(diabetes_x_tst)
preds = diabetes_model.predict(diabetes_test_ds)
preds
from student_utils import get_mean_std_from_preds
m, s = get_mean_std_from_preds(diabetes_yhat)
###Output
_____no_output_____
###Markdown
Show Prediction Output
###Code
prob_outputs = {
"pred": preds.flatten(),
"actual_value": d_test['time_in_hospital'].values,
"pred_mean": m.numpy().flatten(),
"pred_std": s.numpy().flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df.head()
###Output
_____no_output_____
###Markdown
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label for whether the patient meets the time criteria or does not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based on whether the prediction meets or does not meet the criteria.
###Code
from student_utils import get_student_binary_prediction
student_binary_prediction = get_student_binary_prediction(prob_output_df, 'pred_mean')
###Output
_____no_output_____
###Markdown
Add Binary Prediction to Test Dataframe Using the student_binary_prediction output, a numpy array of binary labels, we can add the predictions to a dataframe to better visualize the results and to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label for the predictions (called the 'score' field) and the actual value (called 'label_value').
###Code
def add_pred_to_test(test_df, pred_np, demo_col_list):
for c in demo_col_list:
test_df[c] = test_df[c].astype(str)
test_df['score'] = pred_np
test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >=5 else 0)
return test_df
pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()
###Output
_____no_output_____
###Markdown
Model Evaluation Metrics **Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score (weighted), and class precision and recall scores. For the report please be sure to include the following parts:- With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model.- What are some areas of improvement for future iterations?
###Code
from sklearn.metrics import confusion_matrix
print(confusion_matrix(pred_test_df['label_value'], pred_test_df['score']))
# AUC, F1, precision and recall
# Summary
from sklearn.metrics import classification_report
print(classification_report(pred_test_df['label_value'], pred_test_df['score']))
from sklearn.metrics import auc, f1_score, roc_auc_score, recall_score, precision_score
print("AUC score : ",roc_auc_score(pred_test_df['label_value'], pred_test_df['score']))
print("F1 score : ", f1_score(pred_test_df['label_value'], pred_test_df['score'], average='weighted'))
print("Precision score: ", precision_score(pred_test_df['label_value'], pred_test_df['score'], average='micro'))
print("Recall score : ", recall_score(pred_test_df['label_value'], pred_test_df['score'], average='micro'))
###Output
AUC score : 0.6666790457939554
F1 score : 0.7066583881060599
Precision score: 0.732540998710153
Recall score : 0.732540998710153
###Markdown
Precision measures how many of the patients we flag for inclusion actually meet the hospitalization-time criterion, while recall measures how many of the truly qualifying patients we manage to identify. In our problem we need to identify patients who satisfy the criteria while avoiding enrolling patients who cannot take part in the trial because of a short hospital stay, so both precision and recall are important measures. To further improve model performance, we could add more complex layers to the model architecture and train with more data. 7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
###Code
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness
ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
###Output
model_id, score_thresholds 1 {'rank_abs': [2096]}
###Markdown
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
###Code
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
original_df=ae_df,
ref_groups_dict={'race':'Caucasian', 'gender':'Male'},
alpha=0.05,
check_significance=False)
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
###Output
get_disparity_predefined_group()
###Markdown
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.
###Code
# Is there significant bias in your model for either race or gender?
tpr = aqp.plot_group_metric(clean_xtab, 'tpr')
fpr = aqp.plot_group_metric(clean_xtab, 'fpr')
tnr = aqp.plot_group_metric(clean_xtab, 'tnr')
precision = aqp.plot_group_metric(clean_xtab, 'precision')
###Output
_____no_output_____
###Markdown
The bias analysis finds that the Asian group is identified with higher precision than any other group. Some groups with unidentified race values still show true classifications. African American and Caucasian (the reference group) have roughly the same true-classification and false-identification rates given the sample sizes. Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.
###Code
# Reference group fairness plot
fpr_fairness = aqp.plot_fairness_group(fdf, group_metric='fpr', title=True)
fpr_disparity_fairness = aqp.plot_fairness_disparity(fdf, group_metric='tpr', attribute_name='race')
tpr_disparity_fairness = aqp.plot_fairness_disparity(fdf, group_metric='tpr', attribute_name='gender')
###Output
_____no_output_____
###Markdown
Overview 1. Project Instructions & Prerequisites2. Learning Objectives3. Data Preparation4. Create Categorical Features with TF Feature Columns5. Create Continuous/Numerical Features with TF Feature Columns6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers7. Evaluating Potential Model Biases with Aequitas Toolkit 1. Project Instructions & Prerequisites Project Instructions **Context**: EHR data is becoming a key source of real-world evidence (RWE) for the pharmaceutical industry and regulators to [make decisions on clinical trials](https://www.fda.gov/news-events/speeches-fda-officials/breaking-down-barriers-between-clinical-trials-and-clinical-care-incorporating-real-world-evidence). You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a very unique and sensitive drug that requires administering the drug over at least 5-7 days of time in the hospital with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which type of patients the company should focus their efforts testing this drug on. Target patients are people that are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. **Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset(denormalized at the line level augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial.This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there are limited number of publicly available datasets and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine(https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema**The dataset reference information can be https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Project Submission When submitting this project, make sure to run all the cells before saving the notebook. 
Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "utils.py" and "student_utils.py" files in your submission. The student_utils.py should be where you put most of your code that you write and the summary and text explanations should be written inline in the notebook. Once you download these files, compress them into one zip file for submission. Prerequisites - Intermediate level knowledge of Python- Basic knowledge of probability and statistics- Basic knowledge of machine learning concepts- Installation of Tensorflow 2.0 and other dependencies(conda environment.yml or virtualenv requirements.txt file provided) Environment Setup For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/README.md. 2. Learning Objectives By the end of the project, you will be able to - Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels(longitudinal) - Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis. - Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings - Create derived features(bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features - SWBAT use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions - Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework 3. Data Preparation
###Code
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import pandas as pd
import aequitas as ae
# Put all of the helper functions in utils
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
pd.set_option('display.max_columns', 500)
# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload
#OPEN ISSUE ON MAC OSX for TF model training
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
###Output
_____no_output_____
###Markdown
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/
###Code
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path)
df.head()
###Output
_____no_output_____
###Markdown
Determine Level of Dataset (Line or Encounter)
###Code
print(df.shape, df.encounter_id.nunique())  # compare total rows to the number of unique encounters
###Output
_____no_output_____
###Markdown
**Question 1**: Based off of analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked. Student Response:?? Analyze Dataset **Question 2**: Utilizing the library of your choice (recommend Pandas and Seaborn or matplotlib though), perform exploratory data analysis on the dataset. In particular be sure to address the following questions: - a. Field(s) with high amount of missing/zero values - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian(normal) distribution shape? - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature) - d. Please describe the demographic distributions in the dataset for the age and gender fields. **OPTIONAL**: Use the Tensorflow Data Validation and Analysis library to complete. - The Tensorflow Data Validation and Analysis library(https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. - Note that there are some bugs that are still being resolved with Chrome v80 and we have moved away from using this for the project. **Student Response**: ??
###Code
######NOTE: The visualization will only display in Chrome browser. ########
import tensorflow_data_validation as tfdv  # required for the optional TFDV statistics below
full_data_stats = tfdv.generate_statistics_from_csv(data_location='./data/final_project_dataset.csv')
tfdv.visualize_statistics(full_data_stats)
###Output
_____no_output_____
###Markdown
Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site(https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
###Code
#NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)
from student_utils import reduce_dimension_ndc
reduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)
# Number of unique values should be less for the new output field
assert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
###Output
_____no_output_____
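###Markdown
`reduce_dimension_ndc` lives in `student_utils.py` and is not shown in this notebook. A minimal sketch of one possible approach is below: join the encounter data to the NDC lookup table and keep the non-proprietary (generic) name. The lookup-table column names ('NDC_Code', 'Non-proprietary Name') are assumptions about the provided file, so adjust them to match the actual headers.
###Code
# Hypothetical sketch (assumption): map each ndc_code to its generic drug name
# using the lookup table loaded above as ndc_code_df.
def reduce_dimension_ndc(df, ndc_df):
    lookup = dict(zip(ndc_df['NDC_Code'], ndc_df['Non-proprietary Name']))  # assumed column names
    output_df = df.copy()
    output_df['generic_drug_name'] = output_df['ndc_code'].map(lookup)
    return output_df
###Output
_____no_output_____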
###Markdown
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
###Code
from student_utils import select_first_encounter
first_encounter_df = select_first_encounter(reduce_dim_df)
# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))
# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))
original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()
# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
###Output
_____no_output_____
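###Markdown
`select_first_encounter` is defined in `student_utils.py`. A minimal sketch under the stated assumption (the lowest encounter_id per patient is the earliest encounter) is shown below; it keeps every line belonging to that first encounter.
###Code
# Hypothetical sketch (assumption): the smallest encounter_id per patient is treated
# as the first encounter, following the time-horizon assumption stated above.
def select_first_encounter(df):
    sorted_df = df.sort_values('encounter_id')
    first_encounter_ids = sorted_df.groupby('patient_nbr')['encounter_id'].first()
    return sorted_df[sorted_df['encounter_id'].isin(first_encounter_ids)].reset_index(drop=True)
###Output
_____no_output_____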
###Markdown
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset' function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those as input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course.
###Code
exclusion_list = ['generic_drug_name']
grouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]
agg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')
assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()
###Output
_____no_output_____
###Markdown
Prepare Fields and Cast Dataset Feature Selection **Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice. Student response: ??
###Code
'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required demographic features for the model
and I have inserted a list with them already in the categorical list.
These will be required for later steps when analyzing data splits and model biases.
'''
# required_demo_col_list = ['race', 'gender', 'age']
# student_categorical_col_list = [ "feature_A", "feature_B", .... ] + required_demo_col_list + ndc_col_list
# student_numerical_col_list = [ "feature_A", "feature_B", .... ]
# PREDICTOR_FIELD = ''
def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):
selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
return df[selected_col_list]  # use the dataframe passed in rather than the global agg_drug_df
selected_features_df = select_model_features(agg_drug_df, student_categorical_col_list, student_numerical_col_list,
PREDICTOR_FIELD)
###Output
_____no_output_____
###Markdown
Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it?
###Code
processed_df = preprocess_df(selected_features_df, student_categorical_col_list,
student_numerical_col_list, PREDICTOR_FIELD, categorical_impute_value='nan', numerical_impute_value=0)
###Output
_____no_output_____
###Markdown
Split Dataset into Train, Validation, and Test Partitions **Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidentally leak across partitions. Please complete the function below to split the input dataset into three partitions (train, validation, test) with the following requirements.- Approximately 60%/20%/20% train/validation/test split- Randomly sample different patients into each data partition- **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage.- Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset- Total number of rows in original dataset = sum of rows across all three dataset partitions
###Code
from student_utils import patient_dataset_splitter
d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')
assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")
assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == agg_drug_df['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
###Output
_____no_output_____
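###Markdown
`patient_dataset_splitter` is implemented in `student_utils.py`. Below is a minimal sketch that satisfies the requirements listed above by sampling unique patients (not rows) into approximately 60%/20%/20% partitions; the fixed random seed is an illustrative assumption.
###Code
import numpy as np

# Hypothetical sketch (assumption): split on unique patients so no patient's data
# leaks across partitions, then materialize the row-level splits.
def patient_dataset_splitter(df, patient_key='patient_nbr'):
    patients = np.random.RandomState(42).permutation(df[patient_key].unique())
    n = len(patients)
    train_ids, val_ids, test_ids = np.split(patients, [int(0.6 * n), int(0.8 * n)])
    d_train = df[df[patient_key].isin(train_ids)].reset_index(drop=True)
    d_val = df[df[patient_key].isin(val_ids)].reset_index(drop=True)
    d_test = df[df[patient_key].isin(test_ids)].reset_index(drop=True)
    return d_train, d_val, d_test
###Output
_____no_output_____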
###Markdown
Demographic Representation Analysis of Split After the split, we should check to see the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions?
###Code
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
###Output
_____no_output_____
###Markdown
Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
###Code
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')
# Training partition
show_group_stats_viz(d_train, 'gender')
# Test partition
show_group_stats_viz(d_test, 'gender')
###Output
_____no_output_____
###Markdown
Convert Dataset Splits to TF Dataset We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method and for larger datasets, the 'make_csv_dataset' method is recommended -https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
###Code
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)
# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]
def demo(feature_column, example_batch):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch))
###Output
_____no_output_____
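###Markdown
For reference, a hedged sketch of the scalable alternative mentioned above: streaming the CSV from disk with `tf.data.experimental.make_csv_dataset` instead of converting an in-memory dataframe. The file path, batch size, and label name below simply reuse values from earlier cells and are illustrative, not part of the required workflow.
###Code
# Illustrative sketch (assumption): stream batches directly from the CSV on disk.
streamed_ds = tf.data.experimental.make_csv_dataset(
    file_pattern="./data/final_project_dataset.csv",  # same file loaded earlier
    batch_size=128,
    label_name="time_in_hospital",  # same predictor field used above
    num_epochs=1,
    shuffle=True)
###Output
_____no_output_____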
###Markdown
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field that are from the **training** dataset. Below we have provided a function that you can use that only requires providing the pandas train dataset partition and the list of the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
###Code
vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)
###Output
_____no_output_____
###Markdown
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
###Code
from student_utils import create_tf_categorical_feature_cols
tf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)
test_cat_var1 = tf_cat_col_list[0]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
###Output
_____no_output_____
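###Markdown
`create_tf_categorical_feature_cols` comes from `student_utils.py`. A minimal sketch is below, assuming one indicator (one-hot) column per feature backed by the vocab files that `build_vocab_files` writes; the `./diabetes_vocab/` directory and `_vocab.txt` filename suffix are assumptions about how those files are named.
###Code
import os

# Hypothetical sketch (assumption): build a vocabulary-file-backed categorical column
# with one out-of-vocabulary bucket, wrapped in an indicator (one-hot) column.
def create_tf_categorical_feature_cols(categorical_col_list, vocab_dir='./diabetes_vocab/'):
    output_tf_list = []
    for c in categorical_col_list:
        vocab_file_path = os.path.join(vocab_dir, c + "_vocab.txt")
        cat_col = tf.feature_column.categorical_column_with_vocabulary_file(
            key=c, vocabulary_file=vocab_file_path, num_oov_buckets=1)
        output_tf_list.append(tf.feature_column.indicator_column(cat_col))
    return output_tf_list
###Output
_____no_output_____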
###Markdown
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API(https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help as well as the 'calculate_stats_from_train_data' function.
###Code
from student_utils import create_tf_numeric_feature
###Output
_____no_output_____
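###Markdown
`create_tf_numeric_feature` is defined in `student_utils.py`. Below is a minimal sketch assuming z-score normalization with the training-set mean and standard deviation supplied by `calculate_stats_from_train_data` in the next cell; the helper name `normalize_numeric_with_zscore` and the default value of 0 are assumptions for illustration.
###Code
import functools

# Hypothetical sketch (assumption): z-score normalize the column inside the feature column
# so the model always sees (x - mean) / std.
def normalize_numeric_with_zscore(col, mean, std):
    return (col - mean) / std

def create_tf_numeric_feature(col, mean, std, default_value=0):
    normalizer = functools.partial(normalize_numeric_with_zscore, mean=mean, std=std)
    return tf.feature_column.numeric_column(
        key=col, default_value=default_value, normalizer_fn=normalizer, dtype=tf.float64)
###Output
_____no_output_____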
###Markdown
For simplicity the create_tf_numerical_feature_cols function below uses the same normalizer function across all features(z-score normalization) but if you have time feel free to analyze and adapt the normalizer based off the statistical distributions. You may find this as a good resource in determining which transformation fits best for the data https://developers.google.com/machine-learning/data-prep/transform/normalization.
###Code
def calculate_stats_from_train_data(df, col):
mean = df[col].describe()['mean']
std = df[col].describe()['std']
return mean, std
def create_tf_numerical_feature_cols(numerical_col_list, train_df):
tf_numeric_col_list = []
for c in numerical_col_list:
mean, std = calculate_stats_from_train_data(train_df, c)
tf_numeric_feature = create_tf_numeric_feature(c, mean, std)
tf_numeric_col_list.append(tf_numeric_feature)
return tf_numeric_col_list
tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)
test_cont_var1 = tf_cont_col_list[0]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
###Output
_____no_output_____
###Markdown
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
###Code
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
###Output
_____no_output_____
###Markdown
Build Sequential API Model from DenseFeatures and TF Probability Layers Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish. **OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.
###Code
def build_sequential_model(feature_layer):
model = tf.keras.Sequential([
feature_layer,
tf.keras.layers.Dense(150, activation='relu'),
tf.keras.layers.Dense(75, activation='relu'),
tfp.layers.DenseVariational(1+1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t:tfp.distributions.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:])
)
),
])
return model
def build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mse'):
model = build_sequential_model(feature_layer)
model.compile(optimizer='rmsprop', loss=loss_metric, metrics=[loss_metric])
early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=3)
history = model.fit(train_ds, validation_data=val_ds,
callbacks=[early_stop],
epochs=epochs)
return model, history
diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=10)
###Output
_____no_output_____
###Markdown
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
###Code
feature_list = student_categorical_col_list + student_numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhat = diabetes_model(diabetes_x_tst)
preds = diabetes_model.predict(diabetes_test_ds)
from student_utils import get_mean_std_from_preds
m, s = get_mean_std_from_preds(diabetes_yhat)
###Output
_____no_output_____
###Markdown
Show Prediction Output
###Code
prob_outputs = {
"pred": preds.flatten(),
"actual_value": d_test['time_in_hospital'].values,
"pred_mean": m.numpy().flatten(),
"pred_std": s.numpy().flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df.head()
###Output
_____no_output_____
###Markdown
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label for whether the patient meets the time criteria or does not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based off of whether the prediction meets or doesn't meet the criteria.
###Code
from student_utils import get_student_binary_prediction
student_binary_prediction = get_student_binary_prediction(prob_output_df, 'pred_mean')
###Output
_____no_output_____
###Markdown
Add Binary Prediction to Test Dataframe Using the student_binary_prediction output, which is a numpy array of binary labels, we can add it to a dataframe to better visualize the results and to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label (the 'score' field) along with the actual value (the 'label_value' field).
###Code
def add_pred_to_test(test_df, pred_np, demo_col_list):
for c in demo_col_list:
test_df[c] = test_df[c].astype(str)
test_df['score'] = pred_np
test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >=5 else 0)
return test_df
pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()
###Output
_____no_output_____
###Markdown
Model Evaluation Metrics **Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score (weighted), and class precision and recall scores. For the report please be sure to include the following parts:- With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model.- What are some areas of improvement for future iterations?
###Code
# AUC, F1, precision and recall
# Summary
###Output
_____no_output_____
###Markdown
7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
###Code
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness
ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
###Output
_____no_output_____
###Markdown
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
###Code
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
original_df=ae_df,
ref_groups_dict={'race':'Caucasian', 'gender':'Male'},
alpha=0.05,
check_significance=False)
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
###Output
_____no_output_____
###Markdown
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.
###Code
# Plot two metrics
# Is there significant bias in your model for either race or gender?
###Output
_____no_output_____
###Markdown
Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.
###Code
# Reference group fairness plot
###Output
_____no_output_____
###Markdown
Overview 1. Project Instructions & Prerequisites2. Learning Objectives3. Data Preparation4. Create Categorical Features with TF Feature Columns5. Create Continuous/Numerical Features with TF Feature Columns6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers7. Evaluating Potential Model Biases with Aequitas Toolkit 1. Project Instructions & Prerequisites Project Instructions **Context**: EHR data is becoming a key source of real-world evidence (RWE) for the pharmaceutical industry and regulators to [make decisions on clinical trials](https://www.fda.gov/news-events/speeches-fda-officials/breaking-down-barriers-between-clinical-trials-and-clinical-care-incorporating-real-world-evidence). You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a very unique and sensitive drug that requires administering the drug over at least 5-7 days of time in the hospital with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which type of patients the company should focus their efforts testing this drug on. Target patients are people that are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. **Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset(denormalized at the line level augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial.This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there are limited number of publicly available datasets and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine(https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema**The dataset reference information can be https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Project Submission When submitting this project, make sure to run all the cells before saving the notebook. 
Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "utils.py" and "student_utils.py" files in your submission. The student_utils.py should be where you put most of your code that you write and the summary and text explanations should be written inline in the notebook. Once you download these files, compress them into one zip file for submission. Prerequisites - Intermediate level knowledge of Python- Basic knowledge of probability and statistics- Basic knowledge of machine learning concepts- Installation of Tensorflow 2.0 and other dependencies(conda environment.yml or virtualenv requirements.txt file provided) Environment Setup For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/README.md. 2. Learning Objectives By the end of the project, you will be able to - Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels(longitudinal) - Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis. - Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings - Create derived features(bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features - SWBAT use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions - Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework 3. Data Preparation
###Code
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import aequitas as ae
# Put all of the helper functions in utils
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
pd.set_option('display.max_columns', 500)
# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload
#OPEN ISSUE ON MAC OSX for TF model training
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
###Output
_____no_output_____
###Markdown
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/
###Code
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path)
###Output
_____no_output_____
###Markdown
Determine Level of Dataset (Line or Encounter) **Question 1**: Based off of analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked.
###Code
df_copy = df.copy()
df_copy.head()
# Line Test
try:
assert len(df) > df['encounter_id'].nunique()
print("Dataset could be at the line level")
except:
print("Dataset is not at the line level")
# Encounter Test
try:
assert len(df) == df['encounter_id'].nunique()
print("Dataset could be at the encounter level")
except:
print("Dataset is not at the encounter level")
###Output
Dataset is not at the encounter level
###Markdown
Student Response: Dataset is at the line level because there are more records than encounters in the dataset. Analyze Dataset **Question 2**: Utilizing the library of your choice (recommend Pandas and Seaborn or matplotlib though), perform exploratory data analysis on the dataset. In particular be sure to address the following questions: - a. Field(s) with high amount of missing/zero values weight, payer_code, medical_specialty, ndc_code have a very high number of missing values. race and other_diagnosis_code also have many missing values. - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian(normal) distribution shape? Some of the attributes resemble skewed normal distributions, but none other than num_lab_procedures are close to a balanced normal distribution. For example, time_in_hospital is a highly skewed distribution with a long right tail. The rest of the columns are not normally distributed. - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature) As expected, encounter_id and patient_nbr have high cardinality (for each encounter and each patient). primary_diagnosis_code, other_diagnosis_code, num_lab_procedures, ndc_code, and num_medications all have high cardinality as well due to high number of possible disease classifications, procedures, and medications for those diseases. - d. Please describe the demographic distributions in the dataset for the age and gender fields. See below for graphs and associated commentary.
###Code
len(df_copy)
###Output
_____no_output_____
###Markdown
**Part A**: Missing values
###Code
df_copy = df_copy.replace(regex=r'\?.*', value=np.NaN) # matches any unknown values like ? or ?|?...
df_copy.isnull().sum()
df_weight = df_copy['weight'].notnull() # weight is a categorical value
df_copy[df_weight].head()
###Output
_____no_output_____
###Markdown
**Part B**: Distributions
###Code
numeric_fields = [feature for feature in df_copy.columns if df_copy[feature].dtype == 'int64']
numeric_fields
for field in numeric_fields:
plt.hist(df_copy[field])
plt.title(field)
plt.show()
###Output
_____no_output_____
###Markdown
**Part C**: High cardinality features.
###Code
df_copy.apply(pd.Series.nunique)
###Output
_____no_output_____
###Markdown
**Part D**: Age and gender demographics. The two genders are roughly balanced, with slightly more females. There is a negligible number of unknown values for gender.
###Code
sns.countplot(x='gender', data=df_copy)
###Output
_____no_output_____
###Markdown
There are only 5 unknown values for gender.
###Code
df_copy['gender'].value_counts()
###Output
_____no_output_____
###Markdown
We see the majority of patients are between the ages of 50 and 90.
###Code
sns.countplot(y='age', data=df_copy)
###Output
_____no_output_____
###Markdown
For most age buckets, the genders are fairly balanced, except for the 80-90 and 90-100 buckets which have more females.
###Code
sns.countplot(y='age', data=df_copy, hue='gender')
###Output
_____no_output_____
###Markdown
Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site(https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
###Code
df.head()
#NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)
ndc_code_df.head()
from student_utils import reduce_dimension_ndc
reduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)
reduce_dim_df.tail(10)
len(reduce_dim_df)
# Number of unique values should be less for the new output field
assert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
###Output
_____no_output_____
###Markdown
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
###Code
from student_utils import select_first_encounter
first_encounter_df = select_first_encounter(reduce_dim_df)
# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))
# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))
original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()
# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
###Output
Number of unique patients:71518
Number of unique encounters:71518
Tests passed!!
###Markdown
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset' function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those as input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course.
###Code
exclusion_list = ['generic_drug_name']
grouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]
agg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')
ndc_col_list
len(agg_drug_df)
assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()
agg_drug_df.head()
###Output
_____no_output_____
###Markdown
Prepare Fields and Cast Dataset Feature Selection **Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice.
###Code
sns.countplot(y='weight', hue='time_in_hospital', data=agg_drug_df)
sns.countplot(y='payer_code', hue='time_in_hospital', data=agg_drug_df)
###Output
_____no_output_____
###Markdown
Student response: weight and payer_code have a high number of unknown values (as shown above and in question 2a), indicating they likely won't help the model. Normally weight would be helpful in medical diagnoses, but due to the high number of missing values for it, it seems problematic to our workflow.
###Code
'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required demographic features for the model
and I have inserted a list with them already in the categorical list.
These will be required for later steps when analyzing data splits and model biases.
'''
required_demo_col_list = ['race', 'gender', 'age']
student_categorical_col_list = [ "admission_type_id", "discharge_disposition_id", "primary_diagnosis_code", "max_glu_serum", "readmitted", "A1Cresult"] + required_demo_col_list + ndc_col_list
student_numerical_col_list = ["num_procedures", "number_diagnoses", "num_medications"]
PREDICTOR_FIELD = "time_in_hospital"
def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):
selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
return df[selected_col_list]  # use the dataframe passed in rather than the global agg_drug_df
selected_features_df = select_model_features(agg_drug_df, student_categorical_col_list, student_numerical_col_list,
PREDICTOR_FIELD)
###Output
_____no_output_____
###Markdown
Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it?
###Code
processed_df = preprocess_df(selected_features_df, student_categorical_col_list,
student_numerical_col_list, PREDICTOR_FIELD, categorical_impute_value='nan', numerical_impute_value=0)
processed_df
###Output
_____no_output_____
###Markdown
Split Dataset into Train, Validation, and Test Partitions **Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidentally leak across partitions. Please complete the function below to split the input dataset into three partitions (train, validation, test) with the following requirements.- Approximately 60%/20%/20% train/validation/test split- Randomly sample different patients into each data partition- **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage.- Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset- Total number of rows in original dataset = sum of rows across all three dataset partitions
###Code
from student_utils import patient_dataset_splitter
d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')
print(len(d_train), len(d_val), len(d_test), len(processed_df))
assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")
assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == agg_drug_df['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
###Output
Test passed for number of unique patients being equal!
###Markdown
Demographic Representation Analysis of Split After the split, we should check to see the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions? Yes
###Code
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
###Output
time_in_hospital
1.0 2151
2.0 2472
3.0 2531
4.0 1874
5.0 1347
6.0 1056
7.0 799
8.0 598
9.0 415
10.0 313
11.0 271
12.0 197
13.0 159
14.0 120
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
###Code
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')
# Training partition
show_group_stats_viz(d_train, 'gender')
# Test partition
show_group_stats_viz(d_test, 'gender')
###Output
gender
Female 7501
Male 6801
Unknown/Invalid 1
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Convert Dataset Splits to TF Dataset We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method and for larger datasets, the 'make_csv_dataset' method is recommended -https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
###Code
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)
# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]
def demo(feature_column, example_batch):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch))
###Output
_____no_output_____
###Markdown
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field that are from the **training** dataset. Below we have provided a function that you can use that only requires providing the pandas train dataset partition and the list of the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
###Code
vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)
vocab_file_list
###Output
_____no_output_____
###Markdown
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
###Code
from student_utils import create_tf_categorical_feature_cols
tf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)
test_cat_var1 = tf_cat_col_list[0]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
###Output
Example categorical field:
IndicatorColumn(categorical_column=VocabularyFileCategoricalColumn(key='admission_type_id', vocabulary_file='./diabetes_vocab/admission_type_id_vocab.txt', vocabulary_size=9, num_oov_buckets=1, dtype=tf.string, default_value=-1))
WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4267: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4322: VocabularyFileCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
tf.Tensor(
[[0. 0. 1. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 1. 0. ... 0. 0. 0.]
...
[0. 1. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 1. ... 0. 0. 0.]], shape=(128, 10), dtype=float32)
###Markdown
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API(https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help as well as the 'calculate_stats_from_train_data' function.
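One way create_tf_numeric_feature could be written is sketched below, pairing tf.feature_column.numeric_column with a z-score normalizer passed via functools.partial. This is an assumption that is consistent with the printed NumericColumn output further down, not necessarily the student_utils implementation.

```python
import functools
import tensorflow as tf

def normalize_numeric_with_zscore(col, mean, std):
    # Z-score normalization applied inside the feature column
    return (col - mean) / std

def create_tf_numeric_feature_sketch(col, mean, std, default_value=0):
    normalizer = functools.partial(normalize_numeric_with_zscore, mean=mean, std=std)
    return tf.feature_column.numeric_column(
        key=col, default_value=default_value,
        normalizer_fn=normalizer, dtype=tf.float64)
```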
###Code
from student_utils import create_tf_numeric_feature
###Output
_____no_output_____
###Markdown
For simplicity, the create_tf_numerical_feature_cols function below uses the same normalizer function (z-score normalization) across all features, but if you have time, feel free to analyze and adapt the normalizer based on the statistical distributions. You may find this a good resource for determining which transformation fits the data best: https://developers.google.com/machine-learning/data-prep/transform/normalization.
###Code
def calculate_stats_from_train_data(df, col):
mean = df[col].describe()['mean']
std = df[col].describe()['std']
return mean, std
def create_tf_numerical_feature_cols(numerical_col_list, train_df):
tf_numeric_col_list = []
for c in numerical_col_list:
mean, std = calculate_stats_from_train_data(train_df, c)
tf_numeric_feature = create_tf_numeric_feature(c, mean, std)
tf_numeric_col_list.append(tf_numeric_feature)
return tf_numeric_col_list
tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)
test_cont_var1 = tf_cont_col_list[0]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
###Output
Example continuous field:
NumericColumn(key='num_procedures', shape=(1,), default_value=(0,), dtype=tf.float64, normalizer_fn=functools.partial(<function normalize_numeric_with_zscore at 0x7f40c8968050>, mean=1.434270932861038, std=1.7610480315239259))
tf.Tensor(
[[ 1.]
[ 2.]
[-1.]
[-1.]
[-1.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 5.]
[-1.]
[ 1.]
[ 3.]
[-1.]
[ 0.]
[-1.]
[-1.]
[-1.]
[ 0.]
[ 1.]
[ 0.]
[-1.]
[-1.]
[ 0.]
[ 5.]
[ 1.]
[ 1.]
[ 4.]
[ 0.]
[ 2.]
[ 4.]
[-1.]
[-1.]
[ 4.]
[ 1.]
[ 1.]
[ 1.]
[ 2.]
[-1.]
[-1.]
[ 2.]
[-1.]
[-1.]
[ 1.]
[-1.]
[ 2.]
[-1.]
[-1.]
[-1.]
[ 2.]
[-1.]
[-1.]
[-1.]
[-1.]
[-1.]
[-1.]
[ 0.]
[-1.]
[-1.]
[ 1.]
[-1.]
[ 1.]
[-1.]
[ 2.]
[ 1.]
[-1.]
[-1.]
[-1.]
[ 1.]
[ 0.]
[-1.]
[ 0.]
[-1.]
[ 3.]
[-1.]
[-1.]
[-1.]
[-1.]
[-1.]
[ 0.]
[-1.]
[ 2.]
[ 1.]
[-1.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[-1.]
[ 0.]
[ 0.]
[ 0.]
[ 5.]
[ 0.]
[-1.]
[-1.]
[ 0.]
[-1.]
[-1.]
[-1.]
[ 0.]
[ 0.]
[-1.]
[-1.]
[-1.]
[ 0.]
[-1.]
[ 0.]
[ 0.]
[-1.]
[ 4.]
[ 2.]
[-1.]
[ 5.]
[-1.]
[ 1.]
[-1.]
[ 5.]
[-1.]
[-1.]
[ 0.]
[ 1.]
[ 3.]
[-1.]
[-1.]
[ 3.]], shape=(128, 1), dtype=float32)
###Markdown
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
###Code
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
###Output
_____no_output_____
###Markdown
Build Sequential API Model from DenseFeatures and TF Probability Layers Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish. **OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.
###Code
def build_sequential_model(feature_layer):
model = tf.keras.Sequential([
feature_layer,
tf.keras.layers.Dense(200, activation='relu'),
tf.keras.layers.Dense(150, activation='relu'),
tf.keras.layers.Dense(75, activation='relu'),
tfp.layers.DenseVariational(1+1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t:tfp.distributions.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:])
)
),
])
return model
def build_and_compile(feature_layer, loss_metric):
model = build_sequential_model(feature_layer)
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=1e-2,
decay_steps=10000,
decay_rate=0.9,
staircase=True
)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule)
model.compile(optimizer=optimizer, loss=loss_metric, metrics=[loss_metric])
return model
def build_diabetes_model(train_ds, val_ds, feature_layer, checkpoint_path, epochs=5, loss_metric='mse'):
model = build_and_compile(feature_layer, loss_metric)
early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=3)
saving = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
save_weights_only=True,
verbose=1)
history = model.fit(train_ds, validation_data=val_ds,
callbacks=[early_stop, saving],
epochs=epochs, verbose=1)
return model, history
def load_model(feature_layer, checkpoint_path, loss_metric='mse'):
model = build_and_compile(feature_layer, loss_metric)
model.load_weights(checkpoint_path)
return model
def model(train_ds, val_ds, feature_layer, checkpoint_path, epochs=100, train=True):
if train:
model, history = build_diabetes_model(train_ds, val_ds,
feature_layer, epochs=epochs, checkpoint_path=checkpoint_path)
return model, history
else:
return load_model(feature_layer, checkpoint_path), None
checkpoint_path = "training_2/cp.ckpt"
train = True
diabetes_model = None
if train:
diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds,
claim_feature_layer, epochs=100, checkpoint_path=checkpoint_path)
else:
diabetes_model = load_model(claim_feature_layer, checkpoint_path)
diabetes_model2, history2 = model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, 'training_3/cp.ckpt')
model = diabetes_model2
history = history2
###Output
_____no_output_____
###Markdown
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
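Since the final DistributionLambda layer makes a call like model(x) return a TF Probability distribution, the mean and standard deviation can be read directly off that object. A minimal sketch of how get_mean_std_from_preds might work is below (an assumption, not the graded solution):

```python
def get_mean_std_from_preds_sketch(diabetes_yhat):
    # diabetes_yhat is the tfp.distributions.Normal returned when the model
    # is called on a dict of raw test features
    m = diabetes_yhat.mean()
    s = diabetes_yhat.stddev()
    return m, s
```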
###Code
feature_list = student_categorical_col_list + student_numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhat = model(diabetes_x_tst)
preds = model.predict(diabetes_test_ds)
from student_utils import get_mean_std_from_preds
m, s = get_mean_std_from_preds(diabetes_yhat)
###Output
_____no_output_____
###Markdown
Show Prediction Output
###Code
prob_outputs = {
"pred": preds.flatten(),
"actual_value": d_test['time_in_hospital'].values,
"pred_mean": m.numpy().flatten(),
"pred_std": s.numpy().flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df.head()
###Output
_____no_output_____
###Markdown
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label for whether the patient meets the time criteria or does not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based on whether the prediction meets the criteria.
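A minimal sketch of such a conversion, assuming the same 5-day cutoff used for label_value below (the function name and threshold default are illustrative assumptions):

```python
def get_student_binary_prediction_sketch(df, col, threshold=5):
    # df is the pandas prob_output_df; returns a numpy array of 1/0 labels
    # where 1 means the mean predicted stay meets the minimum-days criterion
    return (df[col] >= threshold).astype(int).values
```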
###Code
from student_utils import get_student_binary_prediction
student_binary_prediction = get_student_binary_prediction(prob_output_df, 'pred_mean')
###Output
_____no_output_____
###Markdown
Add Binary Prediction to Test Dataframe The student_binary_prediction output is a numpy array of binary labels; we can add it to a dataframe to better visualize the results and to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label for the predictions (called the 'score' field) and the actual value (called 'label_value').
###Code
def add_pred_to_test(test_df, pred_np, demo_col_list):
for c in demo_col_list:
test_df[c] = test_df[c].astype(str)
test_df['score'] = pred_np
test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >=5 else 0)
return test_df
pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()
###Output
_____no_output_____
###Markdown
Model Evaluation Metrics **Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score (weighted), and class precision and recall scores. For the report please be sure to include the following three parts: - With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model. Precision indicates the proportion of patients that the model identified as staying at least 5 days who really stayed at least 5 days. Thus, precision measures how reliable the model's positive selections are. Recall indicates, among all actual positive patients (true positives and false negatives), how many we correctly labelled as staying at least 5 days. Thus, recall measures how sensitive the model is. Ideally, we return many of the patients who stayed at least 5 days, and they are mostly correctly labelled as such. A threshold of 5 days and up for time in hospital was chosen as a positive indicator for selecting the patient for the study. A threshold of 6 days yielded lower precision and recall, while a threshold of 4 days yielded the same precision and recall, as well as higher AUC and F1, but the criterion is patients staying at least 5-7 days. Thus 5 days was chosen. This yielded a precision and recall of 0.75 each. - What are some areas of improvement for future iterations? Analyzing the impact of each feature on the performance of the model, and removing features that do not meaningfully contribute, could create a simpler model that is easier to train. Furthermore, a deeper architecture may yield better results, especially when coupled with simpler features, or the current architecture may learn better with simpler features.
###Code
# AUC, F1, precision and recall
# Summary
from sklearn.metrics import roc_curve, roc_auc_score, precision_recall_curve, f1_score, recall_score, precision_score, confusion_matrix
def plot_auc(t_y, p_y):
fpr, tpr, thresholds = roc_curve(t_y, p_y, pos_label=1)
plt.plot(fpr, tpr, color='darkorange', lw=2)
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
return fpr, tpr, thresholds
def plot_precision_recall(t_y, p_y):
precision, recall, thresholds = precision_recall_curve(t_y, p_y, pos_label=1)
plt.plot(recall, precision, color='darkorange', lw=2)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall Curve')
plt.show()
return precision, recall, thresholds
true, pred = pred_test_df.label_value, pred_test_df.score
print("AUC score: ", roc_auc_score(true, pred))
print("F1 score: ", f1_score(true, pred, average='weighted'))
print("Precision score: ", precision_score(true, pred,average='micro'))
print("Recall score: ", recall_score(true, pred, average='micro'))
tn, fp, fn, tp = confusion_matrix(true, pred).ravel()
print(f"TN: {tn}, FP: {fp}, FN: {fn}, TP: {tp}")
###Output
AUC score: 0.7418787469001128
F1 score: 0.7662119833237732
Precision score: 0.7688596797874572
Recall score: 0.7688596797874572
TN: 7626, FP: 1402, FN: 1904, TP: 3371
###Markdown
The closer the orange curve is to the top left corner, the better the algorithm can differentiate between positive and negative cases. In our case, we see the curve is leaning to the top left by a bit. There is definitely room for improvement, but the model learned something.
###Code
plot_auc(pred_test_df.label_value, pred_test_df.score);
###Output
_____no_output_____
###Markdown
7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
###Code
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness
ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
###Output
_____no_output_____
###Markdown
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
###Code
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
original_df=ae_df,
ref_groups_dict={'race':'Caucasian', 'gender':'Male'
},
alpha=0.05,
check_significance=False)
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
###Output
_____no_output_____
###Markdown
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement. Is there significant bias in your model for either race or gender? All the metrics ('tpr', 'fpr', 'tnr', 'fnr') are fairly balanced between groups across gender and race, so there doesn't appear to be significant bias. However, upon further analysis of disparity, Asians, Hispanics, and other groups are much less likely to be falsely identified. This may be because the sample sizes for those groups are much smaller than the sample size for Caucasians (population sizes are shown in parentheses in the plots). In the last plot, we see the green and red bars which indicate fairness. For gender, the bars are green, which indicates that the model is fair across gender for false identifications. The fairness plot also corroborates the disparity plot, as the bars are red for Hispanic, Asian, and other groups.
###Code
p = aqp.plot_group_metric_all(xtab, metrics=['tpr', 'fpr', 'tnr', 'fnr'], ncols=2)
###Output
_____no_output_____
###Markdown
Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity. We see that, compared to the reference group, African Americans are 0.82x as likely as Caucasians to be falsely identified, and so on for the other groups.
###Code
# Reference group fairness plot
fpr_disparity = aqp.plot_disparity(bdf, group_metric='fpr_disparity',
attribute_name='race')
###Output
_____no_output_____
###Markdown
Green bar = the model is fair, red bar = the model is not fair
###Code
fpr_fairness = aqp.plot_fairness_group(fdf, group_metric='fpr', title=True)
###Output
_____no_output_____
###Markdown
Overview 1. Project Instructions & Prerequisites 2. Learning Objectives 3. Data Preparation 4. Create Categorical Features with TF Feature Columns 5. Create Continuous/Numerical Features with TF Feature Columns 6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers 7. Evaluating Potential Model Biases with Aequitas Toolkit 1. Project Instructions & Prerequisites Project Instructions **Context**: EHR data is becoming a key source of real-world evidence (RWE) for the pharmaceutical industry and regulators to [make decisions on clinical trials](https://www.fda.gov/news-events/speeches-fda-officials/breaking-down-barriers-between-clinical-trials-and-clinical-care-incorporating-real-world-evidence). You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a very unique and sensitive drug that requires administering the drug over at least 5-7 days of time in the hospital with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which type of patients the company should focus their efforts on for testing this drug. Target patients are people that are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. **Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset (denormalized at the line level, with augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial. This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas on which your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there are a limited number of publicly available datasets and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine (https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes, which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema** The dataset reference information can be found at https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Project Submission When submitting this project, make sure to run all the cells before saving the notebook.
Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.." -> "html". Include the "utils.py" and "student_utils.py" files in your submission. The student_utils.py file should be where you put most of the code that you write, and the summary and text explanations should be written inline in the notebook. Once you download these files, compress them into one zip file for submission. Prerequisites - Intermediate level knowledge of Python - Basic knowledge of probability and statistics - Basic knowledge of machine learning concepts - Installation of Tensorflow 2.0 and other dependencies (conda environment.yml or virtualenv requirements.txt file provided) Environment Setup For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/README.md. 2. Learning Objectives By the end of the project, you will be able to - Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels (longitudinal) - Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis - Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings - Create derived features (bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features - Use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions - Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework 3. Data Preparation
###Code
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import pandas as pd
import aequitas as ae
# Put all of the helper functions in utils
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
pd.set_option('display.max_columns', 500)
# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload
#OPEN ISSUE ON MAC OSX for TF model training
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
###Output
_____no_output_____
###Markdown
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/
###Code
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path)
df
###Output
_____no_output_____
###Markdown
Determine Level of Dataset (Line or Encounter) **Question 1**: Based on analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this will inform what level of aggregation is necessary for future steps, and it is a step that is often overlooked. - Line: Total number of rows > Number of Unique Encounters - Encounter level: Total Number of Rows = Number of Unique Encounters
###Code
# Line Test
try:
assert len(df) > df['encounter_id'].nunique()
print("Dataset could be at the line level")
except:
print("Dataset is not at the line level")
# Encounter Test
try:
assert len(df) == df['encounter_id'].nunique()
print("Dataset could be at the encounter level")
except:
print("Dataset is not at the encounter level")
###Output
Dataset could be at the line level
Dataset is not at the encounter level
###Markdown
Analyze Dataset **Question 2**: Utilizing the library of your choice (we recommend Pandas and Seaborn or matplotlib), perform exploratory data analysis on the dataset. In particular be sure to address the following questions: - a. Field(s) with a high amount of missing/zero values - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian (normal) distribution shape? - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature) - d. Please describe the demographic distributions in the dataset for the age and gender fields. **OPTIONAL**: Use the Tensorflow Data Validation and Analysis library to complete. - The Tensorflow Data Validation and Analysis library (https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. - Note that there are some bugs that are still being resolved with Chrome v80 and we have moved away from using this for the project.
###Code
def check_null_values(df):
null_df = pd.DataFrame({'columns': df.columns,
'percent_null': df.isnull().sum() * 100 / len(df),
'percent_zero': df.isin([0]).sum() * 100 / len(df),
'percent_missing': df.isin(['?']).sum()*100/len(df)
} )
return null_df
check_null_values(df)
###Output
_____no_output_____
###Markdown
Above we can see that we have only a few null values in the field 'ndc_code'; many zero values in the fields 'number_outpatient', 'number_inpatient', 'number_emergency', and 'num_procedures'; and missing values mainly in the fields 'weight', 'payer_code', and 'medical_specialty', with a few in the fields 'primary_diagnosis_code' and 'race'.
###Code
# df.dtypes
numer_cols=df._get_numeric_data().columns
print(numer_cols)
import seaborn as sns
for i in numer_cols:
hist = sns.distplot(df[i], kde=False )
plt.show()
###Output
Index(['encounter_id', 'patient_nbr', 'admission_type_id',
'discharge_disposition_id', 'admission_source_id', 'time_in_hospital',
'number_outpatient', 'number_inpatient', 'number_emergency',
'num_lab_procedures', 'number_diagnoses', 'num_medications',
'num_procedures'],
dtype='object')
###Markdown
From the above it can be seen that, of the numeric features, those with a normal distribution shape are 'num_lab_procedures' and 'num_medications'. How do we define a field with high cardinality? • Determine if it is a categorical feature. • Determine if it has a high number of unique values. This can be a bit subjective, but we can probably agree that a field with 2 unique values would not have high cardinality, whereas a field like diagnosis codes with tens of thousands of unique values would have high cardinality. • Use the nunique() method to return the number of unique values for the categorical columns above.
###Code
#SOLUTION 1
categ_feat=np.setdiff1d(df.columns,numer_cols) # Categorical are the columns that are not numerical
for i in categ_feat:
print("Feature {} has {} unique values".format(i,df[i].nunique()))
#SOLUTION 2
def create_cardinality_feature(df):
num_rows = len(df)
random_code_list = np.arange(100, 1000, 1)
return np.random.choice(random_code_list, num_rows)
def count_unique_values(df, cat_col_list):
cat_df = df[cat_col_list]
# cat_df['principal_diagnosis_code'] = create_cardinality_feature(cat_df)
# #add feature with high cardinality
val_df = pd.DataFrame({'columns': cat_df.columns,
'cardinality': cat_df.nunique() } )
return val_df
count_unique_values(df, categ_feat)
###Output
_____no_output_____
###Markdown
Given that we consider 'ndc_code' as a feature with high cardinality, other features that fall in that category are the 'other_diagnosis_codes' and the 'primary_diagnosis_code'.
###Code
sns.set(rc={'figure.figsize':(11.7,5)})
ax = sns.countplot(x="age", data=df)
ax = sns.countplot(x="gender", data=df)
ax = sns.countplot(x="age", hue="gender", data=df)
###Output
_____no_output_____
###Markdown
From the age-only plot we can see that the shape is that of a (skewed) normal distribution, with most of the individuals in the range 50-90 years old. From the gender-only plot we can see that we have almost the same number of individuals from each gender, with a few more females than males. From the combined age-gender plot we can see that each gender again has a normal distribution shape for the age range.
###Code
######NOTE: The visualization will only display in Chrome browser. ########
# full_data_stats = tfdv.generate_statistics_from_csv(data_location='./data/final_project_dataset.csv')
# tfdv.visualize_statistics(full_data_stats)
###Output
_____no_output_____
###Markdown
Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site(https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
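A hedged sketch of one way to do this is below. The lookup-table column names 'NDC_Code' and 'Non-proprietary Name' are assumptions about the provided file's schema and should be adjusted to the actual columns; the function name is hypothetical.

```python
import pandas as pd

def reduce_dimension_ndc_sketch(df, ndc_df,
                                ndc_col='NDC_Code',
                                generic_col='Non-proprietary Name'):
    # Join the encounter data to the NDC lookup so that many NDC codes
    # collapse to a single generic drug name, reducing cardinality
    lookup = ndc_df[[ndc_col, generic_col]].drop_duplicates()
    output_df = df.merge(lookup, how='left', left_on='ndc_code', right_on=ndc_col)
    output_df = output_df.rename(columns={generic_col: 'generic_drug_name'})
    return output_df.drop(columns=[ndc_col])
```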
###Code
#NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)
ndc_code_df
ndc_code_df.nunique()
check_null_values(ndc_code_df)
ndc_code_df
from student_utils import reduce_dimension_ndc
reduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)
reduce_dim_df
# Number of unique values should be less for the new output field
assert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
###Output
_____no_output_____
###Markdown
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
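A minimal sketch of select_first_encounter under these assumptions (the smallest encounter_id is the earliest, and the data may still be at the line level, so all rows of a patient's first encounter are kept); the _sketch name marks it as illustrative rather than the student_utils solution:

```python
def select_first_encounter_sketch(df):
    # Sort so each patient's smallest (assumed earliest) encounter_id comes first
    df = df.sort_values('encounter_id')
    first_encounter_ids = df.groupby('patient_nbr')['encounter_id'].head(1).values
    # Keep every row belonging to those first encounters
    return df[df['encounter_id'].isin(first_encounter_ids)].reset_index(drop=True)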
###Code
reduce_dim_df['encounter_id']
from student_utils import select_first_encounter
first_encounter_df = select_first_encounter(reduce_dim_df)
first_encounter_df
# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))
# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))
original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()
# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
###Output
Number of unique patients:71518
Number of unique encounters:71518
Tests passed!!
###Markdown
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset' function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those as input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course.
###Code
exclusion_list = ['generic_drug_name']
grouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]
agg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')
assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()
###Output
_____no_output_____
###Markdown
Prepare Fields and Cast Dataset Feature Selection **Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice.
###Code
def check_null_values2(df):
null_df = pd.DataFrame({'columns': df.columns,
'percent_null': df.isnull().sum() * 100 / len(df),
'percent_zero': df.isin([0]).sum() * 100 / len(df),
'percent_missing': df.isin(['?']).sum()*100/len(df),
'percent_none': df.isin(['None']).sum() * 100 / len(df),
} )
return null_df
agg=agg_drug_df.copy()
del agg["generic_drug_name_array"]
check_null_values2(agg)
###Output
_____no_output_____
###Markdown
From the above we see that we should exclude 'weight' since 96% of the values are missing. The same applies for the 'payer_code' (42% are missing).
###Code
count_unique_values(agg,agg.select_dtypes('object').columns)
###Output
_____no_output_____
###Markdown
We can see that 'medical_specialty' has many missing values, 'max_glu_serum' and 'A1Cresult' have many 'None' values, 'ndc_code' is already included via the 'ndc_col_list', and 'other_diagnosis_codes' has high cardinality. Therefore 'primary_diagnosis_code' as well as "change" and "readmitted" will be kept.
###Code
agg
agg.select_dtypes('int64').columns
###Output
_____no_output_____
###Markdown
'encounter_id', 'patient_nbr', 'admission_type_id', 'discharge_disposition_id', and 'admission_source_id' are excluded since they are just identifiers for each patient and each procedure. The rest are kept.
###Code
'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required demographic features for the model
and I have inserted a list with them already in the categorical list.
These will be required for later steps when analyzing data splits and model biases.
'''
required_demo_col_list = ['race', 'gender', 'age']
student_categorical_col_list = [ "primary_diagnosis_code", "change", "readmitted"] + required_demo_col_list + ndc_col_list
student_numerical_col_list = [ 'number_outpatient', 'number_inpatient', 'number_emergency','num_lab_procedures',
'number_diagnoses', 'num_medications','num_procedures' ]
PREDICTOR_FIELD = 'time_in_hospital'
def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):
selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
return agg_drug_df[selected_col_list]
selected_features_df = select_model_features(agg_drug_df, student_categorical_col_list, student_numerical_col_list,
PREDICTOR_FIELD)
###Output
_____no_output_____
###Markdown
Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it?
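As one possible answer to the optional question, below is a hedged sketch of imputing the training-set median instead of zero (zero-imputation can distort distributions for fields where 0 is not a plausible value). The function name is hypothetical and it assumes imputation is deferred until after splitting so only training statistics are used.

```python
def impute_numeric_with_train_median(train_df, eval_df, numerical_cols):
    # Compute imputation statistics on the training partition only, so no
    # information from validation/test leaks into preprocessing
    medians = train_df[numerical_cols].median()
    train_df = train_df.copy()
    eval_df = eval_df.copy()
    train_df[numerical_cols] = train_df[numerical_cols].fillna(medians)
    eval_df[numerical_cols] = eval_df[numerical_cols].fillna(medians)
    return train_df, eval_df
```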
###Code
processed_df = preprocess_df(selected_features_df, student_categorical_col_list,
student_numerical_col_list, PREDICTOR_FIELD, categorical_impute_value='nan', numerical_impute_value=0)
###Output
C:\Users\soyrl\Desktop\project\starter_code\utils.py:29: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[predictor] = df[predictor].astype(float)
C:\Users\soyrl\Desktop\project\starter_code\utils.py:31: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[c] = cast_df(df, c, d_type=str)
C:\Users\soyrl\Desktop\project\starter_code\utils.py:33: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[numerical_column] = impute_df(df, numerical_column, numerical_impute_value)
###Markdown
Split Dataset into Train, Validation, and Test Partitions **Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidentally leak across partitions. Please complete the function below to split the input dataset into three partitions (train, validation, test) with the following requirements. - Approximately 60%/20%/20% train/validation/test split - Randomly sample different patients into each data partition - **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage. - Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset - Total number of rows in original dataset = sum of rows across all three dataset partitions
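A minimal sketch of the patient-level 60/20/20 splitting logic is below. Note that the actual patient_dataset_splitter is called in the next cell with the numerical column list as its second argument, so its real signature differs; this sketch only illustrates the idea and its name is hypothetical.

```python
import numpy as np

def patient_dataset_splitter_sketch(df, patient_key='patient_nbr'):
    # Partition on unique patients, not rows, so no patient spans two splits
    unique_patients = np.random.permutation(df[patient_key].unique())
    n = len(unique_patients)
    train_ids = unique_patients[: int(0.6 * n)]
    val_ids = unique_patients[int(0.6 * n): int(0.8 * n)]
    test_ids = unique_patients[int(0.8 * n):]
    train = df[df[patient_key].isin(train_ids)].reset_index(drop=True)
    validation = df[df[patient_key].isin(val_ids)].reset_index(drop=True)
    test = df[df[patient_key].isin(test_ids)].reset_index(drop=True)
    return train, validation, test
```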
###Code
from student_utils import patient_dataset_splitter
# from sklearn.model_selection import train_test_split
# d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')
d_train, d_val, d_test = patient_dataset_splitter(processed_df, student_numerical_col_list)
assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")
assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == agg_drug_df['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
###Output
Test passed for number of unique patients being equal!
###Markdown
Demographic Representation Analysis of Split After the split, we should check the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions?
###Code
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
###Output
time_in_hospital
1.0 1400
2.0 1880
3.0 2007
4.0 1553
5.0 1080
6.0 780
7.0 588
8.0 470
9.0 304
10.0 233
11.0 173
12.0 156
13.0 136
14.0 94
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
###Code
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')
# Training partition
show_group_stats_viz(d_train, 'gender')
# Test partition
show_group_stats_viz(d_test, 'gender')
###Output
gender
Female 5694
Male 5159
Unknown/Invalid 1
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Convert Dataset Splits to TF Dataset We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method; for larger datasets, the 'make_csv_dataset' method is recommended: https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
###Code
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)
# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]
def demo(feature_column, example_batch):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch))
###Output
_____no_output_____
###Markdown
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field, taken from the **training** dataset. Below we have provided a function that only requires the pandas train dataset partition and the categorical columns as a list. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
###Code
vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)
###Output
_____no_output_____
###Markdown
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
###Code
from student_utils import create_tf_categorical_feature_cols
tf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)
test_cat_var1 = tf_cat_col_list[0]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
###Output
Example categorical field:
IndicatorColumn(categorical_column=VocabularyFileCategoricalColumn(key='primary_diagnosis_code', vocabulary_file='./diabetes_vocab/primary_diagnosis_code_vocab.txt', vocabulary_size=610, num_oov_buckets=1, dtype=tf.string, default_value=-1))
tf.Tensor(
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]], shape=(128, 611), dtype=float32)
###Markdown
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API(https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help as well as the 'calculate_stats_from_train_data' function.
###Code
from student_utils import create_tf_numeric_feature
###Output
_____no_output_____
###Markdown
For simplicity, the create_tf_numerical_feature_cols function below uses the same normalizer function (z-score normalization) across all features, but if you have time, feel free to analyze and adapt the normalizer based on the statistical distributions. You may find this a good resource for determining which transformation fits the data best: https://developers.google.com/machine-learning/data-prep/transform/normalization.
###Code
def calculate_stats_from_train_data(df, col):
mean = df[col].describe()['mean']
std = df[col].describe()['std']
return mean, std
def create_tf_numerical_feature_cols(numerical_col_list, train_df):
tf_numeric_col_list = []
for c in numerical_col_list:
mean, std = calculate_stats_from_train_data(train_df, c)
tf_numeric_feature = create_tf_numeric_feature(c, mean, std)
tf_numeric_col_list.append(tf_numeric_feature)
return tf_numeric_col_list
tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)
test_cont_var1 = tf_cont_col_list[0]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
###Output
Example continuous field:
NumericColumn(key='number_outpatient', shape=(1,), default_value=(0,), dtype=tf.float64, normalizer_fn=functools.partial(<function normalize_numeric_with_zscore at 0x0000024D0567D1F0>, mean=0.2917314446939598, std=1.080297387679425))
tf.Tensor(
[[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[ 1.5812948 ]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[ 0.6556237 ]
[-0.27004737]
[-0.27004737]
[ 0.6556237 ]
[-0.27004737]
[ 2.5069659 ]
[-0.27004737]
[-0.27004737]
[ 0.6556237 ]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[ 0.6556237 ]
[ 3.432637 ]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[ 0.6556237 ]
[-0.27004737]
[-0.27004737]
[ 3.432637 ]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[ 2.5069659 ]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[ 0.6556237 ]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[ 0.6556237 ]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[ 0.6556237 ]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[ 2.5069659 ]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[ 1.5812948 ]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[ 2.5069659 ]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[ 4.358308 ]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[ 1.5812948 ]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[ 3.432637 ]
[-0.27004737]
[-0.27004737]
[ 0.6556237 ]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]
[-0.27004737]], shape=(128, 1), dtype=float32)
###Markdown
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
###Code
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
###Output
_____no_output_____
###Markdown
Build Sequential API Model from DenseFeatures and TF Probability Layers Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish. **OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.
###Code
def build_sequential_model(feature_layer):
model = tf.keras.Sequential([
feature_layer,
tf.keras.layers.Dense(150, activation='relu'),
tf.keras.layers.Dense(75, activation='relu'),
tfp.layers.DenseVariational(1+1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t:tfp.distributions.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:])
)
),
])
return model
def build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mse'):
model = build_sequential_model(feature_layer)
model.compile(optimizer='rmsprop', loss=loss_metric, metrics=[loss_metric])
early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=3)
history = model.fit(train_ds, validation_data=val_ds,
callbacks=[early_stop],
epochs=epochs)
return model, history
diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=10)
###Output
Epoch 1/10
WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'dict'> input: {'patient_nbr': <tf.Tensor 'ExpandDims_30:0' shape=(None, 1) dtype=int64>, 'primary_diagnosis_code': <tf.Tensor 'ExpandDims_31:0' shape=(None, 1) dtype=string>, 'change': <tf.Tensor 'ExpandDims_21:0' shape=(None, 1) dtype=string>, 'readmitted': <tf.Tensor 'ExpandDims_33:0' shape=(None, 1) dtype=string>, 'race': <tf.Tensor 'ExpandDims_32:0' shape=(None, 1) dtype=string>, 'gender': <tf.Tensor 'ExpandDims_22:0' shape=(None, 1) dtype=string>, 'age': <tf.Tensor 'ExpandDims_20:0' shape=(None, 1) dtype=string>, 'Acarbose': <tf.Tensor 'ExpandDims:0' shape=(None, 1) dtype=string>, 'Glimepiride': <tf.Tensor 'ExpandDims_1:0' shape=(None, 1) dtype=string>, 'Glipizide': <tf.Tensor 'ExpandDims_2:0' shape=(None, 1) dtype=string>, 'Glipizide_And_Metformin_Hcl': <tf.Tensor 'ExpandDims_3:0' shape=(None, 1) dtype=string>, 'Glipizide_And_Metformin_Hydrochloride': <tf.Tensor 'ExpandDims_4:0' shape=(None, 1) dtype=string>, 'Glyburide': <tf.Tensor 'ExpandDims_5:0' shape=(None, 1) dtype=string>, 'Glyburide_And_Metformin_Hydrochloride': <tf.Tensor 'ExpandDims_7:0' shape=(None, 1) dtype=string>, 'Glyburide-metformin_Hydrochloride': <tf.Tensor 'ExpandDims_6:0' shape=(None, 1) dtype=string>, 'Human_Insulin': <tf.Tensor 'ExpandDims_8:0' shape=(None, 1) dtype=string>, 'Insulin_Human': <tf.Tensor 'ExpandDims_9:0' shape=(None, 1) dtype=string>, 'Metformin_Hcl': <tf.Tensor 'ExpandDims_10:0' shape=(None, 1) dtype=string>, 'Metformin_Hydrochloride': <tf.Tensor 'ExpandDims_11:0' shape=(None, 1) dtype=string>, 'Miglitol': <tf.Tensor 'ExpandDims_12:0' shape=(None, 1) dtype=string>, 'Nateglinide': <tf.Tensor 'ExpandDims_13:0' shape=(None, 1) dtype=string>, 'Pioglitazone': <tf.Tensor 'ExpandDims_14:0' shape=(None, 1) dtype=string>, 'Pioglitazone_Hydrochloride_And_Glimepiride': <tf.Tensor 'ExpandDims_15:0' shape=(None, 1) dtype=string>, 'Repaglinide': <tf.Tensor 'ExpandDims_16:0' shape=(None, 1) dtype=string>, 'Rosiglitazone_Maleate': <tf.Tensor 'ExpandDims_17:0' shape=(None, 1) dtype=string>, 'Tolazamide': <tf.Tensor 'ExpandDims_18:0' shape=(None, 1) dtype=string>, 'Tolbutamide': <tf.Tensor 'ExpandDims_19:0' shape=(None, 1) dtype=string>, 'number_outpatient': <tf.Tensor 'ExpandDims_29:0' shape=(None, 1) dtype=float64>, 'number_inpatient': <tf.Tensor 'ExpandDims_28:0' shape=(None, 1) dtype=float64>, 'number_emergency': <tf.Tensor 'ExpandDims_27:0' shape=(None, 1) dtype=float64>, 'num_lab_procedures': <tf.Tensor 'ExpandDims_23:0' shape=(None, 1) dtype=float64>, 'number_diagnoses': <tf.Tensor 'ExpandDims_26:0' shape=(None, 1) dtype=float64>, 'num_medications': <tf.Tensor 'ExpandDims_24:0' shape=(None, 1) dtype=float64>, 'num_procedures': <tf.Tensor 'ExpandDims_25:0' shape=(None, 1) dtype=float64>}
Consider rewriting this model with the Functional API.
WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'dict'> input: {'patient_nbr': <tf.Tensor 'ExpandDims_30:0' shape=(None, 1) dtype=int64>, 'primary_diagnosis_code': <tf.Tensor 'ExpandDims_31:0' shape=(None, 1) dtype=string>, 'change': <tf.Tensor 'ExpandDims_21:0' shape=(None, 1) dtype=string>, 'readmitted': <tf.Tensor 'ExpandDims_33:0' shape=(None, 1) dtype=string>, 'race': <tf.Tensor 'ExpandDims_32:0' shape=(None, 1) dtype=string>, 'gender': <tf.Tensor 'ExpandDims_22:0' shape=(None, 1) dtype=string>, 'age': <tf.Tensor 'ExpandDims_20:0' shape=(None, 1) dtype=string>, 'Acarbose': <tf.Tensor 'ExpandDims:0' shape=(None, 1) dtype=string>, 'Glimepiride': <tf.Tensor 'ExpandDims_1:0' shape=(None, 1) dtype=string>, 'Glipizide': <tf.Tensor 'ExpandDims_2:0' shape=(None, 1) dtype=string>, 'Glipizide_And_Metformin_Hcl': <tf.Tensor 'ExpandDims_3:0' shape=(None, 1) dtype=string>, 'Glipizide_And_Metformin_Hydrochloride': <tf.Tensor 'ExpandDims_4:0' shape=(None, 1) dtype=string>, 'Glyburide': <tf.Tensor 'ExpandDims_5:0' shape=(None, 1) dtype=string>, 'Glyburide_And_Metformin_Hydrochloride': <tf.Tensor 'ExpandDims_7:0' shape=(None, 1) dtype=string>, 'Glyburide-metformin_Hydrochloride': <tf.Tensor 'ExpandDims_6:0' shape=(None, 1) dtype=string>, 'Human_Insulin': <tf.Tensor 'ExpandDims_8:0' shape=(None, 1) dtype=string>, 'Insulin_Human': <tf.Tensor 'ExpandDims_9:0' shape=(None, 1) dtype=string>, 'Metformin_Hcl': <tf.Tensor 'ExpandDims_10:0' shape=(None, 1) dtype=string>, 'Metformin_Hydrochloride': <tf.Tensor 'ExpandDims_11:0' shape=(None, 1) dtype=string>, 'Miglitol': <tf.Tensor 'ExpandDims_12:0' shape=(None, 1) dtype=string>, 'Nateglinide': <tf.Tensor 'ExpandDims_13:0' shape=(None, 1) dtype=string>, 'Pioglitazone': <tf.Tensor 'ExpandDims_14:0' shape=(None, 1) dtype=string>, 'Pioglitazone_Hydrochloride_And_Glimepiride': <tf.Tensor 'ExpandDims_15:0' shape=(None, 1) dtype=string>, 'Repaglinide': <tf.Tensor 'ExpandDims_16:0' shape=(None, 1) dtype=string>, 'Rosiglitazone_Maleate': <tf.Tensor 'ExpandDims_17:0' shape=(None, 1) dtype=string>, 'Tolazamide': <tf.Tensor 'ExpandDims_18:0' shape=(None, 1) dtype=string>, 'Tolbutamide': <tf.Tensor 'ExpandDims_19:0' shape=(None, 1) dtype=string>, 'number_outpatient': <tf.Tensor 'ExpandDims_29:0' shape=(None, 1) dtype=float64>, 'number_inpatient': <tf.Tensor 'ExpandDims_28:0' shape=(None, 1) dtype=float64>, 'number_emergency': <tf.Tensor 'ExpandDims_27:0' shape=(None, 1) dtype=float64>, 'num_lab_procedures': <tf.Tensor 'ExpandDims_23:0' shape=(None, 1) dtype=float64>, 'number_diagnoses': <tf.Tensor 'ExpandDims_26:0' shape=(None, 1) dtype=float64>, 'num_medications': <tf.Tensor 'ExpandDims_24:0' shape=(None, 1) dtype=float64>, 'num_procedures': <tf.Tensor 'ExpandDims_25:0' shape=(None, 1) dtype=float64>}
Consider rewriting this model with the Functional API.
271/272 [============================>.] - ETA: 0s - loss: 28.5621 - mse: 28.4603
###Markdown
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
###Code
feature_list = student_categorical_col_list + student_numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhat = diabetes_model(diabetes_x_tst)
preds = diabetes_model.predict(diabetes_test_ds)
diabetes_yhat
from student_utils import get_mean_std_from_preds
m, s = get_mean_std_from_preds(diabetes_yhat)
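# NOTE (illustrative sketch only): the graded get_mean_std_from_preds lives in
# student_utils.py, which is not shown here. Because the final DistributionLambda
# layer makes the model output a tfp Normal distribution, the per-row mean and
# standard deviation can be read directly from that distribution object:
def _reference_get_mean_std_from_preds(yhat):
    m = yhat.mean()      # tensor of predicted means
    s = yhat.stddev()    # tensor of predicted standard deviations
    return m, s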
np.unique(m)
s
###Output
_____no_output_____
###Markdown
Show Prediction Output
###Code
prob_outputs = {
"pred": preds.flatten(),
"actual_value": d_test['time_in_hospital'].values,
"pred_mean": m.numpy().flatten(),
"pred_std": s.numpy().flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df.head()
prob_output_df.max()
###Output
_____no_output_____
###Markdown
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label indicating whether the patient meets the time criteria or not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based on whether the prediction meets or doesn't meet the criteria.
###Code
from student_utils import get_student_binary_prediction
student_binary_prediction = get_student_binary_prediction(prob_output_df, 'pred_mean')
student_binary_prediction
np.unique(student_binary_prediction)
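# NOTE (illustrative sketch only): the graded get_student_binary_prediction is in
# student_utils.py. One straightforward implementation thresholds the mean
# predicted stay; the threshold value below is an assumption for illustration.
def _reference_get_student_binary_prediction(df, col, threshold=5):
    # 1 if the mean predicted hospitalization time meets the threshold, else 0
    return (df[col] >= threshold).astype(int).to_numpy()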
###Output
_____no_output_____
###Markdown
Add Binary Prediction to Test Dataframe Using the student_binary_prediction output, a numpy array of binary labels, we can add the predictions to the test dataframe to better visualize the results and to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label for the predictions (called the 'score' field) and the actual value (called 'label_value').
###Code
np.unique(d_test['time_in_hospital'])
d_test['time_in_hospital']
def add_pred_to_test(test_df, pred_np, demo_col_list):
# for c in demo_col_list:
# test_df[c] = test_df[c].astype(str)
test_df['score'] = pred_np
test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >=6 else 0)
return test_df
pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()
###Output
_____no_output_____
###Markdown
Model Evaluation Metrics **Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score (weighted), and class precision and recall scores. For the report please be sure to include the following three parts:
- With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model.
- What are some areas of improvement for future iterations?

Student response: There are many parameters that can influence the performance of our algorithm. The learning rate can severely influence performance: a small learning rate will make the algorithm converge extremely slowly and it may take a very long time to make progress, while a high value can cause the model to bounce around when it is close to a local/global minimum and it may even fail to converge. This is why tuning the learning rate is an important task; we can even start with a relatively high learning rate and decrease it over the course of training. The number of epochs can also affect the performance of the algorithm: too many epochs and the model will start to overfit the training data, while too few epochs will result in a model that is not yet fully trained. The loss function should be representative of the kind of problem that we want to solve. Since here we want to see how close our predicted value is to the actual value, MSE seems like a valid choice; other problems may need other loss functions (for example, a two-class classification problem may use binary cross-entropy). Normalization can also affect the convergence of our algorithm: having the values in the range 0-1 can help the model converge faster.

Precision is the fraction of correct positives among the total predicted positives. Recall is the fraction of correct positives among the total positives in the dataset. There may be points of the positive class that are closer to the negative class and vice versa. In such cases, shifting the decision boundary can increase either precision or recall, but not both: increasing one leads to a decrease in the other. The precision-recall tradeoff therefore occurs when we improve one of the two (precision or recall) while keeping the model the same.
###Code
# AUC, F1, precision and recall
# Summary
from sklearn.metrics import brier_score_loss, accuracy_score, f1_score, classification_report, roc_auc_score, roc_curve
print("F1 score is {}".format(f1_score(pred_test_df['label_value'], pred_test_df['score'], average='weighted')))
print("ROC AUC score is {}".format(roc_auc_score(pred_test_df['label_value'], pred_test_df['score'])))
print(classification_report(pred_test_df['label_value'], pred_test_df['score']))
#There is always a trade-off in precision and recall since when one is improved the other will become worse.
#Depending on each application we decide which one is more important and we try to improve towards a specific metric improvement
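# Optional sketch: the per-class precision/recall quoted in the report can also be
# pulled out explicitly (these mirror the classification_report output above).
from sklearn.metrics import precision_score, recall_score
print("Precision (class 1): {:.2f}".format(precision_score(pred_test_df['label_value'], pred_test_df['score'])))
print("Recall (class 1): {:.2f}".format(recall_score(pred_test_df['label_value'], pred_test_df['score'])))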
###Output
F1 score is 0.7292689000205925
ROC AUC score is 0.7504926566276259
precision recall f1-score support
0 0.91 0.67 0.77 7920
1 0.48 0.83 0.61 2934
accuracy 0.71 10854
macro avg 0.70 0.75 0.69 10854
weighted avg 0.80 0.71 0.73 10854
###Markdown
7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
###Code
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness
ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
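# Optional sketch (a common pattern from the Aequitas documentation): view the
# absolute group metrics as a table before plotting them.
print(clean_xtab[['attribute_name', 'attribute_value'] + absolute_metrics].round(2))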
###Output
C:\Users\soyrl\anaconda3\lib\site-packages\pandas\core\indexing.py:1745: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
isetter(ilocs[0], value)
###Markdown
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
###Code
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
original_df=ae_df,
ref_groups_dict={'race':'Caucasian', 'gender':'Male'
},
alpha=0.05,
check_significance=False)
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
###Output
get_disparity_predefined_group()
###Markdown
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.
###Code
# Plot two metrics
# Is there significant bias in your model for either race or gender?
tpr = aqp.plot_group_metric(clean_xtab, 'tpr', min_group_size=0.05)
fpr = aqp.plot_group_metric(clean_xtab, 'fpr', min_group_size=0.05)
tnr = aqp.plot_group_metric(clean_xtab, 'tnr', min_group_size=0.05)
###Output
_____no_output_____
###Markdown
Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.
###Code
# Reference group fairness plot
fpr_disparity = aqp.plot_disparity(bdf, group_metric='fpr_disparity',
attribute_name='race')
fpr_disparity = aqp.plot_disparity(bdf, group_metric='fpr_disparity',
attribute_name='gender')
fpr_disparity = aqp.plot_fairness_group(fdf, group_metric='fpr')
fpr_disparity_fairness = aqp.plot_fairness_disparity(fdf, group_metric='fpr', attribute_name='gender')
###Output
_____no_output_____
###Markdown
Overview
1. Project Instructions & Prerequisites
2. Learning Objectives
3. Data Preparation
4. Create Categorical Features with TF Feature Columns
5. Create Continuous/Numerical Features with TF Feature Columns
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers
7. Evaluating Potential Model Biases with Aequitas Toolkit

1. Project Instructions & Prerequisites Project Instructions **Context**: EHR data is becoming a key source of real-world evidence (RWE) for the pharmaceutical industry and regulators to [make decisions on clinical trials](https://www.fda.gov/news-events/speeches-fda-officials/breaking-down-barriers-between-clinical-trials-and-clinical-care-incorporating-real-world-evidence). You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a very unique and sensitive drug that requires administering the drug over at least 5-7 days of time in the hospital with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which type of patients the company should focus their efforts testing this drug on. Target patients are people that are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. **Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset (denormalized at the line level augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial. This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there are a limited number of publicly available datasets and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine (https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes, which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema** The dataset reference information can be found at https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Project Submission When submitting this project, make sure to run all the cells before saving the notebook.
Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.." -> "html". Include the "utils.py" and "student_utils.py" files in your submission. The student_utils.py file should be where you put most of the code that you write, and the summary and text explanations should be written inline in the notebook. Once you download these files, compress them into one zip file for submission.

Prerequisites
- Intermediate level knowledge of Python
- Basic knowledge of probability and statistics
- Basic knowledge of machine learning concepts
- Installation of Tensorflow 2.0 and other dependencies (conda environment.yml or virtualenv requirements.txt file provided)

Environment Setup For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/README.md.

2. Learning Objectives By the end of the project, you will be able to
- Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels (longitudinal)
- Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis
- Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings
- Create derived features (bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features
- Use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions
- Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework

3. Data Preparation
###Code
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import tensorflow_data_validation as tfdv
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import aequitas as ae
from sklearn.model_selection import train_test_split
import functools
# Put all of the helper functions in utils
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
pd.set_option('display.max_columns', 500)
# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload
#OPEN ISSUE ON MAC OSX for TF model training
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
# pip install apache-beam[interactive]
###Output
_____no_output_____
###Markdown
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/
###Code
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path, na_values = ['?', '?|?', 'Unknown/Invalid'])
df.head()
###Output
_____no_output_____
###Markdown
Determine Level of Dataset (Line or Encounter) **Question 1**: Based off of analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked.
###Code
# print(df.encounter_id.value_counts(dropna=False))
# print(df.patient_nbr.value_counts(dropna=False))
print(len(df) == df.encounter_id.nunique())
###Output
False
###Markdown
Student Response: I can see that there are multiple instances of both encounter_id and patient_nbr, which means that this dataset is at the line level. In addition, the number of rows does not match the number of unique encounter_ids. Maybe we should also aggregate on diagnosis code? Analyze Dataset **Question 2**: Utilizing the library of your choice (recommend Pandas and Seaborn or matplotlib though), perform exploratory data analysis on the dataset. In particular be sure to address the following questions:
- a. Field(s) with a high amount of missing/zero values
- b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian (normal) distribution shape?
- c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature)
- d. Please describe the demographic distributions in the dataset for the age and gender fields.
###Code
# Explore fields with missing values
print(df.info())
# I am interested to know why weight is object dtype - it is because it is categorical rather than continuous
df.weight.value_counts(dropna=False)
df.describe()
# Based off the frequency histogram for each numerical field,
# which numerical field(s) has/have a Gaussian(normal) distribution shape?
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
df_num = df.select_dtypes(include=numerics)
for col in df_num:
plt.hist(x=col, data=df_num)
plt.title(col)
plt.show()
## Which field(s) have high cardinality and why (HINT: ndc_code is one feature)
objects = ['object']
df_obj = df.select_dtypes(include=objects)
for col in df_obj:
print(col, df_obj[col].nunique())
# Please describe the demographic distributions in the dataset for the age and gender fields
from matplotlib.pyplot import figure
figure(figsize=(8, 4), dpi=80)
plt.hist(x='age', data=df_obj)
plt.title('Age distribution in bins of 10 years')
sns.countplot(x='gender', data=df_obj[df_obj['gender'].notnull()])
plt.title('gender count')
# Explore demographic distributions for unique patients
df_unique = df.drop_duplicates('patient_nbr')
figure(figsize=(8, 4), dpi=80)
plt.hist(x='age', data=df_unique)
plt.title('Age distribution in bins of 10 years, for unique patients')
sns.countplot(x='gender', data=df_unique[df_unique['gender'].notnull()])
plt.title('gender count for unique patients')
###Output
_____no_output_____
###Markdown
**OPTIONAL**: Use the Tensorflow Data Validation and Analysis library to complete this analysis. - The Tensorflow Data Validation and Analysis library (https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. - Note that there are some bugs that are still being resolved with Chrome v80 and we have moved away from using this for the project. **Student Response**: Missing data: From looking at the head of the dataframe I can see that some missing values are filled with '?' or '?|?', so I have filled these with NaN while reading in the dataframe from csv. I noticed that 'max_glu_serum' and 'A1Cresult' have 'None' values, but without domain knowledge I am not sure if this indicates missing values or not, so I have not changed those values. Gaussian distribution: num_lab_procedures is the only normally distributed column out of all the numerical features. Cardinality: 'other_diagnosis_codes', 'primary_diagnosis_code', 'ndc_code', 'medical_specialty', 'payer_code' - all of these features have cardinality that is likely too high, because it will drastically increase the number of features. Even 10 unique values might be too high for some datasets/modelling. Demographic distributions: The age range of the dataset is slightly skewed towards the older ages (50+) and there is a slight bias towards females in the sample (i.e. more females). This is true for the whole dataset (line level) and when only including each unique patient number.
###Code
# ######NOTE: The visualization will only display in Chrome browser. ########
# full_data_stats = tfdv.generate_statistics_from_csv(data_location='./data/final_project_dataset.csv')
# tfdv.visualize_statistics(full_data_stats)
# # Note this is not necessary and maybe just remove?
###Output
_____no_output_____
###Markdown
Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site(https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
###Code
# NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)
ndc_code_df.head()
print("We have", ndc_code_df.NDC_Code.nunique(), "unique NDC codes, but only", ndc_code_df['Non-proprietary Name'].nunique(), "non-proprietary names")
from student_utils import reduce_dimension_ndc
reduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)
reduce_dim_df.head()
# Number of unique values should be less for the new output field
assert reduce_dim_df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
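# NOTE (illustrative sketch only): the graded reduce_dimension_ndc is in
# student_utils.py. One way to collapse the many NDC codes is to map each
# ndc_code to its non-proprietary (generic) name from the lookup table.
def _reference_reduce_dimension_ndc(df, ndc_df):
    mapping = dict(zip(ndc_df['NDC_Code'], ndc_df['Non-proprietary Name']))
    out = df.copy()
    out['generic_drug_name'] = out['ndc_code'].map(mapping)
    return out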
###Output
_____no_output_____
###Markdown
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another. NOTE: These instructions are slightly wrong, as they ask for the first encounter. It becomes apparent later on that only the first line of the first encounter should be kept, leaving one line for each patient.
###Code
from student_utils import select_first_encounter
first_encounter_df = select_first_encounter(reduce_dim_df)
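# NOTE (illustrative sketch only): the graded select_first_encounter is in
# student_utils.py. Treating the smallest encounter_id per patient as the
# earliest encounter, one possible implementation keeps a single line per patient:
def _reference_select_first_encounter(df):
    return df.sort_values('encounter_id').drop_duplicates(subset='patient_nbr', keep='first')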
first_encounter_df.sort_values(['patient_nbr', 'encounter_id']).head()
# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))
# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))
original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()
# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
###Output
Number of unique patients:56133
Number of unique encounters:56133
Tests passed!!
###Markdown
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset' function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those as input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course.
###Code
exclusion_list = ['generic_drug_name']
grouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]
agg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')
first_encounter_df.sort_values(['patient_nbr', 'encounter_id'])
assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()
###Output
_____no_output_____
###Markdown
Prepare Fields and Cast Dataset Feature Selection **Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice. Student response: The weight categories are unbalanced and so we may choose to transform the variables before including them in a model. In addition, weight is a measure which is likely to be correlated with gender and also requires some normalisation because it is relative to body size e.g. a medium weight for a woman could mean she is obese, but the same weight in a man could be healthy. BMI for example may be a better metric which we could calculate if we had height and weight. However, for this model I will exclude weight.The payer code feature is also very imbalanced between groups and so I will exclude it for now.
###Code
sns.countplot(x='payer_code', data=agg_drug_df)
plt.title('payer code count for unique patients')
sns.countplot(x='weight', data=agg_drug_df)
plt.title('weight category count for unique patients')
agg_drug_df.max_glu_serum.value_counts()
agg_drug_df.columns
# '''
# Please update the list to include the features you think are appropriate for the model
# and the field that we will be using to train the model. There are three required demographic features for the model
# and I have inserted a list with them already in the categorical list.
# These will be required for later steps when analyzing data splits and model biases.
# '''
# required_demo_col_list = ['race', 'gender', 'age']
# student_categorical_col_list = ['readmitted', 'admission_type_id', 'discharge_disposition_id',
# 'admission_source_id', 'A1Cresult', 'change',
# 'primary_diagnosis_code', 'other_diagnosis_codes'] + required_demo_col_list + ndc_col_list
# student_numerical_col_list = ['num_lab_procedures', 'number_diagnoses', 'num_medications', 'num_procedures' ]
# PREDICTOR_FIELD = "time_in_hospital"
'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required demographic features for the model
and I have inserted a list with them already in the categorical list.
These will be required for later steps when analyzing data splits and model biases.
'''
required_demo_col_list = ['race', 'gender', 'age']
student_categorical_col_list = ['readmitted',
'primary_diagnosis_code', 'other_diagnosis_codes'] + required_demo_col_list + ndc_col_list
student_numerical_col_list = ['num_lab_procedures', 'number_diagnoses', 'num_medications', 'num_procedures' ]
PREDICTOR_FIELD = "time_in_hospital"
def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):
selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
return agg_drug_df[selected_col_list]
selected_features_df = select_model_features(agg_drug_df, student_categorical_col_list, student_numerical_col_list,
PREDICTOR_FIELD)
student_categorical_col_list
###Output
_____no_output_____
###Markdown
Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute, but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it? RECOMMENDATION: For continuous variables I suggest using a mean imputation approach rather than imputing zero. Using zero is going to skew the data and potentially affect the results. Mean imputation will not have such a severe impact, since it will assume an average value and should not badly affect the regression line. For categorical data I would suggest imputing the most frequent category (mode), since a median is not defined for unordered categories. In these particular continuous feature columns I do not detect missing values.
###Code
from student_utils import count_missing
count_missing(selected_features_df,student_numerical_col_list, student_categorical_col_list, PREDICTOR_FIELD)
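# NOTE (illustrative sketch only): the graded count_missing is in student_utils.py.
# A simple version just reports null counts for the selected feature columns and the label:
def _reference_count_missing(df, numerical_cols, categorical_cols, predictor):
    return df[numerical_cols + categorical_cols + [predictor]].isnull().sum()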
# Not necessary
processed_df = preprocess_df(selected_features_df, student_categorical_col_list,
student_numerical_col_list, PREDICTOR_FIELD, categorical_impute_value='nan', numerical_impute_value=0)
###Output
/home/workspace/starter_code/utils.py:31: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[predictor] = df[predictor].astype(float)
/home/workspace/starter_code/utils.py:33: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[c] = cast_df(df, c, d_type=str)
/home/workspace/starter_code/utils.py:35: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[numerical_column] = impute_df(df, numerical_column, numerical_impute_value)
###Markdown
Split Dataset into Train, Validation, and Test Partitions **Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidentally leak across partitions. Please complete the function below to split the input dataset into three partitions (train, validation, test) with the following requirements:
- Approximately 60%/20%/20% train/validation/test split
- Randomly sample different patients into each data partition
- **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage.
- Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset
- Total number of rows in original dataset = sum of rows across all three dataset partitions
###Code
from student_utils import patient_dataset_splitter
d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')
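# NOTE (illustrative sketch only): the graded patient_dataset_splitter is in
# student_utils.py. The key idea is to sample unique patient ids (not rows) into
# roughly 60/20/20 partitions so no patient appears in more than one split.
def _reference_patient_dataset_splitter(df, key='patient_nbr'):
    patients = df[key].drop_duplicates().sample(frac=1.0, random_state=42).tolist()
    n_train, n_val = int(0.6 * len(patients)), int(0.8 * len(patients))
    train = df[df[key].isin(patients[:n_train])]
    val = df[df[key].isin(patients[n_train:n_val])]
    test = df[df[key].isin(patients[n_val:])]
    return train, val, test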
assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")
assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == agg_drug_df['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
###Output
Test passed for number of unique patients being equal!
###Markdown
Demographic Representation Analysis of Split After the split, we should check to see the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions?
###Code
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
###Output
time_in_hospital
1.0 11
2.0 22
3.0 30
4.0 21
5.0 12
6.0 6
7.0 8
8.0 4
9.0 1
10.0 2
11.0 2
12.0 2
13.0 2
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
###Code
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')
# Training partition
show_group_stats_viz(d_train, 'gender')
# Test partition
show_group_stats_viz(d_test, 'gender')
###Output
gender
Female 58
Male 65
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
There is a slight difference in gender distribution between the train and test samples. Convert Dataset Splits to TF Dataset We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method and for larger datasets, the 'make_csv_dataset' method is recommended - https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
###Code
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)
# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]
def demo(feature_column, example_batch):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch))
###Output
_____no_output_____
###Markdown
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field that are from the **training** dataset. Below we have provided a function that you can use that only requires providing the pandas train dataset partition and the list of the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
###Code
vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)
vocab_file_list
###Output
_____no_output_____
###Markdown
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
###Code
# vocab_dir='./diabetes_vocab/'
# output_tf_list = []
# for c in student_categorical_col_list:
# vocab_file_path = os.path.join(vocab_dir, c + "_vocab.txt")
# '''
# Which TF function allows you to read from a text file and create a categorical feature
# You can use a pattern like this below...
# tf_categorical_feature_column = tf.feature_column.......
# '''
# tf_categorical_feature_column = tf.feature_column.categorical_column_with_vocabulary_file(key= c, vocabulary_file = vocab_file_path)
# output_tf_list.append(tf_categorical_feature_column)
# return output_tf_list
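# The demo output below shows an EmbeddingColumn, so the student_utils version of
# create_tf_categorical_feature_cols presumably also wraps each vocabulary column
# in an embedding, e.g. (sketch):
#   tf.feature_column.embedding_column(tf_categorical_feature_column, dimension=10)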
from student_utils import create_tf_categorical_feature_cols
tf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)
test_cat_var1 = tf_cat_col_list[0]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
###Output
Example categorical field:
EmbeddingColumn(categorical_column=VocabularyFileCategoricalColumn(key='readmitted', vocabulary_file='./diabetes_vocab/readmitted_vocab.txt', vocabulary_size=4, num_oov_buckets=0, dtype=tf.string, default_value=-1), dimension=10, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x7f7ce41b7b50>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True)
tf.Tensor(
[[-0.48055136 0.5130067 0.25701338 ... -0.46413308 0.01794795
-0.15923622]
[-0.48055136 0.5130067 0.25701338 ... -0.46413308 0.01794795
-0.15923622]
[-0.18950401 -0.01453512 0.09378203 ... 0.42164963 -0.00364839
0.38343868]
...
[-0.18950401 -0.01453512 0.09378203 ... 0.42164963 -0.00364839
0.38343868]
[-0.48055136 0.5130067 0.25701338 ... -0.46413308 0.01794795
-0.15923622]
[-0.48055136 0.5130067 0.25701338 ... -0.46413308 0.01794795
-0.15923622]], shape=(128, 10), dtype=float32)
###Markdown
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API(https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help as well as the 'calculate_stats_from_train_data' function.
###Code
from student_utils import create_tf_numeric_feature
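# NOTE (illustrative sketch only): the graded create_tf_numeric_feature is in
# student_utils.py. Mirroring the z-score normalizer visible in the demo output
# below, one possible implementation is:
import functools

def _normalize_numeric_with_zscore(col, mean, std):
    return (col - mean) / std

def _reference_create_tf_numeric_feature(col, mean, std, default_value=0):
    normalizer = functools.partial(_normalize_numeric_with_zscore, mean=mean, std=std)
    return tf.feature_column.numeric_column(
        key=col, default_value=default_value, normalizer_fn=normalizer, dtype=tf.float64)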
###Output
_____no_output_____
###Markdown
For simplicity the create_tf_numerical_feature_cols function below uses the same normalizer function across all features(z-score normalization) but if you have time feel free to analyze and adapt the normalizer based off the statistical distributions. You may find this as a good resource in determining which transformation fits best for the data https://developers.google.com/machine-learning/data-prep/transform/normalization.
###Code
def calculate_stats_from_train_data(df, col):
mean = df[col].describe()['mean']
std = df[col].describe()['std']
return mean, std
def create_tf_numerical_feature_cols(numerical_col_list, train_df):
tf_numeric_col_list = []
for c in numerical_col_list:
mean, std = calculate_stats_from_train_data(train_df, c)
tf_numeric_feature = create_tf_numeric_feature(c, mean, std)
tf_numeric_col_list.append(tf_numeric_feature)
return tf_numeric_col_list
tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)
test_cont_var1 = tf_cont_col_list[0]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
###Output
Example continuous field:
NumericColumn(key='num_lab_procedures', shape=(1,), default_value=(0,), dtype=tf.float64, normalizer_fn=functools.partial(<function normalize_numeric_with_zscore at 0x7f7cd8e925f0>, mean=52.74590163934426, std=19.746154045681354))
tf.Tensor(
[[ 1.3684211 ]
[ 0.8947368 ]
[ 1.7894737 ]
[-1.1578947 ]
[-0.36842105]
[ 0.84210527]
[-2.5263157 ]
[-1. ]
[-0.36842105]
[-0.36842105]
[ 1. ]
[ 1.0526316 ]
[ 1.4210526 ]
[ 0.94736844]
[ 1.2105263 ]
[ 1.4210526 ]
[-0.05263158]
[ 0.8947368 ]
[-0.47368422]
[ 0.36842105]
[ 1. ]
[ 1.4736842 ]
[ 0.7368421 ]
[-1.1578947 ]
[-2.6842105 ]
[ 0.8947368 ]
[-1.2105263 ]
[-0.47368422]
[ 0.15789473]
[-1.0526316 ]
[ 1.2631578 ]
[-0.7894737 ]
[-0.84210527]
[-2.5263157 ]
[-0.8947368 ]
[ 0.47368422]
[ 1.4736842 ]
[ 0.31578946]
[-1.1578947 ]
[ 0.21052632]
[-0.21052632]
[-0.10526316]
[-2.4736843 ]
[ 1.0526316 ]
[-0.21052632]
[-0.2631579 ]
[ 0.31578946]
[-1.2631578 ]
[ 0.10526316]
[ 2.1578948 ]
[-0.21052632]
[ 1. ]
[ 0.8947368 ]
[ 1. ]
[ 0.6315789 ]
[ 0.94736844]
[-0.6315789 ]
[-1.0526316 ]
[ 0.15789473]
[ 0.05263158]
[-0.15789473]
[ 0.94736844]
[ 0.05263158]
[-0.21052632]
[-0.5263158 ]
[-0.42105263]
[-0.68421054]
[-1. ]
[-0.31578946]
[-0.6315789 ]
[ 1.3684211 ]
[-2.6842105 ]
[-0.21052632]
[-2.3157895 ]
[ 0.36842105]
[ 1.2631578 ]
[-0.94736844]
[ 0.8947368 ]
[ 0.84210527]
[ 0.68421054]
[-0.6315789 ]
[-0.15789473]
[-0.2631579 ]
[ 0.05263158]
[ 1.5263158 ]
[ 1.0526316 ]
[-0.6315789 ]
[-0.84210527]
[ 1.0526316 ]
[ 0.94736844]
[-2.1052632 ]
[-0.68421054]
[ 1.5789474 ]
[-1.4210526 ]
[-1.2105263 ]
[-1.2631578 ]
[-0.7368421 ]
[ 1.6315789 ]
[-0.94736844]
[-0.84210527]
[ 1.1052631 ]
[ 0.84210527]
[ 0.21052632]
[ 0.7894737 ]
[ 1.0526316 ]
[ 1.0526316 ]
[-1. ]
[ 1.3157895 ]
[-0.36842105]
[-0.10526316]
[-0.7894737 ]
[ 0.05263158]
[ 1.3157895 ]
[-0.8947368 ]
[-1.8947369 ]
[ 0.94736844]
[ 0.21052632]
[ 0.68421054]
[ 0.15789473]
[ 0.8947368 ]
[-0.36842105]
[ 1.8421053 ]
[ 1.5789474 ]
[-0.36842105]
[ 1. ]
[ 1.9473684 ]
[-2.631579 ]
[-0.8947368 ]], shape=(128, 1), dtype=float32)
###Markdown
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
###Code
tf_cat_col_list
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
###Output
_____no_output_____
###Markdown
Build Sequential API Model from DenseFeatures and TF Probability Layers Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish. **OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.
###Code
def build_sequential_model(feature_layer):
model = tf.keras.Sequential([
feature_layer,
tf.keras.layers.Dense(150, activation='relu'),
tf.keras.layers.Dense(75, activation='relu'),
tfp.layers.DenseVariational(1+1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t:tfp.distributions.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:])
)
),
])
return model
opt = tf.keras.optimizers.RMSprop(learning_rate=1e-5)
def build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mse'):
model = build_sequential_model(feature_layer)
model.compile(optimizer=opt, loss=loss_metric, metrics=[loss_metric])
early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=500)
history = model.fit(train_ds, validation_data=val_ds,
callbacks=[early_stop],
epochs=epochs)
return model, history
diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=500)
history.history['loss']
def plot_loss(loss,val_loss):
plt.figure()
plt.plot(loss)
plt.plot(val_loss)
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper right')
plt.show()
plot_loss(history.history['loss'], history.history['val_loss'])
###Output
_____no_output_____
###Markdown
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
###Code
feature_list = student_categorical_col_list + student_numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhat = diabetes_model(diabetes_x_tst)
preds = diabetes_model.predict(diabetes_test_ds)
from student_utils import get_mean_std_from_preds
m, s = get_mean_std_from_preds(diabetes_yhat)
###Output
_____no_output_____
###Markdown
Show Prediction Output
###Code
prob_outputs = {
"pred": preds.flatten(),
"actual_value": d_test['time_in_hospital'].values,
"pred_mean": m.numpy().flatten(),
"pred_std": s.numpy().flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df.head()
###Output
_____no_output_____
###Markdown
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label indicating whether the patient meets the time criteria or not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based on whether the prediction meets or doesn't meet the criteria.
###Code
prob_output_df
from student_utils import get_student_binary_prediction
student_binary_prediction = get_student_binary_prediction(prob_output_df, 'pred_mean')
###Output
_____no_output_____
###Markdown
Add Binary Prediction to Test Dataframe Using the student_binary_prediction output, a numpy array of binary labels, we can add the predictions to the test dataframe to better visualize the results and to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label for the predictions (called the 'score' field) and the actual value (called 'label_value').
###Code
def add_pred_to_test(test_df, pred_np, demo_col_list):
for c in demo_col_list:
test_df[c] = test_df[c].astype(str)
test_df['score'] = pred_np
test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >=5 else 0)
return test_df
pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()
###Output
_____no_output_____
###Markdown
Model Evaluation Metrics **Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score (weighted), and class precision and recall scores. For the report please be sure to include the following three parts:
- With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model.
- What are some areas of improvement for future iterations?
###Code
# AUC, F1, precision and recall
# Summary
from sklearn.metrics import accuracy_score, f1_score, classification_report, roc_auc_score, confusion_matrix
from sklearn.metrics import precision_recall_curve, plot_precision_recall_curve, roc_curve
pred_test_df.head(2)
y_pred = pred_test_df['score']
y_true = pred_test_df['label_value']
accuracy_score(y_true, y_pred)
print(classification_report(y_true, y_pred))
roc_auc_score(y_true, y_pred)
###Output
_____no_output_____
###Markdown
Summary of results

This model used various hospital data to predict the expected days of hospitalization time. I have then converted this to a binary outcome to decide whether a patient could be included in or excluded from a clinical trial for a diabetes drug.
- The ROC AUC score is 0.34
- The weighted F1 score is 0.43
- For those identified as appropriate for the clinical trial, precision and recall were 0
- For those identified as not appropriate for the clinical trial, precision was 0.59 and recall 0.68

Precision-Recall trade off

Recall - this metric tells us "out of all patients who could be included in the trial, how many has this model identified". Precision - this metric tells us "out of all patients who have been identified by the model as being appropriate for inclusion in the trial, how many truly are appropriate for inclusion". The tradeoff between these two metrics is important because a single model is unlikely to be able to achieve all of our goals. We need to decide whether we find it more acceptable to mis-identify patients that could have been included in the trial as not appropriate for the trial (i.e. a False Negative), or whether we find it more acceptable to identify and potentially include patients in the trial whose symptoms, hospitalisation time or other metrics did not truly necessitate receiving the drug at this early clinical trial stage (i.e. a False Positive). In the former, we would prioritise a higher precision value; in the latter, we would prioritise a higher recall value. In practice, neither outcome is ideal, as we really want to identify the right patients for the trial. The alternative is that we do not identify the correct patients, leading to patients in need not being able to access the promising new therapeutic drug, as well as the data from the clinical trial not being optimal since the trial did not collect data from an optimised group of patients. In order to strike a balance between precision and recall, we can use the F1-score, which is the harmonic mean of precision and recall. The harmonic mean is always closer to the lower number (precision or recall); this means that if we have an F1-score of 0.3, we know that this is not a well balanced model because one of our metrics (precision or recall) is low. Conversely, an F1-score of 0.8 means that we have good precision and recall scores. F1 is preferable to a simple arithmetic mean of precision and recall, because an arithmetic mean can hide the fact that one of our scores is low (e.g. precision 0.4 and recall 0.9 average to 0.65, which seems quite good but masks the fact that precision is low).

Areas for improvement

7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
###Code
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness
ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
###Output
/opt/conda/lib/python3.7/site-packages/aequitas/group.py:143: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df['score'] = df['score'].astype(float)
/opt/conda/lib/python3.7/site-packages/aequitas/group.py:30: FutureWarning: The pandas.np module is deprecated and will be removed from pandas in a future version. Import numpy directly instead
divide = lambda x, y: x / y if y != 0 else pd.np.nan
###Markdown
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
###Code
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
original_df=ae_df,
ref_groups_dict={'race':'Caucasian', 'gender':'Male'
},
alpha=0.05,
check_significance=False)
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
###Output
get_disparity_predefined_group()
###Markdown
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.
###Code
# Plot two metrics
# Is there significant bias in your model for either race or gender?
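# Sketch (not the graded answer): following the plotting calls used earlier in
# this file, group-level rates important for patient selection can be plotted as:
tpr = aqp.plot_group_metric(clean_xtab, 'tpr', min_group_size=0.05)
fpr = aqp.plot_group_metric(clean_xtab, 'fpr', min_group_size=0.05)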
###Output
_____no_output_____
###Markdown
Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.
###Code
# Reference group fairness plot
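# Sketch (not the graded answer): the earlier notebook in this file visualizes
# disparity and fairness relative to the reference group like this:
fpr_disparity = aqp.plot_disparity(bdf, group_metric='fpr_disparity', attribute_name='race')
fpr_fairness = aqp.plot_fairness_group(fdf, group_metric='fpr')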
###Output
_____no_output_____
###Markdown
Overview
1. Project Instructions & Prerequisites
2. Learning Objectives
3. Data Preparation
4. Create Categorical Features with TF Feature Columns
5. Create Continuous/Numerical Features with TF Feature Columns
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers
7. Evaluating Potential Model Biases with Aequitas Toolkit

1. Project Instructions & Prerequisites Project Instructions **Context**: You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a very unique and sensitive drug that requires administering the drug over at least 5-7 days of time in the hospital with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which type of patients the company should focus their efforts testing this drug on. Target patients are people that are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. **Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset (denormalized at the line level augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial. This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there are a limited number of publicly available datasets and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine (https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes, which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema** The dataset reference information can be found at https://github.com/udacity/nd320-c1-emr-data-starter/tree/master/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Project Submission When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.." -> "html". Include the "utils.py" and "student_utils.py" files in your submission. The student_utils.py file should be where you put most of the code that you write, and the summary and text explanations should be written inline in the notebook.
Once you download these files, compress them into one zip file for submission.

Prerequisites
- Intermediate level knowledge of Python
- Basic knowledge of probability and statistics
- Basic knowledge of machine learning concepts
- Installation of Tensorflow 2.0 and other dependencies (conda environment.yml or virtualenv requirements.txt file provided)

Environment Setup For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/README.md.

2. Learning Objectives By the end of the project, you will be able to
- Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels (longitudinal)
- Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis with the Tensorflow Data Analysis and Validation library
- Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings
- Create derived features (bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features
- Use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions
- Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework

3. Data Preparation
###Code
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import pandas as pd
import aequitas as ae
import tensorflow_data_validation as tfdv
# Put all of the helper functions in utils
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
pd.set_option('display.max_columns', 500)
# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload
#OPEN ISSUE ON MAC OSX for TF model training
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
###Output
_____no_output_____
###Markdown
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/tree/master/data_schema_references/
###Code
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path)
###Output
_____no_output_____
###Markdown
Determine Level of Dataset (Line or Encounter) **Question 1**: Based off of analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked. Student Response:?? Analyze Dataset **Question 2**: The Tensorflow Data Validation and Analysis library(https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. Below we will use it to inspect the full project dataset.Based off of your analysis of the visualization, please provide answers to the following questions: - a. Field(s) with high amount of missing/zero values - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian(normal) distribution shape? - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature) - d. Please describe the demographic distributions in the dataset for the age and gender fields. **Student Response**: ??
###Code
######NOTE: The visualization will only display in Chrome browser. ########
full_data_stats = tfdv.generate_statistics_from_csv(data_location='./data/final_project_dataset.csv')
tfdv.visualize_statistics(full_data_stats)
###Output
_____no_output_____
###Markdown
Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/tree/master/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site(https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
###Code
#NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)
from student_utils import reduce_dimension_ndc
reduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)
# Number of unique values should be less for the new output field
assert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
###Output
_____no_output_____
###Markdown
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
###Code
from student_utils import select_first_encounter
first_encounter_df = select_first_encounter(reduce_dim_df)
# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))
# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))
original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()
# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
###Output
_____no_output_____
###Markdown
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset' function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those as input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course.
###Code
exclusion_list = ['generic_drug_name']
grouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]
agg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')
assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()
###Output
_____no_output_____
###Markdown
Prepare Fields and Cast Dataset Feature Selection **Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice. Student response: ??
###Code
'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required demographic features for the model
and I have inserted a list with them already in the categorical list.
These will be required for later steps when analyzing data splits and model biases.
'''
# required_demo_col_list = ['race', 'gender', 'age']
# student_categorical_col_list = [ "feature_A", "feature_B", .... ] + required_demo_col_list + ndc_col_list
# student_numerical_col_list = [ "feature_A", "feature_B", .... ]
# PREDICTOR_FIELD = ''
def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):
selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
    return df[selected_col_list]
selected_features_df = select_model_features(agg_drug_df, student_categorical_col_list, student_numerical_col_list,
PREDICTOR_FIELD)
###Output
_____no_output_____
###Markdown
Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it?
###Code
processed_df = preprocess_df(selected_features_df, student_categorical_col_list,
student_numerical_col_list, PREDICTOR_FIELD, categorical_impute_value='nan', numerical_impute_value=0)
###Output
_____no_output_____
###Markdown
Split Dataset into Train, Validation, and Test Partitions **Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidentally leak across partitions. Please complete the function below to split the input dataset into three partitions(train, validation, test) with the following requirements.- Approximately 60%/20%/20% train/validation/test split- Randomly sample different patients into each data partition- **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage.- Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset- Total number of rows in original dataset = sum of rows across all three dataset partitions
###Code
from student_utils import patient_dataset_splitter
d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')
assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")
assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == agg_drug_df['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
###Output
_____no_output_____
###Markdown
Demographic Representation Analysis of Split After the split, we should check to see the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions?
###Code
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
###Output
_____no_output_____
###Markdown
Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
###Code
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')
# Training partition
show_group_stats_viz(d_train, 'gender')
# Test partition
show_group_stats_viz(d_test, 'gender')
###Output
_____no_output_____
###Markdown
Convert Dataset Splits to TF Dataset We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method and for larger datasets, the 'make_csv_dataset' method is recommended -https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
###Code
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)
# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]
def demo(feature_column, example_batch):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch))
###Output
_____no_output_____
###Markdown
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field that are from the **training** dataset. Below we have provided a function that you can use that only requires providing the pandas train dataset partition and the list of the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
###Code
vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)
###Output
_____no_output_____
###Markdown
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
###Code
from student_utils import create_tf_categorical_feature_cols
tf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)
test_cat_var1 = tf_cat_col_list[0]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
###Output
_____no_output_____
###Markdown
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API(https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help as well as the 'calculate_stats_from_train_data' function.
###Code
from student_utils import create_tf_numeric_feature
###Output
_____no_output_____
###Markdown
For simplicity the create_tf_numerical_feature_cols function below uses the same normalizer function across all features(z-score normalization) but if you have time feel free to analyze and adapt the normalizer based off the statistical distributions. You may find this a good resource for determining which transformation fits the data best: https://developers.google.com/machine-learning/data-prep/transform/normalization.
###Code
def calculate_stats_from_train_data(df, col):
mean = df[col].describe()['mean']
std = df[col].describe()['std']
return mean, std
def create_tf_numerical_feature_cols(numerical_col_list, train_df):
tf_numeric_col_list = []
for c in numerical_col_list:
mean, std = calculate_stats_from_train_data(train_df, c)
tf_numeric_feature = create_tf_numeric_feature(c, mean, std)
tf_numeric_col_list.append(tf_numeric_feature)
return tf_numeric_col_list
tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)
test_cont_var1 = tf_cont_col_list[0]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
###Output
_____no_output_____
###Markdown
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
###Code
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
###Output
_____no_output_____
###Markdown
Build Sequential API Model from DenseFeatures and TF Probability Layers Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish. **OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.
###Code
def build_sequential_model(feature_layer):
model = tf.keras.Sequential([
feature_layer,
tf.keras.layers.Dense(150, activation='relu'),
tf.keras.layers.Dense(75, activation='relu'),
tfp.layers.DenseVariational(1+1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t:tfp.distributions.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:])
)
),
])
return model
def build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mse'):
model = build_sequential_model(feature_layer)
model.compile(optimizer='rmsprop', loss=loss_metric, metrics=[loss_metric])
early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=3)
history = model.fit(train_ds, validation_data=val_ds,
callbacks=[early_stop],
epochs=epochs)
return model, history
diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=10)
###Output
_____no_output_____
###Markdown
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
###Code
feature_list = student_categorical_col_list + student_numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhat = diabetes_model(diabetes_x_tst)
preds = diabetes_model.predict(diabetes_test_ds)
from student_utils import get_mean_std_from_preds
m, s = get_mean_std_from_preds(diabetes_yhat)
###Output
_____no_output_____
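###Markdown
Because the final DistributionLambda layer returns a TF Probability Normal distribution, the per-row mean and standard deviation can be read directly from the returned distribution object. Below is a minimal sketch of get_mean_std_from_preds; it is illustrative rather than the graded student_utils implementation.
###Code
# Sketch: diabetes_yhat above is a tfp.distributions.Normal, so mean()/stddev() return per-row tensors
def get_mean_std_from_preds_sketch(yhat_distribution):
    m = yhat_distribution.mean()
    s = yhat_distribution.stddev()
    return m, s
###Output
_____no_output_____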
###Markdown
Show Prediction Output
###Code
prob_outputs = {
"pred": preds.flatten(),
"actual_value": d_test['time_in_hospital'].values,
"pred_mean": m.numpy().flatten(),
"pred_std": s.numpy().flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df.head()
###Output
_____no_output_____
###Markdown
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label for whether the patient meets the time criteria or does not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based on whether the prediction meets the criteria or not.
###Code
from student_utils import get_student_binary_prediction
student_binary_prediction = get_student_binary_prediction(m)
###Output
_____no_output_____
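###Markdown
One straightforward way to do this conversion is to threshold the mean prediction at 5 days, matching the 5-7 day requirement in the project brief and the label_value rule used in the next step. The sketch below is illustrative; the threshold choice should be justified in your own submission.
###Code
# Sketch: threshold the mean predicted hospitalization time (assumed 5-day cutoff)
def get_student_binary_prediction_sketch(mean_pred_tensor, threshold=5):
    mean_np = mean_pred_tensor.numpy().flatten()
    return (mean_np >= threshold).astype(int)
###Output
_____no_output_____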
###Markdown
Add Binary Prediction to Test Dataframe Using the student_binary_prediction output, which is a numpy array of binary labels, we can add it to a dataframe to better visualize the results and to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label (called the 'score' field) and the actual value (called 'label_value').
###Code
def add_pred_to_test(test_df, pred_np, demo_col_list):
for c in demo_col_list:
test_df[c] = test_df[c].astype(str)
test_df['score'] = pred_np
test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >=5 else 0)
return test_df
pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()
###Output
_____no_output_____
###Markdown
Model Evaluation Metrics **Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score(weighted), and class precision and recall scores. For the report please be sure to include the following parts:- With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model.- What are some areas of improvement for future iterations?
###Code
# AUC, F1, precision and recall
# Summary
###Output
_____no_output_____
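###Markdown
A minimal sketch of the requested metrics using scikit-learn is shown below; it assumes the pred_test_df dataframe built in the previous step and is only a starting point for the written report.
###Code
# Sketch: common classification metrics on the binary score vs. label_value columns
from sklearn.metrics import roc_auc_score, f1_score, classification_report

y_true = pred_test_df['label_value'].values
y_pred = pred_test_df['score'].values
print("ROC AUC: {:.3f}".format(roc_auc_score(y_true, y_pred)))
print("F1 (weighted): {:.3f}".format(f1_score(y_true, y_pred, average='weighted')))
print(classification_report(y_true, y_pred))  # per-class precision and recall
###Output
_____no_output_____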
###Markdown
7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
###Code
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness
ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
###Output
_____no_output_____
###Markdown
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
###Code
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
original_df=ae_df,
ref_groups_dict={'race':'Caucasian', 'gender':'Male'
},
alpha=0.05,
check_significance=False)
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
###Output
_____no_output_____
###Markdown
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.
###Code
# Plot two metrics
# Is there significant bias in your model for either race or gender?
###Output
_____no_output_____
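###Markdown
One possible way to visualize two group-level metrics with the Aequitas Plot API is sketched below. The choice of false negative rate and true positive rate is illustrative (other metrics such as precision or FDR may be equally relevant for patient selection), and any conclusion about bias should come from your own reading of the resulting plots.
###Code
# Sketch: plot one absolute group metric at a time from the crosstab built earlier
fnr_plot = aqp.plot_group_metric(clean_xtab, 'fnr')
tpr_plot = aqp.plot_group_metric(clean_xtab, 'tpr')
###Output
_____no_output_____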
###Markdown
Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.
###Code
# Reference group fairness plot
###Output
_____no_output_____
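###Markdown
A sketch of a fairness summary relative to the predefined Caucasian/Male reference group is below, using the group-value fairness dataframe computed above; the ncols and metrics arguments are illustrative choices.
###Code
# Sketch: fairness determinations per group, relative to the reference group chosen earlier
fairness_plot = aqp.plot_fairness_group_all(fdf, ncols=5, metrics="all")
###Output
_____no_output_____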
###Markdown
Overview 1. Project Instructions & Prerequisites2. Learning Objectives3. Data Preparation4. Create Categorical Features with TF Feature Columns5. Create Continuous/Numerical Features with TF Feature Columns6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers7. Evaluating Potential Model Biases with Aequitas Toolkit 1. Project Instructions & Prerequisites Project Instructions **Context**: EHR data is becoming a key source of real-world evidence (RWE) for the pharmaceutical industry and regulators to [make decisions on clinical trials](https://www.fda.gov/news-events/speeches-fda-officials/breaking-down-barriers-between-clinical-trials-and-clinical-care-incorporating-real-world-evidence). You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a very unique and sensitive drug that requires administering the drug over at least 5-7 days of time in the hospital with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which type of patients the company should focus their efforts on when testing this drug. Target patients are people who are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. **Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset (denormalized at the line level, with augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial. This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas on which your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there are a limited number of publicly available datasets and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine (https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes, which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema**The dataset reference information can be found at https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Project Submission When submitting this project, make sure to run all the cells before saving the notebook. 
Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "utils.py" and "student_utils.py" files in your submission. The student_utils.py should be where you put most of your code that you write and the summary and text explanations should be written inline in the notebook. Once you download these files, compress them into one zip file for submission. Prerequisites - Intermediate level knowledge of Python- Basic knowledge of probability and statistics- Basic knowledge of machine learning concepts- Installation of Tensorflow 2.0 and other dependencies(conda environment.yml or virtualenv requirements.txt file provided) Environment Setup For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/README.md. 2. Learning Objectives By the end of the project, you will be able to - Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels(longitudinal) - Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis. - Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings - Create derived features(bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features - SWBAT use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions - Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework 3. Data Preparation
###Code
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import tensorflow_data_validation as tfdv
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import aequitas as ae
from sklearn.metrics import classification_report, roc_curve, auc, roc_auc_score, \
average_precision_score, recall_score, precision_recall_curve, \
precision_score, accuracy_score, f1_score, r2_score, mean_squared_error
# Put all of the helper functions in utils
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
pd.set_option('display.max_columns', 500)
# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload
#OPEN ISSUE ON MAC OSX for TF model training
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
###Output
_____no_output_____
###Markdown
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/
###Code
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path)
###Output
_____no_output_____
###Markdown
Determine Level of Dataset (Line or Encounter)
###Code
display(df.info())
display(df.describe())
df.head()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 143424 entries, 0 to 143423
Data columns (total 26 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 encounter_id 143424 non-null int64
1 patient_nbr 143424 non-null int64
2 race 143424 non-null object
3 gender 143424 non-null object
4 age 143424 non-null object
5 weight 143424 non-null object
6 admission_type_id 143424 non-null int64
7 discharge_disposition_id 143424 non-null int64
8 admission_source_id 143424 non-null int64
9 time_in_hospital 143424 non-null int64
10 payer_code 143424 non-null object
11 medical_specialty 143424 non-null object
12 primary_diagnosis_code 143424 non-null object
13 other_diagnosis_codes 143424 non-null object
14 number_outpatient 143424 non-null int64
15 number_inpatient 143424 non-null int64
16 number_emergency 143424 non-null int64
17 num_lab_procedures 143424 non-null int64
18 number_diagnoses 143424 non-null int64
19 num_medications 143424 non-null int64
20 num_procedures 143424 non-null int64
21 ndc_code 119962 non-null object
22 max_glu_serum 143424 non-null object
23 A1Cresult 143424 non-null object
24 change 143424 non-null object
25 readmitted 143424 non-null object
dtypes: int64(13), object(13)
memory usage: 28.5+ MB
###Markdown
**Question 1**: Based off of analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked.
###Code
# check the level of the dataset
print(f'There are {len(df)} lines and \n{len(df["encounter_id"].unique())} unique encounters in the dataset.')
if len(df) == len(df['encounter_id'].unique()):
    print('It is an encounter level dataset.')
elif len(df) > len(df['encounter_id'].unique()):
print('It is a line level dataset.')
else: print('It could be a longitudinal dataset.')
###Output
There are 143424 lines and
101766 unique encounters in the dataset.
It is a line level dataset.
###Markdown
**Response Q1:** This is a **line level** dataset because the number of records is greater than the number of unique encounters. The dataset may represent all of the things that might happen during a medical visit or encounter. Analyze Dataset **Question 2**: Utilizing the library of your choice (recommend Pandas and Seaborn or matplotlib though), perform exploratory data analysis on the dataset. In particular be sure to address the following questions: - a. Field(s) with high amount of missing/zero values - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian(normal) distribution shape? - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature) - d. Please describe the demographic distributions in the dataset for the age and gender fields. **OPTIONAL**: Use the Tensorflow Data Validation and Analysis library to complete. - The Tensorflow Data Validation and Analysis library(https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. - Note that there are some bugs that are still being resolved with Chrome v80 and we have moved away from using this for the project.
###Code
#a. Field(s) with high amount of missing/zero values
df_new = df.replace('?', np.nan).replace('None', np.nan)
missing_info = df_new.isnull().mean().sort_values(ascending=False)
print('The ratio of missing data for each column if there is any:\n')
print(missing_info[missing_info > 0.].to_string(header=None))
print('\n\nColumns without missing data:\n')
col_no_miss_ls = list(missing_info[missing_info == 0.].keys())
for x in col_no_miss_ls: print(x)
###Output
The ratio of missing data for each column if there is any:
weight 0.970005
max_glu_serum 0.951089
A1Cresult 0.820295
medical_specialty 0.484319
payer_code 0.377831
ndc_code 0.163585
race 0.023071
primary_diagnosis_code 0.000230
Columns without missing data:
patient_nbr
gender
age
admission_type_id
discharge_disposition_id
admission_source_id
time_in_hospital
readmitted
change
other_diagnosis_codes
number_outpatient
number_inpatient
number_emergency
num_lab_procedures
number_diagnoses
num_medications
num_procedures
encounter_id
###Markdown
**Response Q2a**: The columns with missing data are: weight (97%), max_glu_serum (95%), A1Cresult (82%), medical_specialty (48%), payer_code (38%), ndc_code (16%), race (2.3%), primary_diagnosis_code (0.02%). It is worth removing columns with a missing-value ratio above 0.5; we will then decide what to do with the rest.
###Code
# remove columns with more than 50% missing values
df_new = df_new.drop(columns = ['weight', 'max_glu_serum', 'A1Cresult'])
display(df_new.info())
display(df_new.describe())
# b. Based off the frequency histogram for each numerical field,
# which numerical field(s) has/have a Gaussian(normal) distribution shape?
num_col_ls = list(df_new.select_dtypes(['int64']).columns)
for col_name in num_col_ls:
sns.distplot(df_new[col_name], kde=False)
plt.title(col_name)
plt.show()
###Output
_____no_output_____
###Markdown
**Response Q2b**: The following numerical fields have roughly Gaussian (normal) distribution shapes:- encounter_id (skewed right)- num_lab_procedures (slightly skewed left)- num_medications (slightly skewed right)
###Code
# c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature)
cat_col_ls = list(df_new.select_dtypes(['object']).columns)
cat_col_ls.extend(['admission_type_id','discharge_disposition_id', 'admission_source_id'])
df_new[cat_col_ls] = df_new[cat_col_ls].astype('str')
cardinality_df = pd.DataFrame({'columns': cat_col_ls, 'cardinality': df_new[cat_col_ls].nunique() } ).sort_values('cardinality', ascending=False)
cardinality_df
###Output
_____no_output_____
###Markdown
**Response Q2c**: The cardinality of each categorical field is shown in the table above. The fields with the highest cardinality are:- other_diagnosis_codes (19374)- primary_diagnosis_code (717)- ndc_code (252) These fields are high cardinality because diagnosis and drug code sets contain hundreds to thousands of distinct codes, and many different codes can map to the same or a similar underlying condition or drug.
###Code
# d. Please describe the demographic distributions in the dataset for the age and gender fields.
plt.figure(figsize=(7, 5))
sns.countplot(x='age', data=df_new)
plt.title('Age distribution')
plt.show()
plt.figure(figsize=(5, 5))
sns.countplot(x='gender', data=df_new)
plt.title('Gender distribution')
plt.show()
plt.figure(figsize=(7, 5))
sns.countplot(x='age', hue="gender", data=df_new)
plt.title('Age distribution by gender')
plt.show()
###Output
_____no_output_____
###Markdown
**Response Q2d**: The demographic distributions in the dataset for the age and gender fields are shown above. A quick analysis reveals that the dataset is mostly made up of male and female patients between 40 and 90 years old. The ratio of females to males between ages 0 and 70 is about the same, while the number of female patients between 70 and 100 is higher than the number of male patients in the same age range.
###Code
######NOTE: The visualization will only display in Chrome browser. ########
#full_data_stats = tfdv.generate_statistics_from_csv(data_location='./data/final_project_dataset.csv')
#tfdv.visualize_statistics(full_data_stats)
###Output
_____no_output_____
###Markdown
Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site(https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
###Code
#NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)
ndc_code_df.head()
df_new.head()
from student_utils import reduce_dimension_ndc
%autoreload
reduce_dim_df = reduce_dimension_ndc(df_new, ndc_code_df)
# Number of unique values should be less for the new output field
assert df_new['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
reduce_dim_df.nunique()
###Output
_____no_output_____
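###Markdown
For reference, below is a minimal sketch of how reduce_dimension_ndc could work: map each ndc_code to its drug name using the lookup table. The lookup column names used here ('NDC_Code' and 'Non-proprietary Name') are assumptions (check the actual header of the lookup CSV), and this is not necessarily the graded student_utils implementation.
###Code
# Sketch only (assumed lookup column names); the real implementation lives in student_utils.py
def reduce_dimension_ndc_sketch(df, ndc_df):
    lookup = ndc_df[['NDC_Code', 'Non-proprietary Name']].drop_duplicates()
    ndc_map = dict(zip(lookup['NDC_Code'], lookup['Non-proprietary Name']))
    out_df = df.copy()
    out_df['generic_drug_name'] = out_df['ndc_code'].map(ndc_map)
    return out_df
###Output
_____no_output_____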
###Markdown
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
###Code
from student_utils import select_first_encounter
%autoreload
first_encounter_df = select_first_encounter(reduce_dim_df)
first_encounter_df.info()
# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))
# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))
original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()
# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
###Output
Number of unique patients:56133
Number of unique encounters:56133
Tests passed!!
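###Markdown
A possible sketch of select_first_encounter is shown below, under the stated assumption that the smallest encounter_id for a patient is their earliest visit; every line belonging to that first encounter is kept. This is illustrative, not necessarily the graded implementation.
###Code
# Sketch: keep every line of each patient's earliest encounter (smallest encounter_id)
def select_first_encounter_sketch(df):
    sorted_df = df.sort_values('encounter_id')
    first_encounter_ids = sorted_df.groupby('patient_nbr')['encounter_id'].head(1).values
    return sorted_df[sorted_df['encounter_id'].isin(first_encounter_ids)].reset_index(drop=True)
###Output
_____no_output_____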
###Markdown
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset' function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those as input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course.
###Code
exclusion_list = ['generic_drug_name']
grouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]
agg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')
assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()
agg_drug_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 56133 entries, 0 to 56132
Data columns (total 57 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 patient_nbr 56133 non-null int64
1 encounter_id 56133 non-null int64
2 race 56133 non-null object
3 gender 56133 non-null object
4 age 56133 non-null object
5 admission_type_id 56133 non-null object
6 discharge_disposition_id 56133 non-null object
7 admission_source_id 56133 non-null object
8 time_in_hospital 56133 non-null int64
9 payer_code 56133 non-null object
10 medical_specialty 56133 non-null object
11 primary_diagnosis_code 56133 non-null object
12 other_diagnosis_codes 56133 non-null object
13 number_outpatient 56133 non-null int64
14 number_inpatient 56133 non-null int64
15 number_emergency 56133 non-null int64
16 num_lab_procedures 56133 non-null int64
17 number_diagnoses 56133 non-null int64
18 num_medications 56133 non-null int64
19 num_procedures 56133 non-null int64
20 ndc_code 56133 non-null object
21 change 56133 non-null object
22 readmitted 56133 non-null object
23 generic_drug_name_array 56133 non-null object
24 Acarbose 56133 non-null uint8
25 Afrezza 56133 non-null uint8
26 Amaryl 56133 non-null uint8
27 Avandia_2MG 56133 non-null uint8
28 Avandia_4MG 56133 non-null uint8
29 Glimepiride 56133 non-null uint8
30 Glipizide 56133 non-null uint8
31 Glipizide_And_Metformin_Hydrochloride 56133 non-null uint8
32 Glucophage 56133 non-null uint8
33 Glucophage_XR 56133 non-null uint8
34 Glucotrol 56133 non-null uint8
35 Glucotrol_XL 56133 non-null uint8
36 Glyburide 56133 non-null uint8
37 Glyburide_And_Metformin_Hydrochloride 56133 non-null uint8
38 Glyburide-metformin_Hydrochloride 56133 non-null uint8
39 Glynase 56133 non-null uint8
40 Glyset 56133 non-null uint8
41 Humulin_R 56133 non-null uint8
42 Metformin_Hcl 56133 non-null uint8
43 Metformin_Hydrochloride 56133 non-null uint8
44 Metformin_Hydrochloride_Extended_Release 56133 non-null uint8
45 Miglitol 56133 non-null uint8
46 Nateglinide 56133 non-null uint8
47 Novolin_R 56133 non-null uint8
48 Pioglitazone 56133 non-null uint8
49 Pioglitazone_Hydrochloride_And_Glimepiride 56133 non-null uint8
50 Prandin 56133 non-null uint8
51 Repaglinide 56133 non-null uint8
52 Riomet 56133 non-null uint8
53 Riomet_Er 56133 non-null uint8
54 Starlix 56133 non-null uint8
55 Tolazamide 56133 non-null uint8
56 Tolbutamide 56133 non-null uint8
dtypes: int64(10), object(14), uint8(33)
memory usage: 12.0+ MB
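###Markdown
Conceptually, the provided aggregate_dataset helper one-hot encodes the generic drug name on each line and then collapses the lines to one row per encounter. A simplified sketch of that idea is below; the utils.py version is the source of truth and also builds the generic_drug_name_array column.
###Code
# Conceptual sketch of the aggregation step; the provided utils.py helper is what is actually used
def aggregate_to_encounter_sketch(line_df, grouping_fields, drug_field='generic_drug_name'):
    drug_dummies = pd.get_dummies(line_df[drug_field])                 # one column per unique drug
    combined = pd.concat([line_df[grouping_fields], drug_dummies], axis=1)
    agg_df = combined.groupby(grouping_fields, as_index=False).max()   # one row per encounter
    return agg_df, list(drug_dummies.columns)
###Output
_____no_output_____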
###Markdown
Prepare Fields and Cast Dataset Feature Selection **Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice.
###Code
# Let's check agg_drug_df for NaN
nan_df = (agg_drug_df == 'nan').mean().sort_values(ascending=False)*100
print(nan_df[nan_df > 0.].to_string(header=None))
nan_df[nan_df > 0.].plot(kind='bar', )
plt.title('Missing values percentage')
plt.show()
missed_primary_code = agg_drug_df[agg_drug_df['primary_diagnosis_code'] == 'nan']
missed_primary_code
missed_gender = agg_drug_df[(agg_drug_df['gender'] != 'Male') & (agg_drug_df['gender'] != 'Female')]
missed_gender
###Output
_____no_output_____
###Markdown
**Response Q5**: Previously we removed columns with more than 50% missing data. Here is the list of these features:- weight- max_glu_serum- A1Cresult In the aggregated dataset, medical_specialty (48% missing values) and payer_code (41% missing values) need to be removed too because of their high percentage of missing values. There are 8 patients with a missing primary_diagnosis_code, which will be removed from the final dataset. There are 2 patients with Unknown/Invalid gender, which will also be removed from the final dataset.
###Code
agg_drug_df_final = agg_drug_df.drop(columns = ['medical_specialty', 'payer_code'])
agg_drug_df_final = agg_drug_df_final.drop(index=missed_primary_code.index)
agg_drug_df_final = agg_drug_df_final.drop(index=missed_gender.index)
display(agg_drug_df_final.info())
display(agg_drug_df_final.describe())
agg_drug_df_final.head()
'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required demographic features for the model
and I have inserted a list with them already in the categorical list.
These will be required for later steps when analyzing data splits and model biases.
'''
required_col_list = ['race', 'gender', 'age']
categorical_col_list = ['primary_diagnosis_code'] + required_col_list + ndc_col_list
numerical_col_list = ['num_lab_procedures', 'number_diagnoses', 'num_medications', 'num_procedures']
PREDICTOR_FIELD = 'time_in_hospital'
def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):
selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
return df[selected_col_list]
selected_features_df = select_model_features(agg_drug_df_final, categorical_col_list, numerical_col_list,
PREDICTOR_FIELD)
###Output
_____no_output_____
###Markdown
Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it?
###Code
processed_df = preprocess_df(selected_features_df, categorical_col_list,
numerical_col_list, PREDICTOR_FIELD, categorical_impute_value='nan', numerical_impute_value=0)
###Output
/home/workspace/starter_code/utils.py:29: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[predictor] = df[predictor].astype(float)
/home/workspace/starter_code/utils.py:31: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[c] = cast_df(df, c, d_type=str)
/home/workspace/starter_code/utils.py:33: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[numerical_column] = impute_df(df, numerical_column, numerical_impute_value)
###Markdown
Split Dataset into Train, Validation, and Test Partitions **Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidentally leak across partitions. Please complete the function below to split the input dataset into three partitions(train, validation, test) with the following requirements.- Approximately 60%/20%/20% train/validation/test split- Randomly sample different patients into each data partition- **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage.- Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset- Total number of rows in original dataset = sum of rows across all three dataset partitions
###Code
from student_utils import patient_dataset_splitter
%autoreload
d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')
assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")
assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == agg_drug_df_final['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
###Output
Test passed for number of unique patients being equal!
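###Markdown
A minimal sketch of a patient-level 60/20/20 splitter is shown below: shuffle the unique patient ids and slice, so that no patient appears in more than one partition. The random seed is illustrative and this may differ from the graded patient_dataset_splitter.
###Code
# Sketch: split by unique patient id to avoid leakage across partitions
def patient_dataset_splitter_sketch(df, patient_key='patient_nbr'):
    rng = np.random.RandomState(42)          # illustrative seed
    patient_ids = df[patient_key].unique()
    rng.shuffle(patient_ids)
    n = len(patient_ids)
    train_ids = patient_ids[:int(0.6 * n)]
    val_ids = patient_ids[int(0.6 * n):int(0.8 * n)]
    test_ids = patient_ids[int(0.8 * n):]
    d_train = df[df[patient_key].isin(train_ids)].reset_index(drop=True)
    d_val = df[df[patient_key].isin(val_ids)].reset_index(drop=True)
    d_test = df[df[patient_key].isin(test_ids)].reset_index(drop=True)
    return d_train, d_val, d_test
###Output
_____no_output_____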
###Markdown
Demographic Representation Analysis of Split After the split, we should check to see the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions?
###Code
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
###Output
time_in_hospital
1.0 1517
2.0 1852
3.0 1955
4.0 1531
5.0 1170
6.0 818
7.0 681
8.0 494
9.0 310
10.0 266
11.0 213
12.0 176
13.0 130
14.0 112
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
###Code
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')
# Training partition
show_group_stats_viz(d_train, 'gender')
# Test partition
show_group_stats_viz(d_test, 'gender')
###Output
gender
Female 5885
Male 5340
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Convert Dataset Splits to TF Dataset We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method and for larger datasets, the 'make_csv_dataset' method is recommended -https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
###Code
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)
# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]
def demo(feature_column, example_batch):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch))
###Output
_____no_output_____
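###Markdown
For reference, the typical pattern behind df_to_dataset looks like the sketch below; the provided utils.py version may differ in details such as shuffling or prefetching.
###Code
# Sketch of the usual pandas-to-tf.data conversion pattern
def df_to_dataset_sketch(df, predictor, batch_size=32, shuffle=True):
    df = df.copy()
    labels = df.pop(predictor)
    ds = tf.data.Dataset.from_tensor_slices((dict(df), labels))
    if shuffle:
        ds = ds.shuffle(buffer_size=len(df))
    return ds.batch(batch_size)
###Output
_____no_output_____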
###Markdown
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field that are from the **training** dataset. Below we have provided a function that you can use that only requires providing the pandas train dataset partition and the list of the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
###Code
vocab_file_list = build_vocab_files(d_train, categorical_col_list)
vocab_file_list
###Output
_____no_output_____
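###Markdown
The idea behind the provided build_vocab_files helper is simply to write the unique training-set values of each categorical field to a text file; a sketch is below (the utils.py version may also handle OOV/padding tokens differently).
###Code
# Sketch: one vocab text file per categorical column, built from the training partition only
def build_vocab_files_sketch(train_df, categorical_col_list, vocab_dir='./diabetes_vocab/'):
    os.makedirs(vocab_dir, exist_ok=True)
    vocab_files = []
    for c in categorical_col_list:
        path = os.path.join(vocab_dir, c + "_vocab.txt")
        pd.Series(train_df[c].unique()).to_csv(path, index=False, header=False)
        vocab_files.append(path)
    return vocab_files
###Output
_____no_output_____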
###Markdown
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
###Code
from student_utils import create_tf_categorical_feature_cols
%autoreload
tf_cat_col_list = create_tf_categorical_feature_cols(categorical_col_list)
test_cat_var1 = tf_cat_col_list[0]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
###Output
Example categorical field:
IndicatorColumn(categorical_column=VocabularyFileCategoricalColumn(key='primary_diagnosis_code', vocabulary_file='./diabetes_vocab/primary_diagnosis_code_vocab.txt', vocabulary_size=604, num_oov_buckets=1, dtype=tf.string, default_value=-1))
WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4267: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4322: VocabularyFileCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
tf.Tensor(
[[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]], shape=(128, 605), dtype=float32)
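###Markdown
Consistent with the IndicatorColumn printed above (a vocabulary file per field and num_oov_buckets=1), create_tf_categorical_feature_cols could be sketched as below; the vocab file naming convention is taken from the printed column and is otherwise an assumption.
###Code
# Sketch: one-hot (indicator) column backed by the vocab file written for each field
def create_tf_categorical_feature_cols_sketch(categorical_col_list, vocab_dir='./diabetes_vocab/'):
    output_tf_list = []
    for c in categorical_col_list:
        vocab_file_path = os.path.join(vocab_dir, c + "_vocab.txt")
        cat_col = tf.feature_column.categorical_column_with_vocabulary_file(
            key=c, vocabulary_file=vocab_file_path, num_oov_buckets=1)
        output_tf_list.append(tf.feature_column.indicator_column(cat_col))
    return output_tf_list
###Output
_____no_output_____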
###Markdown
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API(https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help as well as the 'calculate_stats_from_train_data' function.
###Code
from student_utils import create_tf_numeric_feature
%autoreload
###Output
_____no_output_____
###Markdown
For simplicity the create_tf_numerical_feature_cols function below uses the same normalizer function across all features(z-score normalization) but if you have time feel free to analyze and adapt the normalizer based off the statistical distributions. You may find this a good resource for determining which transformation fits the data best: https://developers.google.com/machine-learning/data-prep/transform/normalization.
###Code
def calculate_stats_from_train_data(df, col):
mean = df[col].describe()['mean']
std = df[col].describe()['std']
return mean, std
def create_tf_numerical_feature_cols(numerical_col_list, train_df):
tf_numeric_col_list = []
for c in numerical_col_list:
mean, std = calculate_stats_from_train_data(train_df, c)
tf_numeric_feature = create_tf_numeric_feature(c, mean, std)
tf_numeric_col_list.append(tf_numeric_feature)
return tf_numeric_col_list
tf_cont_col_list = create_tf_numerical_feature_cols(numerical_col_list, d_train)
test_cont_var1 = tf_cont_col_list[0]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
###Output
Example continuous field:
NumericColumn(key='num_lab_procedures', shape=(1,), default_value=(0,), dtype=tf.float64, normalizer_fn=functools.partial(<function normalize_numeric_with_zscore at 0x7fa3b3330cb0>, mean=43.59650164820479, std=20.05243236103567))
tf.Tensor(
[[ 1.1 ]
[ 1.2 ]
[-0.2 ]
[-0.6 ]
[-0.6 ]
[ 0.05]
[ 1.4 ]
[ 0.85]
[-0.45]
[-0.45]
[ 1. ]
[-0.85]
[ 0.3 ]
[ 1.95]
[ 0.15]
[-2.1 ]
[ 0.15]
[ 0.75]
[ 1.15]
[-0.35]
[-0.15]
[ 0.9 ]
[ 1.75]
[ 1.3 ]
[ 1.1 ]
[ 0.7 ]
[-2.1 ]
[ 1.35]
[ 1.2 ]
[ 0.2 ]
[ 0.95]
[-0.2 ]
[ 0.3 ]
[ 0.25]
[ 1.45]
[ 2. ]
[ 1.4 ]
[-0.9 ]
[ 0. ]
[-0.35]
[-1.15]
[-2.05]
[ 0.35]
[-0.55]
[ 0.5 ]
[-1.55]
[-2.1 ]
[ 0.95]
[-0.45]
[ 0.75]
[ 0.25]
[-0.55]
[ 0.5 ]
[-0.15]
[ 1.3 ]
[ 0.1 ]
[ 0.55]
[-0.45]
[ 1.45]
[-0.35]
[ 0.55]
[ 0.65]
[-1.3 ]
[-0.75]
[-1.95]
[ 1.4 ]
[-1.1 ]
[-0.15]
[-0.15]
[-0.35]
[-0.9 ]
[-0.6 ]
[ 1.35]
[ 0.55]
[ 2.1 ]
[-1.2 ]
[ 0.6 ]
[ 0.75]
[ 0.25]
[-1.45]
[-1.7 ]
[ 0.1 ]
[ 0.15]
[ 0.15]
[-0.4 ]
[-0.2 ]
[-0.5 ]
[-2.05]
[-0.6 ]
[ 1.1 ]
[-1.7 ]
[-0.35]
[ 0. ]
[-0.7 ]
[ 0.05]
[ 0.15]
[-1.5 ]
[ 0.65]
[ 0.6 ]
[-1.15]
[-0.85]
[ 1.3 ]
[ 0. ]
[-1.1 ]
[-1.1 ]
[-1.6 ]
[ 0.25]
[ 1.55]
[-1.35]
[-0.65]
[ 0.85]
[ 2.1 ]
[-0.65]
[ 0.05]
[-1. ]
[ 1.15]
[-0.35]
[ 1.45]
[-1.6 ]
[-1.15]
[ 0.05]
[-0.05]
[-1.15]
[-1.7 ]
[-2.05]
[-1.7 ]
[-0.4 ]
[-1.1 ]], shape=(128, 1), dtype=float32)
###Markdown
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
###Code
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
###Output
_____no_output_____
###Markdown
Build Sequential API Model from DenseFeatures and TF Probability Layers Below is a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc.
###Code
def build_sequential_model(feature_layer):
model = tf.keras.Sequential([
feature_layer,
tf.keras.layers.Dense(256, activation='relu'),
tf.keras.layers.Dropout(.2),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(.2),
tf.keras.layers.Dense(64, activation='relu'),
tfp.layers.DenseVariational(1+1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t:tfp.distributions.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:])
)
),
])
return model
def build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mse'):
model = build_sequential_model(feature_layer)
model.compile(optimizer='rmsprop', loss=loss_metric, metrics=[loss_metric, 'mae'])
early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=5)
reduce_l_rate = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss',
verbose=1,
factor=0.5,
patience=2,
min_lr=1e-7)
history = model.fit(train_ds, validation_data=val_ds,
callbacks=[early_stop, reduce_l_rate],
epochs=epochs)
return model, history
diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=100)
###Output
Train for 264 steps, validate for 88 steps
Epoch 1/100
264/264 [==============================] - 14s 54ms/step - loss: 29.2853 - mse: 29.1603 - mae: 4.2545 - val_loss: 22.7783 - val_mse: 22.5085 - val_mae: 3.6382
Epoch 2/100
264/264 [==============================] - 9s 35ms/step - loss: 20.8779 - mse: 20.3832 - mae: 3.4029 - val_loss: 16.4091 - val_mse: 15.4874 - val_mae: 2.9666
Epoch 3/100
264/264 [==============================] - 10s 37ms/step - loss: 15.6352 - mse: 14.9052 - mae: 2.8913 - val_loss: 12.7456 - val_mse: 11.7415 - val_mae: 2.5919
Epoch 4/100
264/264 [==============================] - 9s 34ms/step - loss: 13.2143 - mse: 12.2767 - mae: 2.6106 - val_loss: 12.9018 - val_mse: 12.0318 - val_mae: 2.5497
Epoch 5/100
264/264 [==============================] - 9s 35ms/step - loss: 12.4908 - mse: 11.5807 - mae: 2.5313 - val_loss: 11.0703 - val_mse: 10.1492 - val_mae: 2.3281
Epoch 6/100
264/264 [==============================] - 9s 35ms/step - loss: 11.9415 - mse: 11.0857 - mae: 2.4639 - val_loss: 10.7413 - val_mse: 9.8993 - val_mae: 2.3323
Epoch 7/100
264/264 [==============================] - 10s 37ms/step - loss: 10.9491 - mse: 9.9918 - mae: 2.3376 - val_loss: 11.3369 - val_mse: 10.4561 - val_mae: 2.3475
Epoch 8/100
264/264 [==============================] - 9s 34ms/step - loss: 9.7680 - mse: 8.8885 - mae: 2.2141 - val_loss: 10.6532 - val_mse: 10.0156 - val_mae: 2.4314
Epoch 9/100
264/264 [==============================] - 9s 35ms/step - loss: 10.2636 - mse: 9.5999 - mae: 2.2998 - val_loss: 8.9009 - val_mse: 7.9687 - val_mae: 2.0756
Epoch 10/100
264/264 [==============================] - 9s 34ms/step - loss: 10.1430 - mse: 9.3591 - mae: 2.2616 - val_loss: 9.1357 - val_mse: 8.2664 - val_mae: 2.1078
Epoch 11/100
264/264 [==============================] - 9s 34ms/step - loss: 9.8149 - mse: 8.9186 - mae: 2.2224 - val_loss: 8.5864 - val_mse: 7.9312 - val_mae: 2.1057
Epoch 12/100
264/264 [==============================] - 10s 36ms/step - loss: 9.1984 - mse: 8.3491 - mae: 2.1567 - val_loss: 9.4680 - val_mse: 8.6690 - val_mae: 2.1458
Epoch 13/100
263/264 [============================>.] - ETA: 0s - loss: 9.0027 - mse: 8.2986 - mae: 2.1492
Epoch 00013: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.
264/264 [==============================] - 9s 35ms/step - loss: 8.9859 - mse: 8.2977 - mae: 2.1491 - val_loss: 9.6088 - val_mse: 8.7083 - val_mae: 2.1250
Epoch 14/100
264/264 [==============================] - 9s 34ms/step - loss: 8.5635 - mse: 7.8021 - mae: 2.0771 - val_loss: 8.5613 - val_mse: 7.8397 - val_mae: 2.0589
Epoch 15/100
264/264 [==============================] - 9s 35ms/step - loss: 8.6609 - mse: 7.9139 - mae: 2.0815 - val_loss: 9.0106 - val_mse: 8.2486 - val_mae: 2.0825
Epoch 16/100
263/264 [============================>.] - ETA: 0s - loss: 8.4095 - mse: 7.6628 - mae: 2.0635
Epoch 00016: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.
264/264 [==============================] - 9s 34ms/step - loss: 8.3973 - mse: 7.6622 - mae: 2.0635 - val_loss: 9.3414 - val_mse: 8.5680 - val_mae: 2.1210
Epoch 17/100
264/264 [==============================] - 9s 35ms/step - loss: 8.4756 - mse: 7.7317 - mae: 2.0634 - val_loss: 9.3617 - val_mse: 8.4508 - val_mae: 2.1142
Epoch 18/100
264/264 [==============================] - 9s 34ms/step - loss: 8.2144 - mse: 7.5250 - mae: 2.0384 - val_loss: 8.1122 - val_mse: 7.3467 - val_mae: 2.0354
Epoch 19/100
264/264 [==============================] - 9s 34ms/step - loss: 8.3406 - mse: 7.5102 - mae: 2.0425 - val_loss: 8.0384 - val_mse: 7.3123 - val_mae: 1.9938
Epoch 20/100
264/264 [==============================] - 9s 35ms/step - loss: 8.3116 - mse: 7.5310 - mae: 2.0405 - val_loss: 8.5217 - val_mse: 7.7324 - val_mae: 2.0520
Epoch 21/100
261/264 [============================>.] - ETA: 0s - loss: 8.2445 - mse: 7.4510 - mae: 2.0303
Epoch 00021: ReduceLROnPlateau reducing learning rate to 0.0001250000059371814.
264/264 [==============================] - 9s 35ms/step - loss: 8.2647 - mse: 7.4837 - mae: 2.0339 - val_loss: 8.4619 - val_mse: 7.5789 - val_mae: 2.0573
Epoch 22/100
264/264 [==============================] - 9s 35ms/step - loss: 7.8901 - mse: 7.1640 - mae: 1.9968 - val_loss: 8.4591 - val_mse: 7.7273 - val_mae: 2.0738
Epoch 23/100
263/264 [============================>.] - ETA: 0s - loss: 8.0384 - mse: 7.3854 - mae: 2.0334
Epoch 00023: ReduceLROnPlateau reducing learning rate to 6.25000029685907e-05.
264/264 [==============================] - 9s 35ms/step - loss: 8.0258 - mse: 7.3848 - mae: 2.0333 - val_loss: 8.1436 - val_mse: 7.3397 - val_mae: 1.9921
Epoch 24/100
264/264 [==============================] - 9s 34ms/step - loss: 8.0553 - mse: 7.2281 - mae: 1.9974 - val_loss: 8.6492 - val_mse: 7.9860 - val_mae: 2.0871
Epoch 25/100
262/264 [============================>.] - ETA: 0s - loss: 8.0074 - mse: 7.3096 - mae: 2.0168
Epoch 00025: ReduceLROnPlateau reducing learning rate to 3.125000148429535e-05.
264/264 [==============================] - 9s 35ms/step - loss: 7.9819 - mse: 7.3013 - mae: 2.0161 - val_loss: 8.4705 - val_mse: 7.7390 - val_mae: 2.0376
Epoch 26/100
264/264 [==============================] - 9s 34ms/step - loss: 7.7774 - mse: 7.0380 - mae: 1.9777 - val_loss: 8.1639 - val_mse: 7.4755 - val_mae: 2.0376
Epoch 27/100
262/264 [============================>.] - ETA: 0s - loss: 7.9320 - mse: 7.3664 - mae: 2.0279
Epoch 00027: ReduceLROnPlateau reducing learning rate to 1.5625000742147677e-05.
264/264 [==============================] - 9s 35ms/step - loss: 7.9142 - mse: 7.3639 - mae: 2.0282 - val_loss: 8.6696 - val_mse: 7.9225 - val_mae: 2.0726
Epoch 28/100
264/264 [==============================] - 9s 35ms/step - loss: 8.0654 - mse: 7.2922 - mae: 2.0056 - val_loss: 8.2533 - val_mse: 7.9152 - val_mae: 2.0419
Epoch 29/100
262/264 [============================>.] - ETA: 0s - loss: 8.1651 - mse: 7.4574 - mae: 2.0326
Epoch 00029: ReduceLROnPlateau reducing learning rate to 7.812500371073838e-06.
264/264 [==============================] - 9s 35ms/step - loss: 8.1296 - mse: 7.4465 - mae: 2.0313 - val_loss: 8.1906 - val_mse: 7.4268 - val_mae: 2.0192
Epoch 30/100
264/264 [==============================] - 9s 35ms/step - loss: 7.9655 - mse: 7.3955 - mae: 2.0272 - val_loss: 8.6307 - val_mse: 8.1102 - val_mae: 2.0970
Epoch 31/100
263/264 [============================>.] - ETA: 0s - loss: 7.9489 - mse: 7.2726 - mae: 2.0094
Epoch 00031: ReduceLROnPlateau reducing learning rate to 3.906250185536919e-06.
264/264 [==============================] - 9s 36ms/step - loss: 7.9416 - mse: 7.2721 - mae: 2.0093 - val_loss: 8.3593 - val_mse: 7.5595 - val_mae: 2.0150
###Markdown
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
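get_mean_std_from_preds comes from student_utils.py; since the model's final layer is a DistributionLambda that returns a tfp.distributions.Normal, one minimal sketch is simply:

```python
def get_mean_std_from_preds_sketch(diabetes_yhat):
    # diabetes_yhat is the Normal distribution produced by the DistributionLambda layer
    m = diabetes_yhat.mean()
    s = diabetes_yhat.stddev()
    return m, s
```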
###Code
feature_list = categorical_col_list + numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhat = diabetes_model(diabetes_x_tst)
preds = diabetes_model.predict(diabetes_test_ds)
from student_utils import get_mean_std_from_preds
%autoreload
m, s = get_mean_std_from_preds(diabetes_yhat)
###Output
_____no_output_____
###Markdown
Show Prediction Output
###Code
prob_outputs = {
"pred": preds.flatten(),
"actual_value": d_test['time_in_hospital'].values,
"pred_mean": m.numpy().flatten(),
"pred_std": s.numpy().flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df
model_r2_score = r2_score(prob_output_df['actual_value'], prob_output_df['pred_mean'])
model_rmse = np.sqrt(mean_squared_error(prob_output_df['actual_value'], prob_output_df['pred_mean']))
print(f'Probablistic model evaluation metrics based on the mean of predictions using test dataset: \nRMSE = {model_rmse:.2f}, \nR2-score = {model_r2_score:.2f}')
###Output
Probablistic model evaluation metrics based on the mean of predictions using test dataset:
RMSE = 2.32,
R2-score = 0.39
###Markdown
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label for whether the patient meets the time criteria or does not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based on whether the prediction meets or doesn't meet the criteria.
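get_student_binary_prediction is implemented in student_utils.py; a minimal sketch, assuming the threshold argument used below (5 days):

```python
def get_student_binary_prediction_sketch(df, col, threshold=5):
    # 1 if the mean predicted hospitalization time meets the 5-day criterion, else 0
    return (df[col] >= threshold).astype(int).values
```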
###Code
from student_utils import get_student_binary_prediction
%autoreload
threshold = 5
student_binary_prediction = get_student_binary_prediction(prob_output_df, 'pred_mean', threshold)
###Output
_____no_output_____
###Markdown
Add Binary Prediction to Test Dataframe Using the student_binary_prediction output that is a numpy array with binary labels, we can use this to add to a dataframe to better visualize and also to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label for the predictions (called 'score' field) and the actual value (called 'label_value').
###Code
def add_pred_to_test(df, pred_np, demo_col_list):
test_df = df.copy()
for c in demo_col_list:
test_df[c] = df[c].astype(str)
test_df['label_value'] = df['time_in_hospital'].apply(lambda x: 1 if x >=5 else 0)
test_df.reset_index(inplace=True)
test_df['score'] = pred_np
return test_df
pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()
###Output
_____no_output_____
###Markdown
Model Evaluation Metrics **Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score(weighted), class precision and recall scores. For the report please be sure to include the following three parts:- With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model.- What are some areas of improvement for future iterations?
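In addition to the classification report printed below, the remaining metrics the question asks for (ROC AUC, weighted F1, per-class precision/recall) could be computed with scikit-learn along these lines (a sketch; pred_test_df is the dataframe built above):

```python
from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score

y_true = pred_test_df['label_value'].values
y_pred = pred_test_df['score'].values

print('ROC AUC:', roc_auc_score(y_true, y_pred))
print('F1 (weighted):', f1_score(y_true, y_pred, average='weighted'))
print('Precision per class:', precision_score(y_true, y_pred, average=None))
print('Recall per class:', recall_score(y_true, y_pred, average=None))
```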
###Code
# Extra helper functions to evaluate training process and the model
def plot_roc_curve(ground_truth, probability, legend='Estimated hospitalization time', f_name='roc_eht.png'):
'''
This function accepts inputs:
ground_truth: list, array, or data series
probability: list, array, or data series
It plots ROC curve and calculates AUC
'''
fpr, tpr, _ = roc_curve(ground_truth, probability)
roc_auc = auc(fpr, tpr)
plt.figure(figsize=(5, 5))
plt.plot(fpr, tpr, color='darkorange',
lw=2, label=f'{legend} (area = {roc_auc:0.2f})')
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc=4)
plt.savefig(f_name)
plt.show()
return
# function to plot the precision_recall_curve. You can utilize the precision_recall_curve imported above
def plot_precision_recall_curve(ground_truth, probability, legend='Estimated hospitalization time', f_name='pr_rec_eht.png'):
'''
This function accepts inputs:
ground_truth: list, array, or data series
probability: list, array, or data series
It plots the Precision-Recall curve and calculates the average precision-recall score
'''
average_precision = average_precision_score(ground_truth, probability)
precision, recall, _ = precision_recall_curve(ground_truth, probability)
plt.figure(figsize=(5, 5))
plt.plot(recall, precision, color='darkblue',
lw=2, label=f'{legend} (AP score: {average_precision:0.2f})')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.title(f'Precision-Recall Curve')
plt.legend(loc=3)
plt.savefig(f_name)
plt.show()
return
#Also consider plotting the history of your model training:
def plot_history(history, f_name='hist_eht.png'):
x = range(len(history['val_loss']))
fig, axs= plt.subplots(1, 2, figsize=(14,7))
fig.suptitle('Training history plots')
axs[0].plot(x, history['loss'], color='r', label='train loss MSE')
axs[0].plot(x, history['val_loss'], color='b', label='valid loss MSE')
axs[0].set_title('Training/validation loss')
axs[0].legend(loc=0)
axs[1].plot(x, history['mae'], color='r', label='train MAE')
axs[1].plot(x, history['val_mae'], color='b', label='valid MAE')
axs[1].set_title('Training/validation MAE')
axs[1].legend(loc=0)
plt.savefig(f_name)
plt.show()
return
# AUC, F1, precision and recall
# Summary
y_true = pred_test_df['label_value'].values
y_pred = pred_test_df['score'].values
print(classification_report(y_true, y_pred))
plot_history(history.history)
plot_roc_curve(y_true, y_pred)
plot_precision_recall_curve(y_true, y_pred)
###Output
_____no_output_____
###Markdown
**Response Q11**: First of all, we need to take into account that we trained a regression model to predict estimated hospitalization time. The root mean squared error, RMSE, of the model on the test dataset is 2.32, which is in the range of the standard deviation of the dataset (2.2). The coefficient of determination, R2-score, is 0.39, which means that the model can explain 39% of the variance. Taking these values into account, the model is acceptable given its RMSE and R2-score, so it is a good starter model, but it has to be improved before it could be deployed. The regression model was converted to a classification model using a threshold of 5 days in the hospital. The model exhibits good precision and recall (both weighted average values are 0.76). Precision looks at the number of positive cases accurately identified divided by all of the cases identified as positive by the algorithm, no matter whether they are identified right or wrong. A high-precision test gives you more confidence that a positive test result is actually positive; however, it does not take false negatives into account, so a high-precision test could still miss a lot of positive cases. High-precision tests are beneficial when you want to confirm a suspected diagnosis, which in our case is to confirm a hospitalization time of > 5 days. When a high-recall test returns a negative result, you can be confident that the result is truly negative, since a high-recall test has low false negatives. Recall does not take false positives into account. Because of this, high-recall tests are good when you want to make sure someone doesn't have a disease, which in our case is to confirm a hospitalization time of less than 5 days. Optimizing one of these metrics usually comes at the expense of sacrificing the other. In fact, the mean of the time in the hospital is 4.4, and in order to improve the classification model we may need to vary this threshold to maximize the F1 score (the harmonic mean of precision and recall) or another classification evaluation metric. It might be worth using the Matthews correlation coefficient (MCC), which is a good measure of model quality for binary classes because it takes into account all four values in the confusion matrix (TP, FP, TN, and FN), to find a better threshold. The training history plots clearly show that we could keep training to get a slightly better MSE. There is no observable overfitting. Therefore, it looks like the model needs to be tuned (different optimizer, other callbacks) or the architecture of the model has to be changed in order to get improvements. Also, we may need to reconsider the selected features. In addition, an ensemble approach could help to improve model performance. 7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
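Before moving on to the Aequitas analysis below, here is a minimal sketch of the threshold scan suggested in the response above, comparing MCC and weighted F1 across candidate cutoffs (the threshold grid is arbitrary and prob_output_df is the dataframe built earlier):

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef, f1_score

y_true_cls = (prob_output_df['actual_value'] >= 5).astype(int)
for thr in np.arange(3.0, 7.0, 0.5):
    y_hat_cls = (prob_output_df['pred_mean'] >= thr).astype(int)
    print(f"threshold={thr:.1f}  MCC={matthews_corrcoef(y_true_cls, y_hat_cls):.3f}  "
          f"F1={f1_score(y_true_cls, y_hat_cls, average='weighted'):.3f}")
```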
###Code
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness
ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
###Output
/opt/conda/lib/python3.7/site-packages/aequitas/group.py:143: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df['score'] = df['score'].astype(float)
###Markdown
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
###Code
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
original_df=ae_df,
ref_groups_dict={'race':'Caucasian', 'gender':'Male'
},
alpha=0.05,
check_significance=False)
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
###Output
get_disparity_predefined_group()
###Markdown
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.
###Code
# Plot two metrics
# Is there significant bias in your model for either race or gender?
p = aqp.plot_group_metric_all(clean_xtab, metrics=['tpr', 'fpr', 'fnr', 'tnr', 'precision'], ncols=2)
###Output
_____no_output_____
###Markdown
**Response Q12**: For the gender and race fields, there is no significant bias in the model across any of the groups. That said, the Hispanic and Asian groups have lower recall and a higher FNR (type II error, falsely identified as staying less than 5 days). The FPR for females is a little bit higher (type I error), which means that females will be falsely identified as staying more than 5 days in the hospital slightly more often than males. The precision is a little bit better for males, which means that the predicted time in the hospital for males will be a little more accurate. Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.
###Code
# Reference group fairness plot
fpr_disparity_fairness = aqp.plot_fairness_disparity(fdf, group_metric='fpr', attribute_name='race')
fpr_disparity_fairness = aqp.plot_fairness_disparity(fdf, group_metric='fpr', attribute_name='gender')
fpr_fairness = aqp.plot_fairness_group(fdf, group_metric='fpr', title=True)
###Output
_____no_output_____
###Markdown
Overview 1. Project Instructions & Prerequisites2. Learning Objectives3. Data Preparation4. Create Categorical Features with TF Feature Columns5. Create Continuous/Numerical Features with TF Feature Columns6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers7. Evaluating Potential Model Biases with Aequitas Toolkit 1. Project Instructions & Prerequisites Project Instructions **Context**: EHR data is becoming a key source of real-world evidence (RWE) for the pharmaceutical industry and regulators to [make decisions on clinical trials](https://www.fda.gov/news-events/speeches-fda-officials/breaking-down-barriers-between-clinical-trials-and-clinical-care-incorporating-real-world-evidence). You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a very unique and sensitive drug that requires administering the drug over at least 5-7 days of time in the hospital with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which type of patients the company should focus their efforts testing this drug on. Target patients are people that are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. **Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset(denormalized at the line level augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial.This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there are limited number of publicly available datasets and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine(https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema**The dataset reference information can be https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Project Submission When submitting this project, make sure to run all the cells before saving the notebook. 
Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "utils.py" and "student_utils.py" files in your submission. The student_utils.py should be where you put most of your code that you write and the summary and text explanations should be written inline in the notebook. Once you download these files, compress them into one zip file for submission. Prerequisites - Intermediate level knowledge of Python- Basic knowledge of probability and statistics- Basic knowledge of machine learning concepts- Installation of Tensorflow 2.0 and other dependencies(conda environment.yml or virtualenv requirements.txt file provided) Environment Setup For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/README.md. 2. Learning Objectives By the end of the project, you will be able to - Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels(longitudinal) - Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis. - Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings - Create derived features(bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features - SWBAT use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions - Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework 3. Data Preparation
###Code
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import aequitas as ae
# Put all of the helper functions in utils
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
pd.set_option('display.max_columns', 500)
# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload
#OPEN ISSUE ON MAC OSX for TF model training
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
###Output
_____no_output_____
###Markdown
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/
###Code
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path)
df.head()
df.head()
###Output
_____no_output_____
###Markdown
Determine Level of Dataset (Line or Encounter) **Question 1**: Based off of analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked.
###Code
# Line Test
try:
assert len(df) > df['encounter_id'].nunique()
print('dataset is potentially at the line level')
except:
print('dataset is not at the line level')
###Output
dataset is potentially at the line level
###Markdown
Student Response: The dataset is at the line level. We can aggregate at the encounter_id or the patient_nbr if we need to do a longitudinal analysis. Analyze Dataset **Question 2**: Utilizing the library of your choice (recommend Pandas and Seaborn or matplotlib though), perform exploratory data analysis on the dataset. In particular be sure to address the following questions: - a. Field(s) with high amount of missing/zero values - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian(normal) distribution shape? - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature) - d. Please describe the demographic distributions in the dataset for the age and gender fields. **OPTIONAL**: Use the Tensorflow Data Validation and Analysis library to complete. - The Tensorflow Data Validation and Analysis library(https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. - Note that there are some bugs that are still being resolved with Chrome v80 and we have moved away from using this for the project.
###Code
# missing values
def check_null_values(df):
null_df = pd.DataFrame({
'columns': df.columns,
'percent_null': df.isnull().sum(axis=0) / len(df) * 100,
'percent_missing': df.isin(['?']).sum(axis=0) / len(df) * 100,
'percent_none': df.isin(['None']).sum(axis=0) / len(df) * 100,
'overall_missing': (df.isnull().sum(axis=0) + df.isin(['?', 'None']).sum(axis=0)) / len(df)* 100
})
return null_df
null_df = check_null_values(df)
null_df.sort_values(by='overall_missing', ascending = False)
# replace all missing / none with nan
#df.replace(to_replace=['None', '?'], value=np.NaN, inplace=True)
# distribution numerical columns
# select numerical columns
col_numerical = ['time_in_hospital', 'number_outpatient', 'number_inpatient', 'number_emergency', 'num_lab_procedures',\
'number_diagnoses', 'num_medications', 'num_procedures']
sns.pairplot(df.loc[:,col_numerical].sample(1000), diag_kind='kde')
# cardinality
# select categorical features
col_categorical = ['admission_type_id', 'race', 'gender', 'age', 'weight', 'payer_code', 'medical_specialty',
'primary_diagnosis_code', 'other_diagnosis_codes', 'ndc_code', 'max_glu_serum',
'A1Cresult', 'readmitted', 'discharge_disposition_id', 'admission_source_id']
def count_unique_values(df, cat_col_list):
cat_df = df[cat_col_list]
val_df = pd.DataFrame({'columns': cat_df.columns,
'cardinality': cat_df.nunique()
})
return val_df
val_df = count_unique_values(df, col_categorical)
val_df.sort_values(by='cardinality', ascending = False)
# age distributions
sns.countplot(x='age', data=df, orient= 'h')
# age distributions
sns.countplot(x='gender', data=df, orient= 'h')
# age distributions
sns.countplot(x='age', data=df, orient= 'h', hue='gender')
###Output
_____no_output_____
###Markdown
**Student Response**: - The columns with the most missing values are weight, max_glu_serum, A1Cresult, medical_specialty, payer_code, ndc_code, race, primary_diagnosis_code - num_lab_procedures and num_medications are quasi-Gaussian - The fields with the largest cardinality are other_diagnosis_codes, primary_diagnosis_code, ndc_code, medical_specialty, discharge_disposition_id - For age, people over 50 years old are over-represented. Gender is quite balanced, with slightly more females. There are more older females than males
###Code
######NOTE: The visualization will only display in Chrome browser. ########
import tensorflow_data_validation as tfdv
full_data_stats = tfdv.generate_statistics_from_csv(data_location='./data/final_project_dataset.csv')
tfdv.visualize_statistics(full_data_stats)
###Output
_____no_output_____
###Markdown
Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site(https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
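reduce_dimension_ndc is implemented in student_utils.py; a minimal sketch of one plausible approach is to map each NDC code to its generic (non-proprietary) drug name via the lookup table. The lookup-table column names used here ('NDC_Code', 'Non-proprietary Name') are assumptions and should be checked against ndc_code_df.columns:

```python
def reduce_dimension_ndc_sketch(df, ndc_df):
    # Hypothetical column names -- verify against ndc_code_df.columns:
    # 'NDC_Code' as the code column and 'Non-proprietary Name' as the generic name.
    ndc_map = dict(zip(ndc_df['NDC_Code'], ndc_df['Non-proprietary Name']))
    output_df = df.copy()
    output_df['generic_drug_name'] = output_df['ndc_code'].map(ndc_map)
    return output_df
```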
###Code
#NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)
from student_utils import reduce_dimension_ndc
reduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)
# Number of unique values should be less for the new output field
assert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
###Output
_____no_output_____
###Markdown
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
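select_first_encounter is implemented in student_utils.py; a minimal sketch under the stated assumption that the numerically smallest encounter_id per patient is the earliest encounter:

```python
def select_first_encounter_sketch(df):
    # Keep every line-level row that belongs to each patient's earliest encounter,
    # where "earliest" means the numerically smallest encounter_id.
    first_encounter_ids = df.groupby('patient_nbr')['encounter_id'].min()
    return df[df['encounter_id'].isin(first_encounter_ids)].reset_index(drop=True)
```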
###Code
from student_utils import select_first_encounter
first_encounter_df = select_first_encounter(reduce_dim_df)
# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))
# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))
original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()
# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
###Output
Number of unique patients:71518
Number of unique encounters:71518
Tests passed!!
###Markdown
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset" function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those are input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course.
###Code
exclusion_list = ['generic_drug_name']
grouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]
agg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')
assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()
###Output
_____no_output_____
###Markdown
Prepare Fields and Cast Dataset Feature Selection **Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice. Student response: there are too many missing values for weight to be included, and payer_code does not pertain to the patient, so it has no reason to be included
###Code
'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required demographic features for the model
and I have inserted a list with them already in the categorical list.
These will be required for later steps when analyzing data splits and model biases.
'''
required_demo_col_list = ['race', 'gender', 'age']
student_categorical_col_list = [ "admission_type_id", "discharge_disposition_id", "admission_source_id",
"medical_specialty", "primary_diagnosis_code"] + required_demo_col_list + ndc_col_list
student_numerical_col_list = ["number_outpatient", "num_medications"] #, "number_inpatient", "number_emergency",
#"num_lab_procedures", "number_diagnoses", "num_medications", "num_procedures"]
PREDICTOR_FIELD = 'time_in_hospital'
def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):
selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
return agg_drug_df[selected_col_list]
selected_features_df = select_model_features(agg_drug_df, student_categorical_col_list, student_numerical_col_list,
PREDICTOR_FIELD)
###Output
_____no_output_____
###Markdown
Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it?
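As a sketch of the optional question above, one alternative to zero-imputation is to fill missing numeric values with a statistic (e.g., the median) computed from the training partition only, to avoid leakage. This is illustrative and not what the provided preprocess_df does:

```python
def impute_numerical_with_median(df, numerical_col_list, stats_df=None):
    # Fill missing numeric values with the median computed from stats_df
    # (ideally the training partition) instead of a constant zero.
    stats_df = df if stats_df is None else stats_df
    out = df.copy()
    for c in numerical_col_list:
        out[c] = out[c].fillna(stats_df[c].median())
    return out
```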
###Code
processed_df = preprocess_df(selected_features_df, student_categorical_col_list,
student_numerical_col_list, PREDICTOR_FIELD, categorical_impute_value='nan', numerical_impute_value=0)
###Output
/home/workspace/starter_code/utils.py:29: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[predictor] = df[predictor].astype(float)
/home/workspace/starter_code/utils.py:31: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[c] = cast_df(df, c, d_type=str)
/home/workspace/starter_code/utils.py:33: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[numerical_column] = impute_df(df, numerical_column, numerical_impute_value)
###Markdown
Split Dataset into Train, Validation, and Test Partitions **Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidentally leak across partitions. Please complete the function below to split the input dataset into three partitions (train, validation, test) with the following requirements. - Approximately 60%/20%/20% train/validation/test split - Randomly sample different patients into each data partition - **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage. - Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset - Total number of rows in original dataset = sum of rows across all three dataset partitions
###Code
def patient_dataset_splitter(df, patient_key='patient_nbr', val_percentage = 0.2, test_percentage = 0.2):
'''
df: pandas dataframe, input dataset that will be split
patient_key: string, column that is the patient id
return:
- train: pandas dataframe,
- validation: pandas dataframe,
- test: pandas dataframe,
'''
df = df.iloc[np.random.permutation(len(df))]
unique_values = df[patient_key].unique()
total_values = len(unique_values)
train_size = round(total_values * (1 - (val_percentage + test_percentage)))
val_size = round(total_values * val_percentage)
train = df[df[patient_key].isin(unique_values[:train_size])].reset_index(drop=True)
validation = df[df[patient_key].isin(unique_values[train_size:(train_size+val_size)])].reset_index(drop=True)
test = df[df[patient_key].isin(unique_values[(train_size+val_size):])].reset_index(drop=True)
return train, validation, test
#from student_utils import patient_dataset_splitter
d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')
assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")
assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == agg_drug_df['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
###Output
Test passed for number of unique patients being equal!
###Markdown
Demographic Representation Analysis of Split After the split, we should check to see the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions? They are quite similar.
###Code
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
###Output
time_in_hospital
1.0 1506
2.0 1774
3.0 1970
4.0 1484
5.0 1102
6.0 824
7.0 584
8.0 455
9.0 321
10.0 245
11.0 203
12.0 138
13.0 134
14.0 114
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
###Code
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')
# Training partition
show_group_stats_viz(d_train, 'gender')
# Test partition
show_group_stats_viz(d_test, 'gender')
###Output
gender
Female 5641
Male 5212
Unknown/Invalid 1
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Convert Dataset Splits to TF Dataset We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method and for larger datasets, the 'make_csv_dataset' method is recommended -https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
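For larger datasets, the make_csv_dataset approach mentioned above could look roughly like this (a sketch; the CSV path is hypothetical, and in practice each split would be written to its own file first):

```python
# Hypothetical pre-split CSV path; PREDICTOR_FIELD is defined above.
train_csv_ds = tf.data.experimental.make_csv_dataset(
    './data/train_split.csv',
    batch_size=128,
    label_name=PREDICTOR_FIELD,
    num_epochs=1,
    shuffle=True)
```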
###Code
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)
# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]
def demo(feature_column, example_batch):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch))
###Output
_____no_output_____
###Markdown
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field that are from the **training** dataset. Below we have provided a function that you can use that only requires providing the pandas train dataset partition and the list of the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
###Code
vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)
###Output
_____no_output_____
###Markdown
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
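The create_tf_categorical_feature_cols used below is from student_utils.py; a minimal sketch of the embedding variant that matches the EmbeddingColumn shown in the output below (the helper name, vocab_dir, and embedding_dim default are assumptions):

```python
import os
import tensorflow as tf

def create_tf_categorical_feature_cols_sketch(categorical_col_list,
                                              vocab_dir='./diabetes_vocab/',
                                              embedding_dim=10):
    # Illustrative sketch: vocabulary-file categorical columns wrapped in
    # 10-dimensional embedding columns, as in the EmbeddingColumn printed below.
    output_tf_list = []
    for c in categorical_col_list:
        vocab_file_path = os.path.join(vocab_dir, c + '_vocab.txt')
        cat_col = tf.feature_column.categorical_column_with_vocabulary_file(
            key=c, vocabulary_file=vocab_file_path, num_oov_buckets=0)
        output_tf_list.append(
            tf.feature_column.embedding_column(cat_col, dimension=embedding_dim))
    return output_tf_list
```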
###Code
from student_utils import create_tf_categorical_feature_cols
tf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)
test_cat_var1 = tf_cat_col_list[0]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
###Output
Example categorical field:
EmbeddingColumn(categorical_column=VocabularyFileCategoricalColumn(key='admission_type_id', vocabulary_file='./diabetes_vocab/admission_type_id_vocab.txt', vocabulary_size=9, num_oov_buckets=0, dtype=tf.string, default_value=-1), dimension=10, combiner='mean', initializer=<tensorflow.python.ops.init_ops.TruncatedNormal object at 0x7f662da188d0>, ckpt_to_load_from=None, tensor_name_in_ckpt=None, max_norm=None, trainable=True)
tf.Tensor(
[[-0.15968303 -0.06628568 0.4604329 ... -0.3643807 0.08629031
-0.24483539]
[-0.33841926 0.44657055 0.2824366 ... -0.04972162 -0.11393847
0.4536369 ]
[-0.33841926 0.44657055 0.2824366 ... -0.04972162 -0.11393847
0.4536369 ]
...
[-0.01932695 0.33135852 0.52535665 ... -0.3675493 0.10708727
-0.02201625]
[ 0.38363177 -0.05970484 -0.29925 ... -0.15401514 0.13306576
-0.03270593]
[-0.01932695 0.33135852 0.52535665 ... -0.3675493 0.10708727
-0.02201625]], shape=(128, 10), dtype=float32)
###Markdown
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API(https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help as well as the 'calculate_stats_from_train_data' function.
###Code
from student_utils import create_tf_numeric_feature
###Output
_____no_output_____
###Markdown
For simplicity the create_tf_numerical_feature_cols function below uses the same normalizer function across all features(z-score normalization) but if you have time feel free to analyze and adapt the normalizer based off the statistical distributions. You may find this as a good resource in determining which transformation fits best for the data https://developers.google.com/machine-learning/data-prep/transform/normalization.
###Code
def calculate_stats_from_train_data(df, col):
mean = df[col].describe()['mean']
std = df[col].describe()['std']
return mean, std
def create_tf_numerical_feature_cols(numerical_col_list, train_df):
tf_numeric_col_list = []
for c in numerical_col_list:
mean, std = calculate_stats_from_train_data(train_df, c)
tf_numeric_feature = create_tf_numeric_feature(c, mean, std)
tf_numeric_col_list.append(tf_numeric_feature)
return tf_numeric_col_list
tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)
test_cont_var1 = tf_cont_col_list[0]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
###Output
Example continuous field:
NumericColumn(key='number_outpatient', shape=(1,), default_value=(0,), dtype=tf.float64, normalizer_fn=functools.partial(<function normalize_numeric_with_zscore at 0x7f6634d9d3b0>, mean=0.297300617264994, std=1.109905823948064))
tf.Tensor(
[[2.]
[1.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[2.]
[0.]
[0.]
[0.]
[0.]
[0.]
[4.]
[3.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[5.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[1.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[2.]
[0.]
[0.]
[0.]
[0.]
[1.]
[0.]
[6.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[1.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[1.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[3.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[1.]
[0.]
[0.]
[2.]
[0.]
[0.]
[0.]
[1.]
[0.]
[0.]
[1.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]
[0.]], shape=(128, 1), dtype=float32)
###Markdown
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
###Code
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
###Output
_____no_output_____
###Markdown
Build Sequential API Model from DenseFeatures and TF Probability Layers Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish. **OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.
###Code
def build_sequential_model(feature_layer):
model = tf.keras.Sequential([
feature_layer,
tf.keras.layers.Dense(150, activation='relu'),
tf.keras.layers.Dense(75, activation='relu'),
tfp.layers.DenseVariational(1+1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t:tfp.distributions.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:])
)
),
])
return model
def build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mse'):
model = build_sequential_model(feature_layer)
model.compile(optimizer='rmsprop', loss=loss_metric, metrics=[loss_metric])
early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=3)
history = model.fit(train_ds, validation_data=val_ds,
callbacks=[early_stop],
epochs=epochs)
return model, history
diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=10)
###Output
Train for 255 steps, validate for 85 steps
Epoch 1/10
255/255 [==============================] - 15s 59ms/step - loss: 31.1600 - mse: 31.1200 - val_loss: 26.2177 - val_mse: 25.9390
Epoch 2/10
255/255 [==============================] - 5s 21ms/step - loss: 22.6185 - mse: 22.1616 - val_loss: 17.1350 - val_mse: 16.4823
Epoch 3/10
255/255 [==============================] - 5s 19ms/step - loss: 15.5134 - mse: 14.6462 - val_loss: 15.4619 - val_mse: 14.7787
Epoch 4/10
255/255 [==============================] - 5s 20ms/step - loss: 14.4328 - mse: 13.6318 - val_loss: 12.6629 - val_mse: 11.6112
Epoch 5/10
255/255 [==============================] - 5s 20ms/step - loss: 12.6600 - mse: 11.8987 - val_loss: 10.9261 - val_mse: 9.9204
Epoch 6/10
255/255 [==============================] - 5s 21ms/step - loss: 11.5307 - mse: 10.5918 - val_loss: 12.8441 - val_mse: 12.0077
Epoch 7/10
255/255 [==============================] - 5s 20ms/step - loss: 11.5340 - mse: 10.6558 - val_loss: 10.6842 - val_mse: 9.8981
Epoch 8/10
255/255 [==============================] - 5s 21ms/step - loss: 10.7250 - mse: 9.8242 - val_loss: 10.6679 - val_mse: 9.7773
Epoch 9/10
255/255 [==============================] - 5s 21ms/step - loss: 9.8633 - mse: 8.9080 - val_loss: 9.2134 - val_mse: 8.5113
Epoch 10/10
255/255 [==============================] - 5s 21ms/step - loss: 10.2737 - mse: 9.3968 - val_loss: 10.4610 - val_mse: 9.6349
###Markdown
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
###Code
feature_list = student_categorical_col_list + student_numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhat = diabetes_model(diabetes_x_tst)
preds = diabetes_model.predict(diabetes_test_ds)
from student_utils import get_mean_std_from_preds
m, s = get_mean_std_from_preds(diabetes_yhat)
###Output
_____no_output_____
###Markdown
Show Prediction Output
###Code
prob_outputs = {
"pred": preds.flatten(),
"actual_value": d_test['time_in_hospital'].values,
"pred_mean": m.numpy().flatten(),
"pred_std": s.numpy().flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df.head()
###Output
_____no_output_____
###Markdown
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label for whether the patient meets the time criteria or does not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based on whether the prediction meets or doesn't meet the criteria.
###Code
from student_utils import get_student_binary_prediction
student_binary_prediction = get_student_binary_prediction(prob_output_df, 'pred_mean')
###Output
_____no_output_____
###Markdown
Add Binary Prediction to Test Dataframe Using the student_binary_prediction output that is a numpy array with binary labels, we can use this to add to a dataframe to better visualize and also to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label for the predictions (called 'score' field) and the actual value (called 'label_value').
###Code
def add_pred_to_test(test_df, pred_np, demo_col_list):
for c in demo_col_list:
test_df[c] = test_df[c].astype(str)
test_df['score'] = pred_np
test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >=5 else 0)
return test_df
pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()
###Output
_____no_output_____
###Markdown
Model Evaluation Metrics **Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score(weighted), class precision and recall scores. For the report please be sure to include the following three parts:- With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model.- What are some areas of improvement for future iterations?
###Code
# AUC, F1, precision and recall
from sklearn.metrics import accuracy_score, f1_score, classification_report, roc_auc_score
y_true = pred_test_df['label_value'].values
y_pred = pred_test_df['score'].values
# Summary
print("Accuracy:", accuracy_score(y_true, y_pred))
print(classification_report(y_true, y_pred))
print("ROC AUC:", roc_auc_score(y_true, y_pred))
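# Weighted F1 score, also requested in the question above (f1_score is imported above):
print("Weighted F1:", f1_score(y_true, y_pred, average='weighted'))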
###Output
_____no_output_____
###Markdown
- The model has decent performance with an AUC of 0.66: when it predicts that the stay will be greater than 5 days it is usually right (high precision of 0.83), but it fails to identify many of the cases for which the stay was actually greater than 5 days (low recall of 0.45)- Precision tells us, of all the data points the model predicted as positive, how many were actually positive. Similarly, recall tells us, of all the data points that are actually positive, how many the model predicted correctly. As we try to increase precision by changing the prediction threshold, recall suffers- In this particular case of selecting patients for drug testing, higher precision is better than higher recall. This is because we want to make sure that all identified patients will stay at the hospital for at least 5 days. The cost of low recall is negligible provided that we can already identify enough patients- To improve the model, and given that recall is low, we are probably missing some features that determine the length of stay, so we could add new features to the model. Additionally, we can try to build a deeper NN to capture more complicated patterns. 7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
###Code
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness
ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
###Output
/opt/conda/lib/python3.7/site-packages/aequitas/group.py:143: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df['score'] = df['score'].astype(float)
###Markdown
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
###Code
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
original_df=ae_df,
ref_groups_dict={'race':'Caucasian', 'gender':'Male'
},
alpha=0.05,
check_significance=False)
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
###Output
get_disparity_predefined_group()
###Markdown
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.
###Code
# Plot two metrics
p = aqp.plot_group_metric_all(xtab, metrics=['ppr', 'fpr'], ncols=2)
# Is there significant bias in your model for either race or gender?
###Output
_____no_output_____
###Markdown
The model seems to favor the Caucasian group, with a PPR well above all other ethnicities, so there is a racial bias. The gender bias is more moderate, in favor of women. Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.
###Code
# Reference group fairness plot
fpr_disparity = aqp.plot_disparity(bdf, group_metric='fpr_disparity',
attribute_name='race')
###Output
_____no_output_____
###Markdown
Overview 1. Project Instructions & Prerequisites 2. Learning Objectives 3. Data Preparation 4. Create Categorical Features with TF Feature Columns 5. Create Continuous/Numerical Features with TF Feature Columns 6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers 7. Evaluating Potential Model Biases with Aequitas Toolkit 1. Project Instructions & Prerequisites Project Instructions **Context**: EHR data is becoming a key source of real-world evidence (RWE) for the pharmaceutical industry and regulators to [make decisions on clinical trials](https://www.fda.gov/news-events/speeches-fda-officials/breaking-down-barriers-between-clinical-trials-and-clinical-care-incorporating-real-world-evidence). You are a data scientist for an exciting unicorn healthcare startup that has created a groundbreaking diabetes drug that is ready for clinical trial testing. It is a unique and sensitive drug that requires administering the drug over at least 5-7 days in the hospital, with frequent monitoring/testing and patient medication adherence training with a mobile application. You have been provided a patient dataset from a client partner and are tasked with building a predictive model that can identify which types of patients the company should focus its drug-testing efforts on. Target patients are people who are likely to be in the hospital for this duration of time and will not incur significant additional costs for administering this drug to the patient and monitoring. In order to achieve your goal you must build a regression model that can predict the estimated hospitalization time for a patient and use this to select/filter patients for your study. **Expected Hospitalization Time Regression Model:** Utilizing a synthetic dataset (denormalized at the line level with augmentation) built off of the UCI Diabetes readmission dataset, students will build a regression model that predicts the expected days of hospitalization time and then convert this to a binary prediction of whether to include or exclude that patient from the clinical trial. This project will demonstrate the importance of building the right data representation at the encounter level, with appropriate filtering and preprocessing/feature engineering of key medical code sets. This project will also require students to analyze and interpret their model for biases across key demographic groups. Please see the project rubric online for more details on the areas on which your project will be evaluated. Dataset Due to healthcare PHI regulations (HIPAA, HITECH), there are a limited number of publicly available datasets and some datasets require training and approval. So, for the purpose of this exercise, we are using a dataset from UC Irvine (https://archive.ics.uci.edu/ml/datasets/Diabetes+130-US+hospitals+for+years+1999-2008) that has been modified for this course. Please note that it is limited in its representation of some key features such as diagnosis codes, which are usually an unordered list in 835s/837s (the HL7 standard interchange formats used for claims and remits). **Data Schema** The dataset reference information can be found at https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/. There are two CSVs that provide more details on the fields and some of the mapped values. Project Submission When submitting this project, make sure to run all the cells before saving the notebook. 
Save the notebook file as "student_project_submission.ipynb" and save another copy as an HTML file by clicking "File" -> "Download as.."->"html". Include the "utils.py" and "student_utils.py" files in your submission. The student_utils.py should be where you put most of your code that you write and the summary and text explanations should be written inline in the notebook. Once you download these files, compress them into one zip file for submission. Prerequisites - Intermediate level knowledge of Python- Basic knowledge of probability and statistics- Basic knowledge of machine learning concepts- Installation of Tensorflow 2.0 and other dependencies(conda environment.yml or virtualenv requirements.txt file provided) Environment Setup For step by step instructions on creating your environment, please go to https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/README.md. 2. Learning Objectives By the end of the project, you will be able to - Use the Tensorflow Dataset API to scalably extract, transform, and load datasets and build datasets aggregated at the line, encounter, and patient data levels(longitudinal) - Analyze EHR datasets to check for common issues (data leakage, statistical properties, missing values, high cardinality) by performing exploratory data analysis. - Create categorical features from Key Industry Code Sets (ICD, CPT, NDC) and reduce dimensionality for high cardinality features by using embeddings - Create derived features(bucketing, cross-features, embeddings) utilizing Tensorflow feature columns on both continuous and categorical input features - SWBAT use the Tensorflow Probability library to train a model that provides uncertainty range predictions that allow for risk adjustment/prioritization and triaging of predictions - Analyze and determine biases for a model for key demographic groups by evaluating performance metrics across groups by using the Aequitas framework 3. Data Preparation
###Code
# from __future__ import absolute_import, division, print_function, unicode_literals
import os
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_probability as tfp
import matplotlib.pyplot as plt
import pandas as pd
import aequitas as ae
import seaborn as sns
# Put all of the helper functions in utils
from utils import build_vocab_files, show_group_stats_viz, aggregate_dataset, preprocess_df, df_to_dataset, posterior_mean_field, prior_trainable
pd.set_option('display.max_columns', 500)
# this allows you to make changes and save in student_utils.py and the file is reloaded every time you run a code block
%load_ext autoreload
%autoreload
#OPEN ISSUE ON MAC OSX for TF model training
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
###Output
_____no_output_____
###Markdown
Dataset Loading and Schema Review Load the dataset and view a sample of the dataset along with reviewing the schema reference files to gain a deeper understanding of the dataset. The dataset is located at the following path https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/starter_code/data/final_project_dataset.csv. Also, review the information found in the data schema https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/
###Code
dataset_path = "./data/final_project_dataset.csv"
df = pd.read_csv(dataset_path)
###Output
_____no_output_____
###Markdown
Determine Level of Dataset (Line or Encounter) **Question 1**: Based off of analysis of the data, what level is this dataset? Is it at the line or encounter level? Are there any key fields besides the encounter_id and patient_nbr fields that we should use to aggregate on? Knowing this information will help inform us what level of aggregation is necessary for future steps and is a step that is often overlooked. Student Response: The dataset is at the line level. We can aggregate data on `primary_diagnosis_code`.
###Code
df.head()
df.primary_diagnosis_code.nunique()
df.shape
# Line Test
try:
assert len(df) > df['encounter_id'].nunique()
print("Dataset is at the line level")
except:
print("Dataset is not at the line level")
###Output
Dataset is at the line level
###Markdown
Analyze Dataset **Question 2**: Utilizing the library of your choice (Pandas and Seaborn or matplotlib are recommended), perform exploratory data analysis on the dataset. In particular be sure to address the following questions: - a. Field(s) with a high amount of missing/zero values - b. Based off the frequency histogram for each numerical field, which numerical field(s) has/have a Gaussian (normal) distribution shape? - c. Which field(s) have high cardinality and why (HINT: ndc_code is one feature) - d. Please describe the demographic distributions in the dataset for the age and gender fields. **OPTIONAL**: Use the Tensorflow Data Validation and Analysis library to complete. - The Tensorflow Data Validation and Analysis library (https://www.tensorflow.org/tfx/data_validation/get_started) is a useful tool for analyzing and summarizing dataset statistics. It is especially useful because it can scale to large datasets that do not fit into memory. - Note that there are some bugs that are still being resolved with Chrome v80 and we have moved away from using this for the project. **Student Response**: - a. - missing values: `weight` `max_glu_serum` `A1Cresult` `medical_specialty` `payer_code` `ndc_code` `race` - zero values: `number_emergency` `number_outpatient` `number_inpatient` `num_procedures`- b. None of the numerical fields follow a strictly normal distribution, but `num_lab_procedures` and `num_medications` can be considered approximately normally distributed- c. `other_diagnosis_codes`, `primary_diagnosis_code` and `ndc_code` have high cardinality because they represent many different codes related to different diseases.- d. The age distribution is skewed towards elderly people (above 60) and there are slightly more females than males.
###Code
df.describe().T
df.count()
df.isna().sum()
# Missing values
def check_null_values(df):
null_df = pd.DataFrame({'columns': df.columns,
'percent_null': df.isnull().sum() * 100 / len(df),
'percent_zero': df.isin([0]).sum() * 100 / len(df),
'percent_none': df.isin(['None']).sum() * 100 / len(df),
'percent_qmark': df.isin(['?']).sum() * 100 / len(df)
} )
return null_df
null_df = check_null_values(df)
null_df
null_df.sum(axis=1).sort_values(ascending=False)
len(df.columns)
num_cols = df._get_numeric_data().columns
print("Number of numerical fields:", len(num_cols))
num_cols
cat_cols = df.select_dtypes(include=['object']).columns
print("Number of categorical fields:", len(cat_cols))
cat_cols
fig, axes = plt.subplots(4, 4, figsize=(16, 20))
for row in range(5):
for col in range(4):
if row*4+col < len(num_cols):
field_name = num_cols[row*4+col]
axes[row, col].hist(df.dropna(subset=[field_name])[field_name])
axes[row, col].title.set_text(field_name)
sns.distplot(df['num_lab_procedures'])
sns.distplot(df['num_medications'])
pd.DataFrame({'cardinality': df.nunique() } ).sort_values(ascending=False, by = 'cardinality')
df.age.value_counts().plot(kind='bar')
df.gender.value_counts().plot(kind="bar")
######NOTE: The visualization will only display in Chrome browser. ########
# full_data_stats = tfdv.generate_statistics_from_csv(data_location='./data/final_project_dataset.csv')
# tfdv.visualize_statistics(full_data_stats)
###Output
_____no_output_____
###Markdown
Reduce Dimensionality of the NDC Code Feature **Question 3**: NDC codes are a common format to represent the wide variety of drugs that are prescribed for patient care in the United States. The challenge is that there are many codes that map to the same or similar drug. You are provided with the ndc drug lookup file https://github.com/udacity/nd320-c1-emr-data-starter/blob/master/project/data_schema_references/ndc_lookup_table.csv derived from the National Drug Codes List site(https://ndclist.com/). Please use this file to come up with a way to reduce the dimensionality of this field and create a new field in the dataset called "generic_drug_name" in the output dataframe.
###Code
#NDC code lookup file
ndc_code_path = "./medication_lookup_tables/final_ndc_lookup_table"
ndc_code_df = pd.read_csv(ndc_code_path)
from student_utils import reduce_dimension_ndc
ndc_code_df.head()
reduce_dim_df = reduce_dimension_ndc(df, ndc_code_df)
reduce_dim_df
# Number of unique values should be less for the new output field
assert df['ndc_code'].nunique() > reduce_dim_df['generic_drug_name'].nunique()
###Output
_____no_output_____
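###Markdown
A sketch of one way `reduce_dimension_ndc` could work: join the claims dataframe to the NDC lookup table and keep the generic (non-proprietary) name. The lookup column names used below (`NDC_Code`, `Non-proprietary Name`) are assumptions and should be checked against the actual file.
###Code
# Sketch only; column names in the lookup table are assumed, not confirmed.
def reduce_dimension_ndc_sketch(df, ndc_df):
    lookup = ndc_df[['NDC_Code', 'Non-proprietary Name']].rename(
        columns={'NDC_Code': 'ndc_code', 'Non-proprietary Name': 'generic_drug_name'})
    return df.merge(lookup, on='ndc_code', how='left')
###Output
_____no_output_____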
###Markdown
Select First Encounter for each Patient **Question 4**: In order to simplify the aggregation of data for the model, we will only select the first encounter for each patient in the dataset. This is to reduce the risk of data leakage of future patient encounters and to reduce complexity of the data transformation and modeling steps. We will assume that sorting in numerical order on the encounter_id provides the time horizon for determining which encounters come before and after another.
###Code
from student_utils import select_first_encounter
first_encounter_df = select_first_encounter(reduce_dim_df)
# unique patients in transformed dataset
unique_patients = first_encounter_df['patient_nbr'].nunique()
print("Number of unique patients:{}".format(unique_patients))
# unique encounters in transformed dataset
unique_encounters = first_encounter_df['encounter_id'].nunique()
print("Number of unique encounters:{}".format(unique_encounters))
original_unique_patient_number = reduce_dim_df['patient_nbr'].nunique()
# number of unique patients should be equal to the number of unique encounters and patients in the final dataset
assert original_unique_patient_number == unique_patients
assert original_unique_patient_number == unique_encounters
print("Tests passed!!")
###Output
Number of unique patients:71518
Number of unique encounters:71518
Tests passed!!
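###Markdown
One way `select_first_encounter` could be implemented (a sketch; it relies on the assumption above that a smaller encounter_id means an earlier encounter):
###Code
# Sketch only: keep every line belonging to each patient's earliest encounter.
def select_first_encounter_sketch(df):
    first_encounter_ids = df.groupby('patient_nbr')['encounter_id'].min()
    return df[df['encounter_id'].isin(first_encounter_ids.values)].reset_index(drop=True)
###Output
_____no_output_____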
###Markdown
Aggregate Dataset to Right Level for Modeling In order to provide a broad scope of the steps and to prevent students from getting stuck with data transformations, we have selected the aggregation columns and provided a function to build the dataset at the appropriate level. The 'aggregate_dataset' function that you can find in the 'utils.py' file can take the preceding dataframe with the 'generic_drug_name' field and transform the data appropriately for the project. To make it simpler for students, we are creating dummy columns for each unique generic drug name and adding those as input features to the model. There are other options for data representation but this is out of scope for the time constraints of the course.
###Code
exclusion_list = ['ndc_code', 'generic_drug_name']
grouping_field_list = [c for c in first_encounter_df.columns if c not in exclusion_list]
agg_drug_df, ndc_col_list = aggregate_dataset(first_encounter_df, grouping_field_list, 'generic_drug_name')
assert len(agg_drug_df) == agg_drug_df['patient_nbr'].nunique() == agg_drug_df['encounter_id'].nunique()
###Output
_____no_output_____
###Markdown
Prepare Fields and Cast Dataset Feature Selection **Question 5**: After you have aggregated the dataset to the right level, we can do feature selection (we will include the ndc_col_list, dummy column features too). In the block below, please select the categorical and numerical features that you will use for the model, so that we can create a dataset subset. For the payer_code and weight fields, please provide whether you think we should include/exclude the field in our model and give a justification/rationale for this based off of the statistics of the data. Feel free to use visualizations or summary statistics to support your choice. Student response: Most of their values are missing, so they should be excluded.
###Code
df.weight.value_counts()
df.payer_code.value_counts()
num_cols
cat_cols
'''
Please update the list to include the features you think are appropriate for the model
and the field that we will be using to train the model. There are three required demographic features for the model
and I have inserted a list with them already in the categorical list.
These will be required for later steps when analyzing data splits and model biases.
'''
required_demo_col_list = ['race', 'gender', 'age']
student_categorical_col_list = ['medical_specialty', 'primary_diagnosis_code',
'other_diagnosis_codes', 'max_glu_serum', 'A1Cresult',
'change', 'readmitted'] \
+ required_demo_col_list + ndc_col_list
student_numerical_col_list = [ "num_procedures", "num_medications", 'number_diagnoses']
PREDICTOR_FIELD = 'time_in_hospital'
def select_model_features(df, categorical_col_list, numerical_col_list, PREDICTOR_FIELD, grouping_key='patient_nbr'):
selected_col_list = [grouping_key] + [PREDICTOR_FIELD] + categorical_col_list + numerical_col_list
    return df[selected_col_list]
selected_features_df = select_model_features(agg_drug_df, student_categorical_col_list, student_numerical_col_list,
PREDICTOR_FIELD)
###Output
_____no_output_____
###Markdown
Preprocess Dataset - Casting and Imputing We will cast and impute the dataset before splitting so that we do not have to repeat these steps across the splits in the next step. For imputing, there can be deeper analysis into which features to impute and how to impute but for the sake of time, we are taking a general strategy of imputing zero for only numerical features. OPTIONAL: What are some potential issues with this approach? Can you recommend a better way and also implement it?
###Code
processed_df = preprocess_df(selected_features_df, student_categorical_col_list,
student_numerical_col_list, PREDICTOR_FIELD, categorical_impute_value='nan', numerical_impute_value=0)
###Output
/home/workspace/starter_code/utils.py:29: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[predictor] = df[predictor].astype(float)
/home/workspace/starter_code/utils.py:31: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[c] = cast_df(df, c, d_type=str)
/home/workspace/starter_code/utils.py:33: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df[numerical_column] = impute_df(df, numerical_column, numerical_impute_value)
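###Markdown
One possible answer to the OPTIONAL question above: imputing zero can bias numerical features toward zero, so mean (or median) imputation is often preferable, ideally with statistics computed on the training partition to avoid leakage. A sketch, not applied in this notebook:
###Code
# Sketch only: impute numerical columns with their column mean instead of zero.
def impute_numerical_with_mean_sketch(df, numerical_col_list):
    out = df.copy()
    for c in numerical_col_list:
        out[c] = out[c].fillna(out[c].mean())
    return out
###Output
_____no_output_____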
###Markdown
Split Dataset into Train, Validation, and Test Partitions **Question 6**: In order to prepare the data for being trained and evaluated by a deep learning model, we will split the dataset into three partitions, with the validation partition used for optimizing the model hyperparameters during training. One of the key parts is that we need to be sure that the data does not accidentally leak across partitions. Please complete the function below to split the input dataset into three partitions (train, validation, test) with the following requirements.- Approximately 60%/20%/20% train/validation/test split- Randomly sample different patients into each data partition- **IMPORTANT** Make sure that a patient's data is not in more than one partition, so that we can avoid possible data leakage.- Make sure that the total number of unique patients across the splits is equal to the total number of unique patients in the original dataset- Total number of rows in original dataset = sum of rows across all three dataset partitions
###Code
from student_utils import patient_dataset_splitter
d_train, d_val, d_test = patient_dataset_splitter(processed_df, 'patient_nbr')
assert len(d_train) + len(d_val) + len(d_test) == len(processed_df)
print("Test passed for number of total rows equal!")
assert (d_train['patient_nbr'].nunique() + d_val['patient_nbr'].nunique() + d_test['patient_nbr'].nunique()) == agg_drug_df['patient_nbr'].nunique()
print("Test passed for number of unique patients being equal!")
###Output
Test passed for number of unique patients being equal!
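###Markdown
A sketch of how `patient_dataset_splitter` could satisfy the requirements above: sample unique patients into 60/20/20 buckets so that no patient appears in more than one partition.
###Code
# Sketch only: split on unique patients, then select each patient's rows.
import numpy as np

def patient_dataset_splitter_sketch(df, patient_key='patient_nbr'):
    np.random.seed(42)
    patients = df[patient_key].unique()
    np.random.shuffle(patients)
    n = len(patients)
    train_ids = patients[: int(0.6 * n)]
    val_ids = patients[int(0.6 * n): int(0.8 * n)]
    test_ids = patients[int(0.8 * n):]
    train = df[df[patient_key].isin(train_ids)].reset_index(drop=True)
    val = df[df[patient_key].isin(val_ids)].reset_index(drop=True)
    test = df[df[patient_key].isin(test_ids)].reset_index(drop=True)
    return train, val, test
###Output
_____no_output_____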
###Markdown
Demographic Representation Analysis of Split After the split, we should check to see the distribution of key features/groups and make sure that there are representative samples across the partitions. The show_group_stats_viz function in the utils.py file can be used to group and visualize different groups and dataframe partitions. Label Distribution Across Partitions Below you can see the distribution of the label across your splits. Are the histogram distribution shapes similar across partitions?
###Code
show_group_stats_viz(processed_df, PREDICTOR_FIELD)
show_group_stats_viz(d_train, PREDICTOR_FIELD)
show_group_stats_viz(d_test, PREDICTOR_FIELD)
###Output
time_in_hospital
1.0 2134
2.0 2474
3.0 2520
4.0 1956
5.0 1418
6.0 1041
7.0 805
8.0 521
9.0 422
10.0 286
11.0 258
12.0 188
13.0 155
14.0 127
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Demographic Group Analysis We should check that our partitions/splits of the dataset are similar in terms of their demographic profiles. Below you can see how we might visualize and analyze the full dataset vs. the partitions.
###Code
# Full dataset before splitting
patient_demo_features = ['race', 'gender', 'age', 'patient_nbr']
patient_group_analysis_df = processed_df[patient_demo_features].groupby('patient_nbr').head(1).reset_index(drop=True)
show_group_stats_viz(patient_group_analysis_df, 'gender')
# Training partition
show_group_stats_viz(d_train, 'gender')
# Test partition
show_group_stats_viz(d_test, 'gender')
###Output
gender
Female 7608
Male 6697
dtype: int64
AxesSubplot(0.125,0.125;0.775x0.755)
###Markdown
Convert Dataset Splits to TF Dataset We have provided you the function to convert the Pandas dataframe to TF tensors using the TF Dataset API. Please note that this is not a scalable method and for larger datasets, the 'make_csv_dataset' method is recommended -https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset.
###Code
# Convert dataset from Pandas dataframes to TF dataset
batch_size = 128
diabetes_train_ds = df_to_dataset(d_train, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_val_ds = df_to_dataset(d_val, PREDICTOR_FIELD, batch_size=batch_size)
diabetes_test_ds = df_to_dataset(d_test, PREDICTOR_FIELD, batch_size=batch_size)
# We use this sample of the dataset to show transformations later
diabetes_batch = next(iter(diabetes_train_ds))[0]
def demo(feature_column, example_batch):
feature_layer = layers.DenseFeatures(feature_column)
print(feature_layer(example_batch))
###Output
_____no_output_____
###Markdown
4. Create Categorical Features with TF Feature Columns Build Vocabulary for Categorical Features Before we can create the TF categorical features, we must first create the vocab files with the unique values for a given field that are from the **training** dataset. Below we have provided a function that you can use that only requires providing the pandas train dataset partition and the list of the categorical columns in a list format. The output variable 'vocab_file_list' will be a list of the file paths that can be used in the next step for creating the categorical features.
###Code
vocab_file_list = build_vocab_files(d_train, student_categorical_col_list)
###Output
_____no_output_____
###Markdown
Create Categorical Features with Tensorflow Feature Column API **Question 7**: Using the vocab file list from above that was derived from the features you selected earlier, please create categorical features with the Tensorflow Feature Column API, https://www.tensorflow.org/api_docs/python/tf/feature_column. Below is a function to help guide you.
###Code
from student_utils import create_tf_categorical_feature_cols
tf_cat_col_list = create_tf_categorical_feature_cols(student_categorical_col_list)
test_cat_var1 = tf_cat_col_list[0]
print("Example categorical field:\n{}".format(test_cat_var1))
demo(test_cat_var1, diabetes_batch)
###Output
Example categorical field:
IndicatorColumn(categorical_column=VocabularyFileCategoricalColumn(key='medical_specialty', vocabulary_file='./diabetes_vocab/medical_specialty_vocab.txt', vocabulary_size=68, num_oov_buckets=1, dtype=tf.string, default_value=-1))
WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4267: IndicatorColumn._variable_shape (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
WARNING:tensorflow:From /opt/conda/lib/python3.7/site-packages/tensorflow_core/python/feature_column/feature_column_v2.py:4322: VocabularyFileCategoricalColumn._num_buckets (from tensorflow.python.feature_column.feature_column_v2) is deprecated and will be removed in a future version.
Instructions for updating:
The old _FeatureColumn APIs are being deprecated. Please use the new FeatureColumn APIs instead.
tf.Tensor(
[[0. 0. 1. ... 0. 0. 0.]
[0. 0. 1. ... 0. 0. 0.]
[0. 0. 1. ... 0. 0. 0.]
...
[0. 0. 1. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 0. ... 0. 0. 0.]], shape=(128, 69), dtype=float32)
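###Markdown
For reference, a sketch of `create_tf_categorical_feature_cols` that is consistent with the example output above (vocabulary-file categorical columns with one OOV bucket, wrapped in indicator columns); the `./diabetes_vocab/` path and `_vocab.txt` suffix are taken from that output.
###Code
# Sketch only.
import os
import tensorflow as tf

def create_tf_categorical_feature_cols_sketch(categorical_col_list, vocab_dir='./diabetes_vocab/'):
    output_tf_list = []
    for c in categorical_col_list:
        vocab_file_path = os.path.join(vocab_dir, c + "_vocab.txt")
        cat_col = tf.feature_column.categorical_column_with_vocabulary_file(
            key=c, vocabulary_file=vocab_file_path, num_oov_buckets=1)
        output_tf_list.append(tf.feature_column.indicator_column(cat_col))
    return output_tf_list
###Output
_____no_output_____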
###Markdown
5. Create Numerical Features with TF Feature Columns **Question 8**: Using the TF Feature Column API(https://www.tensorflow.org/api_docs/python/tf/feature_column/), please create normalized Tensorflow numeric features for the model. Try to use the z-score normalizer function below to help as well as the 'calculate_stats_from_train_data' function.
###Code
from student_utils import create_tf_numeric_feature
###Output
_____no_output_____
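###Markdown
A sketch of what `create_tf_numeric_feature` might look like, based on the example output further below (a `numeric_column` whose `normalizer_fn` is a z-score partial; the helper name `normalize_numeric_with_zscore` is taken from that output):
###Code
# Sketch only.
import functools
import tensorflow as tf

def normalize_numeric_with_zscore(col, mean, std):
    return (col - mean) / std

def create_tf_numeric_feature_sketch(col, mean, std, default_value=0):
    normalizer = functools.partial(normalize_numeric_with_zscore, mean=mean, std=std)
    return tf.feature_column.numeric_column(
        key=col, default_value=default_value, normalizer_fn=normalizer, dtype=tf.float64)
###Output
_____no_output_____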
###Markdown
For simplicity the create_tf_numerical_feature_cols function below uses the same normalizer function across all features(z-score normalization) but if you have time feel free to analyze and adapt the normalizer based off the statistical distributions. You may find this as a good resource in determining which transformation fits best for the data https://developers.google.com/machine-learning/data-prep/transform/normalization.
###Code
def calculate_stats_from_train_data(df, col):
mean = df[col].describe()['mean']
std = df[col].describe()['std']
return mean, std
def create_tf_numerical_feature_cols(numerical_col_list, train_df):
tf_numeric_col_list = []
for c in numerical_col_list:
mean, std = calculate_stats_from_train_data(train_df, c)
tf_numeric_feature = create_tf_numeric_feature(c, mean, std)
tf_numeric_col_list.append(tf_numeric_feature)
return tf_numeric_col_list
tf_cont_col_list = create_tf_numerical_feature_cols(student_numerical_col_list, d_train)
test_cont_var1 = tf_cont_col_list[0]
print("Example continuous field:\n{}\n".format(test_cont_var1))
demo(test_cont_var1, diabetes_batch)
###Output
Example continuous field:
NumericColumn(key='num_procedures', shape=(1,), default_value=(0,), dtype=tf.float64, normalizer_fn=functools.partial(<function normalize_numeric_with_zscore at 0x7f2a8a73bf80>, mean=1.43299930086227, std=1.764152648140571))
tf.Tensor(
[[ 1.]
[ 1.]
[-1.]
[-1.]
[ 5.]
[ 0.]
[ 2.]
[ 2.]
[-1.]
[ 5.]
[-1.]
[ 2.]
[ 0.]
[-1.]
[-1.]
[-1.]
[ 2.]
[ 1.]
[-1.]
[ 2.]
[ 1.]
[-1.]
[-1.]
[ 2.]
[-1.]
[-1.]
[ 1.]
[-1.]
[ 0.]
[ 5.]
[ 1.]
[-1.]
[ 1.]
[-1.]
[ 0.]
[-1.]
[ 0.]
[-1.]
[ 0.]
[ 0.]
[ 1.]
[-1.]
[ 5.]
[-1.]
[-1.]
[ 0.]
[ 1.]
[-1.]
[-1.]
[-1.]
[ 0.]
[-1.]
[-1.]
[ 1.]
[-1.]
[ 0.]
[ 2.]
[ 0.]
[ 2.]
[ 0.]
[ 2.]
[-1.]
[-1.]
[ 2.]
[-1.]
[ 0.]
[ 0.]
[-1.]
[ 2.]
[ 1.]
[ 0.]
[ 1.]
[ 0.]
[ 0.]
[ 0.]
[ 2.]
[ 1.]
[ 0.]
[ 4.]
[ 1.]
[ 2.]
[ 2.]
[ 4.]
[ 2.]
[-1.]
[ 1.]
[ 1.]
[-1.]
[-1.]
[ 5.]
[-1.]
[ 4.]
[-1.]
[ 3.]
[ 5.]
[ 4.]
[ 4.]
[-1.]
[-1.]
[ 5.]
[-1.]
[-1.]
[-1.]
[ 0.]
[ 0.]
[ 1.]
[ 2.]
[-1.]
[-1.]
[-1.]
[ 0.]
[-1.]
[-1.]
[ 0.]
[ 2.]
[ 3.]
[ 4.]
[-1.]
[-1.]
[ 0.]
[-1.]
[-1.]
[-1.]
[ 1.]
[-1.]
[ 3.]
[ 0.]
[-1.]], shape=(128, 1), dtype=float32)
###Markdown
6. Build Deep Learning Regression Model with Sequential API and TF Probability Layers Use DenseFeatures to combine features for model Now that we have prepared categorical and numerical features using Tensorflow's Feature Column API, we can combine them into a dense vector representation for the model. Below we will create this new input layer, which we will call 'claim_feature_layer'.
###Code
claim_feature_columns = tf_cat_col_list + tf_cont_col_list
claim_feature_layer = tf.keras.layers.DenseFeatures(claim_feature_columns)
###Output
_____no_output_____
###Markdown
Build Sequential API Model from DenseFeatures and TF Probability Layers Below we have provided some boilerplate code for building a model that connects the Sequential API, DenseFeatures, and Tensorflow Probability layers into a deep learning model. There are many opportunities to further optimize and explore different architectures through benchmarking and testing approaches in various research papers, loss and evaluation metrics, learning curves, hyperparameter tuning, TF probability layers, etc. Feel free to modify and explore as you wish. **OPTIONAL**: Come up with a more optimal neural network architecture and hyperparameters. Share the process in discovering the architecture and hyperparameters.
###Code
def build_sequential_model(feature_layer):
model = tf.keras.Sequential([
feature_layer,
tf.keras.layers.Dense(150, activation='relu'),
tf.keras.layers.Dense(75, activation='relu'),
tfp.layers.DenseVariational(1+1, posterior_mean_field, prior_trainable),
tfp.layers.DistributionLambda(
lambda t:tfp.distributions.Normal(loc=t[..., :1],
scale=1e-3 + tf.math.softplus(0.01 * t[...,1:])
)
),
])
return model
def build_diabetes_model(train_ds, val_ds, feature_layer, epochs=5, loss_metric='mse'):
model = build_sequential_model(feature_layer)
model.compile(optimizer='rmsprop', loss=loss_metric, metrics=[loss_metric])
early_stop = tf.keras.callbacks.EarlyStopping(monitor=loss_metric, patience=3)
history = model.fit(train_ds, validation_data=val_ds,
callbacks=[early_stop],
epochs=epochs)
return model, history
diabetes_model, history = build_diabetes_model(diabetes_train_ds, diabetes_val_ds, claim_feature_layer, epochs=10)
###Output
Train for 336 steps, validate for 112 steps
Epoch 1/10
336/336 [==============================] - 20s 58ms/step - loss: 26.4788 - mse: 26.3736 - val_loss: 21.1405 - val_mse: 20.7571
Epoch 2/10
336/336 [==============================] - 15s 43ms/step - loss: 16.8917 - mse: 16.2208 - val_loss: 15.0622 - val_mse: 14.4143
Epoch 3/10
336/336 [==============================] - 15s 44ms/step - loss: 13.1844 - mse: 12.2936 - val_loss: 11.9025 - val_mse: 11.0950
Epoch 4/10
336/336 [==============================] - 14s 42ms/step - loss: 11.4688 - mse: 10.4919 - val_loss: 12.0444 - val_mse: 11.1572
Epoch 5/10
336/336 [==============================] - 14s 42ms/step - loss: 10.9421 - mse: 10.0278 - val_loss: 9.0770 - val_mse: 8.2187
Epoch 6/10
336/336 [==============================] - 14s 42ms/step - loss: 10.0541 - mse: 9.2117 - val_loss: 9.3794 - val_mse: 8.3291
Epoch 7/10
336/336 [==============================] - 14s 42ms/step - loss: 9.5871 - mse: 8.7296 - val_loss: 9.5821 - val_mse: 8.7455
Epoch 8/10
336/336 [==============================] - 14s 42ms/step - loss: 9.2402 - mse: 8.3207 - val_loss: 10.4398 - val_mse: 9.3203
Epoch 9/10
336/336 [==============================] - 14s 42ms/step - loss: 8.9946 - mse: 8.0942 - val_loss: 9.2380 - val_mse: 8.1695
Epoch 10/10
336/336 [==============================] - 14s 43ms/step - loss: 8.9650 - mse: 8.0283 - val_loss: 8.9260 - val_mse: 7.8368
###Markdown
Show Model Uncertainty Range with TF Probability **Question 9**: Now that we have trained a model with TF Probability layers, we can extract the mean and standard deviation for each prediction. Please fill in the answer for the m and s variables below. The code for getting the predictions is provided for you below.
###Code
feature_list = student_categorical_col_list + student_numerical_col_list
diabetes_x_tst = dict(d_test[feature_list])
diabetes_yhat = diabetes_model(diabetes_x_tst)
preds = diabetes_model.predict(diabetes_test_ds)
from student_utils import get_mean_std_from_preds
m, s = get_mean_std_from_preds(diabetes_yhat)
###Output
_____no_output_____
###Markdown
Show Prediction Output
###Code
prob_outputs = {
"pred": preds.flatten(),
"actual_value": d_test['time_in_hospital'].values,
"pred_mean": m.numpy().flatten(),
"pred_std": s.numpy().flatten()
}
prob_output_df = pd.DataFrame(prob_outputs)
prob_output_df.head()
###Output
_____no_output_____
###Markdown
Convert Regression Output to Classification Output for Patient Selection **Question 10**: Given the output predictions, convert them to a binary label for whether the patient meets the time criteria or does not (HINT: use the mean prediction numpy array). The expected output is a numpy array with a 1 or 0 based on whether the prediction meets or doesn't meet the criteria.
###Code
from student_utils import get_student_binary_prediction
student_binary_prediction = get_student_binary_prediction(prob_output_df, 'pred_mean')
###Output
_____no_output_____
###Markdown
Add Binary Prediction to Test Dataframe Using the student_binary_prediction output, a numpy array with binary labels, we can add the predictions to a dataframe to better visualize the results and to prepare the data for the Aequitas toolkit. The Aequitas toolkit requires that the predictions be mapped to a binary label (the 'score' field) and the actual value (the 'label_value' field).
###Code
def add_pred_to_test(test_df, pred_np, demo_col_list):
for c in demo_col_list:
test_df[c] = test_df[c].astype(str)
test_df['score'] = pred_np
test_df['label_value'] = test_df['time_in_hospital'].apply(lambda x: 1 if x >=5 else 0)
return test_df
pred_test_df = add_pred_to_test(d_test, student_binary_prediction, ['race', 'gender'])
pred_test_df[['patient_nbr', 'gender', 'race', 'time_in_hospital', 'score', 'label_value']].head()
###Output
_____no_output_____
###Markdown
Model Evaluation Metrics **Question 11**: Now it is time to use the newly created binary labels in the 'pred_test_df' dataframe to evaluate the model with some common classification metrics. Please create a report summary of the performance of the model and be sure to give the ROC AUC, F1 score(weighted), class precision and recall scores. For the report please be sure to include the following three parts:- With a non-technical audience in mind, explain the precision-recall tradeoff in regard to how you have optimized your model.- What are some areas of improvement for future iterations?
###Code
from sklearn.metrics import f1_score, classification_report, roc_auc_score
# AUC, F1, precision and recall
# Summary
print(classification_report(pred_test_df['label_value'], pred_test_df['score']))
print("Weighted F1:", f1_score(pred_test_df['label_value'], pred_test_df['score'], average='weighted'))
print("ROC AUC:", roc_auc_score(pred_test_df['label_value'], pred_test_df['score']))
###Output
_____no_output_____
###Markdown
Precision measures the accuracy or exactness of the model while recall measures the completeness of the model. Both should be as high as possible since we need to identify patients as accurately as possible. The model may be improved by trying various hyper-parameters, such as increasing the number of hidden layers or the number of neurons in the hidden layers. Also, we can use algorithms to select the best possible set of features. 7. Evaluating Potential Model Biases with Aequitas Toolkit Prepare Data For Aequitas Bias Toolkit Using the gender and race fields, we will prepare the data for the Aequitas Toolkit.
###Code
# Aequitas
from aequitas.preprocessing import preprocess_input_df
from aequitas.group import Group
from aequitas.plotting import Plot
from aequitas.bias import Bias
from aequitas.fairness import Fairness
ae_subset_df = pred_test_df[['race', 'gender', 'score', 'label_value']]
ae_df, _ = preprocess_input_df(ae_subset_df)
g = Group()
xtab, _ = g.get_crosstabs(ae_df)
absolute_metrics = g.list_absolute_metrics(xtab)
clean_xtab = xtab.fillna(-1)
aqp = Plot()
b = Bias()
###Output
/opt/conda/lib/python3.7/site-packages/aequitas/group.py:143: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
df['score'] = df['score'].astype(float)
###Markdown
Reference Group Selection Below we have chosen the reference group for our analysis but feel free to select another one.
###Code
# test reference group with Caucasian Male
bdf = b.get_disparity_predefined_groups(clean_xtab,
original_df=ae_df,
ref_groups_dict={'race':'Caucasian', 'gender':'Male'
},
alpha=0.05,
check_significance=False)
f = Fairness()
fdf = f.get_group_value_fairness(bdf)
###Output
get_disparity_predefined_group()
###Markdown
Race and Gender Bias Analysis for Patient Selection **Question 12**: For the gender and race fields, please plot two metrics that are important for patient selection below and state whether there is a significant bias in your model across any of the groups along with justification for your statement.
###Code
# Plot two metrics
tpr = aqp.plot_group_metric(clean_xtab, 'tpr', min_group_size=0.05)
fpr = aqp.plot_group_metric(clean_xtab, 'fpr', min_group_size=0.05)
tnr = aqp.plot_group_metric(clean_xtab, 'tnr', min_group_size=0.05)
###Output
_____no_output_____
###Markdown
**Is there significant bias in your model for either race or gender?** No, there is not. Fairness Analysis Example - Relative to a Reference Group **Question 13**: Earlier we defined our reference group and then calculated disparity metrics relative to this grouping. Please provide a visualization of the fairness evaluation for this reference group and analyze whether there is disparity.
###Code
aqp.plot_disparity(bdf, group_metric='fpr_disparity', attribute_name='race')
aqp.plot_fairness_disparity(fdf, group_metric='tpr', attribute_name='gender')
fpr_fairness = aqp.plot_fairness_group(fdf, group_metric='fpr', title=True)
###Output
_____no_output_____ |
databook/airflow/.ipynb_checkpoints/mlflow-checkpoint.ipynb | ###Markdown
MLFlow Machine Learning Workflow - notebook- MLFlow tutorial: https://my.oschina.net/u/2306127/blog/1825690- MLFlow official documentation: https://www.mlflow.org/docs/latest/quickstart.html- Quick install
: ** pip install mlflow **
###Code
# Download the code
#!git clone https://github.com/databricks/mlflow
#%%!
#export https_proxy=http://192.168.199.99:9999
#echo $https_proxy
#pip install mlflow
#!pip install mlflow
#!ls -l mlflow
# The data set used in this example is from http://archive.ics.uci.edu/ml/datasets/Wine+Quality
# P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis.
# Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009.
import os
import warnings
import sys
import pandas as pd
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.linear_model import ElasticNet
import mlflow
import mlflow.sklearn
def eval_metrics(actual, pred):
rmse = np.sqrt(mean_squared_error(actual, pred))
mae = mean_absolute_error(actual, pred)
r2 = r2_score(actual, pred)
return rmse, mae, r2
###Output
_____no_output_____
###Markdown
Prepare the data
###Code
warnings.filterwarnings("ignore")
np.random.seed(40)
# Read the wine-quality csv file (make sure you're running this from the root of MLflow!)
#wine_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "./mlflow/example/wine-quality.csv")
wine_path = "../mlflow/example/tutorial/wine-quality.csv"
data = pd.read_csv(wine_path)
# Split the data into training and test sets. (0.75, 0.25) split.
train, test = train_test_split(data)
# The predicted column is "quality" which is a scalar from [3, 9]
train_x = train.drop(["quality"], axis=1)
train_y = train[["quality"]]
test_x = test.drop(["quality"], axis=1)
test_y = test[["quality"]]
print("Traing dataset:\n")
train[0:10]
#train_x[0:10]
#train_y[0:10]
#test_x[0:10]
#test_y[0:10]
###Output
_____no_output_____
###Markdown
Fit the model, run predictions on the data, evaluate accuracy, and log the parameters and metrics.
###Code
def learning(alpha = 0.5, l1_ratio = 0.5):
with mlflow.start_run():
lr = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, random_state=42)
lr.fit(train_x, train_y)
predicted_qualities = lr.predict(test_x)
(rmse, mae, r2) = eval_metrics(test_y, predicted_qualities)
#print("Elasticnet model (alpha=%f, l1_ratio=%f):" % (alpha, l1_ratio))
#print(" RMSE: %s" % rmse)
#print(" MAE: %s" % mae)
#print(" R2: %s" % r2)
print("Elasticnet model (alpha=%f, l1_ratio=%f): \tRMSE: %s, \tMAE: %s, \tR2: %s" % (alpha, l1_ratio,rmse,mae,r2))
mlflow.log_param("alpha", alpha)
mlflow.log_param("l1_ratio", l1_ratio)
mlflow.log_metric("rmse", rmse)
mlflow.log_metric("r2", r2)
mlflow.log_metric("mae", mae)
mlflow.sklearn.log_model(lr, "model")
###Output
_____no_output_____
###Markdown
Compute the error for the given parameters.
###Code
learning()
learning(0.8,0.8)
###Output
Elasticnet model (alpha=0.500000, l1_ratio=0.500000): RMSE: 0.82224284976, MAE: 0.627876141016, R2: 0.126787219728
Elasticnet model (alpha=0.800000, l1_ratio=0.800000): RMSE: 0.859868563763, MAE: 0.647899138083, R2: 0.0450425619538
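###Markdown
Optionally, the logged runs can be compared to find the best parameter combination. The `mlflow ui` command (run from a terminal in the project directory) serves the tracking UI at http://localhost:5000 on any MLflow version; the DataFrame query below assumes a newer MLflow (1.x+) that provides `mlflow.search_runs`, so it is guarded.
###Code
# Sketch: query logged runs as a DataFrame when the installed MLflow supports it.
if hasattr(mlflow, "search_runs"):
    runs = mlflow.search_runs(order_by=["metrics.rmse ASC"])
    print(runs[["params.alpha", "params.l1_ratio", "metrics.rmse", "metrics.r2"]].head())
else:
    print("mlflow.search_runs is not available in this version; use the `mlflow ui` command instead.")
###Output
_____no_output_____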
###Markdown
Batch computation over multiple parameter combinations.
###Code
# Total number of steps for each parameter; the parameter grid is generated automatically from this.
steps_alpha = 10
steps_l1_ratio = 10
# Start the computation.
for i in range(steps_alpha):
for j in range(steps_l1_ratio):
learning(i/10,j/10)
###Output
Elasticnet model (alpha=0.000000, l1_ratio=0.000000): RMSE: 0.742416293856, MAE: 0.577516890713, R2: 0.288106771584
Elasticnet model (alpha=0.000000, l1_ratio=0.100000): RMSE: 0.742416293856, MAE: 0.577516890713, R2: 0.288106771584
Elasticnet model (alpha=0.000000, l1_ratio=0.200000): RMSE: 0.742416293856, MAE: 0.577516890713, R2: 0.288106771584
Elasticnet model (alpha=0.000000, l1_ratio=0.300000): RMSE: 0.742416293856, MAE: 0.577516890713, R2: 0.288106771584
Elasticnet model (alpha=0.000000, l1_ratio=0.400000): RMSE: 0.742416293856, MAE: 0.577516890713, R2: 0.288106771584
Elasticnet model (alpha=0.100000, l1_ratio=0.000000): RMSE: 0.775783244087, MAE: 0.60754896565, R2: 0.222678531173
Elasticnet model (alpha=0.100000, l1_ratio=0.100000): RMSE: 0.779254652225, MAE: 0.611254798812, R2: 0.215706384307
Elasticnet model (alpha=0.100000, l1_ratio=0.200000): RMSE: 0.781877044345, MAE: 0.613321681121, R2: 0.210418803165
Elasticnet model (alpha=0.100000, l1_ratio=0.300000): RMSE: 0.782695805624, MAE: 0.613885048232, R2: 0.208764279596
Elasticnet model (alpha=0.100000, l1_ratio=0.400000): RMSE: 0.783754647528, MAE: 0.614627757447, R2: 0.206622041909
Elasticnet model (alpha=0.200000, l1_ratio=0.000000): RMSE: 0.779520598952, MAE: 0.610614801966, R2: 0.215170960074
Elasticnet model (alpha=0.200000, l1_ratio=0.100000): RMSE: 0.783698402191, MAE: 0.614202045269, R2: 0.206735909712
Elasticnet model (alpha=0.200000, l1_ratio=0.200000): RMSE: 0.785912999706, MAE: 0.615529039409, R2: 0.202246318229
Elasticnet model (alpha=0.200000, l1_ratio=0.300000): RMSE: 0.787848332596, MAE: 0.616559998445, R2: 0.19831249881
Elasticnet model (alpha=0.200000, l1_ratio=0.400000): RMSE: 0.790005142831, MAE: 0.61768105351, R2: 0.193917098051
Elasticnet model (alpha=0.300000, l1_ratio=0.000000): RMSE: 0.782264334365, MAE: 0.612377858715, R2: 0.209636397144
Elasticnet model (alpha=0.300000, l1_ratio=0.100000): RMSE: 0.787047763084, MAE: 0.615798595502, R2: 0.199940935299
Elasticnet model (alpha=0.300000, l1_ratio=0.200000): RMSE: 0.790579437095, MAE: 0.617609348205, R2: 0.192744708087
Elasticnet model (alpha=0.300000, l1_ratio=0.300000): RMSE: 0.794271918471, MAE: 0.619284328849, R2: 0.185186362912
Elasticnet model (alpha=0.300000, l1_ratio=0.400000): RMSE: 0.798262713739, MAE: 0.620875195132, R2: 0.176977779674
Elasticnet model (alpha=0.400000, l1_ratio=0.000000): RMSE: 0.78485197624, MAE: 0.613701061, R2: 0.204398882218
Elasticnet model (alpha=0.400000, l1_ratio=0.100000): RMSE: 0.790906912437, MAE: 0.617428849224, R2: 0.192075803886
Elasticnet model (alpha=0.400000, l1_ratio=0.200000): RMSE: 0.795854840602, MAE: 0.619684897077, R2: 0.181935406329
Elasticnet model (alpha=0.400000, l1_ratio=0.300000): RMSE: 0.800678981456, MAE: 0.621375629016, R2: 0.171987814078
Elasticnet model (alpha=0.400000, l1_ratio=0.400000): RMSE: 0.805299973089, MAE: 0.622592038556, R2: 0.162402752567
|
Bark 101/Layer_Activation_Visualization_from_Saved_Model_Bark_101.ipynb | ###Markdown
Load Libraries:
###Code
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D
from tensorflow.keras import backend as K
from tensorflow.keras import activations
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model, load_model
from tensorflow.keras import models
from tensorflow.keras import layers
import cv2
import numpy as np
from tqdm import tqdm
import math
import os
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load Model:
###Code
work_dir = "drive/My Drive/Texture/Bark-101-anonymized/Records/"
checkpointer_name = "best_weights.Bark_101.DataAug.rgb.256p.TTV.DataFlow.pad0.TL.3D.DenseNet201.wInit.imagenet.TrainableAfter.allDefault.Dense.1024.1024.2048.actF.elu.opt.Adam.drop.0.5.batch8.Flatten.l2.0.001.run_2.hdf5"
model_loaded = load_model(work_dir+checkpointer_name)
print("Loaded "+work_dir+checkpointer_name+".")
model_loaded.summary()
###Output
_____no_output_____
###Markdown
Model Layers:
###Code
layer_names = [] # conv4_block48_2_conv, conv3_block12_2_conv
for layer in model_loaded.layers:
layer_names.append(layer.name)
print(layer_names)
layer_no = -9
print(f"layer_names[{layer_no}] = {layer_names[layer_no]}")
###Output
_____no_output_____
###Markdown
By Loading the Entire Test Set at Once:
###Code
'''
input_path = "drive/My Drive/Plant_Leaf_MalayaKew_MK_Dataset/"
filename = "D2_Plant_Leaf_MalayaKew_MK_impl_1_Original_RGB_test_X.pkl.npy"
#'''
#input_test = np.load(f"{input_path}{filename}", allow_pickle=True)
'''
print(f"input_test.shape = {input_test.shape}")
#'''
'''
layer_outputs = [layer.output for layer in model_loaded.layers]
activation_model = models.Model(inputs=model_loaded.input, outputs=layer_outputs)
activations = activation_model.predict(input_test)
#'''
###Output
_____no_output_____
###Markdown
By Loading a Single Image at a Time:
###Code
root_path = "drive/My Drive/Texture/Bark-101-anonymized/Bark-101 Split/test/"
#num_classes = 44
#list_classes = [f"Class{i+1}" for i in range(num_classes)]
list_classes = [i for i in [1,11,22,33,44]]
list_input_path = []
for class_name in list_classes:
list_input_path.append(f"{root_path}{class_name}/")
print(f"len(list_input_path) = {len(list_input_path)}")
os.listdir(list_input_path[0])[0]
list_full_paths = []
choose_different_index = 0
for input_path in list_input_path:
filename = os.listdir(input_path)[choose_different_index]
choose_different_index += 0
list_full_paths.append(f"{input_path}{filename}")
print(f"len(list_full_paths) = {len(list_full_paths)}")
list_full_paths
'''
filename = "Class44(8)R315_00277.jpg"
test_image = cv2.imread(f"{input_path}{filename}")
print(f"test_image.shape = {test_image.shape}")
input_test = np.expand_dims(test_image, 0)
print(f"input_test.shape = {input_test.shape}")
#'''
list_test_images = []
for file_full_path in list_full_paths:
test_image = cv2.imread(file_full_path)
resized = cv2.resize(test_image, (256,256), interpolation = cv2.INTER_NEAREST)
print(f"file_full_path: {file_full_path}")
list_test_images.append(resized)
np_test_images = np.array(list_test_images)
print(f"np_test_images.shape = {np_test_images.shape}")
###Output
file_full_path: drive/My Drive/Texture/Bark-101-anonymized/Bark-101 Split/test/1/14021.jpg
file_full_path: drive/My Drive/Texture/Bark-101-anonymized/Bark-101 Split/test/11/10754.jpg
file_full_path: drive/My Drive/Texture/Bark-101-anonymized/Bark-101 Split/test/22/102683.jpg
file_full_path: drive/My Drive/Texture/Bark-101-anonymized/Bark-101 Split/test/33/80217.jpg
file_full_path: drive/My Drive/Texture/Bark-101-anonymized/Bark-101 Split/test/44/101770.jpg
np_test_images.shape = (5, 256, 256, 3)
###Markdown
Get Layer Activation Outputs:
###Code
layer_outputs = [layer.output for layer in model_loaded.layers]
activation_model = models.Model(inputs=model_loaded.input, outputs=layer_outputs)
#activations = activation_model.predict(input_test)
list_activations = []
for test_image in tqdm(np_test_images):
activations = activation_model.predict(np.array([test_image]))
list_activations.append(activations)
print(f"\nlen(list_activations) = {len(list_activations)}")
###Output
_____no_output_____
###Markdown
Visualize:
###Code
'''
input_1(256,256,3), conv1/relu(128,128,64), pool2_relu(64,64,256), pool3_relu(32,32,512), pool4_relu(16,16,1792), relu(8,8,1920)
'''
#target_layer_name = "conv3_block12_concat"
list_target_layer_names = ['input_1', 'conv1/relu', 'pool2_relu', 'pool3_relu', 'pool4_relu', 'relu']
list_layer_indices = []
for target_layer_name in list_target_layer_names:
for target_layer_index in range(len(layer_names)):
if layer_names[target_layer_index]==target_layer_name:
#layer_no = target_layer_index
list_layer_indices.append(target_layer_index)
#print(f"layer_names[{layer_no}] = {layer_names[layer_no]}")
print(f"list_layer_indices = {list_layer_indices}")
for activations in list_activations:
print(len(activations))
'''
current_layer = activations[layer_no]
num_neurons = current_layer.shape[1:][-1]
print(f"current_layer.shape = {current_layer.shape}")
print(f"image_dimension = {current_layer.shape[1:][:-1]}")
print(f"num_neurons = {num_neurons}")
#'''
list_all_activations_layers = []
list_all_num_neurons = []
# list_all_activations_layers -> list_activations_layers -> current_layer -> activations[layer_no]
for activations in list_activations:
list_activations_layers = []
list_neurons = []
for layer_no in list_layer_indices:
current_layer = activations[layer_no]
#print(f"current_layer.shape = {current_layer.shape}")
list_activations_layers.append(current_layer)
#list_current_layers.append(current_layer)
list_neurons.append(current_layer.shape[1:][-1])
list_all_activations_layers.append(list_activations_layers)
list_all_num_neurons.append(list_neurons)
print(f"len(list_all_activations_layers) = {len(list_all_activations_layers)}")
print(f"len(list_all_activations_layers[0]) = {len(list_all_activations_layers[0])}")
print(f"list_all_activations_layers[0][0] = {list_all_activations_layers[0][0].shape}")
print(f"list_all_num_neurons = {list_all_num_neurons}")
print(f"list_all_num_neurons[0] = {list_all_num_neurons[0]}")
print(f"list_all_activations_layers[0][0] = {list_all_activations_layers[0][0].shape}")
print(f"list_all_activations_layers[0][1] = {list_all_activations_layers[0][1].shape}")
print(f"list_all_activations_layers[0][2] = {list_all_activations_layers[0][2].shape}")
print(f"list_all_activations_layers[0][3] = {list_all_activations_layers[0][3].shape}")
print(f"list_all_activations_layers[0][4] = {list_all_activations_layers[0][4].shape}")
#print(f"list_all_activations_layers[0][5] = {list_all_activations_layers[0][5].shape}")
#'''
current_layer = list_all_activations_layers[0][1]
superimposed_activation_image = current_layer[0, :, :, 0]
for activation_image_index in range(1, current_layer.shape[-1]):
current_activation_image = current_layer[0, :, :, activation_image_index]
superimposed_activation_image = np.add(superimposed_activation_image, current_activation_image) # elementwise addition
plt.imshow(superimposed_activation_image, cmap='viridis')
#'''
#'''
current_layer = list_all_activations_layers[-1][1]
superimposed_activation_image = current_layer[0, :, :, 0]
for activation_image_index in range(1, current_layer.shape[-1]):
current_activation_image = current_layer[0, :, :, activation_image_index]
superimposed_activation_image = np.add(superimposed_activation_image, current_activation_image) # elementwise addition
plt.imshow(superimposed_activation_image, cmap='viridis')
#'''
#plt.matshow(current_layer[0, :, :, -1], cmap ='PiYG')
#plt.matshow(current_layer[0, :, :, -1], cmap ='viridis')
'''
superimposed_activation_image = current_layer[0, :, :, 0]
for activation_image_index in range(1, num_neurons):
current_activation_image = current_layer[0, :, :, activation_image_index]
#superimposed_activation_image = np.multiply(superimposed_activation_image, current_activation_image) # elementwise multiplication
superimposed_activation_image = np.add(superimposed_activation_image, current_activation_image) # elementwise addition
plt.imshow(superimposed_activation_image, cmap='viridis')
#'''
'''
superimposed_activation_image = current_layer[0, :, :, 0]
for activation_image_index in range(1, len(num_neurons)):
current_activation_image = current_layer[0, :, :, activation_image_index]
superimposed_activation_image = np.add(superimposed_activation_image, current_activation_image) # elementwise addition
plt.imshow(superimposed_activation_image, cmap='viridis')
#'''
#'''
# list_all_activations_layers -> list_activations_layers -> current_layer -> activations[layer_no]
#num_activation_per_test_image = len(list_target_layer_names)
list_all_superimposed_activation_image = []
for list_activations_layers_index in range(len(list_all_activations_layers)):
list_activations_layers = list_all_activations_layers[list_activations_layers_index]
list_current_num_neurons = list_all_num_neurons[list_activations_layers_index]
#print(f"list_activations_layers_index = {list_activations_layers_index}")
#print(f"list_all_num_neurons = {list_all_num_neurons}")
#print(f"list_current_num_neurons = {list_current_num_neurons}")
list_superimposed_activation_image = []
for activations_layer_index in range(len(list_activations_layers)):
activations_layers = list_activations_layers[activations_layer_index]
#print(f"activations_layers.shape = {activations_layers.shape}")
num_neurons = list_current_num_neurons[activations_layer_index]
superimposed_activation_image = activations_layers[0, :, :, 0]
for activation_image_index in range(1, num_neurons):
current_activation_image = activations_layers[0, :, :, activation_image_index]
superimposed_activation_image = np.add(superimposed_activation_image, current_activation_image) # elementwise addition
#print(f"superimposed_activation_image.shape = {superimposed_activation_image.shape}")
list_superimposed_activation_image.append(superimposed_activation_image)
#print(f"list_superimposed_activation_image[0].shape = {list_superimposed_activation_image[0].shape}")
list_all_superimposed_activation_image.append(list_superimposed_activation_image)
#print(f"list_all_superimposed_activation_image[0][0].shape = {list_all_superimposed_activation_image[0][0].shape}")
#plt.imshow(superimposed_activation_image, cmap='viridis')
print(f"len(list_all_superimposed_activation_image) = {len(list_all_superimposed_activation_image)}")
print(f"len(list_all_superimposed_activation_image[0]) = {len(list_all_superimposed_activation_image[0])}")
print(f"len(list_all_superimposed_activation_image[0][0]) = {len(list_all_superimposed_activation_image[0][0])}")
print(f"list_all_superimposed_activation_image[0][0].shape = {list_all_superimposed_activation_image[0][0].shape}")
#'''
'''
interpolation = cv2.INTER_LINEAR # INTER_LINEAR, INTER_CUBIC, INTER_NEAREST
# list_all_activations_layers -> list_activations_layers -> current_layer -> activations[layer_no]
#num_activation_per_test_image = len(list_target_layer_names)
list_all_superimposed_activation_image = []
for list_activations_layers_index in range(len(list_all_activations_layers)):
list_activations_layers = list_all_activations_layers[list_activations_layers_index]
list_current_num_neurons = list_all_num_neurons[list_activations_layers_index]
#print(f"list_activations_layers_index = {list_activations_layers_index}")
#print(f"list_all_num_neurons = {list_all_num_neurons}")
#print(f"list_current_num_neurons = {list_current_num_neurons}")
list_superimposed_activation_image = []
for activations_layer_index in range(len(list_activations_layers)):
activations_layers = list_activations_layers[activations_layer_index]
#print(f"activations_layers.shape = {activations_layers.shape}")
num_neurons = list_current_num_neurons[activations_layer_index]
superimposed_activation_image = activations_layers[0, :, :, 0]
superimposed_activation_image_resized = cv2.resize(superimposed_activation_image, (256,256), interpolation = interpolation)
for activation_image_index in range(1, num_neurons):
current_activation_image = activations_layers[0, :, :, activation_image_index]
#superimposed_activation_image = np.add(superimposed_activation_image, current_activation_image) # elementwise addition
current_activation_image_resized = cv2.resize(current_activation_image, (256,256), interpolation = interpolation)
superimposed_activation_image_resized = np.add(superimposed_activation_image_resized, current_activation_image_resized) # elementwise addition
#print(f"superimposed_activation_image.shape = {superimposed_activation_image.shape}")
#list_superimposed_activation_image.append(superimposed_activation_image)
list_superimposed_activation_image.append(superimposed_activation_image_resized)
#print(f"list_superimposed_activation_image[0].shape = {list_superimposed_activation_image[0].shape}")
list_all_superimposed_activation_image.append(list_superimposed_activation_image)
#print(f"list_all_superimposed_activation_image[0][0].shape = {list_all_superimposed_activation_image[0][0].shape}")
#plt.imshow(superimposed_activation_image, cmap='viridis')
print(f"len(list_all_superimposed_activation_image) = {len(list_all_superimposed_activation_image)}")
print(f"len(list_all_superimposed_activation_image[0]) = {len(list_all_superimposed_activation_image[0])}")
print(f"len(list_all_superimposed_activation_image[0][0]) = {len(list_all_superimposed_activation_image[0][0])}")
print(f"list_all_superimposed_activation_image[0][0].shape = {list_all_superimposed_activation_image[0][0].shape}")
print(f"list_all_superimposed_activation_image[0][-1].shape = {list_all_superimposed_activation_image[0][-1].shape}")
#'''
'''
supported cmap values are: 'Accent', 'Accent_r', 'Blues', 'Blues_r', 'BrBG', 'BrBG_r', 'BuGn', 'BuGn_r', 'BuPu', 'BuPu_r', 'CMRmap', 'CMRmap_r',
'Dark2', 'Dark2_r', 'GnBu', 'GnBu_r', 'Greens', 'Greens_r', 'Greys', 'Greys_r', 'OrRd', 'OrRd_r', 'Oranges', 'Oranges_r', 'PRGn', 'PRGn_r',
'Paired', 'Paired_r', 'Pastel1', 'Pastel1_r', 'Pastel2', 'Pastel2_r', 'PiYG', 'PiYG_r', 'PuBu', 'PuBuGn', 'PuBuGn_r', 'PuBu_r', 'PuOr',
'PuOr_r', 'PuRd', 'PuRd_r', 'Purples', 'Purples_r', 'RdBu', 'RdBu_r', 'RdGy', 'RdGy_r', 'RdPu', 'RdPu_r', 'RdYlBu', 'RdYlBu_r', 'RdYlGn',
'RdYlGn_r', 'Reds', 'Reds_r', 'Set1', 'Set1_r', 'Set2', 'Set2_r', 'Set3', 'Set3_r', 'Spectral', 'Spectral_r', 'Wistia', 'Wistia_r', 'YlGn',
'YlGnBu', 'YlGnBu_r', 'YlGn_r', 'YlOrBr', 'YlOrBr_r', 'YlOrRd', 'YlOrRd_r', 'afmhot', 'afmhot_r', 'autumn', 'autumn_r', 'binary', 'binary_r',
'bone', 'bone_r', 'brg', 'brg_r', 'bwr', 'bwr_r', 'cividis', 'cividis_r', 'cool', 'cool_r', 'coolwarm', 'coolwarm_r', 'copper', 'copper_r',
'cubehelix', 'cubehelix_r', 'flag', 'flag_r', 'gist_earth', 'gist_earth_r', 'gist_gray', 'gist_gray_r', 'gist_heat', 'gist_heat_r', 'gist_ncar',
'gist_ncar_r', 'gist_rainbow', 'gist_rainbow_r', 'gist_stern', 'gist_stern_r', 'gist_yarg', 'gist_yarg_r', 'gnuplot', 'gnuplot2', 'gnuplot2_r',
'gnuplot_r', 'gray', 'gray_r', 'hot', 'hot_r', 'hsv', 'hsv_r', 'inferno', 'inferno_r', 'jet', 'jet_r', 'magma', 'magma_r', 'nipy_spectral',
'nipy_spectral_r', 'ocean', 'oc...
'''
sub_fig_num_rows = len(list_test_images)
sub_fig_num_cols = len(list_target_layer_names)
fig_height = 11
fig_width = 11
cmap = "copper" # PuOr_r, Dark2, Dark2_r, RdBu, RdBu_r, coolwarm, viridis, PiYG, gray, binary, afmhot, PuBu, copper
fig, axes = plt.subplots(sub_fig_num_rows, sub_fig_num_cols, figsize=(fig_width, fig_height))
#plt.suptitle(f"Layer {str(layer_no+1)}: {layer_names[layer_no]} {str(current_layer.shape[1:])}", fontsize=20, y=1.1)
for i,ax in enumerate(axes.flat):
row = i//sub_fig_num_cols
col = i%sub_fig_num_cols
#print(f"i={i}; row={row}, col={col}")
#'''
ax.imshow(list_all_superimposed_activation_image[row][col], cmap=cmap)
#ax.imshow(list_all_superimposed_activation_image[row][col])
ax.set_xticks([])
ax.set_yticks([])
if col == 0:
ax.set_ylabel(f"Class {list_classes[row]}")
if row == 0:
#ax.set_xlabel(f"Layer {str(list_layer_indices[col])}") # , rotation=0, ha='right'
ax.set_xlabel(str(list_target_layer_names[col]))
#ax.set_xlabel(f"Layer {str(list_layer_indices[col])}: {str(list_target_layer_names[col])}") # , rotation=0, ha='right'
ax.xaxis.set_label_position('top')
ax.set_aspect('auto')
plt.subplots_adjust(wspace=0.02, hspace=0.05)
img_path = 'drive/My Drive/Visualizations/'+checkpointer_name[8:-5]+'.png'
plt.savefig(img_path, dpi=600)
plt.show()
print('img_path =', img_path)
#'''
# good cmap for this work: PuOr_r, Dark2_r, RdBu, RdBu_r, coolwarm, viridis, PiYG
'''
for activation_image_index in range(num_neurons):
plt.imshow(current_layer[0, :, :, activation_image_index], cmap='PiYG')
#'''
plt.imshow(superimposed_activation_image, cmap='gray')
###Output
_____no_output_____
###Markdown
Weight Visualization:
###Code
layer_outputs = [layer.output for layer in model_loaded.layers]
#activation_model = models.Model(inputs=model_loaded.input, outputs=layer_outputs)
#activations = activation_model.predict(input_test)
layer_configs = []
layer_weights = []
for layer in model_loaded.layers:
layer_configs.append(layer.get_config())
layer_weights.append(layer.get_weights())
print(f"len(layer_configs) = {len(layer_configs)}")
print(f"len(layer_weights) = {len(layer_weights)}")
layer_configs[-9]
layer_name = 'conv2_block1_1_conv' # conv5_block32_1_conv
model_weight = model_loaded.get_layer(layer_name).get_weights()[0]
#model_biases = model_loaded.get_layer(layer_name).get_weights()[1]
print(f"type(model_weight) = {type(model_weight)}")
print(f"model_weight.shape = {model_weight.shape}")
model_weight[0][0].shape
plt.matshow(model_weight[0, 0, :, :], cmap='viridis')  # (in_channels x out_channels) slice of the kernel at spatial position (0, 0)
###Output
_____no_output_____ |
reinforcement_learning/rl_deepracer_robomaker_coach_gazebo/rl_deepracer_coach_robomaker.ipynb | ###Markdown
Distributed DeepRacer RL training with SageMaker and RoboMaker--- IntroductionIn this notebook, we will train a fully autonomous 1/18th scale race car with reinforcement learning, using Amazon SageMaker RL and AWS RoboMaker's 3D driving simulator. [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) is a service that makes it easy for developers to develop, test, and deploy robotics applications. This notebook provides a jailbreak experience of [AWS DeepRacer](https://console.aws.amazon.com/deepracer/homewelcome), giving us more control over the training/simulation process and RL algorithm tuning.--- How does it work? The reinforcement learning agent (i.e. our autonomous car) learns to drive by interacting with its environment, e.g., the track, by taking an action in a given state to maximize the expected reward. The agent learns the optimal plan of actions in training by trial and error through repeated episodes. The figure above shows an example of distributed RL training across SageMaker and two RoboMaker simulation environments that perform the **rollouts** - executing a fixed number of episodes using the current model or policy. The rollouts collect agent experiences (state-transition tuples) and share this data with SageMaker for training. SageMaker updates the model policy, which is then used to execute the next sequence of rollouts. This training loop continues until the model converges, i.e. the car learns to drive and stops going off-track. More formally, we can define the problem in terms of the following: 1. **Objective**: Learn to drive autonomously by staying close to the center of the track.2. **Environment**: A 3D driving simulator hosted on AWS RoboMaker.3. **State**: The driving POV image captured by the car's head camera, as shown in the illustration above.4. **Action**: Six discrete steering wheel positions at different angles (configurable).5. **Reward**: Positive reward for staying close to the center line; high penalty for going off-track. This is configurable and can be made more complex (e.g., a steering penalty can be added). Prerequisites Imports To get started, we'll import the Python libraries we need and set up the environment with a few prerequisites for permissions and configurations. You can run this notebook from your local machine or from a SageMaker notebook instance. In both of these scenarios, you can run the following to launch a training job on `SageMaker` and a simulation job on `RoboMaker`.
###Code
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import Markdown
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
from markdown_helper import *
###Output
_____no_output_____
###Markdown
Setup S3 bucket Set up the linkage and authentication to the S3 bucket that we want to use for checkpoint and metadata.
###Code
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket) # SDK appends the job name and output folder
###Output
_____no_output_____
###Markdown
Define Variables We define variables such as the job prefix for the training jobs and s3_prefix for storing metadata required for synchronization between the training and simulation jobs
###Code
job_name_prefix = 'rl-deepracer'
# create unique job name
tm = gmtime()
job_name = s3_prefix = job_name_prefix + "-sagemaker-" + strftime("%y%m%d-%H%M%S", tm) #Ensure S3 prefix contains SageMaker
s3_prefix_robomaker = job_name_prefix + "-robomaker-" + strftime("%y%m%d-%H%M%S", tm) #Ensure that the S3 prefix contains the keyword 'robomaker'
# Duration of job in seconds (5 hours)
job_duration_in_seconds = 3600 * 5
aws_region = sage_session.boto_region_name
if aws_region not in ["us-west-2", "us-east-1", "eu-west-1"]:
raise Exception("This notebook uses RoboMaker which is available only in US East (N. Virginia), US West (Oregon) and EU (Ireland). Please switch to one of these regions.")
print("Model checkpoints and other metadata will be stored at: {}{}".format(s3_output_path, job_name))
###Output
_____no_output_____
###Markdown
Create an IAM roleEither get the execution role when running from a SageMaker notebook `role = sagemaker.get_execution_role()` or, when running from local machine, use utils method `role = get_execution_role('role_name')` to create an execution role.
###Code
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role('sagemaker')
print("Using IAM role arn: {}".format(role))
###Output
_____no_output_____
###Markdown
> Please note that this notebook cannot be run in `SageMaker local mode`, as the simulator is based on the AWS RoboMaker service. Permission setup for invoking AWS RoboMaker from this notebook To enable this notebook to execute AWS RoboMaker jobs, we need to add one trust relationship to the default execution role of this notebook.
###Code
display(Markdown(generate_help_for_robomaker_trust_relationship(role)))
###Output
_____no_output_____
###Markdown
Configure VPC Since SageMaker and RoboMaker have to communicate with each other over the network, both of these services need to run in VPC mode. This can be done by supplying subnets and security groups to the job launching scripts. We will use the default VPC configuration for this example.
###Code
ec2 = boto3.client('ec2')
default_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] if vpc["IsDefault"] == True][0]
default_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups'] \
if group["GroupName"] == "default" and group["VpcId"] == default_vpc]
default_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == default_vpc and subnet['DefaultForAz']==True]
print("Using default VPC:", default_vpc)
print("Using default security group:", default_security_groups)
print("Using default subnets:", default_subnets)
###Output
_____no_output_____
###Markdown
A SageMaker job running in VPC mode cannot access S3 resources, so we need to create a VPC S3 endpoint to allow S3 access from the SageMaker container. To learn more about the VPC mode, please visit [this link.](https://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html)
###Code
try:
route_tables = [route_table["RouteTableId"] for route_table in ec2.describe_route_tables()['RouteTables']\
if route_table['VpcId'] == default_vpc]
except Exception as e:
if "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
print("Trying to attach S3 endpoints to the following route tables:", route_tables)
assert len(route_tables) >= 1, "No route tables were found. Please follow the VPC S3 endpoint creation "\
"guide by clicking the above link."
try:
ec2.create_vpc_endpoint(DryRun=False,
VpcEndpointType="Gateway",
VpcId=default_vpc,
ServiceName="com.amazonaws.{}.s3".format(aws_region),
RouteTableIds=route_tables)
print("S3 endpoint created successfully!")
except Exception as e:
if "RouteAlreadyExists" in str(e):
print("S3 endpoint already exists.")
elif "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
raise e
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
###Output
_____no_output_____
###Markdown
Setup the environment The environment is defined in a Python file called `deepracer_env.py` and can be found at `src/robomaker/environments/`. This file implements the gym interface for our Gazebo-based RoboMaker simulator. It is a common environment file used by both SageMaker and RoboMaker. The environment variable `NODE_TYPE` defines which node the code is running on, so the expressions that have `rospy` dependencies are executed on RoboMaker only. We can experiment with different reward functions by modifying `reward_function` in this file. The action space and steering angles can be changed by modifying the step method in the `DeepRacerDiscreteEnv` class. Configure the preset for RL algorithmThe parameters that configure the RL training job are defined in `src/robomaker/presets/deepracer.py`. Using the preset file, you can define agent parameters to select the specific agent algorithm. We suggest using Clipped PPO for this example. You can edit this file to modify algorithm parameters like learning_rate, neural network structure, batch_size, discount factor, etc.
###Code
!pygmentize src/robomaker/presets/deepracer.py
###Output
_____no_output_____
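###Markdown
As an illustration of the `reward_function` mentioned above, a minimal custom reward might look like the sketch below. This is only a hypothetical example of the kind of logic you could place in `deepracer_env.py`; the argument names (`on_track`, `distance_from_center`, `track_width`) are assumptions for this sketch and not the exact signature used by the environment file.
###Code
# Hypothetical sketch of a custom reward function; NOT the one shipped in deepracer_env.py.
# Assumed inputs: on_track (bool), distance_from_center and track_width in the same length units.
def reward_function(on_track, distance_from_center, track_width):
    if not on_track:
        return 1e-3  # near-zero reward acts as a high penalty for going off-track
    half_width = 0.5 * track_width
    # Reward decays linearly from 1.0 at the center line towards 0 at the track edge
    return max(1e-3, 1.0 - distance_from_center / half_width)

print(reward_function(True, 0.1, 1.0))   # close to the center -> reward near 1.0
print(reward_function(False, 0.1, 1.0))  # off-track -> heavy penalty
###Output
_____no_output_____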
###Markdown
Training EntrypointThe training code is written in the file `training_worker.py`, which is uploaded as part of the /src directory. At a high level, it does the following:- Uploads the SageMaker node's IP address.- Starts a Redis server which receives agent experiences sent by the rollout worker[s] (RoboMaker simulator).- Trains the model every time a certain number of episodes have been received.- Uploads the new model weights to S3. The rollout workers then update their model to execute the next set of episodes.
###Code
# Uncomment the line below to see the training code
#!pygmentize src/training_worker.py
###Output
_____no_output_____
###Markdown
Train the RL model using the Python SDK Script mode First, we upload the preset and environment file to a particular location on S3, as expected by RoboMaker.
###Code
s3_location = "s3://%s/%s" % (s3_bucket, s3_prefix)
# Make sure nothing exists at this S3 prefix
!aws s3 rm --recursive {s3_location}
# Make any changes to the environment and preset files below and upload these files
!aws s3 cp src/robomaker/environments/ {s3_location}/environments/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
!aws s3 cp src/robomaker/presets/ {s3_location}/presets/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
###Output
_____no_output_____
###Markdown
Next, we define the following algorithm metrics that we want to capture from CloudWatch logs to monitor the training progress. These are algorithm-specific parameters and might change for a different algorithm. We use [Clipped PPO](https://coach.nervanasys.com/algorithms/policy_optimization/cppo/index.html) for this example.
###Code
metric_definitions = [
# Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1
{'Name': 'reward-training',
'Regex': '^Training>.*Total reward=(.*?),'},
# Policy training> Surrogate loss=-0.32664725184440613, KL divergence=7.255815035023261e-06, Entropy=2.83156156539917, training epoch=0, learning_rate=0.00025
{'Name': 'ppo-surrogate-loss',
'Regex': '^Policy training>.*Surrogate loss=(.*?),'},
{'Name': 'ppo-entropy',
'Regex': '^Policy training>.*Entropy=(.*?),'},
# Testing> Name=main_level/agent, Worker=0, Episode=19, Total reward=1359.12, Steps=20015, Training iteration=2
{'Name': 'reward-testing',
'Regex': '^Testing>.*Total reward=(.*?),'},
]
###Output
_____no_output_____
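###Markdown
As a quick, purely illustrative sanity check (not required for training), you can verify that one of the regular expressions above extracts the reward from the sample log line shown in the comments:
###Code
import re

# Sample training log line copied from the comment in the previous cell
sample_log = "Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1"
pattern = metric_definitions[0]['Regex']        # '^Training>.*Total reward=(.*?),'
print(re.search(pattern, sample_log).group(1))  # prints: -102.88
###Output
_____no_output_____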
###Markdown
We use the RLEstimator for training RL jobs.1. Specify the source directory which has the environment file, preset and training code.2. Specify the entry point as the training code.3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.4. Define the training parameters such as the instance count, instance type, job name, s3_bucket and s3_prefix for storing model checkpoints and metadata. **Only 1 training instance is supported for now.**5. Set the RLCOACH_PRESET as "deepracer" for this example.6. Define the metric definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
###Code
RLCOACH_PRESET = "deepracer"
instance_type = "ml.c4.2xlarge"
estimator = RLEstimator(entry_point="training_worker.py",
source_dir='src',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='0.10.1',
framework=RLFramework.TENSORFLOW,
role=role,
train_instance_type=instance_type,
train_instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
train_max_run=job_duration_in_seconds, # Maximum runtime in seconds
hyperparameters={"s3_bucket": s3_bucket,
"s3_prefix": s3_prefix,
"aws_region": aws_region,
"RLCOACH_PRESET": RLCOACH_PRESET,
},
metric_definitions = metric_definitions,
subnets=default_subnets, # Required for VPC mode
security_group_ids=default_security_groups, # Required for VPC mode
)
estimator.fit(job_name=job_name, wait=False)
###Output
_____no_output_____
###Markdown
Start the RoboMaker job
###Code
from botocore.exceptions import UnknownServiceError
robomaker = boto3.client("robomaker")
###Output
_____no_output_____
###Markdown
Create Simulation Application We first create a RoboMaker simulation application using the `DeepRacer public bundle`. Please refer to [RoboMaker Sample Application Github Repository](https://github.com/aws-robotics/aws-robomaker-sample-application-deepracer) if you want to learn more about this bundle or modify it.
###Code
bundle_s3_key = 'deepracer/simulation_ws.tar.gz'
bundle_source = {'s3Bucket': s3_bucket,
's3Key': bundle_s3_key,
'architecture': "X86_64"}
simulation_software_suite={'name': 'Gazebo',
'version': '7'}
robot_software_suite={'name': 'ROS',
'version': 'Kinetic'}
rendering_engine={'name': 'OGRE', 'version': '1.x'}
###Output
_____no_output_____
###Markdown
Download the public DeepRacer bundle provided by RoboMaker and upload it to our S3 bucket to create a RoboMaker Simulation Application.
###Code
simulation_application_bundle_location = "https://s3-us-west-2.amazonaws.com/robomaker-applications-us-west-2-11d8d0439f6a/deep-racer/deep-racer-1.0.57.0.1.0.66.0/simulation_ws.tar.gz"
!wget {simulation_application_bundle_location}
!aws s3 cp simulation_ws.tar.gz s3://{s3_bucket}/{bundle_s3_key}
!rm simulation_ws.tar.gz
app_name = "deepracer-sample-application" + strftime("%y%m%d-%H%M%S", gmtime())
try:
response = robomaker.create_simulation_application(name=app_name,
sources=[bundle_source],
simulationSoftwareSuite=simulation_software_suite,
robotSoftwareSuite=robot_software_suite,
renderingEngine=rendering_engine
)
simulation_app_arn = response["arn"]
print("Created a new simulation app with ARN:", simulation_app_arn)
except Exception as e:
if "AccessDeniedException" in str(e):
display(Markdown(generate_help_for_robomaker_all_permissions(role)))
raise e
else:
raise e
###Output
_____no_output_____
###Markdown
Launch the Simulation job on RoboMakerWe create [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) Simulation Jobs that simulate the environment and share this data with SageMaker for training.
###Code
# Use more rollout workers for faster convergence
num_simulation_workers = 1
envriron_vars = {
"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"WORLD_NAME": "hard_track", # Can be one of "easy_track", "medium_track", "hard_track"
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"NUMBER_OF_ROLLOUT_WORKERS": str(num_simulation_workers)}
simulation_application = {"application": simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "distributed_training.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig,
outputLocation={"s3Bucket":s3_bucket, "s3Prefix":s3_prefix_robomaker}
)
responses.append(response)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for job_arn in job_arns:
print("Job ARN", job_arn)
###Output
_____no_output_____
###Markdown
Visualizing the simulations in RoboMaker You can visit the RoboMaker console to visualize the simulations or run the following cell to generate the hyperlinks.
###Code
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
###Output
_____no_output_____
###Markdown
Plot metrics for training job
###Code
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
intermediate_folder_key = "{}/output/intermediate".format(job_name)
%matplotlib inline
import pandas as pd
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = intermediate_folder_key + "/" + csv_file_name
wait_for_s3_object(s3_bucket, key, tmp_dir)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=['Training Reward'])
x_axis = 'Episode #'
y_axis = 'Training Reward'
plt = df.plot(x=x_axis,y=y_axis, figsize=(12,5), legend=True, style='b-')
plt.set_ylabel(y_axis);
plt.set_xlabel(x_axis);
###Output
_____no_output_____
###Markdown
Clean Up Execute the cells below if you want to kill the RoboMaker and SageMaker jobs.
###Code
for job_arn in job_arns:
robomaker.cancel_simulation_job(job=job_arn)
sage_session.sagemaker_client.stop_training_job(TrainingJobName=estimator._current_job_name)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
envriron_vars = {"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"NUMBER_OF_TRIALS": str(20),
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"WORLD_NAME": "hard_track",
}
simulation_application = {"application":simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "evaluation.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig,
outputLocation={"s3Bucket":s3_bucket, "s3Prefix":s3_prefix_robomaker}
)
print("Created the following job:")
print("Job ARN", response["arn"])
###Output
_____no_output_____
###Markdown
Clean Up Simulation Application Resource
###Code
robomaker.delete_simulation_application(application=simulation_app_arn)
###Output
_____no_output_____
###Markdown
Distributed DeepRacer RL training with SageMaker and RoboMaker--- IntroductionIn this notebook, we will train a fully autonomous 1/18th scale race car with reinforcement learning, using Amazon SageMaker RL and AWS RoboMaker's 3D driving simulator. [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) is a service that makes it easy for developers to develop, test, and deploy robotics applications. This notebook provides a jailbreak experience of [AWS DeepRacer](https://console.aws.amazon.com/deepracer/homewelcome), giving us more control over the training/simulation process and RL algorithm tuning.--- How does it work? The reinforcement learning agent (i.e. our autonomous car) learns to drive by interacting with its environment, e.g., the track, by taking an action in a given state to maximize the expected reward. The agent learns the optimal plan of actions in training by trial and error through repeated episodes. The figure above shows an example of distributed RL training across SageMaker and two RoboMaker simulation environments that perform the **rollouts** - executing a fixed number of episodes using the current model or policy. The rollouts collect agent experiences (state-transition tuples) and share this data with SageMaker for training. SageMaker updates the model policy, which is then used to execute the next sequence of rollouts. This training loop continues until the model converges, i.e. the car learns to drive and stops going off-track. More formally, we can define the problem in terms of the following: 1. **Objective**: Learn to drive autonomously by staying close to the center of the track.2. **Environment**: A 3D driving simulator hosted on AWS RoboMaker.3. **State**: The driving POV image captured by the car's head camera, as shown in the illustration above.4. **Action**: Six discrete steering wheel positions at different angles (configurable).5. **Reward**: Positive reward for staying close to the center line; high penalty for going off-track. This is configurable and can be made more complex (e.g., a steering penalty can be added). Prerequisites Imports To get started, we'll import the Python libraries we need and set up the environment with a few prerequisites for permissions and configurations. You can run this notebook from your local machine or from a SageMaker notebook instance. In both of these scenarios, you can run the following to launch a training job on `SageMaker` and a simulation job on `RoboMaker`.
###Code
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import Markdown
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
from markdown_helper import *
###Output
_____no_output_____
###Markdown
Setup S3 bucket Set up the linkage and authentication to the S3 bucket that we want to use for checkpoint and metadata.
###Code
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket) # SDK appends the job name and output folder
###Output
_____no_output_____
###Markdown
Define Variables We define variables such as the job prefix for the training jobs and s3_prefix for storing metadata required for synchronization between the training and simulation jobs
###Code
job_name_prefix = 'rl-deepracer'
# create unique job name
job_name = s3_prefix = job_name_prefix + "-sagemaker-" + strftime("%y%m%d-%H%M%S", gmtime())
# Duration of job in seconds (5 hours)
job_duration_in_seconds = 3600 * 5
aws_region = sage_session.boto_region_name
if aws_region not in ["us-west-2", "us-east-1", "eu-west-1"]:
raise Exception("This notebook uses RoboMaker which is available only in US East (N. Virginia), US West (Oregon) and EU (Ireland). Please switch to one of these regions.")
print("Model checkpoints and other metadata will be stored at: {}{}".format(s3_output_path, job_name))
###Output
_____no_output_____
###Markdown
Create an IAM roleEither get the execution role when running from a SageMaker notebook `role = sagemaker.get_execution_role()` or, when running from local machine, use utils method `role = get_execution_role('role_name')` to create an execution role.
###Code
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role('sagemaker')
print("Using IAM role arn: {}".format(role))
###Output
_____no_output_____
###Markdown
> Please note that this notebook cannot be run in `SageMaker local mode`, as the simulator is based on the AWS RoboMaker service. Permission setup for invoking AWS RoboMaker from this notebook To enable this notebook to execute AWS RoboMaker jobs, we need to add one trust relationship to the default execution role of this notebook.
###Code
display(Markdown(generate_help_for_robomaker_trust_relationship(role)))
###Output
_____no_output_____
###Markdown
Configure VPC Since SageMaker and RoboMaker have to communicate with each other over the network, both of these services need to run in VPC mode. This can be done by supplying subnets and security groups to the job launching scripts. We will use the default VPC configuration for this example.
###Code
ec2 = boto3.client('ec2')
default_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] if vpc["IsDefault"] == True][0]
default_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups'] \
if group["GroupName"] == "default" and group["VpcId"] == default_vpc]
default_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == default_vpc and subnet['DefaultForAz']==True]
print("Using default VPC:", default_vpc)
print("Using default security group:", default_security_groups)
print("Using default subnets:", default_subnets)
###Output
_____no_output_____
###Markdown
A SageMaker job running in VPC mode cannot access S3 resources, so we need to create a VPC S3 endpoint to allow S3 access from the SageMaker container. To learn more about the VPC mode, please visit [this link.](https://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html)
###Code
try:
route_tables = [route_table["RouteTableId"] for route_table in ec2.describe_route_tables()['RouteTables']\
if route_table['VpcId'] == default_vpc]
except Exception as e:
if "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
print("Trying to attach S3 endpoints to the following route tables:", route_tables)
assert len(route_tables) >= 1, "No route tables were found. Please follow the VPC S3 endpoint creation "\
"guide by clicking the above link."
try:
ec2.create_vpc_endpoint(DryRun=False,
VpcEndpointType="Gateway",
VpcId=default_vpc,
ServiceName="com.amazonaws.{}.s3".format(aws_region),
RouteTableIds=route_tables)
print("S3 endpoint created successfully!")
except Exception as e:
if "RouteAlreadyExists" in str(e):
print("S3 endpoint already exists.")
elif "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
raise e
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
###Output
_____no_output_____
###Markdown
Setup the environment The environment is defined in a Python file called `deepracer_env.py` and can be found at `src/robomaker/environments/`. This file implements the gym interface for our Gazebo-based RoboMaker simulator. It is a common environment file used by both SageMaker and RoboMaker. The environment variable `NODE_TYPE` defines which node the code is running on, so the expressions that have `rospy` dependencies are executed on RoboMaker only. We can experiment with different reward functions by modifying `reward_function` in this file. The action space and steering angles can be changed by modifying the step method in the `DeepRacerDiscreteEnv` class. Configure the preset for RL algorithmThe parameters that configure the RL training job are defined in `src/robomaker/presets/deepracer.py`. Using the preset file, you can define agent parameters to select the specific agent algorithm. We suggest using Clipped PPO for this example. You can edit this file to modify algorithm parameters like learning_rate, neural network structure, batch_size, discount factor, etc.
###Code
!pygmentize src/robomaker/presets/deepracer.py
###Output
_____no_output_____
###Markdown
Training EntrypointThe training code is written in the file `training_worker.py`, which is uploaded as part of the /src directory. At a high level, it does the following:- Uploads the SageMaker node's IP address.- Starts a Redis server which receives agent experiences sent by the rollout worker[s] (RoboMaker simulator).- Trains the model every time a certain number of episodes have been received.- Uploads the new model weights to S3. The rollout workers then update their model to execute the next set of episodes.
###Code
# Uncomment the line below to see the training code
#!pygmentize src/training_worker.py
###Output
_____no_output_____
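###Markdown
To make the flow above concrete, here is a highly simplified, hypothetical sketch of a training loop of this shape. It is not the actual `training_worker.py`; the queue name, `EPISODES_PER_UPDATE`, `MODEL_WEIGHTS_FILE`, and `update_policy` are illustrative placeholders, and the real script delegates the policy update to RL Coach.
###Code
# Hypothetical structural sketch only -- not the real training_worker.py.
import boto3
import redis

EPISODES_PER_UPDATE = 20            # assumption: train after this many episodes
MODEL_WEIGHTS_FILE = "model.ckpt"   # assumption: local checkpoint written by the trainer

def update_policy(episodes):
    """Placeholder for the Clipped PPO update that RL Coach performs on the collected rollouts."""
    pass

redis_server = redis.Redis(host="localhost", port=6379)  # rollout workers push experiences here
s3 = boto3.client("s3")

episodes = []
while True:
    _, payload = redis_server.blpop("experience_queue")  # blocking pop of one serialized episode
    episodes.append(payload)
    if len(episodes) >= EPISODES_PER_UPDATE:
        update_policy(episodes)                          # train on the collected experiences
        s3.upload_file(MODEL_WEIGHTS_FILE, s3_bucket,    # publish new weights for the rollout workers
                       "{}/model/{}".format(s3_prefix, MODEL_WEIGHTS_FILE))
        episodes = []
###Output
_____no_output_____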
###Markdown
Train the RL model using the Python SDK Script mode First, we upload the preset and environment file to a particular location on S3, as expected by RoboMaker.
###Code
s3_location = "s3://%s/%s" % (s3_bucket, s3_prefix)
# Make sure nothing exists at this S3 prefix
!aws s3 rm --recursive {s3_location}
# Make any changes to the environment and preset files below and upload these files
!aws s3 cp src/robomaker/environments/ {s3_location}/environments/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
!aws s3 cp src/robomaker/presets/ {s3_location}/presets/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
###Output
_____no_output_____
###Markdown
Next, we define the following algorithm metrics that we want to capture from CloudWatch logs to monitor the training progress. These are algorithm-specific parameters and might change for a different algorithm. We use [Clipped PPO](https://coach.nervanasys.com/algorithms/policy_optimization/cppo/index.html) for this example.
###Code
metric_definitions = [
# Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1
{'Name': 'reward-training',
'Regex': '^Training>.*Total reward=(.*?),'},
# Policy training> Surrogate loss=-0.32664725184440613, KL divergence=7.255815035023261e-06, Entropy=2.83156156539917, training epoch=0, learning_rate=0.00025
{'Name': 'ppo-surrogate-loss',
'Regex': '^Policy training>.*Surrogate loss=(.*?),'},
{'Name': 'ppo-entropy',
'Regex': '^Policy training>.*Entropy=(.*?),'},
# Testing> Name=main_level/agent, Worker=0, Episode=19, Total reward=1359.12, Steps=20015, Training iteration=2
{'Name': 'reward-testing',
'Regex': '^Testing>.*Total reward=(.*?),'},
]
###Output
_____no_output_____
###Markdown
We use the RLEstimator for training RL jobs.1. Specify the source directory which has the environment file, preset and training code.2. Specify the entry point as the training code.3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.4. Define the training parameters such as the instance count, instance type, job name, s3_bucket and s3_prefix for storing model checkpoints and metadata. **Only 1 training instance is supported for now.**5. Set the RLCOACH_PRESET as "deepracer" for this example.6. Define the metric definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
###Code
RLCOACH_PRESET = "deepracer"
instance_type = "ml.c5.4xlarge"
estimator = RLEstimator(entry_point="training_worker.py",
source_dir='src',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='0.10.1',
framework=RLFramework.TENSORFLOW,
role=role,
train_instance_type=instance_type,
train_instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
train_max_run=job_duration_in_seconds, # Maximum runtime in seconds
hyperparameters={"s3_bucket": s3_bucket,
"s3_prefix": s3_prefix,
"aws_region": aws_region,
"RLCOACH_PRESET": RLCOACH_PRESET,
},
metric_definitions = metric_definitions,
subnets=default_subnets, # Required for VPC mode
security_group_ids=default_security_groups, # Required for VPC mode
)
estimator.fit(job_name=job_name, wait=False)
###Output
_____no_output_____
###Markdown
Start the RoboMaker job
###Code
from botocore.exceptions import UnknownServiceError
robomaker = boto3.client("robomaker")
###Output
_____no_output_____
###Markdown
Create Simulation Application We first create a RoboMaker simulation application using the `DeepRacer public bundle`. Please refer to [RoboMaker Sample Application Github Repository](https://github.com/aws-robotics/aws-robomaker-sample-application-deepracer) if you want to learn more about this bundle or modify it.
###Code
bundle_s3_key = 'deepracer/simulation_ws.tar.gz'
bundle_source = {'s3Bucket': s3_bucket,
's3Key': bundle_s3_key,
'architecture': "X86_64"}
simulation_software_suite={'name': 'Gazebo',
'version': '7'}
robot_software_suite={'name': 'ROS',
'version': 'Kinetic'}
rendering_engine={'name': 'OGRE', 'version': '1.x'}
###Output
_____no_output_____
###Markdown
Download the public DeepRacer bundle provided by RoboMaker and upload it to our S3 bucket to create a RoboMaker Simulation Application.
###Code
simulation_application_bundle_location = "https://s3-us-west-2.amazonaws.com/robomaker-applications-us-west-2-11d8d0439f6a/deep-racer/deep-racer-1.0.57.0.1.0.66.0/simulation_ws.tar.gz"
!wget {simulation_application_bundle_location}
!aws s3 cp simulation_ws.tar.gz s3://{s3_bucket}/{bundle_s3_key}
!rm simulation_ws.tar.gz
app_name = "deepracer-sample-application" + strftime("%y%m%d-%H%M%S", gmtime())
try:
response = robomaker.create_simulation_application(name=app_name,
sources=[bundle_source],
simulationSoftwareSuite=simulation_software_suite,
robotSoftwareSuite=robot_software_suite,
renderingEngine=rendering_engine
)
simulation_app_arn = response["arn"]
print("Created a new simulation app with ARN:", simulation_app_arn)
except Exception as e:
if "AccessDeniedException" in str(e):
display(Markdown(generate_help_for_robomaker_all_permissions(role)))
raise e
else:
raise e
###Output
_____no_output_____
###Markdown
Launch the Simulation job on RoboMakerWe create [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) Simulation Jobs that simulate the environment and share this data with SageMaker for training.
###Code
# Use more rollout workers for faster convergence
num_simulation_workers = 1
envriron_vars = {
"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"WORLD_NAME": "hard_track", # Can be one of "easy_track", "medium_track", "hard_track"
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"NUMBER_OF_ROLLOUT_WORKERS": str(num_simulation_workers)}
simulation_application = {"application": simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "distributed_training.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig
)
responses.append(response)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for job_arn in job_arns:
print("Job ARN", job_arn)
###Output
_____no_output_____
###Markdown
Visualizing the simulations in RoboMaker You can visit the RoboMaker console to visualize the simulations or run the following cell to generate the hyperlinks.
###Code
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
###Output
_____no_output_____
###Markdown
Plot metrics for training job
###Code
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
intermediate_folder_key = "{}/output/intermediate".format(job_name)
%matplotlib inline
import pandas as pd
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = intermediate_folder_key + "/" + csv_file_name
wait_for_s3_object(s3_bucket, key, tmp_dir)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=['Training Reward'])
x_axis = 'Episode #'
y_axis = 'Training Reward'
plt = df.plot(x=x_axis,y=y_axis, figsize=(12,5), legend=True, style='b-')
plt.set_ylabel(y_axis);
plt.set_xlabel(x_axis);
###Output
_____no_output_____
###Markdown
Clean Up Execute the cells below if you want to kill the RoboMaker and SageMaker jobs.
###Code
for job_arn in job_arns:
robomaker.cancel_simulation_job(job=job_arn)
sage_session.sagemaker_client.stop_training_job(TrainingJobName=estimator._current_job_name)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
envriron_vars = {"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"NUMBER_OF_TRIALS": str(20),
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"WORLD_NAME": "hard_track",
}
simulation_application = {"application":simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "evaluation.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig
)
print("Created the following job:")
print("Job ARN", response["arn"])
###Output
_____no_output_____
###Markdown
Clean Up Simulation Application Resource
###Code
robomaker.delete_simulation_application(application=simulation_app_arn)
###Output
_____no_output_____
###Markdown
Distributed DeepRacer RL training with SageMaker and RoboMaker--- IntroductionIn this notebook, we will train a fully autonomous 1/18th scale race car with reinforcement learning, using Amazon SageMaker RL and AWS RoboMaker's 3D driving simulator. [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) is a service that makes it easy for developers to develop, test, and deploy robotics applications. This notebook provides a jailbreak experience of [AWS DeepRacer](https://console.aws.amazon.com/deepracer/homewelcome), giving us more control over the training/simulation process and RL algorithm tuning.--- How does it work? The reinforcement learning agent (i.e. our autonomous car) learns to drive by interacting with its environment, e.g., the track, by taking an action in a given state to maximize the expected reward. The agent learns the optimal plan of actions in training by trial and error through repeated episodes. The figure above shows an example of distributed RL training across SageMaker and two RoboMaker simulation environments that perform the **rollouts** - executing a fixed number of episodes using the current model or policy. The rollouts collect agent experiences (state-transition tuples) and share this data with SageMaker for training. SageMaker updates the model policy, which is then used to execute the next sequence of rollouts. This training loop continues until the model converges, i.e. the car learns to drive and stops going off-track. More formally, we can define the problem in terms of the following: 1. **Objective**: Learn to drive autonomously by staying close to the center of the track.2. **Environment**: A 3D driving simulator hosted on AWS RoboMaker.3. **State**: The driving POV image captured by the car's head camera, as shown in the illustration above.4. **Action**: Six discrete steering wheel positions at different angles (configurable).5. **Reward**: Positive reward for staying close to the center line; high penalty for going off-track. This is configurable and can be made more complex (e.g., a steering penalty can be added). Prerequisites Imports To get started, we'll import the Python libraries we need and set up the environment with a few prerequisites for permissions and configurations. You can run this notebook from your local machine or from a SageMaker notebook instance. In both of these scenarios, you can run the following to launch a training job on `SageMaker` and a simulation job on `RoboMaker`.
###Code
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import Markdown
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
from markdown_helper import *
###Output
_____no_output_____
###Markdown
Setup S3 bucket Set up the linkage and authentication to the S3 bucket that we want to use for checkpoint and metadata.
###Code
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket) # SDK appends the job name and output folder
###Output
_____no_output_____
###Markdown
Define Variables We define variables such as the job prefix for the training jobs and s3_prefix for storing metadata required for synchronization between the training and simulation jobs
###Code
job_name_prefix = 'rl-deepracer'
# create unique job name
tm = gmtime()
job_name = s3_prefix = job_name_prefix + "-sagemaker-" + strftime("%y%m%d-%H%M%S", tm) #Ensure S3 prefix contains SageMaker
s3_prefix_robomaker = job_name_prefix + "-robomaker-" + strftime("%y%m%d-%H%M%S", tm) #Ensure that the S3 prefix contains the keyword 'robomaker'
# Duration of job in seconds (5 hours)
job_duration_in_seconds = 3600 * 5
aws_region = sage_session.boto_region_name
if aws_region not in ["us-west-2", "us-east-1", "eu-west-1"]:
raise Exception("This notebook uses RoboMaker which is available only in US East (N. Virginia), US West (Oregon) and EU (Ireland). Please switch to one of these regions.")
print("Model checkpoints and other metadata will be stored at: {}{}".format(s3_output_path, job_name))
###Output
_____no_output_____
###Markdown
Create an IAM roleEither get the execution role when running from a SageMaker notebook `role = sagemaker.get_execution_role()` or, when running from local machine, use utils method `role = get_execution_role('role_name')` to create an execution role.
###Code
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role('sagemaker')
print("Using IAM role arn: {}".format(role))
###Output
_____no_output_____
###Markdown
> Please note that this notebook cannot be run in `SageMaker local mode`, as the simulator is based on the AWS RoboMaker service. Permission setup for invoking AWS RoboMaker from this notebook To enable this notebook to execute AWS RoboMaker jobs, we need to add one trust relationship to the default execution role of this notebook.
###Code
display(Markdown(generate_help_for_robomaker_trust_relationship(role)))
###Output
_____no_output_____
###Markdown
Configure VPC Since SageMaker and RoboMaker have to communicate with each other over the network, both of these services need to run in VPC mode. This can be done by supplying subnets and security groups to the job launching scripts. We will use the default VPC configuration for this example.
###Code
ec2 = boto3.client('ec2')
default_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] if vpc["IsDefault"] == True][0]
default_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups'] \
if 'VpcId' in group and group["GroupName"] == "default" and group["VpcId"] == default_vpc]
default_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == default_vpc and subnet['DefaultForAz']==True]
print("Using default VPC:", default_vpc)
print("Using default security group:", default_security_groups)
print("Using default subnets:", default_subnets)
###Output
_____no_output_____
###Markdown
A SageMaker job running in VPC mode cannot access S3 resources, so we need to create a VPC S3 endpoint to allow S3 access from the SageMaker container. To learn more about the VPC mode, please visit [this link.](https://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html)
###Code
try:
route_tables = [route_table["RouteTableId"] for route_table in ec2.describe_route_tables()['RouteTables']\
if route_table['VpcId'] == default_vpc]
except Exception as e:
if "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
print("Trying to attach S3 endpoints to the following route tables:", route_tables)
assert len(route_tables) >= 1, "No route tables were found. Please follow the VPC S3 endpoint creation "\
"guide by clicking the above link."
try:
ec2.create_vpc_endpoint(DryRun=False,
VpcEndpointType="Gateway",
VpcId=default_vpc,
ServiceName="com.amazonaws.{}.s3".format(aws_region),
RouteTableIds=route_tables)
print("S3 endpoint created successfully!")
except Exception as e:
if "RouteAlreadyExists" in str(e):
print("S3 endpoint already exists.")
elif "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
raise e
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
###Output
_____no_output_____
###Markdown
Setup the environment The environment is defined in a Python file called `deepracer_env.py` and can be found at `src/robomaker/environments/`. This file implements the gym interface for our Gazebo-based RoboMaker simulator. It is a common environment file used by both SageMaker and RoboMaker. The environment variable `NODE_TYPE` defines which node the code is running on, so the expressions that have `rospy` dependencies are executed on RoboMaker only. We can experiment with different reward functions by modifying `reward_function` in this file. The action space and steering angles can be changed by modifying the step method in the `DeepRacerDiscreteEnv` class. Configure the preset for RL algorithmThe parameters that configure the RL training job are defined in `src/robomaker/presets/deepracer.py`. Using the preset file, you can define agent parameters to select the specific agent algorithm. We suggest using Clipped PPO for this example. You can edit this file to modify algorithm parameters like learning_rate, neural network structure, batch_size, discount factor, etc.
###Code
!pygmentize src/robomaker/presets/deepracer.py
###Output
_____no_output_____
###Markdown
Training EntrypointThe training code is written in the file `training_worker.py`, which is uploaded as part of the /src directory. At a high level, it does the following:- Uploads the SageMaker node's IP address.- Starts a Redis server which receives agent experiences sent by the rollout worker[s] (RoboMaker simulator).- Trains the model every time a certain number of episodes have been received.- Uploads the new model weights to S3. The rollout workers then update their model to execute the next set of episodes.
###Code
# Uncomment the line below to see the training code
#!pygmentize src/training_worker.py
###Output
_____no_output_____
###Markdown
Train the RL model using the Python SDK Script mode First, we upload the preset and environment file to a particular location on S3, as expected by RoboMaker.
###Code
s3_location = "s3://%s/%s" % (s3_bucket, s3_prefix)
# Make sure nothing exists at this S3 prefix
!aws s3 rm --recursive {s3_location}
# Make any changes to the environment and preset files below and upload these files
!aws s3 cp src/robomaker/environments/ {s3_location}/environments/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
!aws s3 cp src/robomaker/presets/ {s3_location}/presets/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
###Output
_____no_output_____
###Markdown
Next, we define the following algorithm metrics that we want to capture from CloudWatch Logs to monitor the training progress. These are algorithm-specific parameters and might change for a different algorithm. We use [Clipped PPO](https://coach.nervanasys.com/algorithms/policy_optimization/cppo/index.html) for this example.
###Code
metric_definitions = [
# Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1
{'Name': 'reward-training',
'Regex': '^Training>.*Total reward=(.*?),'},
# Policy training> Surrogate loss=-0.32664725184440613, KL divergence=7.255815035023261e-06, Entropy=2.83156156539917, training epoch=0, learning_rate=0.00025
{'Name': 'ppo-surrogate-loss',
'Regex': '^Policy training>.*Surrogate loss=(.*?),'},
{'Name': 'ppo-entropy',
'Regex': '^Policy training>.*Entropy=(.*?),'},
# Testing> Name=main_level/agent, Worker=0, Episode=19, Total reward=1359.12, Steps=20015, Training iteration=2
{'Name': 'reward-testing',
'Regex': '^Testing>.*Total reward=(.*?),'},
]
###Output
_____no_output_____
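###Markdown
As an optional sanity check, we can verify locally that these regular expressions extract the expected values from a sample log line before relying on them in CloudWatch. The sketch below simply applies each pattern to one sample training line; only the `reward-training` metric should match.
###Code
import re

# Try each metric regex against a sample training log line.
sample_line = "Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1"
for definition in metric_definitions:
    match = re.search(definition['Regex'], sample_line)
    if match:
        print(definition['Name'], "->", match.group(1))
###Output
_____no_output_____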
###Markdown
We use the RLEstimator for training RL jobs.1. Specify the source directory which has the environment file, preset and training code.2. Specify the entry point as the training code.3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.4. Define the training parameters such as the instance count, instance type, job name, s3_bucket and s3_prefix for storing model checkpoints and metadata. **Only 1 training instance is supported for now.**5. Set the RLCOACH_PRESET to "deepracer" for this example.6. Define the metric definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker notebooks.
###Code
RLCOACH_PRESET = "deepracer"
instance_type = "ml.c4.2xlarge"
estimator = RLEstimator(entry_point="training_worker.py",
source_dir='src',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='0.11',
framework=RLFramework.TENSORFLOW,
role=role,
train_instance_type=instance_type,
train_instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
train_max_run=job_duration_in_seconds, # Maximum runtime in seconds
hyperparameters={"s3_bucket": s3_bucket,
"s3_prefix": s3_prefix,
"aws_region": aws_region,
"RLCOACH_PRESET": RLCOACH_PRESET,
},
metric_definitions=metric_definitions,
subnets=default_subnets, # Required for VPC mode
security_group_ids=default_security_groups, # Required for VPC mode
)
estimator.fit(job_name=job_name, wait=False)
###Output
_____no_output_____
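###Markdown
Since the training job was launched with `wait=False`, we can optionally poll its status from the notebook. This is just a convenience sketch that reuses the SageMaker boto3 client already available on `sage_session`:
###Code
# Optional: check the current status of the training job launched above.
job_status = sage_session.sagemaker_client.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
print("Training job status:", job_status)
###Output
_____no_output_____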
###Markdown
Start the RoboMaker job
###Code
from botocore.exceptions import UnknownServiceError
robomaker = boto3.client("robomaker")
###Output
_____no_output_____
###Markdown
Create Simulation Application We first create a RoboMaker simulation application using the `DeepRacer public bundle`. Please refer to [RoboMaker Sample Application Github Repository](https://github.com/aws-robotics/aws-robomaker-sample-application-deepracer) if you want to learn more about this bundle or modify it.
###Code
bundle_s3_key = 'deepracer/simulation_ws.tar.gz'
bundle_source = {'s3Bucket': s3_bucket,
's3Key': bundle_s3_key,
'architecture': "X86_64"}
simulation_software_suite={'name': 'Gazebo',
'version': '7'}
robot_software_suite={'name': 'ROS',
'version': 'Kinetic'}
rendering_engine={'name': 'OGRE', 'version': '1.x'}
###Output
_____no_output_____
###Markdown
Download the public DeepRacer bundle provided by RoboMaker and upload it to our S3 bucket to create a RoboMaker Simulation Application
###Code
simulation_application_bundle_location = "https://s3.amazonaws.com/deepracer-managed-resources/deepracer-github-simapp.tar.gz"
!wget {simulation_application_bundle_location}
!aws s3 cp deepracer-github-simapp.tar.gz s3://{s3_bucket}/{bundle_s3_key}
!rm deepracer-github-simapp.tar.gz
app_name = "deepracer-sample-application" + strftime("%y%m%d-%H%M%S", gmtime())
try:
response = robomaker.create_simulation_application(name=app_name,
sources=[bundle_source],
simulationSoftwareSuite=simulation_software_suite,
robotSoftwareSuite=robot_software_suite,
renderingEngine=rendering_engine
)
simulation_app_arn = response["arn"]
print("Created a new simulation app with ARN:", simulation_app_arn)
except Exception as e:
if "AccessDeniedException" in str(e):
display(Markdown(generate_help_for_robomaker_all_permissions(role)))
raise e
else:
raise e
###Output
_____no_output_____
###Markdown
Launch the Simulation job on RoboMaker We create [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) Simulation Jobs that simulate the environment and share this data with SageMaker for training.
###Code
# Use more rollout workers for faster convergence
num_simulation_workers = 1
environ_vars = {
"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"WORLD_NAME": "hard_track", # Can be one of "easy_track", "medium_track", "hard_track"
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"NUMBER_OF_ROLLOUT_WORKERS": str(num_simulation_workers)}
simulation_application = {"application": simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "distributed_training.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig,
outputLocation={"s3Bucket":s3_bucket, "s3Prefix":s3_prefix_robomaker}
)
responses.append(response)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for job_arn in job_arns:
print("Job ARN", job_arn)
###Output
_____no_output_____
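###Markdown
We can also optionally query the status of a launched simulation job directly from the notebook; the sketch below calls the RoboMaker `describe_simulation_job` API on the first job ARN:
###Code
# Optional: inspect the current status of the first simulation job.
sim_job = robomaker.describe_simulation_job(job=job_arns[0])
print("Simulation job status:", sim_job["status"])
###Output
_____no_output_____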
###Markdown
Visualizing the simulations in RoboMaker You can visit the RoboMaker console to visualize the simulations or run the following cell to generate the hyperlinks.
###Code
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
###Output
_____no_output_____
###Markdown
Plot metrics for training job
###Code
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
intermediate_folder_key = "{}/output/intermediate".format(job_name)
%matplotlib inline
import pandas as pd
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = intermediate_folder_key + "/" + csv_file_name
wait_for_s3_object(s3_bucket, key, tmp_dir)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=['Training Reward'])
x_axis = 'Episode #'
y_axis = 'Training Reward'
plt = df.plot(x=x_axis,y=y_axis, figsize=(12,5), legend=True, style='b-')
plt.set_ylabel(y_axis);
plt.set_xlabel(x_axis);
###Output
_____no_output_____
###Markdown
Clean Up Execute the cells below if you want to stop the RoboMaker and SageMaker jobs.
###Code
for job_arn in job_arns:
robomaker.cancel_simulation_job(job=job_arn)
sage_session.sagemaker_client.stop_training_job(TrainingJobName=estimator._current_job_name)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
envriron_vars = {"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"NUMBER_OF_TRIALS": str(20),
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"WORLD_NAME": "hard_track",
}
simulation_application = {"application":simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "evaluation.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig,
outputLocation={"s3Bucket":s3_bucket, "s3Prefix":s3_prefix_robomaker}
)
print("Created the following job:")
print("Job ARN", response["arn"])
###Output
_____no_output_____
###Markdown
Clean Up Simulation Application Resource
###Code
robomaker.delete_simulation_application(application=simulation_app_arn)
###Output
_____no_output_____
###Markdown
Distributed DeepRacer RL training with SageMaker and RoboMaker--- IntroductionIn this notebook, we will train a fully autonomous 1/18th scale race car using reinforcement learning using Amazon SageMaker RL and AWS RoboMaker's 3D driving simulator. [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) is a service that makes it easy for developers to develop, test, and deploy robotics applications. This notebook provides a jailbreak experience of [AWS DeepRacer](https://console.aws.amazon.com/deepracer/homewelcome), giving us more control over the training/simulation process and RL algorithm tuning.--- How it works? The reinforcement learning agent (i.e. our autonomous car) learns to drive by interacting with its environment, e.g., the track, by taking an action in a given state to maximize the expected reward. The agent learns the optimal plan of actions in training by trial-and-error through repeated episodes. The figure above shows an example of distributed RL training across SageMaker and two RoboMaker simulation envrionments that perform the **rollouts** - execute a fixed number of episodes using the current model or policy. The rollouts collect agent experiences (state-transition tuples) and share this data with SageMaker for training. SageMaker updates the model policy which is then used to execute the next sequence of rollouts. This training loop continues until the model converges, i.e. the car learns to drive and stops going off-track. More formally, we can define the problem in terms of the following: 1. **Objective**: Learn to drive autonomously by staying close to the center of the track.2. **Environment**: A 3D driving simulator hosted on AWS RoboMaker.3. **State**: The driving POV image captured by the car's head camera, as shown in the illustration above.4. **Action**: Six discrete steering wheel positions at different angles (configurable)5. **Reward**: Positive reward for staying close to the center line; High penalty for going off-track. This is configurable and can be made more complex (for e.g. steering penalty can be added). Prequisites Imports To get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations. You can run this notebook from your local machine or from a SageMaker notebook instance. In both of these scenarios, you can run the following to launch a training job on `SageMaker` and a simulation job on `RoboMaker`.
###Code
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import Markdown
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
from markdown_helper import *
###Output
_____no_output_____
###Markdown
Setup S3 bucket Set up the linkage and authentication to the S3 bucket that we want to use for checkpoint and metadata.
###Code
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket) # SDK appends the job name and output folder
###Output
_____no_output_____
###Markdown
Define Variables We define variables such as the job prefix for the training jobs and s3_prefix for storing metadata required for synchronization between the training and simulation jobs
###Code
job_name_prefix = 'rl-deepracer'
# create unique job name
tm = gmtime()
job_name = s3_prefix = job_name_prefix + "-sagemaker-" + strftime("%y%m%d-%H%M%S", tm) #Ensure S3 prefix contains SageMaker
s3_prefix_robomaker = job_name_prefix + "-robomaker-" + strftime("%y%m%d-%H%M%S", tm) #Ensure that the S3 prefix contains the keyword 'robomaker'
# Duration of job in seconds (5 hours)
job_duration_in_seconds = 3600 * 5
aws_region = sage_session.boto_region_name
if aws_region not in ["us-west-2", "us-east-1", "eu-west-1"]:
raise Exception("This notebook uses RoboMaker which is available only in US East (N. Virginia), US West (Oregon) and EU (Ireland). Please switch to one of these regions.")
print("Model checkpoints and other metadata will be stored at: {}{}".format(s3_output_path, job_name))
###Output
_____no_output_____
###Markdown
Create an IAM roleEither get the execution role when running from a SageMaker notebook `role = sagemaker.get_execution_role()` or, when running from local machine, use utils method `role = get_execution_role('role_name')` to create an execution role.
###Code
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role('sagemaker')
print("Using IAM role arn: {}".format(role))
###Output
_____no_output_____
###Markdown
> Please note that this notebook cannot be run in `SageMaker local mode` as the simulator is based on AWS RoboMaker service. Permission setup for invoking AWS RoboMaker from this notebook In order to enable this notebook to be able to execute AWS RoboMaker jobs, we need to add one trust relationship to the default execution role of this notebook.
###Code
display(Markdown(generate_help_for_robomaker_trust_relationship(role)))
###Output
_____no_output_____
###Markdown
Configure VPC Since SageMaker and RoboMaker have to communicate with each other over the network, both of these services need to run in VPC mode. This can be done by supplying subnets and security groups to the job launching scripts. We will use the default VPC configuration for this example.
###Code
ec2 = boto3.client('ec2')
default_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] if vpc["IsDefault"] == True][0]
default_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups'] \
if group["GroupName"] == "default" and group["VpcId"] == default_vpc]
default_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == default_vpc and subnet['DefaultForAz']==True]
print("Using default VPC:", default_vpc)
print("Using default security group:", default_security_groups)
print("Using default subnets:", default_subnets)
###Output
_____no_output_____
###Markdown
A SageMaker job running in VPC mode cannot access S3 resourcs. So, we need to create a VPC S3 endpoint to allow S3 access from SageMaker container. To learn more about the VPC mode, please visit [this link.](https://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html)
###Code
try:
route_tables = [route_table["RouteTableId"] for route_table in ec2.describe_route_tables()['RouteTables']\
if route_table['VpcId'] == default_vpc]
except Exception as e:
if "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
print("Trying to attach S3 endpoints to the following route tables:", route_tables)
assert len(route_tables) >= 1, "No route tables were found. Please follow the VPC S3 endpoint creation "\
"guide by clicking the above link."
try:
ec2.create_vpc_endpoint(DryRun=False,
VpcEndpointType="Gateway",
VpcId=default_vpc,
ServiceName="com.amazonaws.{}.s3".format(aws_region),
RouteTableIds=route_tables)
print("S3 endpoint created successfully!")
except Exception as e:
if "RouteAlreadyExists" in str(e):
print("S3 endpoint already exists.")
elif "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
raise e
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
###Output
_____no_output_____
###Markdown
Setup the environment The environment is defined in a Python file called โdeepracer_env.pyโ and the file can be found at `src/robomaker/environments/`. This file implements the gym interface for our Gazebo based RoboMakersimulator. This is a common environment file used by both SageMaker and RoboMaker. The environment variable - `NODE_TYPE` defines which node the code is running on. So, the expressions that have `rospy` dependencies are executed on RoboMaker only. We can experiment with different reward functions by modifying `reward_function` in this file. Action space and steering angles can be changed by modifying the step method in `DeepRacerDiscreteEnv` class. Configure the preset for RL algorithmThe parameters that configure the RL training job are defined in `src/robomaker/presets/deepracer.py`. Using the preset file, you can define agent parameters to select the specific agent algorithm. We suggest using Clipped PPO for this example. You can edit this file to modify algorithm parameters like learning_rate, neural network structure, batch_size, discount factor etc.
###Code
!pygmentize src/robomaker/presets/deepracer.py
###Output
_____no_output_____
###Markdown
Training EntrypointThe training code is written in the file โtraining_worker.pyโ which is uploaded in the /src directory. At a high level, it does the following:- Uploads SageMaker node's IP address.- Starts a Redis server which receives agent experiences sent by rollout worker[s] (RoboMaker simulator).- Trains the model everytime after a certain number of episodes are received.- Uploads the new model weights on S3. The rollout workers then update their model to execute the next set of episodes.
###Code
# Uncomment the line below to see the training code
#!pygmentize src/training_worker.py
###Output
_____no_output_____
###Markdown
Train the RL model using the Python SDK Script modeยถ First, we upload the preset and envrionment file to a particular location on S3, as expected by RoboMaker.
###Code
s3_location = "s3://%s/%s" % (s3_bucket, s3_prefix)
# Make sure nothing exists at this S3 prefix
!aws s3 rm --recursive {s3_location}
# Make any changes to the envrironment and preset files below and upload these files
!aws s3 cp src/robomaker/environments/ {s3_location}/environments/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
!aws s3 cp src/robomaker/presets/ {s3_location}/presets/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
###Output
_____no_output_____
###Markdown
Next, we define the following algorithm metrics that we want to capture from cloudwatch logs to monitor the training progress. These are algorithm specific parameters and might change for different algorithm. We use [Clipped PPO](https://coach.nervanasys.com/algorithms/policy_optimization/cppo/index.html) for this example.
###Code
metric_definitions = [
# Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1
{'Name': 'reward-training',
'Regex': '^Training>.*Total reward=(.*?),'},
# Policy training> Surrogate loss=-0.32664725184440613, KL divergence=7.255815035023261e-06, Entropy=2.83156156539917, training epoch=0, learning_rate=0.00025
{'Name': 'ppo-surrogate-loss',
'Regex': '^Policy training>.*Surrogate loss=(.*?),'},
{'Name': 'ppo-entropy',
'Regex': '^Policy training>.*Entropy=(.*?),'},
# Testing> Name=main_level/agent, Worker=0, Episode=19, Total reward=1359.12, Steps=20015, Training iteration=2
{'Name': 'reward-testing',
'Regex': '^Testing>.*Total reward=(.*?),'},
]
###Output
_____no_output_____
###Markdown
We use the RLEstimator for training RL jobs.1. Specify the source directory which has the environment file, preset and training code.2. Specify the entry point as the training code3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.4. Define the training parameters such as the instance count, instance type, job name, s3_bucket and s3_prefix for storing model checkpoints and metadata. **Only 1 training instance is supported for now.**4. Set the RLCOACH_PRESET as "deepracer" for this example.5. Define the metrics definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
###Code
RLCOACH_PRESET = "deepracer"
instance_type = "ml.c4.2xlarge"
estimator = RLEstimator(entry_point="training_worker.py",
source_dir='src',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='0.11.0',
framework=RLFramework.TENSORFLOW,
role=role,
train_instance_type=instance_type,
train_instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
train_max_run=job_duration_in_seconds, # Maximum runtime in seconds
hyperparameters={"s3_bucket": s3_bucket,
"s3_prefix": s3_prefix,
"aws_region": aws_region,
"RLCOACH_PRESET": RLCOACH_PRESET,
},
metric_definitions = metric_definitions,
subnets=default_subnets, # Required for VPC mode
security_group_ids=default_security_groups, # Required for VPC mode
)
estimator.fit(job_name=job_name, wait=False)
###Output
_____no_output_____
###Markdown
Start the Robomaker job
###Code
from botocore.exceptions import UnknownServiceError
robomaker = boto3.client("robomaker")
###Output
_____no_output_____
###Markdown
Create Simulation Application We first create a RoboMaker simulation application using the `DeepRacer public bundle`. Please refer to [RoboMaker Sample Application Github Repository](https://github.com/aws-robotics/aws-robomaker-sample-application-deepracer) if you want to learn more about this bundle or modify it.
###Code
bundle_s3_key = 'deepracer/simulation_ws.tar.gz'
bundle_source = {'s3Bucket': s3_bucket,
's3Key': bundle_s3_key,
'architecture': "X86_64"}
simulation_software_suite={'name': 'Gazebo',
'version': '7'}
robot_software_suite={'name': 'ROS',
'version': 'Kinetic'}
rendering_engine={'name': 'OGRE', 'version': '1.x'}
###Output
_____no_output_____
###Markdown
Download the public DeepRacer bundle provided by RoboMaker and upload it in our S3 bucket to create a RoboMaker Simulation Application
###Code
simulation_application_bundle_location = "https://s3-us-west-2.amazonaws.com/robomaker-applications-us-west-2-11d8d0439f6a/deep-racer/deep-racer-1.0.74.0.1.0.82.0/simulation_ws.tar.gz"
!wget {simulation_application_bundle_location}
!aws s3 cp simulation_ws.tar.gz s3://{s3_bucket}/{bundle_s3_key}
!rm simulation_ws.tar.gz
app_name = "deepracer-sample-application" + strftime("%y%m%d-%H%M%S", gmtime())
try:
response = robomaker.create_simulation_application(name=app_name,
sources=[bundle_source],
simulationSoftwareSuite=simulation_software_suite,
robotSoftwareSuite=robot_software_suite,
renderingEngine=rendering_engine
)
simulation_app_arn = response["arn"]
print("Created a new simulation app with ARN:", simulation_app_arn)
except Exception as e:
if "AccessDeniedException" in str(e):
display(Markdown(generate_help_for_robomaker_all_permissions(role)))
raise e
else:
raise e
###Output
_____no_output_____
###Markdown
Launch the Simulation job on RoboMakerWe create [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) Simulation Jobs that simulates the environment and shares this data with SageMaker for training.
###Code
# Use more rollout workers for faster convergence
num_simulation_workers = 1
envriron_vars = {
"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"WORLD_NAME": "hard_track", # Can be one of "easy_track", "medium_track", "hard_track"
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"NUMBER_OF_ROLLOUT_WORKERS": str(num_simulation_workers)}
simulation_application = {"application": simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "distributed_training.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig,
outputLocation={"s3Bucket":s3_bucket, "s3Prefix":s3_prefix_robomaker}
)
responses.append(response)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for job_arn in job_arns:
print("Job ARN", job_arn)
###Output
_____no_output_____
###Markdown
Visualizing the simulations in RoboMaker You can visit the RoboMaker console to visualize the simulations or run the following cell to generate the hyperlinks.
###Code
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
###Output
_____no_output_____
###Markdown
Plot metrics for training job
###Code
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
intermediate_folder_key = "{}/output/intermediate".format(job_name)
%matplotlib inline
import pandas as pd
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = intermediate_folder_key + "/" + csv_file_name
wait_for_s3_object(s3_bucket, key, tmp_dir)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=['Training Reward'])
x_axis = 'Episode #'
y_axis = 'Training Reward'
plt = df.plot(x=x_axis,y=y_axis, figsize=(12,5), legend=True, style='b-')
plt.set_ylabel(y_axis);
plt.set_xlabel(x_axis);
###Output
_____no_output_____
###Markdown
Clean Up Execute the cells below if you want to kill RoboMaker and SageMaker job.
###Code
for job_arn in job_arns:
robomaker.cancel_simulation_job(job=job_arn)
sage_session.sagemaker_client.stop_training_job(TrainingJobName=estimator._current_job_name)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
envriron_vars = {"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"NUMBER_OF_TRIALS": str(20),
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"WORLD_NAME": "hard_track",
}
simulation_application = {"application":simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "evaluation.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig,
outputLocation={"s3Bucket":s3_bucket, "s3Prefix":s3_prefix_robomaker}
)
print("Created the following job:")
print("Job ARN", response["arn"])
###Output
_____no_output_____
###Markdown
Clean Up Simulation Application Resource
###Code
robomaker.delete_simulation_application(application=simulation_app_arn)
###Output
_____no_output_____
###Markdown
Distributed DeepRacer RL training with SageMaker and RoboMaker--- IntroductionIn this notebook, we will train a fully autonomous 1/18th scale race car using reinforcement learning using Amazon SageMaker RL and AWS RoboMaker's 3D driving simulator. [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) is a service that makes it easy for developers to develop, test, and deploy robotics applications. This notebook provides a jailbreak experience of [AWS DeepRacer](https://console.aws.amazon.com/deepracer/homewelcome), giving us more control over the training/simulation process and RL algorithm tuning.--- How it works? The reinforcement learning agent (i.e. our autonomous car) learns to drive by interacting with its environment, e.g., the track, by taking an action in a given state to maximize the expected reward. The agent learns the optimal plan of actions in training by trial-and-error through repeated episodes. The figure above shows an example of distributed RL training across SageMaker and two RoboMaker simulation envrionments that perform the **rollouts** - execute a fixed number of episodes using the current model or policy. The rollouts collect agent experiences (state-transition tuples) and share this data with SageMaker for training. SageMaker updates the model policy which is then used to execute the next sequence of rollouts. This training loop continues until the model converges, i.e. the car learns to drive and stops going off-track. More formally, we can define the problem in terms of the following: 1. **Objective**: Learn to drive autonomously by staying close to the center of the track.2. **Environment**: A 3D driving simulator hosted on AWS RoboMaker.3. **State**: The driving POV image captured by the car's head camera, as shown in the illustration above.4. **Action**: Six discrete steering wheel positions at different angles (configurable)5. **Reward**: Positive reward for staying close to the center line; High penalty for going off-track. This is configurable and can be made more complex (for e.g. steering penalty can be added). Prequisites Imports To get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations. You can run this notebook from your local machine or from a SageMaker notebook instance. In both of these scenarios, you can run the following to launch a training job on `SageMaker` and a simulation job on `RoboMaker`.
###Code
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import Markdown
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
from markdown_helper import *
###Output
_____no_output_____
###Markdown
Setup S3 bucket Set up the linkage and authentication to the S3 bucket that we want to use for checkpoint and metadata.
###Code
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket) # SDK appends the job name and output folder
###Output
_____no_output_____
###Markdown
Define Variables We define variables such as the job prefix for the training jobs and s3_prefix for storing metadata required for synchronization between the training and simulation jobs
###Code
job_name_prefix = 'rl-deepracer'
# create unique job name
tm = gmtime()
job_name = s3_prefix = job_name_prefix + "-sagemaker-" + strftime("%y%m%d-%H%M%S", tm) #Ensure S3 prefix contains SageMaker
s3_prefix_robomaker = job_name_prefix + "-robomaker-" + strftime("%y%m%d-%H%M%S", tm) #Ensure that the S3 prefix contains the keyword 'robomaker'
# Duration of job in seconds (5 hours)
job_duration_in_seconds = 3600 * 5
aws_region = sage_session.boto_region_name
if aws_region not in ["us-west-2", "us-east-1", "eu-west-1"]:
raise Exception("This notebook uses RoboMaker which is available only in US East (N. Virginia), US West (Oregon) and EU (Ireland). Please switch to one of these regions.")
print("Model checkpoints and other metadata will be stored at: {}{}".format(s3_output_path, job_name))
###Output
_____no_output_____
###Markdown
Create an IAM roleEither get the execution role when running from a SageMaker notebook `role = sagemaker.get_execution_role()` or, when running from local machine, use utils method `role = get_execution_role('role_name')` to create an execution role.
###Code
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role('sagemaker')
print("Using IAM role arn: {}".format(role))
###Output
_____no_output_____
###Markdown
> Please note that this notebook cannot be run in `SageMaker local mode` as the simulator is based on AWS RoboMaker service. Permission setup for invoking AWS RoboMaker from this notebook In order to enable this notebook to be able to execute AWS RoboMaker jobs, we need to add one trust relationship to the default execution role of this notebook.
###Code
display(Markdown(generate_help_for_robomaker_trust_relationship(role)))
###Output
_____no_output_____
###Markdown
Configure VPC Since SageMaker and RoboMaker have to communicate with each other over the network, both of these services need to run in VPC mode. This can be done by supplying subnets and security groups to the job launching scripts. We will use the default VPC configuration for this example.
###Code
ec2 = boto3.client('ec2')
default_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] if vpc["IsDefault"] == True][0]
default_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups'] \
if group["GroupName"] == "default" and group["VpcId"] == default_vpc]
default_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == default_vpc and subnet['DefaultForAz']==True]
print("Using default VPC:", default_vpc)
print("Using default security group:", default_security_groups)
print("Using default subnets:", default_subnets)
###Output
_____no_output_____
###Markdown
A SageMaker job running in VPC mode cannot access S3 resourcs. So, we need to create a VPC S3 endpoint to allow S3 access from SageMaker container. To learn more about the VPC mode, please visit [this link.](https://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html)
###Code
try:
route_tables = [route_table["RouteTableId"] for route_table in ec2.describe_route_tables()['RouteTables']\
if route_table['VpcId'] == default_vpc]
except Exception as e:
if "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
print("Trying to attach S3 endpoints to the following route tables:", route_tables)
assert len(route_tables) >= 1, "No route tables were found. Please follow the VPC S3 endpoint creation "\
"guide by clicking the above link."
try:
ec2.create_vpc_endpoint(DryRun=False,
VpcEndpointType="Gateway",
VpcId=default_vpc,
ServiceName="com.amazonaws.{}.s3".format(aws_region),
RouteTableIds=route_tables)
print("S3 endpoint created successfully!")
except Exception as e:
if "RouteAlreadyExists" in str(e):
print("S3 endpoint already exists.")
elif "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
raise e
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
###Output
_____no_output_____
###Markdown
Setup the environment The environment is defined in a Python file called โdeepracer_env.pyโ and the file can be found at `src/robomaker/environments/`. This file implements the gym interface for our Gazebo based RoboMakersimulator. This is a common environment file used by both SageMaker and RoboMaker. The environment variable - `NODE_TYPE` defines which node the code is running on. So, the expressions that have `rospy` dependencies are executed on RoboMaker only. We can experiment with different reward functions by modifying `reward_function` in this file. Action space and steering angles can be changed by modifying the step method in `DeepRacerDiscreteEnv` class. Configure the preset for RL algorithmThe parameters that configure the RL training job are defined in `src/robomaker/presets/deepracer.py`. Using the preset file, you can define agent parameters to select the specific agent algorithm. We suggest using Clipped PPO for this example. You can edit this file to modify algorithm parameters like learning_rate, neural network structure, batch_size, discount factor etc.
###Code
!pygmentize src/robomaker/presets/deepracer.py
###Output
_____no_output_____
###Markdown
Training EntrypointThe training code is written in the file โtraining_worker.pyโ which is uploaded in the /src directory. At a high level, it does the following:- Uploads SageMaker node's IP address.- Starts a Redis server which receives agent experiences sent by rollout worker[s] (RoboMaker simulator).- Trains the model everytime after a certain number of episodes are received.- Uploads the new model weights on S3. The rollout workers then update their model to execute the next set of episodes.
###Code
# Uncomment the line below to see the training code
#!pygmentize src/training_worker.py
###Output
_____no_output_____
###Markdown
Train the RL model using the Python SDK Script modeยถ First, we upload the preset and envrionment file to a particular location on S3, as expected by RoboMaker.
###Code
s3_location = "s3://%s/%s" % (s3_bucket, s3_prefix)
# Make sure nothing exists at this S3 prefix
!aws s3 rm --recursive {s3_location}
# Make any changes to the envrironment and preset files below and upload these files
!aws s3 cp src/robomaker/environments/ {s3_location}/environments/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
!aws s3 cp src/robomaker/presets/ {s3_location}/presets/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
###Output
_____no_output_____
###Markdown
Next, we define the following algorithm metrics that we want to capture from cloudwatch logs to monitor the training progress. These are algorithm specific parameters and might change for different algorithm. We use [Clipped PPO](https://coach.nervanasys.com/algorithms/policy_optimization/cppo/index.html) for this example.
###Code
metric_definitions = [
# Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1
{'Name': 'reward-training',
'Regex': '^Training>.*Total reward=(.*?),'},
# Policy training> Surrogate loss=-0.32664725184440613, KL divergence=7.255815035023261e-06, Entropy=2.83156156539917, training epoch=0, learning_rate=0.00025
{'Name': 'ppo-surrogate-loss',
'Regex': '^Policy training>.*Surrogate loss=(.*?),'},
{'Name': 'ppo-entropy',
'Regex': '^Policy training>.*Entropy=(.*?),'},
# Testing> Name=main_level/agent, Worker=0, Episode=19, Total reward=1359.12, Steps=20015, Training iteration=2
{'Name': 'reward-testing',
'Regex': '^Testing>.*Total reward=(.*?),'},
]
###Output
_____no_output_____
###Markdown
We use the RLEstimator for training RL jobs.1. Specify the source directory which has the environment file, preset and training code.2. Specify the entry point as the training code3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.4. Define the training parameters such as the instance count, instance type, job name, s3_bucket and s3_prefix for storing model checkpoints and metadata. **Only 1 training instance is supported for now.**4. Set the RLCOACH_PRESET as "deepracer" for this example.5. Define the metrics definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
###Code
RLCOACH_PRESET = "deepracer"
instance_type = "ml.c4.2xlarge"
estimator = RLEstimator(entry_point="training_worker.py",
source_dir='src',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='0.10.1',
framework=RLFramework.TENSORFLOW,
role=role,
train_instance_type=instance_type,
train_instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
train_max_run=job_duration_in_seconds, # Maximum runtime in seconds
hyperparameters={"s3_bucket": s3_bucket,
"s3_prefix": s3_prefix,
"aws_region": aws_region,
"RLCOACH_PRESET": RLCOACH_PRESET,
},
metric_definitions = metric_definitions,
subnets=default_subnets, # Required for VPC mode
security_group_ids=default_security_groups, # Required for VPC mode
)
estimator.fit(job_name=job_name, wait=False)
###Output
_____no_output_____
###Markdown
Start the Robomaker job
###Code
from botocore.exceptions import UnknownServiceError
robomaker = boto3.client("robomaker")
###Output
_____no_output_____
###Markdown
Create Simulation Application We first create a RoboMaker simulation application using the `DeepRacer public bundle`. Please refer to [RoboMaker Sample Application Github Repository](https://github.com/aws-robotics/aws-robomaker-sample-application-deepracer) if you want to learn more about this bundle or modify it.
###Code
bundle_s3_key = 'deepracer/simulation_ws.tar.gz'
bundle_source = {'s3Bucket': s3_bucket,
's3Key': bundle_s3_key,
'architecture': "X86_64"}
simulation_software_suite={'name': 'Gazebo',
'version': '7'}
robot_software_suite={'name': 'ROS',
'version': 'Kinetic'}
rendering_engine={'name': 'OGRE', 'version': '1.x'}
###Output
_____no_output_____
###Markdown
Download the public DeepRacer bundle provided by RoboMaker and upload it in our S3 bucket to create a RoboMaker Simulation Application
###Code
simulation_application_bundle_location = "https://s3-us-west-2.amazonaws.com/robomaker-applications-us-west-2-11d8d0439f6a/deep-racer/deep-racer-1.0.80.0.1.0.106.0/simulation_ws.tar.gz"
!wget {simulation_application_bundle_location}
!aws s3 cp simulation_ws.tar.gz s3://{s3_bucket}/{bundle_s3_key}
!rm simulation_ws.tar.gz
app_name = "deepracer-sample-application" + strftime("%y%m%d-%H%M%S", gmtime())
try:
response = robomaker.create_simulation_application(name=app_name,
sources=[bundle_source],
simulationSoftwareSuite=simulation_software_suite,
robotSoftwareSuite=robot_software_suite,
renderingEngine=rendering_engine
)
simulation_app_arn = response["arn"]
print("Created a new simulation app with ARN:", simulation_app_arn)
except Exception as e:
if "AccessDeniedException" in str(e):
display(Markdown(generate_help_for_robomaker_all_permissions(role)))
raise e
else:
raise e
###Output
_____no_output_____
###Markdown
Launch the Simulation job on RoboMakerWe create [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) Simulation Jobs that simulates the environment and shares this data with SageMaker for training.
###Code
# Use more rollout workers for faster convergence
num_simulation_workers = 1
envriron_vars = {
"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"WORLD_NAME": "hard_track", # Can be one of "easy_track", "medium_track", "hard_track"
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"NUMBER_OF_ROLLOUT_WORKERS": str(num_simulation_workers)}
simulation_application = {"application": simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "distributed_training.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig,
outputLocation={"s3Bucket":s3_bucket, "s3Prefix":s3_prefix_robomaker}
)
responses.append(response)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for job_arn in job_arns:
print("Job ARN", job_arn)
###Output
_____no_output_____
###Markdown
Visualizing the simulations in RoboMaker You can visit the RoboMaker console to visualize the simulations or run the following cell to generate the hyperlinks.
###Code
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
###Output
_____no_output_____
###Markdown
Plot metrics for training job
###Code
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
intermediate_folder_key = "{}/output/intermediate".format(job_name)
%matplotlib inline
import pandas as pd
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = intermediate_folder_key + "/" + csv_file_name
wait_for_s3_object(s3_bucket, key, tmp_dir)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=['Training Reward'])
x_axis = 'Episode #'
y_axis = 'Training Reward'
plt = df.plot(x=x_axis,y=y_axis, figsize=(12,5), legend=True, style='b-')
plt.set_ylabel(y_axis);
plt.set_xlabel(x_axis);
###Output
_____no_output_____
###Markdown
Clean Up Execute the cells below if you want to kill RoboMaker and SageMaker job.
###Code
for job_arn in job_arns:
robomaker.cancel_simulation_job(job=job_arn)
sage_session.sagemaker_client.stop_training_job(TrainingJobName=estimator._current_job_name)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
envriron_vars = {"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"NUMBER_OF_TRIALS": str(20),
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"WORLD_NAME": "hard_track",
}
simulation_application = {"application":simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "evaluation.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig,
outputLocation={"s3Bucket":s3_bucket, "s3Prefix":s3_prefix_robomaker}
)
print("Created the following job:")
print("Job ARN", response["arn"])
###Output
_____no_output_____
###Markdown
Clean Up Simulation Application Resource
###Code
robomaker.delete_simulation_application(application=simulation_app_arn)
###Output
_____no_output_____
###Markdown
Distributed DeepRacer RL training with SageMaker and RoboMaker--- IntroductionIn this notebook, we will train a fully autonomous 1/18th scale race car using reinforcement learning using Amazon SageMaker RL and AWS RoboMaker's 3D driving simulator. [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) is a service that makes it easy for developers to develop, test, and deploy robotics applications. This notebook provides a jailbreak experience of [AWS DeepRacer](https://console.aws.amazon.com/deepracer/homewelcome), giving us more control over the training/simulation process and RL algorithm tuning.--- How it works? The reinforcement learning agent (i.e. our autonomous car) learns to drive by interacting with its environment, e.g., the track, by taking an action in a given state to maximize the expected reward. The agent learns the optimal plan of actions in training by trial-and-error through repeated episodes. The figure above shows an example of distributed RL training across SageMaker and two RoboMaker simulation envrionments that perform the **rollouts** - execute a fixed number of episodes using the current model or policy. The rollouts collect agent experiences (state-transition tuples) and share this data with SageMaker for training. SageMaker updates the model policy which is then used to execute the next sequence of rollouts. This training loop continues until the model converges, i.e. the car learns to drive and stops going off-track. More formally, we can define the problem in terms of the following: 1. **Objective**: Learn to drive autonomously by staying close to the center of the track.2. **Environment**: A 3D driving simulator hosted on AWS RoboMaker.3. **State**: The driving POV image captured by the car's head camera, as shown in the illustration above.4. **Action**: Six discrete steering wheel positions at different angles (configurable)5. **Reward**: Positive reward for staying close to the center line; High penalty for going off-track. This is configurable and can be made more complex (for e.g. steering penalty can be added). Prequisites Imports To get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations. You can run this notebook from your local machine or from a SageMaker notebook instance. In both of these scenarios, you can run the following to launch a training job on `SageMaker` and a simulation job on `RoboMaker`.
###Code
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import Markdown
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
from markdown_helper import *
###Output
_____no_output_____
###Markdown
Setup S3 bucket Set up the linkage and authentication to the S3 bucket that we want to use for checkpoint and metadata.
###Code
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket) # SDK appends the job name and output folder
###Output
_____no_output_____
###Markdown
Define Variables We define variables such as the job prefix for the training jobs and s3_prefix for storing metadata required for synchronization between the training and simulation jobs
###Code
job_name_prefix = 'rl-deepracer'
# create unique job name
tm = gmtime()
job_name = s3_prefix = job_name_prefix + "-sagemaker-" + strftime("%y%m%d-%H%M%S", tm) #Ensure S3 prefix contains SageMaker
s3_prefix_robomaker = job_name_prefix + "-robomaker-" + strftime("%y%m%d-%H%M%S", tm) #Ensure that the S3 prefix contains the keyword 'robomaker'
# Duration of job in seconds (5 hours)
job_duration_in_seconds = 3600 * 5
aws_region = sage_session.boto_region_name
if aws_region not in ["us-west-2", "us-east-1", "eu-west-1"]:
raise Exception("This notebook uses RoboMaker which is available only in US East (N. Virginia), US West (Oregon) and EU (Ireland). Please switch to one of these regions.")
print("Model checkpoints and other metadata will be stored at: {}{}".format(s3_output_path, job_name))
###Output
_____no_output_____
###Markdown
Create an IAM roleEither get the execution role when running from a SageMaker notebook `role = sagemaker.get_execution_role()` or, when running from local machine, use utils method `role = get_execution_role('role_name')` to create an execution role.
###Code
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role('sagemaker')
print("Using IAM role arn: {}".format(role))
###Output
_____no_output_____
###Markdown
> Please note that this notebook cannot be run in `SageMaker local mode` as the simulator is based on AWS RoboMaker service. Permission setup for invoking AWS RoboMaker from this notebook In order to enable this notebook to be able to execute AWS RoboMaker jobs, we need to add one trust relationship to the default execution role of this notebook.
###Code
display(Markdown(generate_help_for_robomaker_trust_relationship(role)))
###Output
_____no_output_____
###Markdown
Configure VPC Since SageMaker and RoboMaker have to communicate with each other over the network, both of these services need to run in VPC mode. This can be done by supplying subnets and security groups to the job launching scripts. We will use the default VPC configuration for this example.
###Code
ec2 = boto3.client('ec2')
default_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] if vpc["IsDefault"] == True][0]
default_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups'] \
if group["GroupName"] == "default" and group["VpcId"] == default_vpc]
default_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == default_vpc and subnet['DefaultForAz']==True]
print("Using default VPC:", default_vpc)
print("Using default security group:", default_security_groups)
print("Using default subnets:", default_subnets)
###Output
_____no_output_____
###Markdown
A SageMaker job running in VPC mode cannot access S3 resourcs. So, we need to create a VPC S3 endpoint to allow S3 access from SageMaker container. To learn more about the VPC mode, please visit [this link.](https://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html)
###Code
try:
route_tables = [route_table["RouteTableId"] for route_table in ec2.describe_route_tables()['RouteTables']\
if route_table['VpcId'] == default_vpc]
except Exception as e:
if "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
print("Trying to attach S3 endpoints to the following route tables:", route_tables)
assert len(route_tables) >= 1, "No route tables were found. Please follow the VPC S3 endpoint creation "\
"guide by clicking the above link."
try:
ec2.create_vpc_endpoint(DryRun=False,
VpcEndpointType="Gateway",
VpcId=default_vpc,
ServiceName="com.amazonaws.{}.s3".format(aws_region),
RouteTableIds=route_tables)
print("S3 endpoint created successfully!")
except Exception as e:
if "RouteAlreadyExists" in str(e):
print("S3 endpoint already exists.")
elif "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
raise e
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
###Output
_____no_output_____
###Markdown
Setup the environment The environment is defined in a Python file called โdeepracer_env.pyโ and the file can be found at `src/robomaker/environments/`. This file implements the gym interface for our Gazebo based RoboMakersimulator. This is a common environment file used by both SageMaker and RoboMaker. The environment variable - `NODE_TYPE` defines which node the code is running on. So, the expressions that have `rospy` dependencies are executed on RoboMaker only. We can experiment with different reward functions by modifying `reward_function` in this file. Action space and steering angles can be changed by modifying the step method in `DeepRacerDiscreteEnv` class. Configure the preset for RL algorithmThe parameters that configure the RL training job are defined in `src/robomaker/presets/deepracer.py`. Using the preset file, you can define agent parameters to select the specific agent algorithm. We suggest using Clipped PPO for this example. You can edit this file to modify algorithm parameters like learning_rate, neural network structure, batch_size, discount factor etc.
###Code
!pygmentize src/robomaker/presets/deepracer.py
###Output
_____no_output_____
###Markdown
Training EntrypointThe training code is written in the file โtraining_worker.pyโ which is uploaded in the /src directory. At a high level, it does the following:- Uploads SageMaker node's IP address.- Starts a Redis server which receives agent experiences sent by rollout worker[s] (RoboMaker simulator).- Trains the model everytime after a certain number of episodes are received.- Uploads the new model weights on S3. The rollout workers then update their model to execute the next set of episodes.
###Code
# Uncomment the line below to see the training code
#!pygmentize src/training_worker.py
###Output
_____no_output_____
###Markdown
Train the RL model using the Python SDK Script modeยถ First, we upload the preset and envrionment file to a particular location on S3, as expected by RoboMaker.
###Code
s3_location = "s3://%s/%s" % (s3_bucket, s3_prefix)
# Make sure nothing exists at this S3 prefix
!aws s3 rm --recursive {s3_location}
# Make any changes to the envrironment and preset files below and upload these files
!aws s3 cp src/robomaker/environments/ {s3_location}/environments/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
!aws s3 cp src/robomaker/presets/ {s3_location}/presets/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
###Output
_____no_output_____
###Markdown
Next, we define the following algorithm metrics that we want to capture from cloudwatch logs to monitor the training progress. These are algorithm specific parameters and might change for different algorithm. We use [Clipped PPO](https://coach.nervanasys.com/algorithms/policy_optimization/cppo/index.html) for this example.
###Code
metric_definitions = [
# Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1
{'Name': 'reward-training',
'Regex': '^Training>.*Total reward=(.*?),'},
# Policy training> Surrogate loss=-0.32664725184440613, KL divergence=7.255815035023261e-06, Entropy=2.83156156539917, training epoch=0, learning_rate=0.00025
{'Name': 'ppo-surrogate-loss',
'Regex': '^Policy training>.*Surrogate loss=(.*?),'},
{'Name': 'ppo-entropy',
'Regex': '^Policy training>.*Entropy=(.*?),'},
# Testing> Name=main_level/agent, Worker=0, Episode=19, Total reward=1359.12, Steps=20015, Training iteration=2
{'Name': 'reward-testing',
'Regex': '^Testing>.*Total reward=(.*?),'},
]
###Output
_____no_output_____
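###Markdown
To see how these metric definitions behave, you can test one of the regular expressions against a sample log line locally. The sketch below uses the example `Training>` line from the comments above; the captured group is the value that would be published as the `reward-training` metric.
###Code
import re

# Sample log line taken from the comment in the metric_definitions cell above.
sample_line = ("Training> Name=main_level/agent, Worker=0, Episode=19, "
               "Total reward=-102.88, Steps=19019, Training iteration=1")
match = re.search(r'^Training>.*Total reward=(.*?),', sample_line)
print("Captured reward:", match.group(1))  # prints: -102.88
###Output
_____no_output_____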
###Markdown
We use the RLEstimator for training RL jobs.1. Specify the source directory which has the environment file, preset and training code.2. Specify the entry point as the training code.3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.4. Define the training parameters such as the instance count, instance type, job name, s3_bucket and s3_prefix for storing model checkpoints and metadata. **Only 1 training instance is supported for now.**5. Set the RLCOACH_PRESET as "deepracer" for this example.6. Define the metrics definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
###Code
RLCOACH_PRESET = "deepracer"
instance_type = "ml.c4.2xlarge"
estimator = RLEstimator(entry_point="training_worker.py",
source_dir='src',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='0.11',
framework=RLFramework.TENSORFLOW,
role=role,
train_instance_type=instance_type,
train_instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
train_max_run=job_duration_in_seconds, # Maximum runtime in seconds
hyperparameters={"s3_bucket": s3_bucket,
"s3_prefix": s3_prefix,
"aws_region": aws_region,
"RLCOACH_PRESET": RLCOACH_PRESET,
},
metric_definitions = metric_definitions,
subnets=default_subnets, # Required for VPC mode
security_group_ids=default_security_groups, # Required for VPC mode
)
estimator.fit(job_name=job_name, wait=False)
###Output
_____no_output_____
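###Markdown
Since we launch the training job with `wait=False`, the cell above returns immediately. If you want to check on the job later, you can poll its status with the SageMaker client already attached to `sage_session` (an optional sketch).
###Code
# Optional: check the current status of the training job we just launched.
description = sage_session.sagemaker_client.describe_training_job(TrainingJobName=job_name)
print("Training job status:", description["TrainingJobStatus"])
###Output
_____no_output_____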
###Markdown
Start the RoboMaker job
###Code
from botocore.exceptions import UnknownServiceError
robomaker = boto3.client("robomaker")
###Output
_____no_output_____
###Markdown
Create Simulation Application We first create a RoboMaker simulation application using the `DeepRacer public bundle`. Please refer to [RoboMaker Sample Application Github Repository](https://github.com/aws-robotics/aws-robomaker-sample-application-deepracer) if you want to learn more about this bundle or modify it.
###Code
bundle_s3_key = 'deepracer/simulation_ws.tar.gz'
bundle_source = {'s3Bucket': s3_bucket,
's3Key': bundle_s3_key,
'architecture': "X86_64"}
simulation_software_suite={'name': 'Gazebo',
'version': '7'}
robot_software_suite={'name': 'ROS',
'version': 'Kinetic'}
rendering_engine={'name': 'OGRE', 'version': '1.x'}
###Output
_____no_output_____
###Markdown
Download the public DeepRacer bundle provided by RoboMaker and upload it to our S3 bucket to create a RoboMaker Simulation Application.
###Code
simulation_application_bundle_location = "https://s3.amazonaws.com/deepracer-managed-resources/deepracer-github-simapp.tar.gz"
!wget {simulation_application_bundle_location}
!aws s3 cp deepracer-github-simapp.tar.gz s3://{s3_bucket}/{bundle_s3_key}
!rm deepracer-github-simapp.tar.gz
app_name = "deepracer-sample-application" + strftime("%y%m%d-%H%M%S", gmtime())
try:
response = robomaker.create_simulation_application(name=app_name,
sources=[bundle_source],
simulationSoftwareSuite=simulation_software_suite,
robotSoftwareSuite=robot_software_suite,
renderingEngine=rendering_engine
)
simulation_app_arn = response["arn"]
print("Created a new simulation app with ARN:", simulation_app_arn)
except Exception as e:
if "AccessDeniedException" in str(e):
display(Markdown(generate_help_for_robomaker_all_permissions(role)))
raise e
else:
raise e
###Output
_____no_output_____
###Markdown
Launch the Simulation job on RoboMaker We create [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) Simulation Jobs that simulate the environment and share this data with SageMaker for training.
###Code
# Use more rollout workers for faster convergence
num_simulation_workers = 1
envriron_vars = {
"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"WORLD_NAME": "hard_track", # Can be one of "easy_track", "medium_track", "hard_track"
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"NUMBER_OF_ROLLOUT_WORKERS": str(num_simulation_workers)}
simulation_application = {"application": simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "distributed_training.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig,
outputLocation={"s3Bucket":s3_bucket, "s3Prefix":s3_prefix_robomaker}
)
responses.append(response)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for job_arn in job_arns:
print("Job ARN", job_arn)
###Output
_____no_output_____
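###Markdown
The simulation jobs also start asynchronously. If you want to confirm that they reached the `Running` state before moving on, you can poll them with the RoboMaker client (an optional sketch that uses the `job_arns` list created above).
###Code
# Optional: print the current status of each simulation job.
for job_arn in job_arns:
    job_description = robomaker.describe_simulation_job(job=job_arn)
    print(job_arn, "->", job_description["status"])
###Output
_____no_output_____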
###Markdown
Visualizing the simulations in RoboMaker You can visit the RoboMaker console to visualize the simulations or run the following cell to generate the hyperlinks.
###Code
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
###Output
_____no_output_____
###Markdown
Plot metrics for training job
###Code
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
intermediate_folder_key = "{}/output/intermediate".format(job_name)
%matplotlib inline
import pandas as pd
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = intermediate_folder_key + "/" + csv_file_name
wait_for_s3_object(s3_bucket, key, tmp_dir)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=['Training Reward'])
x_axis = 'Episode #'
y_axis = 'Training Reward'
plt = df.plot(x=x_axis,y=y_axis, figsize=(12,5), legend=True, style='b-')
plt.set_ylabel(y_axis);
plt.set_xlabel(x_axis);
###Output
_____no_output_____
###Markdown
Clean Up Execute the cells below if you want to stop the RoboMaker and SageMaker jobs.
###Code
for job_arn in job_arns:
robomaker.cancel_simulation_job(job=job_arn)
sage_session.sagemaker_client.stop_training_job(TrainingJobName=estimator._current_job_name)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
envriron_vars = {"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"NUMBER_OF_TRIALS": str(20),
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"WORLD_NAME": "hard_track",
}
simulation_application = {"application":simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "evaluation.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig,
outputLocation={"s3Bucket":s3_bucket, "s3Prefix":s3_prefix_robomaker}
)
print("Created the following job:")
print("Job ARN", response["arn"])
###Output
_____no_output_____
###Markdown
Clean Up Simulation Application Resource
###Code
robomaker.delete_simulation_application(application=simulation_app_arn)
###Output
_____no_output_____
###Markdown
Distributed DeepRacer RL training with SageMaker and RoboMaker--- Introduction In this notebook, we will train a fully autonomous 1/18th-scale race car using reinforcement learning with Amazon SageMaker RL and AWS RoboMaker's 3D driving simulator. [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) is a service that makes it easy for developers to develop, test, and deploy robotics applications. This notebook provides a jailbreak experience of [AWS DeepRacer](https://console.aws.amazon.com/deepracer/homewelcome), giving us more control over the training/simulation process and RL algorithm tuning.--- How does it work? The reinforcement learning agent (i.e., our autonomous car) learns to drive by interacting with its environment, e.g., the track, by taking an action in a given state to maximize the expected reward. The agent learns the optimal plan of actions in training by trial and error through repeated episodes. The figure above shows an example of distributed RL training across SageMaker and two RoboMaker simulation environments that perform the **rollouts**, i.e., execute a fixed number of episodes using the current model or policy. The rollouts collect agent experiences (state-transition tuples) and share this data with SageMaker for training. SageMaker updates the model policy, which is then used to execute the next sequence of rollouts. This training loop continues until the model converges, i.e., the car learns to drive and stops going off-track. More formally, we can define the problem in terms of the following: 1. **Objective**: Learn to drive autonomously by staying close to the center of the track. 2. **Environment**: A 3D driving simulator hosted on AWS RoboMaker. 3. **State**: The driving POV image captured by the car's head camera, as shown in the illustration above. 4. **Action**: Six discrete steering wheel positions at different angles (configurable). 5. **Reward**: Positive reward for staying close to the center line; high penalty for going off-track. This is configurable and can be made more complex (e.g., a steering penalty can be added). Prerequisites Imports To get started, we'll import the Python libraries we need and set up the environment with a few prerequisites for permissions and configurations. You can run this notebook from your local machine or from a SageMaker notebook instance. In either scenario, you can run the following to launch a training job on `SageMaker` and a simulation job on `RoboMaker`.
###Code
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import Markdown
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
from markdown_helper import *
###Output
_____no_output_____
###Markdown
Setup S3 bucket Set up the linkage and authentication to the S3 bucket that we want to use for checkpoints and metadata.
###Code
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket) # SDK appends the job name and output folder
###Output
_____no_output_____
###Markdown
Define Variables We define variables such as the job prefix for the training jobs and s3_prefix for storing metadata required for synchronization between the training and simulation jobs
###Code
job_name_prefix = 'rl-deepracer'
# create unique job name
tm = gmtime()
job_name = s3_prefix = job_name_prefix + "-sagemaker-" + strftime("%y%m%d-%H%M%S", tm) #Ensure S3 prefix contains SageMaker
s3_prefix_robomaker = job_name_prefix + "-robomaker-" + strftime("%y%m%d-%H%M%S", tm) #Ensure that the S3 prefix contains the keyword 'robomaker'
# Duration of job in seconds (5 hours)
job_duration_in_seconds = 3600 * 5
aws_region = sage_session.boto_region_name
if aws_region not in ["us-west-2", "us-east-1", "eu-west-1"]:
raise Exception("This notebook uses RoboMaker which is available only in US East (N. Virginia), US West (Oregon) and EU (Ireland). Please switch to one of these regions.")
print("Model checkpoints and other metadata will be stored at: {}{}".format(s3_output_path, job_name))
###Output
_____no_output_____
###Markdown
Create an IAM role Either get the execution role when running from a SageMaker notebook with `role = sagemaker.get_execution_role()` or, when running from a local machine, use the utils method `role = get_execution_role('role_name')` to create an execution role.
###Code
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role('sagemaker')
print("Using IAM role arn: {}".format(role))
###Output
_____no_output_____
###Markdown
> Please note that this notebook cannot be run in `SageMaker local mode` as the simulator is based on the AWS RoboMaker service. Permission setup for invoking AWS RoboMaker from this notebook To enable this notebook to execute AWS RoboMaker jobs, we need to add a trust relationship to the default execution role of this notebook.
###Code
display(Markdown(generate_help_for_robomaker_trust_relationship(role)))
###Output
_____no_output_____
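###Markdown
For reference, the trust relationship described by the helper above typically boils down to allowing the RoboMaker service principal to assume this role. The optional cell below only prints a sketch of such a trust policy statement so you can compare it with what you see in the IAM console; it does not modify the role.
###Code
import json

# Sketch of the statement the execution role's trust policy needs so that RoboMaker
# can assume it. Shown for reference only; follow the instructions printed above to
# actually update the role in IAM.
robomaker_trust_statement = {
    "Effect": "Allow",
    "Principal": {"Service": "robomaker.amazonaws.com"},
    "Action": "sts:AssumeRole",
}
print(json.dumps(robomaker_trust_statement, indent=2))
###Output
_____no_output_____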
###Markdown
Configure VPC Since SageMaker and RoboMaker have to communicate with each other over the network, both of these services need to run in VPC mode. This can be done by supplying subnets and security groups to the job launching scripts. We will use the default VPC configuration for this example.
###Code
ec2 = boto3.client('ec2')
default_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] if vpc["IsDefault"] == True][0]
default_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups'] \
if group["GroupName"] == "default" and group["VpcId"] == default_vpc]
default_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == default_vpc and subnet['DefaultForAz']==True]
print("Using default VPC:", default_vpc)
print("Using default security group:", default_security_groups)
print("Using default subnets:", default_subnets)
###Output
_____no_output_____
###Markdown
A SageMaker job running in VPC mode cannot access S3 resources, so we need to create a VPC S3 endpoint to allow S3 access from the SageMaker container. To learn more about VPC mode, please visit [this link.](https://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html)
###Code
try:
route_tables = [route_table["RouteTableId"] for route_table in ec2.describe_route_tables()['RouteTables']\
if route_table['VpcId'] == default_vpc]
except Exception as e:
if "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
print("Trying to attach S3 endpoints to the following route tables:", route_tables)
assert len(route_tables) >= 1, "No route tables were found. Please follow the VPC S3 endpoint creation "\
"guide by clicking the above link."
try:
ec2.create_vpc_endpoint(DryRun=False,
VpcEndpointType="Gateway",
VpcId=default_vpc,
ServiceName="com.amazonaws.{}.s3".format(aws_region),
RouteTableIds=route_tables)
print("S3 endpoint created successfully!")
except Exception as e:
if "RouteAlreadyExists" in str(e):
print("S3 endpoint already exists.")
elif "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
raise e
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
###Output
_____no_output_____
###Markdown
Setup the environment The environment is defined in a Python file called `deepracer_env.py`, which can be found at `src/robomaker/environments/`. This file implements the gym interface for our Gazebo-based RoboMaker simulator and is a common environment file used by both SageMaker and RoboMaker. The environment variable `NODE_TYPE` defines which node the code is running on, so the expressions that have `rospy` dependencies are executed on RoboMaker only. We can experiment with different reward functions by modifying `reward_function` in this file. The action space and steering angles can be changed by modifying the step method in the `DeepRacerDiscreteEnv` class. Configure the preset for RL algorithm The parameters that configure the RL training job are defined in `src/robomaker/presets/deepracer.py`. Using the preset file, you can define agent parameters to select the specific agent algorithm. We suggest using Clipped PPO for this example. You can edit this file to modify algorithm parameters such as learning_rate, neural network structure, batch_size, discount factor, etc.
###Code
!pygmentize src/robomaker/presets/deepracer.py
###Output
_____no_output_____
###Markdown
Training Entrypoint The training code is written in the file `training_worker.py`, which is located in the /src directory. At a high level, it does the following:- Uploads the SageMaker node's IP address.- Starts a Redis server which receives agent experiences sent by the rollout worker(s) (the RoboMaker simulator).- Trains the model every time a certain number of episodes have been received.- Uploads the new model weights to S3. The rollout workers then update their model to execute the next set of episodes.
###Code
# Uncomment the line below to see the training code
#!pygmentize src/training_worker.py
###Output
_____no_output_____
###Markdown
Train the RL model using the Python SDK Script mode First, we upload the preset and environment files to a particular location on S3, as expected by RoboMaker.
###Code
s3_location = "s3://%s/%s" % (s3_bucket, s3_prefix)
# Make sure nothing exists at this S3 prefix
!aws s3 rm --recursive {s3_location}
# Make any changes to the environment and preset files below and upload these files
!aws s3 cp src/robomaker/environments/ {s3_location}/environments/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
!aws s3 cp src/robomaker/presets/ {s3_location}/presets/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
###Output
_____no_output_____
###Markdown
Next, we define the algorithm metrics that we want to capture from CloudWatch logs to monitor training progress. These are algorithm-specific parameters and might change for a different algorithm. We use [Clipped PPO](https://coach.nervanasys.com/algorithms/policy_optimization/cppo/index.html) for this example.
###Code
metric_definitions = [
# Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1
{'Name': 'reward-training',
'Regex': '^Training>.*Total reward=(.*?),'},
# Policy training> Surrogate loss=-0.32664725184440613, KL divergence=7.255815035023261e-06, Entropy=2.83156156539917, training epoch=0, learning_rate=0.00025
{'Name': 'ppo-surrogate-loss',
'Regex': '^Policy training>.*Surrogate loss=(.*?),'},
{'Name': 'ppo-entropy',
'Regex': '^Policy training>.*Entropy=(.*?),'},
# Testing> Name=main_level/agent, Worker=0, Episode=19, Total reward=1359.12, Steps=20015, Training iteration=2
{'Name': 'reward-testing',
'Regex': '^Testing>.*Total reward=(.*?),'},
]
###Output
_____no_output_____
###Markdown
We use the RLEstimator for training RL jobs.1. Specify the source directory which has the environment file, preset and training code.2. Specify the entry point as the training code.3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.4. Define the training parameters such as the instance count, instance type, job name, s3_bucket and s3_prefix for storing model checkpoints and metadata. **Only 1 training instance is supported for now.**5. Set the RLCOACH_PRESET as "deepracer" for this example.6. Define the metrics definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
###Code
RLCOACH_PRESET = "deepracer"
instance_type = "ml.c4.2xlarge"
estimator = RLEstimator(entry_point="training_worker.py",
source_dir='src',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='0.11.1',
framework=RLFramework.TENSORFLOW,
role=role,
train_instance_type=instance_type,
train_instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
train_max_run=job_duration_in_seconds, # Maximum runtime in seconds
hyperparameters={"s3_bucket": s3_bucket,
"s3_prefix": s3_prefix,
"aws_region": aws_region,
"RLCOACH_PRESET": RLCOACH_PRESET,
},
metric_definitions = metric_definitions,
subnets=default_subnets, # Required for VPC mode
security_group_ids=default_security_groups, # Required for VPC mode
)
estimator.fit(job_name=job_name, wait=False)
###Output
_____no_output_____
###Markdown
Start the RoboMaker job
###Code
from botocore.exceptions import UnknownServiceError
robomaker = boto3.client("robomaker")
###Output
_____no_output_____
###Markdown
Create Simulation Application We first create a RoboMaker simulation application using the `DeepRacer public bundle`. Please refer to [RoboMaker Sample Application Github Repository](https://github.com/aws-robotics/aws-robomaker-sample-application-deepracer) if you want to learn more about this bundle or modify it.
###Code
bundle_s3_key = 'deepracer/simulation_ws.tar.gz'
bundle_source = {'s3Bucket': s3_bucket,
's3Key': bundle_s3_key,
'architecture': "X86_64"}
simulation_software_suite={'name': 'Gazebo',
'version': '7'}
robot_software_suite={'name': 'ROS',
'version': 'Kinetic'}
rendering_engine={'name': 'OGRE', 'version': '1.x'}
###Output
_____no_output_____
###Markdown
Download the public DeepRacer bundle provided by RoboMaker and upload it to our S3 bucket to create a RoboMaker Simulation Application.
###Code
simulation_application_bundle_location = "https://s3-us-west-2.amazonaws.com/robomaker-applications-us-west-2-11d8d0439f6a/deep-racer/deep-racer-1.0.80.0.1.0.106.0/simulation_ws.tar.gz"
!wget {simulation_application_bundle_location}
!aws s3 cp simulation_ws.tar.gz s3://{s3_bucket}/{bundle_s3_key}
!rm simulation_ws.tar.gz
app_name = "deepracer-sample-application" + strftime("%y%m%d-%H%M%S", gmtime())
try:
response = robomaker.create_simulation_application(name=app_name,
sources=[bundle_source],
simulationSoftwareSuite=simulation_software_suite,
robotSoftwareSuite=robot_software_suite,
renderingEngine=rendering_engine
)
simulation_app_arn = response["arn"]
print("Created a new simulation app with ARN:", simulation_app_arn)
except Exception as e:
if "AccessDeniedException" in str(e):
display(Markdown(generate_help_for_robomaker_all_permissions(role)))
raise e
else:
raise e
###Output
_____no_output_____
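###Markdown
Once the simulation application is created, you can fetch its details at any time from the ARN, for example to confirm the software suites it was registered with (an optional sketch).
###Code
# Optional: look up the simulation application we just registered.
app_description = robomaker.describe_simulation_application(application=simulation_app_arn)
print("Name:", app_description["name"])
print("Simulation software suite:", app_description["simulationSoftwareSuite"])
###Output
_____no_output_____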
###Markdown
Launch the Simulation job on RoboMaker We create [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) Simulation Jobs that simulate the environment and share this data with SageMaker for training.
###Code
# Use more rollout workers for faster convergence
num_simulation_workers = 1
envriron_vars = {
"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"WORLD_NAME": "hard_track", # Can be one of "easy_track", "medium_track", "hard_track"
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"NUMBER_OF_ROLLOUT_WORKERS": str(num_simulation_workers)}
simulation_application = {"application": simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "distributed_training.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig,
outputLocation={"s3Bucket":s3_bucket, "s3Prefix":s3_prefix_robomaker}
)
responses.append(response)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for job_arn in job_arns:
print("Job ARN", job_arn)
###Output
_____no_output_____
###Markdown
Visualizing the simulations in RoboMaker You can visit the RoboMaker console to visualize the simulations or run the following cell to generate the hyperlinks.
###Code
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
###Output
_____no_output_____
###Markdown
Plot metrics for training job
###Code
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
intermediate_folder_key = "{}/output/intermediate".format(job_name)
%matplotlib inline
import pandas as pd
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = intermediate_folder_key + "/" + csv_file_name
wait_for_s3_object(s3_bucket, key, tmp_dir)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=['Training Reward'])
x_axis = 'Episode #'
y_axis = 'Training Reward'
plt = df.plot(x=x_axis,y=y_axis, figsize=(12,5), legend=True, style='b-')
plt.set_ylabel(y_axis);
plt.set_xlabel(x_axis);
###Output
_____no_output_____
###Markdown
Clean Up Execute the cells below if you want to stop the RoboMaker and SageMaker jobs.
###Code
for job_arn in job_arns:
robomaker.cancel_simulation_job(job=job_arn)
sage_session.sagemaker_client.stop_training_job(TrainingJobName=estimator._current_job_name)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
envriron_vars = {"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"NUMBER_OF_TRIALS": str(20),
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"WORLD_NAME": "hard_track",
}
simulation_application = {"application":simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "evaluation.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig,
outputLocation={"s3Bucket":s3_bucket, "s3Prefix":s3_prefix_robomaker}
)
print("Created the following job:")
print("Job ARN", response["arn"])
###Output
_____no_output_____
###Markdown
Clean Up Simulation Application Resource
###Code
robomaker.delete_simulation_application(application=simulation_app_arn)
###Output
_____no_output_____
###Markdown
Distributed DeepRacer RL training with SageMaker and RoboMaker--- Introduction In this notebook, we will train a fully autonomous 1/18th-scale race car using reinforcement learning with Amazon SageMaker RL and AWS RoboMaker's 3D driving simulator. [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) is a service that makes it easy for developers to develop, test, and deploy robotics applications. This notebook provides a jailbreak experience of [AWS DeepRacer](https://console.aws.amazon.com/deepracer/homewelcome), giving us more control over the training/simulation process and RL algorithm tuning.--- How does it work? The reinforcement learning agent (i.e., our autonomous car) learns to drive by interacting with its environment, e.g., the track, by taking an action in a given state to maximize the expected reward. The agent learns the optimal plan of actions in training by trial and error through repeated episodes. The figure above shows an example of distributed RL training across SageMaker and two RoboMaker simulation environments that perform the **rollouts**, i.e., execute a fixed number of episodes using the current model or policy. The rollouts collect agent experiences (state-transition tuples) and share this data with SageMaker for training. SageMaker updates the model policy, which is then used to execute the next sequence of rollouts. This training loop continues until the model converges, i.e., the car learns to drive and stops going off-track. More formally, we can define the problem in terms of the following: 1. **Objective**: Learn to drive autonomously by staying close to the center of the track. 2. **Environment**: A 3D driving simulator hosted on AWS RoboMaker. 3. **State**: The driving POV image captured by the car's head camera, as shown in the illustration above. 4. **Action**: Six discrete steering wheel positions at different angles (configurable). 5. **Reward**: Positive reward for staying close to the center line; high penalty for going off-track. This is configurable and can be made more complex (e.g., a steering penalty can be added). Prerequisites Imports To get started, we'll import the Python libraries we need and set up the environment with a few prerequisites for permissions and configurations. You can run this notebook from your local machine or from a SageMaker notebook instance. In either scenario, you can run the following to launch a training job on `SageMaker` and a simulation job on `RoboMaker`.
###Code
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import Markdown
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
from markdown_helper import *
###Output
_____no_output_____
###Markdown
Setup S3 bucket Set up the linkage and authentication to the S3 bucket that we want to use for checkpoints and metadata.
###Code
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket) # SDK appends the job name and output folder
###Output
_____no_output_____
###Markdown
Define Variables We define variables such as the job prefix for the training jobs and s3_prefix for storing metadata required for synchronization between the training and simulation jobs
###Code
job_name_prefix = 'rl-deepracer'
# create unique job name
tm = gmtime()
job_name = s3_prefix = job_name_prefix + "-sagemaker-" + strftime("%y%m%d-%H%M%S", tm) #Ensure S3 prefix contains SageMaker
s3_prefix_robomaker = job_name_prefix + "-robomaker-" + strftime("%y%m%d-%H%M%S", tm) #Ensure that the S3 prefix contains the keyword 'robomaker'
# Duration of job in seconds (5 hours)
job_duration_in_seconds = 3600 * 5
aws_region = sage_session.boto_region_name
if aws_region not in ["us-west-2", "us-east-1", "eu-west-1"]:
raise Exception("This notebook uses RoboMaker which is available only in US East (N. Virginia), US West (Oregon) and EU (Ireland). Please switch to one of these regions.")
print("Model checkpoints and other metadata will be stored at: {}{}".format(s3_output_path, job_name))
###Output
_____no_output_____
###Markdown
Create an IAM role Either get the execution role when running from a SageMaker notebook with `role = sagemaker.get_execution_role()` or, when running from a local machine, use the utils method `role = get_execution_role('role_name')` to create an execution role.
###Code
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role('sagemaker')
print("Using IAM role arn: {}".format(role))
###Output
_____no_output_____
###Markdown
> Please note that this notebook cannot be run in `SageMaker local mode` as the simulator is based on the AWS RoboMaker service. Permission setup for invoking AWS RoboMaker from this notebook To enable this notebook to execute AWS RoboMaker jobs, we need to add a trust relationship to the default execution role of this notebook.
###Code
display(Markdown(generate_help_for_robomaker_trust_relationship(role)))
###Output
_____no_output_____
###Markdown
Configure VPC Since SageMaker and RoboMaker have to communicate with each other over the network, both of these services need to run in VPC mode. This can be done by supplying subnets and security groups to the job launching scripts. We will use the default VPC configuration for this example.
###Code
ec2 = boto3.client('ec2')
default_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] if vpc["IsDefault"] == True][0]
default_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups'] \
if 'VpcId' in group and group["GroupName"] == "default" and group["VpcId"] == default_vpc]
default_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == default_vpc and subnet['DefaultForAz']==True]
print("Using default VPC:", default_vpc)
print("Using default security group:", default_security_groups)
print("Using default subnets:", default_subnets)
###Output
_____no_output_____
###Markdown
A SageMaker job running in VPC mode cannot access S3 resources, so we need to create a VPC S3 endpoint to allow S3 access from the SageMaker container. To learn more about VPC mode, please visit [this link.](https://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html)
###Code
try:
route_tables = [route_table["RouteTableId"] for route_table in ec2.describe_route_tables()['RouteTables']\
if route_table['VpcId'] == default_vpc]
except Exception as e:
if "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
print("Trying to attach S3 endpoints to the following route tables:", route_tables)
assert len(route_tables) >= 1, "No route tables were found. Please follow the VPC S3 endpoint creation "\
"guide by clicking the above link."
try:
ec2.create_vpc_endpoint(DryRun=False,
VpcEndpointType="Gateway",
VpcId=default_vpc,
ServiceName="com.amazonaws.{}.s3".format(aws_region),
RouteTableIds=route_tables)
print("S3 endpoint created successfully!")
except Exception as e:
if "RouteAlreadyExists" in str(e):
print("S3 endpoint already exists.")
elif "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
raise e
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
###Output
_____no_output_____
###Markdown
Setup the environment The environment is defined in a Python file called `deepracer_env.py`, which can be found at `src/robomaker/environments/`. This file implements the gym interface for our Gazebo-based RoboMaker simulator and is a common environment file used by both SageMaker and RoboMaker. The environment variable `NODE_TYPE` defines which node the code is running on, so the expressions that have `rospy` dependencies are executed on RoboMaker only. We can experiment with different reward functions by modifying `reward_function` in this file. The action space and steering angles can be changed by modifying the step method in the `DeepRacerDiscreteEnv` class. Configure the preset for RL algorithm The parameters that configure the RL training job are defined in `src/robomaker/presets/deepracer.py`. Using the preset file, you can define agent parameters to select the specific agent algorithm. We suggest using Clipped PPO for this example. You can edit this file to modify algorithm parameters such as learning_rate, neural network structure, batch_size, discount factor, etc.
###Code
!pygmentize src/robomaker/presets/deepracer.py
###Output
_____no_output_____
###Markdown
Training Entrypoint The training code is written in the file `training_worker.py`, which is located in the /src directory. At a high level, it does the following:- Uploads the SageMaker node's IP address.- Starts a Redis server which receives agent experiences sent by the rollout worker(s) (the RoboMaker simulator).- Trains the model every time a certain number of episodes have been received.- Uploads the new model weights to S3. The rollout workers then update their model to execute the next set of episodes.
###Code
# Uncomment the line below to see the training code
#!pygmentize src/training_worker.py
###Output
_____no_output_____
###Markdown
Train the RL model using the Python SDK Script mode First, we upload the preset and environment files to a particular location on S3, as expected by RoboMaker.
###Code
s3_location = "s3://%s/%s" % (s3_bucket, s3_prefix)
# Make sure nothing exists at this S3 prefix
!aws s3 rm --recursive {s3_location}
# Make any changes to the environment and preset files below and upload these files
!aws s3 cp src/robomaker/environments/ {s3_location}/environments/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
!aws s3 cp src/robomaker/presets/ {s3_location}/presets/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
###Output
_____no_output_____
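###Markdown
You can quickly verify that the environment and preset files landed where RoboMaker expects them by listing the prefix (an optional check that reuses the `s3_location` variable defined above).
###Code
# Optional: confirm the files were uploaded to the expected prefix.
!aws s3 ls {s3_location}/ --recursive
###Output
_____no_output_____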
###Markdown
Next, we define the algorithm metrics that we want to capture from CloudWatch logs to monitor training progress. These are algorithm-specific parameters and might change for a different algorithm. We use [Clipped PPO](https://coach.nervanasys.com/algorithms/policy_optimization/cppo/index.html) for this example.
###Code
metric_definitions = [
# Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1
{'Name': 'reward-training',
'Regex': '^Training>.*Total reward=(.*?),'},
# Policy training> Surrogate loss=-0.32664725184440613, KL divergence=7.255815035023261e-06, Entropy=2.83156156539917, training epoch=0, learning_rate=0.00025
{'Name': 'ppo-surrogate-loss',
'Regex': '^Policy training>.*Surrogate loss=(.*?),'},
{'Name': 'ppo-entropy',
'Regex': '^Policy training>.*Entropy=(.*?),'},
# Testing> Name=main_level/agent, Worker=0, Episode=19, Total reward=1359.12, Steps=20015, Training iteration=2
{'Name': 'reward-testing',
'Regex': '^Testing>.*Total reward=(.*?),'},
]
###Output
_____no_output_____
###Markdown
We use the RLEstimator for training RL jobs.1. Specify the source directory which has the environment file, preset and training code.2. Specify the entry point as the training code.3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.4. Define the training parameters such as the instance count, instance type, job name, s3_bucket and s3_prefix for storing model checkpoints and metadata. **Only 1 training instance is supported for now.**5. Set the RLCOACH_PRESET as "deepracer" for this example.6. Define the metrics definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
###Code
RLCOACH_PRESET = "deepracer"
instance_type = "ml.c4.2xlarge"
estimator = RLEstimator(entry_point="training_worker.py",
source_dir='src',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='0.11',
framework=RLFramework.TENSORFLOW,
role=role,
train_instance_type=instance_type,
train_instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
train_max_run=job_duration_in_seconds, # Maximum runtime in seconds
hyperparameters={"s3_bucket": s3_bucket,
"s3_prefix": s3_prefix,
"aws_region": aws_region,
"RLCOACH_PRESET": RLCOACH_PRESET,
},
metric_definitions = metric_definitions,
subnets=default_subnets, # Required for VPC mode
security_group_ids=default_security_groups, # Required for VPC mode
)
estimator.fit(job_name=job_name, wait=False)
###Output
_____no_output_____
###Markdown
Start the RoboMaker job
###Code
from botocore.exceptions import UnknownServiceError
robomaker = boto3.client("robomaker")
###Output
_____no_output_____
###Markdown
Create Simulation Application We first create a RoboMaker simulation application using the `DeepRacer public bundle`. Please refer to [RoboMaker Sample Application Github Repository](https://github.com/aws-robotics/aws-robomaker-sample-application-deepracer) if you want to learn more about this bundle or modify it.
###Code
bundle_s3_key = 'deepracer/simulation_ws.tar.gz'
bundle_source = {'s3Bucket': s3_bucket,
's3Key': bundle_s3_key,
'architecture': "X86_64"}
simulation_software_suite={'name': 'Gazebo',
'version': '7'}
robot_software_suite={'name': 'ROS',
'version': 'Kinetic'}
rendering_engine={'name': 'OGRE', 'version': '1.x'}
###Output
_____no_output_____
###Markdown
Download the public DeepRacer bundle provided by RoboMaker and upload it to our S3 bucket to create a RoboMaker Simulation Application.
###Code
simulation_application_bundle_location = "https://s3.amazonaws.com/deepracer.onoyoji.jp.myinstance.com/deepracer-managed-resources/deepracer-github-simapp.tar.gz"
!wget {simulation_application_bundle_location}
!aws s3 cp deepracer-github-simapp.tar.gz s3://{s3_bucket}/{bundle_s3_key}
!rm deepracer-github-simapp.tar.gz
app_name = "deepracer-sample-application" + strftime("%y%m%d-%H%M%S", gmtime())
try:
response = robomaker.create_simulation_application(name=app_name,
sources=[bundle_source],
simulationSoftwareSuite=simulation_software_suite,
robotSoftwareSuite=robot_software_suite,
renderingEngine=rendering_engine
)
simulation_app_arn = response["arn"]
print("Created a new simulation app with ARN:", simulation_app_arn)
except Exception as e:
if "AccessDeniedException" in str(e):
display(Markdown(generate_help_for_robomaker_all_permissions(role)))
raise e
else:
raise e
###Output
_____no_output_____
###Markdown
Launch the Simulation job on RoboMaker We create [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) Simulation Jobs that simulate the environment and share this data with SageMaker for training.
###Code
# Use more rollout workers for faster convergence
num_simulation_workers = 1
envriron_vars = {
"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"WORLD_NAME": "hard_track", # Can be one of "easy_track", "medium_track", "hard_track"
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"NUMBER_OF_ROLLOUT_WORKERS": str(num_simulation_workers)}
simulation_application = {"application": simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "distributed_training.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig,
outputLocation={"s3Bucket":s3_bucket, "s3Prefix":s3_prefix_robomaker}
)
responses.append(response)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for job_arn in job_arns:
print("Job ARN", job_arn)
###Output
_____no_output_____
###Markdown
Visualizing the simulations in RoboMaker You can visit the RoboMaker console to visualize the simulations or run the following cell to generate the hyperlinks.
###Code
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
###Output
_____no_output_____
###Markdown
Plot metrics for training job
###Code
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
intermediate_folder_key = "{}/output/intermediate".format(job_name)
%matplotlib inline
import pandas as pd
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = intermediate_folder_key + "/" + csv_file_name
wait_for_s3_object(s3_bucket, key, tmp_dir)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=['Training Reward'])
x_axis = 'Episode #'
y_axis = 'Training Reward'
plt = df.plot(x=x_axis,y=y_axis, figsize=(12,5), legend=True, style='b-')
plt.set_ylabel(y_axis);
plt.set_xlabel(x_axis);
###Output
_____no_output_____
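###Markdown
If the plot cell above fails because the CSV has not been written to S3 yet, it can help to look at what the training job has produced so far. The optional sketch below lists the objects under the job's intermediate output prefix using boto3.
###Code
# Optional: list what the training job has written under the intermediate prefix so far.
s3_client = boto3.client("s3")
listing = s3_client.list_objects_v2(Bucket=s3_bucket, Prefix=intermediate_folder_key)
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])
###Output
_____no_output_____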
###Markdown
Clean Up Execute the cells below if you want to stop the RoboMaker and SageMaker jobs.
###Code
for job_arn in job_arns:
robomaker.cancel_simulation_job(job=job_arn)
sage_session.sagemaker_client.stop_training_job(TrainingJobName=estimator._current_job_name)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
envriron_vars = {"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"NUMBER_OF_TRIALS": str(20),
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"WORLD_NAME": "hard_track",
}
simulation_application = {"application":simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "evaluation.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig,
outputLocation={"s3Bucket":s3_bucket, "s3Prefix":s3_prefix_robomaker}
)
print("Created the following job:")
print("Job ARN", response["arn"])
###Output
_____no_output_____
###Markdown
Clean Up Simulation Application Resource
###Code
robomaker.delete_simulation_application(application=simulation_app_arn)
###Output
_____no_output_____
###Markdown
Distributed DeepRacer RL training with SageMaker and RoboMaker--- Introduction In this notebook, we will train a fully autonomous 1/18th-scale race car using reinforcement learning with Amazon SageMaker RL and AWS RoboMaker's 3D driving simulator. [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) is a service that makes it easy for developers to develop, test, and deploy robotics applications. This notebook provides a jailbreak experience of [AWS DeepRacer](https://console.aws.amazon.com/deepracer/homewelcome), giving us more control over the training/simulation process and RL algorithm tuning.--- How does it work? The reinforcement learning agent (i.e., our autonomous car) learns to drive by interacting with its environment, e.g., the track, by taking an action in a given state to maximize the expected reward. The agent learns the optimal plan of actions in training by trial and error through repeated episodes. The figure above shows an example of distributed RL training across SageMaker and two RoboMaker simulation environments that perform the **rollouts**, i.e., execute a fixed number of episodes using the current model or policy. The rollouts collect agent experiences (state-transition tuples) and share this data with SageMaker for training. SageMaker updates the model policy, which is then used to execute the next sequence of rollouts. This training loop continues until the model converges, i.e., the car learns to drive and stops going off-track. More formally, we can define the problem in terms of the following: 1. **Objective**: Learn to drive autonomously by staying close to the center of the track. 2. **Environment**: A 3D driving simulator hosted on AWS RoboMaker. 3. **State**: The driving POV image captured by the car's head camera, as shown in the illustration above. 4. **Action**: Six discrete steering wheel positions at different angles (configurable). 5. **Reward**: Positive reward for staying close to the center line; high penalty for going off-track. This is configurable and can be made more complex (e.g., a steering penalty can be added). Prerequisites Imports To get started, we'll import the Python libraries we need and set up the environment with a few prerequisites for permissions and configurations. You can run this notebook from your local machine or from a SageMaker notebook instance. In either scenario, you can run the following to launch a training job on `SageMaker` and a simulation job on `RoboMaker`.
###Code
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import Markdown
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
from markdown_helper import *
###Output
_____no_output_____
###Markdown
Setup S3 bucket Set up the linkage and authentication to the S3 bucket that we want to use for checkpoints and metadata.
###Code
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket) # SDK appends the job name and output folder
###Output
_____no_output_____
###Markdown
Define Variables We define variables such as the job prefix for the training jobs and s3_prefix for storing metadata required for synchronization between the training and simulation jobs
###Code
job_name_prefix = 'rl-deepracer'
# create unique job name
job_name = s3_prefix = job_name_prefix + "-sagemaker-" + strftime("%y%m%d-%H%M%S", gmtime())
# Duration of job in seconds (5 hours)
job_duration_in_seconds = 3600 * 5
aws_region = sage_session.boto_region_name
if aws_region not in ["us-west-2", "us-east-1", "eu-west-1"]:
raise Exception("This notebook uses RoboMaker which is available only in US East (N. Virginia), US West (Oregon) and EU (Ireland). Please switch to one of these regions.")
print("Model checkpoints and other metadata will be stored at: {}{}".format(s3_output_path, job_name))
###Output
_____no_output_____
###Markdown
Create an IAM role Either get the execution role when running from a SageMaker notebook with `role = sagemaker.get_execution_role()` or, when running from a local machine, use the utils method `role = get_execution_role('role_name')` to create an execution role.
###Code
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role('sagemaker')
print("Using IAM role arn: {}".format(role))
###Output
_____no_output_____
###Markdown
> Please note that this notebook cannot be run in `SageMaker local mode` as the simulator is based on the AWS RoboMaker service. Permission setup for invoking AWS RoboMaker from this notebook To enable this notebook to execute AWS RoboMaker jobs, we need to add a trust relationship to the default execution role of this notebook.
###Code
display(Markdown(generate_help_for_robomaker_trust_relationship(role)))
###Output
_____no_output_____
###Markdown
Configure VPC Since SageMaker and RoboMaker have to communicate with each other over the network, both of these services need to run in VPC mode. This can be done by supplying subnets and security groups to the job launching scripts. We will use the default VPC configuration for this example.
###Code
ec2 = boto3.client('ec2')
default_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] if vpc["IsDefault"] == True][0]
default_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups'] \
if group["GroupName"] == "default" and group["VpcId"] == default_vpc]
default_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == default_vpc and subnet['DefaultForAz']==True]
print("Using default VPC:", default_vpc)
print("Using default security group:", default_security_groups)
print("Using default subnets:", default_subnets)
###Output
_____no_output_____
###Markdown
A SageMaker job running in VPC mode cannot access S3 resources, so we need to create a VPC S3 endpoint to allow S3 access from the SageMaker container. To learn more about VPC mode, please visit [this link.](https://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html)
###Code
try:
route_tables = [route_table["RouteTableId"] for route_table in ec2.describe_route_tables()['RouteTables']\
if route_table['VpcId'] == default_vpc]
except Exception as e:
if "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
print("Trying to attach S3 endpoints to the following route tables:", route_tables)
assert len(route_tables) >= 1, "No route tables were found. Please follow the VPC S3 endpoint creation "\
"guide by clicking the above link."
try:
ec2.create_vpc_endpoint(DryRun=False,
VpcEndpointType="Gateway",
VpcId=default_vpc,
ServiceName="com.amazonaws.{}.s3".format(aws_region),
RouteTableIds=route_tables)
print("S3 endpoint created successfully!")
except Exception as e:
if "RouteAlreadyExists" in str(e):
print("S3 endpoint already exists.")
elif "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
raise e
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
###Output
_____no_output_____
###Markdown
Setup the environment The environment is defined in a Python file called `deepracer_env.py`, which can be found at `src/robomaker/environments/`. This file implements the gym interface for our Gazebo-based RoboMaker simulator and is a common environment file used by both SageMaker and RoboMaker. The environment variable `NODE_TYPE` defines which node the code is running on, so the expressions that have `rospy` dependencies are executed on RoboMaker only. We can experiment with different reward functions by modifying `reward_function` in this file. The action space and steering angles can be changed by modifying the step method in the `DeepRacerDiscreteEnv` class. Configure the preset for RL algorithm The parameters that configure the RL training job are defined in `src/robomaker/presets/deepracer.py`. Using the preset file, you can define agent parameters to select the specific agent algorithm. We suggest using Clipped PPO for this example. You can edit this file to modify algorithm parameters such as learning_rate, neural network structure, batch_size, discount factor, etc.
###Code
!pygmentize src/robomaker/presets/deepracer.py
###Output
_____no_output_____
###Markdown
Training Entrypoint The training code is written in the file `training_worker.py`, which is located in the /src directory. At a high level, it does the following:- Uploads the SageMaker node's IP address.- Starts a Redis server which receives agent experiences sent by the rollout worker(s) (the RoboMaker simulator).- Trains the model every time a certain number of episodes have been received.- Uploads the new model weights to S3. The rollout workers then update their model to execute the next set of episodes.
###Code
# Uncomment the line below to see the training code
#!pygmentize src/training_worker.py
###Output
_____no_output_____
###Markdown
Train the RL model using the Python SDK Script mode First, we upload the preset and environment files to a particular location on S3, as expected by RoboMaker.
###Code
s3_location = "s3://%s/%s" % (s3_bucket, s3_prefix)
# Make sure nothing exists at this S3 prefix
!aws s3 rm --recursive {s3_location}
# Make any changes to the environment and preset files below and upload these files
!aws s3 cp src/robomaker/environments/ {s3_location}/environments/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
!aws s3 cp src/robomaker/presets/ {s3_location}/presets/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
###Output
_____no_output_____
###Markdown
Next, we define the algorithm metrics that we want to capture from CloudWatch logs to monitor training progress. These are algorithm-specific parameters and might change for a different algorithm. We use [Clipped PPO](https://coach.nervanasys.com/algorithms/policy_optimization/cppo/index.html) for this example.
###Code
metric_definitions = [
# Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1
{'Name': 'reward-training',
'Regex': '^Training>.*Total reward=(.*?),'},
# Policy training> Surrogate loss=-0.32664725184440613, KL divergence=7.255815035023261e-06, Entropy=2.83156156539917, training epoch=0, learning_rate=0.00025
{'Name': 'ppo-surrogate-loss',
'Regex': '^Policy training>.*Surrogate loss=(.*?),'},
{'Name': 'ppo-entropy',
'Regex': '^Policy training>.*Entropy=(.*?),'},
# Testing> Name=main_level/agent, Worker=0, Episode=19, Total reward=1359.12, Steps=20015, Training iteration=2
{'Name': 'reward-testing',
'Regex': '^Testing>.*Total reward=(.*?),'},
]
###Output
_____no_output_____
###Markdown
We use the RLEstimator for training RL jobs.1. Specify the source directory which has the environment file, preset and training code.2. Specify the entry point as the training code.3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.4. Define the training parameters such as the instance count, instance type, job name, s3_bucket and s3_prefix for storing model checkpoints and metadata. **Only 1 training instance is supported for now.**5. Set the RLCOACH_PRESET as "deepracer" for this example.6. Define the metrics definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
###Code
RLCOACH_PRESET = "deepracer"
instance_type = "ml.c5.4xlarge"
estimator = RLEstimator(entry_point="training_worker.py",
source_dir='src',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='0.10.1',
framework=RLFramework.TENSORFLOW,
role=role,
train_instance_type=instance_type,
train_instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
train_max_run=job_duration_in_seconds, # Maximum runtime in seconds
hyperparameters={"s3_bucket": s3_bucket,
"s3_prefix": s3_prefix,
"aws_region": aws_region,
"RLCOACH_PRESET": RLCOACH_PRESET,
},
metric_definitions = metric_definitions,
subnets=default_subnets, # Required for VPC mode
security_group_ids=default_security_groups, # Required for VPC mode
)
estimator.fit(job_name=job_name, wait=False)
###Output
_____no_output_____
###Markdown
Start the RoboMaker job
###Code
from botocore.exceptions import UnknownServiceError
try:
robomaker = boto3.client("robomaker")
except UnknownServiceError:
#TODO: This will go away
print ("Trying to install the RoboMakerModel on your system.")
# Set up the boto3 model.
!aws configure add-model --service-model file://RoboMakerModel.json --service-name robomaker
import importlib
importlib.reload(boto3)
robomaker = boto3.client("robomaker")
print("Model installation succeeded!")
###Output
_____no_output_____
###Markdown
Create Simulation Application We first create a RoboMaker simulation application using the `DeepRacer public bundle`. Please refer to [RoboMaker Sample Application Github Repository](https://github.com/aws-robotics/aws-robomaker-sample-application-deepracer) if you want to learn more about this bundle or modify it.
###Code
bundle_s3_key = 'deepracer/simulation_ws.tar.gz'
bundle_source = {'s3Bucket': s3_bucket,
's3Key': bundle_s3_key,
'architecture': "X86_64"}
simulation_software_suite={'name': 'Gazebo',
'version': '7'}
robot_software_suite={'name': 'ROS',
'version': 'Kinetic'}
rendering_engine={'name': 'OGRE', 'version': '1.x'}
###Output
_____no_output_____
###Markdown
Download the public DeepRacer bundle provided by RoboMaker and upload it to our S3 bucket to create a RoboMaker Simulation Application
###Code
simulation_application_bundle_location = "https://s3-us-west-2.amazonaws.com/robomaker-applications-us-west-2-11d8d0439f6a/deep-racer/deep-racer-1.0.57.0.1.0.66.0/simulation_ws.tar.gz"
!wget {simulation_application_bundle_location}
!aws s3 cp simulation_ws.tar.gz s3://{s3_bucket}/{bundle_s3_key}
!rm simulation_ws.tar.gz
app_name = "deepracer-sample-application" + strftime("%y%m%d-%H%M%S", gmtime())
try:
response = robomaker.create_simulation_application(name=app_name,
sources=[bundle_source],
simulationSoftwareSuite=simulation_software_suite,
robotSoftwareSuite=robot_software_suite,
renderingEngine=rendering_engine
)
simulation_app_arn = response["arn"]
print("Created a new simulation app with ARN:", simulation_app_arn)
except Exception as e:
if "AccessDeniedException" in str(e):
display(Markdown(generate_help_for_robomaker_all_permissions(role)))
raise e
else:
raise e
###Output
_____no_output_____
###Markdown
Launch the Simulation job on RoboMaker We create [AWS RoboMaker](https://console.aws.amazon.com/robomaker/homewelcome) Simulation Jobs that simulate the environment and share this data with SageMaker for training.
###Code
# Use more rollout workers for faster convergence
num_simulation_workers = 1
envriron_vars = {
"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"WORLD_NAME": "hard_track", # Can be one of "easy_track", "medium_track", "hard_track"
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"NUMBER_OF_ROLLOUT_WORKERS": str(num_simulation_workers)}
simulation_application = {"application": simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "distributed_training.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
name="sm-deepracer-robomaker",
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig
)
responses.append(response)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for job_arn in job_arns:
print("Job ARN", job_arn)
###Output
_____no_output_____
###Markdown
Visualizing the simulations in RoboMaker You can visit the RoboMaker console to visualize the simulations or run the following cell to generate the hyperlinks.
###Code
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
###Output
_____no_output_____
###Markdown
Plot metrics for training job
###Code
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
intermediate_folder_key = "{}/output/intermediate".format(job_name)
%matplotlib inline
import pandas as pd
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = intermediate_folder_key + "/" + csv_file_name
wait_for_s3_object(s3_bucket, key, tmp_dir)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=['Training Reward'])
x_axis = 'Episode #'
y_axis = 'Training Reward'
plt = df.plot(x=x_axis,y=y_axis, figsize=(12,5), legend=True, style='b-')
plt.set_ylabel(y_axis);
plt.set_xlabel(x_axis);
###Output
_____no_output_____
###Markdown
Clean Up Execute the cells below if you want to kill the RoboMaker and SageMaker jobs.
###Code
for job_arn in job_arns:
robomaker.cancel_simulation_job(job=job_arn)
sage_session.sagemaker_client.stop_training_job(TrainingJobName=estimator._current_job_name)
###Output
_____no_output_____
###Markdown
Evaluation
###Code
envriron_vars = {"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"NUMBER_OF_TRIALS": str(20),
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"WORLD_NAME": "hard_track",
}
simulation_application = {"application":simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "evaluation.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
name="sm-deepracer-robomaker",
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig
)
print("Created the following job:")
print("Job ARN", response["arn"])
###Output
_____no_output_____
###Markdown
Clean Up Simulation Application Resource
###Code
robomaker.delete_simulation_application(application=simulation_app_arn)
###Output
_____no_output_____ |
underthecovers/assembly/L10.ipynb | ###Markdown
SLS Lecture 10 : Assembly : Program Anatomy I Let's start with some preliminary Byte Anatomy We need some notation [UC-SLS:Representing information - Preliminaries: Bits, Bytes and Notation](https://appavooteaching.github.io/UndertheCovers/textbook/assembly/InfoRepI.html) Vectors of bits - Byte a vector of 8 bits$$[b_7 b_6 b_5 b_4 b_3 b_2 b_1 b_0] \; \text{where} \; b_i \in \{0,1\}$$
###Code
displayBytes([[0x00],[0xff]], labels=["ALL OFF", "ALL ON"])
###Output
_____no_output_____
###Markdown
- A byte : can take on 256 unique values -- $2^8 = 256$ possible values
###Code
displayBytes(bytes=[[i] for i in range(256)],
center=True)
###Output
_____no_output_____
###Markdown
It is natural to define the value, as a non-negative integer (UINT), as the positional sum of powers of two as follows:$$ \sum_{i=0}^{7} b_i \times 2^{i}$$ $$ [10010011] $$$$ 1\times2^7 + 0 \times 2^6 + 0 \times 2^5 + 1 \times 2^4 + 0 \times 2^3 + 0 \times 2^2 + 1 \times 2^1 + 1 \times 2^0 $$ $$ (2*2*2*2*2*2*2) + 0 + 0 + (2*2*2*2) + 0 + 0 + 2 + 1 $$$$ 128 + 16 + 2 + 1 $$$$ 147 $$ Quick Review : Hexadecimal - Hex- Just a more convenient notation than base two- Base 16 : Digits ${0,1,2,3,4,5,6,7,8,9,A,B,C,D,E,F}$- Four bits, a nibble, $[ b_{3} b_{2} b_{1} b_{0} ]$ maps to a single hex digit
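As an optional aside (added here, not one of the lecture cells), the same arithmetic can be checked in Python:

```python
# Positional sum of the bit vector [b7 b6 ... b0]; bits listed MSB-first.
bits = [1, 0, 0, 1, 0, 0, 1, 1]                     # [10010011]
value = sum(b * 2**i for i, b in enumerate(reversed(bits)))
print(value, value == 0b10010011)                   # 147 True
```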
###Code
displayBytes(bytes=[[i] for i in range(16)],td_font_size="2.5em", th_font_size="1.5em", numbits=4, columns=["[$b_3$", "$b_2$", "$b_1$", "$b_0$]"], labels=[format(i,"1X")for i in range(16)], labelstitle="HEX", center=True)
###Output
_____no_output_____
###Markdown
Conversion and visualization become easy with time
###Code
X=np.uint8(0xae)
XL=np.bitwise_and(X,0xf)
XH=np.bitwise_and(np.right_shift(X,4),0xf)
displayBytes(bytes=[[X]])
displayBytes(bytes=[[XH]], numbits=4,columns=["[$b_7$", "$b_6$", "$b_5$", "$b_4$]"])
displayBytes(bytes=[[XL]], numbits=4,columns=["[$b_3$", "$b_2$", "$b_1$", "$b_0$]"])
displayStr(format(XH,"1X"), size="2em", align="center")
displayStr(format(XL,"1X"), size="2em", align="center")
displayStr(format(X,"02X"), size="2em", align="center")
###Output
_____no_output_____
###Markdown
- We prefix a hex value with `0x` to distinguish base 16 values (eg. `0x11`)- And we use `0b` to distinguish base 2 values (eg. `0b11`).- If we don't prefix we will assume it is obvious from context or we are assuming base 10 (eg. 11 means eleven). Exercises - 0b00010000 $\rightarrow$ 0x ?- 0b10000001 $\rightarrow$ 0x ?- 0b10111001 $\rightarrow$ 0x ?- 0b10101010 $\rightarrow$ 0x ? - 0b01010101 $\rightarrow$ 0x ?- 0b11110111 $\rightarrow$ 0x ? Standard notation and "values" of a byte
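If you want to check your answers, here is a small optional Python aside (added here, not part of the original slides):

```python
# Print each exercise value in binary and hex; e.g. 0b00010000 -> 0x10.
for x in (0b00010000, 0b10000001, 0b10111001, 0b10101010, 0b01010101, 0b11110111):
    print(format(x, '08b'), '->', '0x' + format(x, '02X'))
```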
###Code
# an quick and dirty table of all byte values
displayBytes(bytes=[[i] for i in range(256)], labelstitle="HEX (UINT)", labels=["0x"+format(i,"02x")+ " (" + format(i,"03d") +")" for i in range(256)], center=True)
###Output
_____no_output_____
###Markdown
Beyond 8 bits More generally $n$-bit binary vector$$ X_{n} \equiv [ x_{n-1} x_{n-2} \ldots x_{0} ] $$$$ \sum_{i=0}^{n-1} b_i \times 2^{i}$$Standard lengths are certain multiples of 8 | Multiplication | Number of Bits| Notation | Names || --- | --- | --- | --- || $1 \times 8$ | 8 bits | $X_{8}$ | Byte, C: unsigned char || $2 \times 8$ | 16 bits | $X_{16}$ | INTEL: Word, ARM: Half Word, C: unsigned short || $4 \times 8$ | 32 bits | $X_{32}$ | INTEL: Double Word, ARM: Word, C: unsigned int || $8 \times 8$ | 64 bits | $X_{64}$ | INTEL: Quad Word, ARM: Double Word, C: unsigned long long | | $16 \times 8$ | 128 bits | $X_{128}$ | INTEL: Double Quad Word, ARM: Quad Word || $32 \times 8$ | 256 bits | $X_{256}$ | ? || $64 \times 8$ | 512 bits | $X_{512}$ | ? |1. Memory is an array of bytes2. Registers vary in their sizes depends on the CPU A program is bytes and manipulates bytes - codes is represented with bytes - data is represented with bytes - CPU operations are encoded with bytes - byte oriented operations of CPU are our building blocks One more word about integers - As humans we very quickly become comfortable with the idea of negative quantities- But what does negative and positive "mean" when dealing with binary vectors - CPUs default support is for unsigned $n$ bit integers $0$ to $2^{n} - 1$ - add, subtract, bitwise and, bitwise or, compare, etc - CPUs typically have special support to for signed integers - a very clever encoding - "2's Complement" encodes a positive and negative values - $-2^{n-1}$ to $2^{n-1}-1$ in $n$ bits - As Assembly programmers we will need to carefully know what instructions are sensitive - does NOT matter - add, subtract, bitwise and bitwise or - does matter - operations that depend on comparison (eg less than, greater than) - punt on the details of 2's complement till later - we will assume signed values and focus on when it matters OK now lets start putting the pieces together so that we can write our program Assembly Instruction statement syntax```gas[label:] mnemonic [operands][ comment ]```Anything in square brackets is optional depending on the mnemonic.The following are the four types of Intel instructions1. `mnemonic` - alone no explicitly defined operands2. `mnemonic ` - a single operand - which is the destination ... where the result will be stored3. `mnemonic , ` - two operands - one that names a source location for input - and one that is the destination4. `mnemonic , , ` - three operands - two that name input sources - and one that names the destination Sources and destinations (3.7 OPERAND ADDRESSING)Sources and destinations name both a location of a value and its length.-Eg. `2` bytes at Address `0x10000` is the operand for the instruction Addressing Modes- lets look more closely now at the address mode by carefully studying the `mov` instruction - add all the ways that we can specify its `operands` ``` mov , ````` and `` are the operands and the `mov` is the mnemonic of instruction we want to encode. `mov`Overwrite the `` with a copy of what is in the ``- note the value that was in `` is **over-written** - its "gone"- the `` still has its version This is actually more like copy than move!From a high level programming perspective it is like an assignment statement```x = y;``` destinations and sourcesHere are the various times of locations that can be a source or destination1. Register (reg) -- one of the processor's registers2. Memory (mem) -- an address of an arbitrary memory location3. 
Immediate (imm) -- a special type of Memory location where the value is in the bytes following the opcode - You can only use Immediates as a sourceHere is the valid combinations that you can have- `mov , `- `mov , `- `mov , `- `mov , `What is missing? Sizes- Register names specify size of location - The rules for mixing is a little subtle (eg moving from a smaller to larger register)- Immediate generally are 1,2,4 bytes in size- We will see memory syntax next Specifying memory locations is subtle -- Effective AddressSee the slide on line slide for details from the Intel Manual -- "Specifying an Offset"- Most general form $$ EA={Base}_{reg} + ({Index}_{reg} * {Scale}) + {Displacement} $$where- $Scale = \{1,2,5,8\}$ - ${Displacement}$ is 8-bit, 16-bit, or 32-bit value- ${Base}_{reg}$ and ${Index}_{reg}$ are the value in a 64-bit general-purpose register.The components can be mixed and matched to make it easier to work with arrays and data structures of various kinds located in memory. There are several version of syntax for these combinations Specifying and offset/address to be used to locate the operand value- A lot of the subtly and confusion come from how we work with memory locations - Effective address 1. static location: - " Displacement: A displacement alone represents a direct (uncomputed) offset to the operand. Because the displacement is encoded in the instruction, this form of an address is sometimes called an absolute or static address. It is commonly used to access a statically allocated scalar operand. 2. dynamic location: - "Base: A base alone represents an indirect offset to the operand. Since the value in the base register can change, it can be used for dynamic storage of variables and data structures." 3. dynamic + static "Base + Displacement: A base register and a displacement can be used together for two distinct purposes: - As an index into an array when the element size is not 2, 4, or 8 bytes - The displacement component encodes the static offset to the beginning of the array. - The base register holds the results of a calculation to determine the offset to a specific element within the array. - To access a field of a record: - the base register holds the address of the beginning of the record, - while the displacement is a static offset to the field." - this form is really useful for stack frame records (rbp base) -- more later on this 4. "(Index * Scale) + Displacement : This address mode offers an efficient way to index into a static array when the element size is 2, 4, or 8 bytes. The displacement locates the beginning of the array, the index register holds the subscript of the desired array element, and the processor automatically converts the subscript into an index by applying the scaling factor." 5. "Base + Index + Displacement : Using two registers together supports either - a two-dimensional array (the displacement holds the address of the beginning of the array) or - one of several instances of an array of records (the displacement is an offset to a field within the record)." 6. "Base + (Index * Scale) + Displacement : Using all the addressing components together allows efficient indexing of a two-dimensional array when the elements of the array are 2, 4, or 8 bytes in size." 7. PC Relative: "RIP + Displacement : In 64-bit mode, RIP-relative addressing uses a signed 32-bit displacement to calculate the effective address of the next instruction by sign-extend the 32-bit value and add to the 64-bit value in RIP." 
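As an optional aside (added here, not lecture code), the effective-address arithmetic can be modeled with plain Python integers; note that the hardware only encodes scale factors of 1, 2, 4, or 8:

```python
# EA = Base + (Index * Scale) + Displacement, modeled as plain integer math.
def effective_address(base=0, index=0, scale=1, displacement=0):
    assert scale in (1, 2, 4, 8)        # the only scale factors x86-64 encodes
    return base + index * scale + displacement

# e.g. the 4th 8-byte element of an array whose first element is at 0x601000
print(hex(effective_address(base=0x601000, index=3, scale=8)))   # 0x601018
```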
Intel Syntax examples- ` PTR [displacement]` where `displacement` is either a number of symbol - the assembler will often let you skip the ` PTR` if it can figure it out - but I think it is safer to be verbose - the assembler will let you skip the `[]` if you are using a label - but again I think it is more clear that you mean that value at the label- `OFFSET [symbol]` can be used as a source if you want to use the address of the symbol itself as a number- ` PTR [RegBase + displacement]`- ` PTR [RegIdx * scale + displacement]`- ` PTR [RegBase + RegIdx * scale + displacement]` `sumit.S` and `usesumit.S` Setup
###Code
# setup for mov example
appdir=os.getenv('HOME')
appdir=appdir + "/sum"
#print(movdir)
output=runTermCmd("[[ -d " + appdir + " ]] && rm -rf "+ appdir +
";mkdir " + appdir +
";cp ../src/Makefile ../src/10num.txt ../src/setup.gdb " + appdir)
#TermShellCmd("ls", cwd=movdir)
display(Markdown('''
- create a directory `mkdir sum; cd sum`
- create and write `sumit.s` and `usesumit.s` see below
- add a `Makefile` to automate assembling and linking
- we are going run the commands by hand this time to highlight the details
- add our `setup.gdb` to make working in gdb easier
- normally you would want to track everything in git
'''))
###Output
_____no_output_____
###Markdown
Let's try and write a reusable routine - let's assume that we have a symbol `XARRAY` that is the address of the data- let's assume that to use our routine you need to pass the length of the array - the length in `rbx`- let's put the result in `rax`Think about our objective in these terms$$ rax = \sum_{rdi=0}^{rbx-1} XARRAY[rdi] $$right?Ok, remember the tricky part is realizing that it is up to us to implement the idea of an array - it is a data structure that we need to keep straight in our heads
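Before looking at the assembly, here is a hedged Python-level model of that plan (added for orientation only; it is not the contents of the assembly file):

```python
# Registers modeled as plain variables: rbx = length, rdi = index, rax = running sum.
XARRAY = [3, 1, 4, 1, 5, 9, 2, 6]     # pretend these are 8-byte signed ints in memory
rbx = len(XARRAY)
rax = 0
for rdi in range(rbx):                # rdi walks 0 .. rbx-1
    rax += XARRAY[rdi]                # asm idea: add rax, QWORD PTR [XARRAY + rdi*8]
print(rax)                            # 31
```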
###Code
display(Markdown(FileCodeBox(
file="../src/sumit.s",
lang="gas",
title="<b>CODE: asm - sumit.s",
h="100%",
w="107em"
)))
###Output
_____no_output_____
###Markdown
To assemble `sumit.S` into `sumit.o`
###Code
TermShellCmd("[[ -a sumit.o ]] && rm sumit.o; make sumit.o", cwd="../src", prompt='')
###Output
_____no_output_____
###Markdown
So how might we use our "fragment"Lets create a program that defines a `_start` routine and creates the memory locations that we can control. Lets create `usesum.S`Lets assume that- will set aside enough memory for an maximum of 1000 values in where we set the `XARRAY` symbol- we will allow the length actual length of data in `XARRAY` to be specified at a location marked by `XARRAY_LEN`.- we will store the result in a location marked by the symbol `sum`We will use our code by loading our data at XARRAY, updating XARRAY_LEN, executing the code and examining the result 1. The code should setup the memory we need2. setup the registers as needed for `sumIt`3. run `sumIt`4. store the results at the location of `sum`
###Code
display(Markdown(FileCodeBox(
file="../src/usesum.s",
lang="gas",
title="<b>CODE: asm - usesum.s",
h="100%",
w="107em"
)))
###Output
_____no_output_____
###Markdown
To assemble `usesum.S` into `usesum.o`
###Code
TermShellCmd("[[ -a usesum.o ]] && rm usesum.o; make usesum.o", cwd="../src", prompt='')
###Output
_____no_output_____
###Markdown
To link `usesum.o` and `sumit.o` into an executable `usesum`
###Code
TermShellCmd("[[ -a usesum ]] && rm usesum; make usesum", cwd="../src", prompt='')
###Output
_____no_output_____
###Markdown
Lets make some data!Lets create an ascii file with 10 numbers and then use a tool called `ascii2binary` to convert it into 8 byte signed integers
###Code
TermShellCmd("[[ -a 10num.bin ]] && rm 10num.bin; make 10num.bin", cwd=appdir, prompt='')
TermShellCmd("cat 10num.txt", cwd=appdir, pretext='$ cat 10num.txt', prompt='')
TermShellCmd("hexdump -v -C 10num.bin", cwd=appdir, pretext='$ hexdump -C 10num.bin', prompt='')
###Output
_____no_output_____
###Markdown
Let's make some "real" dataSome Unix tricks of the trade
###Code
TermShellCmd("[[ -a 100randomnum.bin ]] && rm 100randomnum.bin; make 100randomnum.bin", cwd=appdir, prompt='')
TermShellCmd("hexdump -C 100randomnum.bin | head -10", cwd=appdir, pretext="$ hexdump -C 100randomnum.bin | head -10", prompt='')
TermShellCmd("od -t d8 100randomnum.bin | head -10", cwd=appdir, pretext="$ od -t d8 100randomnum.bin | head -10", prompt='')
###Output
_____no_output_____
###Markdown
How to run `usesum` and load data with gdb
```
gdb -tui usesum
b _start
run
# restore lets us load memory from a file
restore 100randomnum.bin binary &XARRAY
# set the number of elements
set *((long long *)&XARRAY_LEN) = 100
# follow the value in xarray that is being added to our sum
display /1 ((long long *)(&XARRAY))[$rdi]
# now we can single step our way through or continue till we hit an int3
```
###Code
display(showDT())
###Output
_____no_output_____
###Markdown
SLS Lecture 10 : Assembly : Program Anatomy I A word about Integers- As humans we very quickly become comfortable with the idea of negative quantities- But what does negative and positive "mean" when dealing with binary vectors - CPUs default support is for unsigned $n$ bit integers $0$ to $2^{n} - 1$ - add, subtract, bitwise and, bitwise or, compare, etc - CPUs typically have special support to for signed integers - a very clever encoding - "2's Complement" encodes a positive and negative values - $-2^{n-1}$ to $2^{n-1}-1$ in $n$ bits - As Assembly programmers we will need to carefully know what instructions are sensitive - does NOT matter - add, subtract, bitwise and bitwise or - does matter - operations that depend on comparison (eg less than, greater than) - punt on the details of 2's complement till later - we will assume signed values and focus on when it matters Assembly Instruction statement syntax```gas[label:] mnemonic [operands][ ; comment ]```Anything in square brackets is optional depending on the mnemonic.The following are the four types of Intel instructions1. `mnemonic` - alone no explicitly defined operands2. `mnemonic ` - a single operand - which is the destination ... where the result will be stored3. `mnemonic , ` - two operands - one that names a source location for input - and one that is the destination4. `mnemonic , , ` - three operands - two that name input sources - and one that names the destination Sources and destinations (3.7 OPERAND ADDRESSING)Sources and destinations name both a location of a value and its length.-Eg. `2` bytes at Address `0x10000` is the operand for the instruction Addressing Modes- lets look more closely now at the address mode by carefully studying the `mov` instruction - add all the ways that we can specify its `operands` ``` mov , ````` and `` are the operands and the `mov` is the mnemonic of instruction we want to encode. `mov`Overwrite the `` with a copy of what is in the ``- note the value that was in `` is **over-written** - its "gone"- the `` still has its version This is actually more like copy than move!From a high level programming perspective it is like an assignment statement```x = y;``` destinations and sourcesHere are the various times of locations that can be a source or destination1. Register (reg) -- one of the processor's registers2. Memory (mem) -- an address of an arbitrary memory location3. Immediate (imm) -- a special type of Memory location where the value is in the bytes following the opcode - You can only use Immediates as a sourceHere is the valid combinations that you can have- `mov , `- `mov , `- `mov , `- `mov , `What is missing? Sizes- Register names specify size of location - The rules for mixing is a little subtle (eg moving from a smaller to larger register)- Immediate generally are 1,2,4 bytes in size- We will see memory syntax next Specifying memory locations is subtle -- Effective AddressSee the slide on line slide for details from the Intel Manual -- "Specifying an Offset"- Most general form $$ EA={Base}_{reg} + ({Index}_{reg} * {Scale}) + {Displacement} $$where- $Scale = \{1,2,5,8\}$ - ${Displacement}$ is 8-bit, 16-bit, or 32-bit value- ${Base}_{reg}$ and ${Index}_{reg}$ are the value in a 64-bit general-purpose register.The components can be mixed and matched to make it easier to work with arrays and data structures of various kinds located in memory. 
There are several version of syntax for these combinations Specifying and offset/address to be used to locate the operand value- A lot of the subtly and confusion come from how we work with memory locations - Effective address 1. static location: - " Displacement: A displacement alone represents a direct (uncomputed) offset to the operand. Because the displacement is encoded in the instruction, this form of an address is sometimes called an absolute or static address. It is commonly used to access a statically allocated scalar operand. 2. dynamic location: - "Base: A base alone represents an indirect offset to the operand. Since the value in the base register can change, it can be used for dynamic storage of variables and data structures." 3. dynamic + static "Base + Displacement: A base register and a displacement can be used together for two distinct purposes: - As an index into an array when the element size is not 2, 4, or 8 bytes - The displacement component encodes the static offset to the beginning of the array. - The base register holds the results of a calculation to determine the offset to a specific element within the array. - To access a field of a record: - the base register holds the address of the beginning of the record, - while the displacement is a static offset to the field." - this form is really useful for stack frame records (rbp base) -- more later on this 4. "(Index * Scale) + Displacement : This address mode offers an efficient way to index into a static array when the element size is 2, 4, or 8 bytes. The displacement locates the beginning of the array, the index register holds the subscript of the desired array element, and the processor automatically converts the subscript into an index by applying the scaling factor." 5. "Base + Index + Displacement : Using two registers together supports either - a two-dimensional array (the displacement holds the address of the beginning of the array) or - one of several instances of an array of records (the displacement is an offset to a field within the record)." 6. "Base + (Index * Scale) + Displacement : Using all the addressing components together allows efficient indexing of a two-dimensional array when the elements of the array are 2, 4, or 8 bytes in size." 7. PC Relative: "RIP + Displacement : In 64-bit mode, RIP-relative addressing uses a signed 32-bit displacement to calculate the effective address of the next instruction by sign-extend the 32-bit value and add to the 64-bit value in RIP." Intel Syntax examples- `PTR [displacement]` where `displacement` is either a number of symbol - the assembler will often let you skip the `PTR ` if it can figure it out - but I think it is safer to be verbose - the assembler will let you skip the `[]` if you are using a label - but again I think it is more clear that you mean that value at the label- `OFFSET [symbol]` can be used as a source if you want to use the address of the symbol itself as a number- `PTR [RegBase + displacement]`- `PTR [RegIdx * scale + displacement]`- `PTR [RegBase + RegIdx * scale + displacement]` `sumit.S` and `usesumit.S` Setup
###Code
# setup for mov example
appdir=os.getenv('HOME')
appdir=appdir + "/sum"
#print(movdir)
output=runTermCmd("[[ -d " + appdir + " ]] && rm -rf "+ appdir +
";mkdir " + appdir +
";cp ../src/Makefile ../src/10num.txt ../src/setup.gdb " + appdir)
#TermShellCmd("ls", cwd=movdir)
display(Markdown('''
- create a directory `mkdir sum; cd sum`
- create and write `sumit.S` and `usesumit.S` see below
- add a `Makefile` to automate assembling and linking
- we are going run the commands by hand this time to highlight the details
- add our `setup.gdb` to make working in gdb easier
- normally you would want to track everything in git
'''))
###Output
_____no_output_____
###Markdown
Lets try and write a reusable routine - lets assume that we have a symbol `XARRAY` that is the address of the data- lets assume to use our routine you need to pass the length of the array - len in `rbx`- let put the result in `rax`Think about our objective in these terms$$ rax = \sum_{i=0}^{rbx} XARRAY[rdi] $$right?Ok remember the tricky part is realizing that it is up to us to implement the idea of an array. - it is a data structure that we need to keep straight our head
###Code
display(Markdown(FileCodeBox(
file="../src/sumit.S",
lang="gas",
title="<b>CODE: asm - sumit.S",
h="100%",
w="107em"
)))
###Output
_____no_output_____
###Markdown
To assemble `sumit.S` into `sumit.o`
###Code
TermShellCmd("[[ -a sumit.o ]] && rm sumit.o; make sumit.o", cwd="../src", prompt='')
###Output
_____no_output_____
###Markdown
So how might we use our "fragment"Lets create a program that defines a `_start` routine and creates the memory locations that we can control. Lets create `usesum.S`Lets assume that- will set aside enough memory for an maximum of 1000 values in where we set the `XARRAY` symbol- we will allow the length actual length of data in `XARRAY` to be specified at a location marked by `XARRAY_LEN`.- we will store the result in a location marked by the symbol `sum`We will use our code by loading our data at XARRAY, updating XARRAY_LEN, executing the code and examining the result 1. The code should setup the memory we need2. setup the registers as needed for `sumIt`3. run `sumIt`4. store the results at the location of `sum`
###Code
display(Markdown(FileCodeBox(
file="../src/usesum.S",
lang="gas",
title="<b>CODE: asm - usesum.S",
h="100%",
w="107em"
)))
###Output
_____no_output_____
###Markdown
To assemble `usesum.S` into `usesum.o`
###Code
TermShellCmd("[[ -a usesum.o ]] && rm usesum.o; make usesum.o", cwd="../src", prompt='')
###Output
_____no_output_____
###Markdown
To link `usesum.o` and `sumit.o` into an executable `usesum`
###Code
TermShellCmd("[[ -a usesum ]] && rm usesum; make usesum", cwd="../src", prompt='')
###Output
_____no_output_____
###Markdown
Lets make some data!Lets create an ascii file with 10 numbers and then use a tool called `ascii2binary` to convert it into 8 byte signed integers
###Code
TermShellCmd("[[ -a 10num.bin ]] && rm 10num.bin; make 10num.bin", cwd=appdir, prompt='')
TermShellCmd("cat 10num.txt", cwd=appdir, pretext='$ cat 10num.txt', prompt='')
TermShellCmd("hexdump -v -C 10num.bin", cwd=appdir, pretext='$ hexdump -C 10num.bin', prompt='')
###Output
_____no_output_____
###Markdown
Let's make some "real" dataSome Unix tricks of the trade
###Code
TermShellCmd("[[ -a 100randomnum.bin ]] && rm 100randomnum.bin; make 100randomnum.bin", cwd=appdir, prompt='')
TermShellCmd("hexdump -C 100randomnum.bin | head -10", cwd=appdir, pretext="$ hexdump -C 100randomnum.bin | head -10", prompt='')
TermShellCmd("od -t d8 100randomnum.bin | head -10", cwd=appdir, pretext="$ od -t d8 100randomnum.bin | head -10", prompt='')
###Output
_____no_output_____
###Markdown
How to run `usesum` and load data with gdb
```
gdb -tui usesum
b _start
run
# restore lets us load memory from a file
restore 100randomnum.bin binary &XARRAY
# set the number of elements
set *((long long *)&XARRAY_LEN) = 100
# follow the value in xarray that is being added to our sum
display /1 ((long long *)(&XARRAY))[$rdi]
# now we can single step our way through or continue till we hit an int3
```
###Code
display(showDT())
###Output
_____no_output_____
###Markdown
SLS Lecture 10 : Assembly : Program Anatomy I Assembly Instruction statement syntax```gas[label:] mnemonic [operands][ ; comment ]```Anything in square brackets is optional depending on the mnemonic.The following are the four types of Intel instructions1. `mnemonic` - alone no explicitly defined operands2. `mnemonic ` - a single operand - which is the destination ... where the result will be stored3. `mnemonic , ` - two operands - one that names a source location for input - and one that is the destination4. `mnemonic , , ` - three operands - two that name input sources - and one that names the destination Sources and destinations (3.7 OPERAND ADDRESSING)Sources and destinations name both a location of a value and its length.-Eg. `2` bytes at Address `0x10000` is the operand for the instruction Addressing Modes- lets look more closely now at the address mode by carefully studying the `mov` instruction - add all the ways that we can specify its `operands` ``` mov , ````` and `` are the operands and the `mov` is the mnemonic of instruction we want to encode. `mov`Overwrite the `` with a copy of what is in the ``- note the value that was in `` is **over-written** - its "gone"- the `` still has its version This is actually more like copy than move!From a high level programming perspective it is like an assignment statement```x = y;``` destinations and sourcesHere are the various times of locations that can be a source or destination1. Register (reg) -- one of the processor's registers2. Memory (mem) -- an address of an arbitrary memory location3. Immediate (imm) -- a special type of Memory location where the value is in the bytes following the opcode - You can only use Immediates as a sourceHere is the valid combinations that you can have- `mov , `- `mov , `- `mov , `- `mov , `What is missing? Sizes- Register names specify size of location - The rules for mixing is a little subtle (eg moving from a smaller to larger register)- Immediate generally are 1,2,4 bytes in size- We will see memory syntax next Specifying memory locations is subtle -- Effective AddressSee the slide on line slide for details from the Intel Manual -- "Specifying an Offset"- Most general form $$ EA={Base}_{reg} + ({Index}_{reg} * {Scale}) + {Displacement} $$where- $Scale = \{1,2,5,8\}$ - ${Displacement}$ is 8-bit, 16-bit, or 32-bit value- ${Base}_{reg}$ and ${Index}_{reg}$ are the value in a 64-bit general-purpose register.The components can be mixed and matched to make it easier to work with arrays and data structures of various kinds located in memory. There are several version of syntax for these combinations Specifying and offset/address to be used to locate the operand value- A lot of the subtly and confusion come from how we work with memory locations - Effective address 1. static location: - " Displacement: A displacement alone represents a direct (uncomputed) offset to the operand. Because the displacement is encoded in the instruction, this form of an address is sometimes called an absolute or static address. It is commonly used to access a statically allocated scalar operand. 2. dynamic location: - "Base: A base alone represents an indirect offset to the operand. Since the value in the base register can change, it can be used for dynamic storage of variables and data structures." 3. 
dynamic + static "Base + Displacement: A base register and a displacement can be used together for two distinct purposes: - As an index into an array when the element size is not 2, 4, or 8 bytes - The displacement component encodes the static offset to the beginning of the array. - The base register holds the results of a calculation to determine the offset to a specific element within the array. - To access a field of a record: - the base register holds the address of the beginning of the record, - while the displacement is a static offset to the field." - this form is really useful for stack frame records (rbp base) -- more later on this 4. "(Index * Scale) + Displacement : This address mode offers an efficient way to index into a static array when the element size is 2, 4, or 8 bytes. The displacement locates the beginning of the array, the index register holds the subscript of the desired array element, and the processor automatically converts the subscript into an index by applying the scaling factor." 5. "Base + Index + Displacement : Using two registers together supports either - a two-dimensional array (the displacement holds the address of the beginning of the array) or - one of several instances of an array of records (the displacement is an offset to a field within the record)." 6. "Base + (Index * Scale) + Displacement : Using all the addressing components together allows efficient indexing of a two-dimensional array when the elements of the array are 2, 4, or 8 bytes in size." 7. PC Relative: "RIP + Displacement : In 64-bit mode, RIP-relative addressing uses a signed 32-bit displacement to calculate the effective address of the next instruction by sign-extend the 32-bit value and add to the 64-bit value in RIP." Intel Syntax examples- `PTR [displacement]` where `displacement` is either a number of symbol - the assembler will often let you skip the `PTR ` if it can figure it out - but I think it is safer to be verbose - the assembler will let you skip the `[]` if you are using a label - but again I think it is more clear that you mean that value at the label- `OFFSET [symbol]` can be used as a source if you want to use the address of the symbol itself as a number- `PTR [RegBase + displacement]`- `PTR [RegIdx * scale + displacement]`- `PTR [RegBase + RegIdx * scale + displacement]` `sumit.S` and `usesumit.S` Setup
###Code
# setup for mov example
appdir=os.getenv('HOME')
appdir=appdir + "/sum"
#print(movdir)
output=runTermCmd("[[ -d " + appdir + " ]] && rm -rf "+ appdir +
";mkdir " + appdir +
";cp ../src/Makefile ../src/10num.txt ../src/setup.gdb " + appdir)
#TermShellCmd("ls", cwd=movdir)
display(Markdown('''
- create a directory `mkdir sum; cd sum`
- create and write `sumit.S` and `usesumit.S` see below
- add a `Makefile` to automate assembling and linking
- we are going run the commands by hand this time to highlight the details
- add our `setup.gdb` to make working in gdb easier
- normally you would want to track everything in git
'''))
###Output
_____no_output_____
###Markdown
Lets try and write a reusable routine - lets assume that we have a symbol `XARRAY` that is the address of the data- lets assume to use our routine you need to pass the length of the array - len in `rbx`- let put the result in `rax`Think about our objective in these terms$$ rax = \sum_{i=0}^{rbx} XARRAY[rdi] $$right?Ok remember the tricky part is realizing that it is up to us to implement the idea of an array. - it is a data structure that we need to keep straight our head
###Code
display(Markdown(FileCodeBox(
file="../src/sumit.S",
lang="gas",
title="<b>CODE: asm - sumit.S",
h="100%",
w="107em"
)))
###Output
_____no_output_____
###Markdown
To assemble `sumit.S` into `sumit.o`
###Code
TermShellCmd("[[ -a sumit.o ]] && rm sumit.o; make sumit.o", cwd="../src", prompt='')
###Output
_____no_output_____
###Markdown
So how might we use our "fragment"Lets create a program that defines a `_start` routine and creates the memory locations that we can control. Lets create `usesum.S`Lets assume that- will set aside enough memory for an maximum of 1000 values in where we set the `XARRAY` symbol- we will allow the length actual length of data in `XARRAY` to be specified at a location marked by `XARRAY_LEN`.- we will store the result in a location marked by the symbol `sum`We will use our code by loading our data at XARRAY, updating XARRAY_LEN, executing the code and examining the result 1. The code should setup the memory we need2. setup the registers as needed for `sumIt`3. run `sumIt`4. store the results at the location of `sum`
###Code
display(Markdown(FileCodeBox(
file="../src/usesum.S",
lang="gas",
title="<b>CODE: asm - usesum.S",
h="100%",
w="107em"
)))
###Output
_____no_output_____
###Markdown
To assemble `usesum.S` into `usesum.o`
###Code
TermShellCmd("[[ -a usesum.o ]] && rm usesum.o; make usesum.o", cwd="../src", prompt='')
###Output
_____no_output_____
###Markdown
To link `usesum.o` and `sumit.o` into an executable `usesum`
###Code
TermShellCmd("[[ -a usesum ]] && rm usesum; make usesum", cwd="../src", prompt='')
###Output
_____no_output_____
###Markdown
Lets make some data!Lets create an ascii file with 10 numbers and then use a tool called `ascii2binary` to convert it into 8 byte signed integers
###Code
TermShellCmd("[[ -a 10num.bin ]] && rm 10num.bin; make 10num.bin", cwd=appdir, prompt='')
TermShellCmd("cat 10num.txt", cwd=appdir, pretext='$ cat 10num.txt', prompt='')
TermShellCmd("hexdump -v -C 10num.bin", cwd=appdir, pretext='$ hexdump -C 10num.bin', prompt='')
###Output
_____no_output_____
###Markdown
Let's make some "real" dataSome Unix tricks of the trade
###Code
TermShellCmd("[[ -a 100randomnum.bin ]] && rm 100randomnum.bin; make 100randomnum.bin", cwd=appdir, prompt='')
TermShellCmd("hexdump -C 100randomnum.bin | head -10", cwd=appdir, pretext="$ hexdump -C 100randomnum.bin | head -10", prompt='')
TermShellCmd("od -t d8 100randomnum.bin | head -10", cwd=appdir, pretext="$ od -t d8 100randomnum.bin | head -10", prompt='')
###Output
_____no_output_____
###Markdown
How to run `usesum` and load data with gdb
```
gdb -tui usesum
b _start
run
# restore lets us load memory from a file
restore 100randomnum.bin binary &XARRAY
# set the number of elements
set *((long long *)&XARRAY_LEN) = 100
# follow the value in xarray that is being added to our sum
display /1 ((long long *)(&XARRAY))[$rdi]
# now we can single step our way through or continue till we hit an int3
```
###Code
display(showDT())
###Output
_____no_output_____ |
ClassMaterial/06 - Smart Signatures/06 code/06.1d1_WSC_SmartSignatures_Security.ipynb | ###Markdown
Smart signatures: Transaction Fee Attack 06.1 Writing Smart Contracts Peter Gruber ([email protected]) 2022-01-12* Write and deploy smart signatures Setup See notebook 04.1; the lines below will always automatically load the functions in `algo_util.py`, the five accounts and the Purestake credentials
###Code
# Loading shared code and credentials
import sys, os
codepath = '..'+os.path.sep+'..'+os.path.sep+'sharedCode'
sys.path.append(codepath)
from algo_util import *
cred = load_credentials()
# Shortcuts to directly access the 3 main accounts
MyAlgo = cred['MyAlgo']
Alice = cred['Alice']
Bob = cred['Bob']
Charlie = cred['Charlie']
Dina = cred['Dina']
from algosdk import account, mnemonic
from algosdk.v2client import algod
from algosdk.future import transaction
from algosdk.future.transaction import PaymentTxn
from algosdk.future.transaction import AssetConfigTxn, AssetTransferTxn, AssetFreezeTxn
from algosdk.future.transaction import LogicSig, LogicSigTransaction
import algosdk.error
import json
import base64
import hashlib
from pyteal import *
# Initialize the algod client (Testnet or Mainnet)
algod_client = algod.AlgodClient(algod_token='', algod_address=cred['algod_test'], headers=cred['purestake_token'])
print(Alice['public'])
print(Bob['public'])
print(Charlie['public'])
###Output
HITPAAJ4HKANMP6EUYASXDUTCL653T7QMNHJL5NODL6XEGBM4KBLDJ2D2E
O2SLRPK4I4SWUOCYGGKHHUCFJJF5ORHFL76YO43FYTB7HUO7AHDDNNR5YA
5GIOBOLZSQEHTNNXWRJ6RGNPGCKWYJYUZZKY6YXHJVKFZXRB2YLDFDVH64
###Markdown
Check Purestake API
###Code
last_block = algod_client.status()["last-round"]
print(f"Last committed block is: {last_block}")
###Output
Last committed block is: 19804682
###Markdown
Clearing out Modesty Step 1: The programmer writes down the conditions as a PyTeal program
###Code
max_amount = Int(int(1*1E6)) # <---- 1e6 micro Algos = 1 Algo
modest_pyteal = And (
Txn.receiver() == Addr(Bob["public"]), # Receipient must be Bob
Txn.amount() <= max_amount # Requested amount must be smaller than max_amount
)
# Security missing (!!!) ... do not copy-paste
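# Hedged sketch added for illustration (not part of the original notebook):
# a safer signature would also pin the fee and other dangerous fields.
safe_pyteal = And(
    Txn.receiver() == Addr(Bob["public"]),              # recipient must be Bob
    Txn.amount() <= max_amount,                         # cap the withdrawal
    Txn.fee() <= Int(int(0.01*1E6)),                    # cap the fee Bob can attach
    Txn.rekey_to() == Global.zero_address(),            # forbid rekeying
    Txn.close_remainder_to() == Global.zero_address()   # forbid closing the account
)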
###Output
_____no_output_____
###Markdown
Step 2: Compile PyTeal -> Teal
###Code
modest_teal = compileTeal(modest_pyteal, Mode.Signature, version=3)
print(modest_teal)
###Output
#pragma version 3
txn Receiver
addr O2SLRPK4I4SWUOCYGGKHHUCFJJF5ORHFL76YO43FYTB7HUO7AHDDNNR5YA
==
txn Amount
int 1000000
<=
&&
###Markdown
Step 3: Compile Teal -> Bytecode for AVM
###Code
Modest = algod_client.compile(modest_teal)
Modest
###Output
_____no_output_____
###Markdown
Step 4: Alice funds and deploys the smart signature
###Code
# Step 1: prepare transaction
sp = algod_client.suggested_params()
amt = int(2.2*1e6)
txn = transaction.PaymentTxn(sender=Alice['public'], sp=sp, receiver=Modest['hash'], amt=amt)
# Step 2+3: sign and send
stxn = txn.sign(Alice['private'])
txid = algod_client.send_transaction(stxn)
# Step 4: wait for confirmation
txinfo = wait_for_confirmation(algod_client, txid)
###Output
Current round is 19804698.
Waiting for round 19804698 to finish.
Waiting for round 19804699 to finish.
Transaction AI5GNWQIOIJT5QMILVTRMAWLUU6P4PBR5R2KJEOS472PB2JAORGQ confirmed in round 19804700.
###Markdown
Step 5: Alice informs Bob
###Code
print("Alice communicates to Bob the following")
print("Compiled smart signature:", Modest['result'])
print("Address of smart signature: ", Modest['hash'])
###Output
Alice communicates to Bob the following
Compiled smart signature: AyABwIQ9JgEgdqS4vVxHJWo4WDGUc9BFSkvXROVf/YdzZcTD89HfAcYxBygSMQgiDhA=
Address of smart signature: B6FDPCIK7KUXF5TPTVU4EA6MMWYQEAWYA2UK2VS2VQMPO3QR5OQTYP6MUQ
###Markdown
Step 6: Bob proposes a transaction with a very high TX fee
###Code
# Step 1: prepare TX
sp = algod_client.suggested_params()
sp.fee = int(2.38e6) # <---------- WOW! 2.38 ALGO transaction fee
sp.flat_fee = True
withdrawal_amt = int(0.00001*1e6) # <---------- small
txn = PaymentTxn(sender=Modest['hash'], sp=sp,
receiver=Bob['public'], amt=withdrawal_amt)
# Step 2: sign TX <---- This step is different!
encodedProg = Modest['result'].encode()
program = base64.decodebytes(encodedProg)
lsig = LogicSig(program)
stxn = LogicSigTransaction(txn, lsig)
# Step 3: send
txid = algod_client.send_transaction(stxn)
# Step 4: wait for confirmation
txinfo = wait_for_confirmation(algod_client, txid)
###Output
Current round is 19804767.
Waiting for round 19804767 to finish.
Waiting for round 19804768 to finish.
Transaction 6QPUETD7EGXHRBOWEC2Z3JWJ3NO7O7FXHCEQYV5M5OCW2ALIX6HQ confirmed in round 19804769.
###Markdown
Step 7: The Money is gone
###Code
# Check on Algoexplorer
print('https://testnet.algoexplorer.io/address/'+ Modest['hash'])
###Output
https://testnet.algoexplorer.io/address/B6FDPCIK7KUXF5TPTVU4EA6MMWYQEAWYA2UK2VS2VQMPO3QR5OQTYP6MUQ
|
wprowadzenie_5.ipynb | ###Markdown
super
###Code
class A(object):
def __init__(self, **kwargs):
print('A.__init__ with {}'.format(kwargs))
super(A, self).__init__()
class B(A):
def __init__(self, **kwargs):
print('B.__init__ with {}'.format(kwargs))
super(B, self).__init__(**kwargs)
class C(A):
def __init__(self, **kwargs):
print('C.__init__ with {}'.format(kwargs))
super(C, self).__init__(**kwargs)
class D(B, C):
def __init__(self):
print('D.__init__')
super(D, self).__init__(a=1, b=2, x=3)
print(D.mro())
D()
class A(object):
def __init__(self, a):
self.a = a
class B(A):
def __init__(self, b, **kw):
self.b = b
super(B, self).__init__(**kw)
class C(A):
def __init__(self, c, **kw):
self.c = c
super(C, self).__init__(**kw)
class D(B, C):
def __init__(self, a, b, c):
super(D, self).__init__(a=a, b=b, c=c)
obj = D(1,2,3)
obj.a, obj.b, obj.c
class First(object):
def __init__(self):
print "first"
class Second(First):
def __init__(self):
print "second before super"
super(Second, self).__init__()
print "second after super"
class Third(First):
def __init__(self):
print "third before super"
super(Third, self).__init__()
print "third after super"
class Fourth(Second, Third):
def __init__(self):
print "fourth before super"
super(Fourth, self).__init__()
print "that's it"
Fourth()
class First(object):
def __init__(self):
print "first"
class Second(First):
def __init__(self):
print "second before super"
super(Second, self).__init__(a=2)
print "second after super"
class Third(First):
def __init__(self, a):
print "third before super"
super(Third, self).__init__()
print "third after super"
class Fourth(Second, Third):
def __init__(self):
print "fourth before super"
super(Fourth, self).__init__()
print "that's it"
Fourth()
Second()
###Output
second before super
###Markdown
Virtual methods?
###Code
class A():
def suma(self, a, b):
return a + b
class AzMnozeniem(A):
def mnozenie(self, a, b):
return a * b
k = AzMnozeniem()
k.mnozenie(3, 4)
k.suma(3, 4)
###Output
_____no_output_____
###Markdown
Przeciฤ
ลผanie operatorรณw
###Code
class A(object):
def __init__(self, a):
self.a = a
def __add__(self, other):
self.a += other.a
return self
(A(4) + A(5)).a
###Output
_____no_output_____
###Markdown
```
object.__lt__(self, other)
object.__le__(self, other)
object.__eq__(self, other)
object.__ne__(self, other)
object.__gt__(self, other)
object.__ge__(self, other)
object.__add__(self, other)
object.__sub__(self, other)
object.__mul__(self, other)
object.__floordiv__(self, other)
object.__mod__(self, other)
object.__divmod__(self, other)
object.__pow__(self, other[, modulo])
object.__lshift__(self, other)
object.__rshift__(self, other)
object.__and__(self, other)
object.__xor__(self, other)
object.__or__(self, other)
```
Abstract classes
###Code
import abc
class Person():
__metaclass__ = abc.ABCMeta
def __init__(self, name):
self.name = name
@abc.abstractmethod
def say_hello(self):
pass
class Programmer(Person):
def __init__(self, name, language):
Person.__init__(self, name)
self.language = language
def say_hello(self):
print('Hello! I\'m %s and I write in %s.' % (self.name, self.language))
p = Person(name="Duck")
p
p = Programmer(name="Duck", language="Duck++")
p
p.say_hello()
###Output
Hello! I'm Duck and I write in Duck++.
###Markdown
Attributes
###Code
class A(object):
b = "0.001"
def __init__(self, a):
self.a = a
A.b
A.a
A(234).a
###Output
_____no_output_____
###Markdown
Fun facts
###Code
class A():
j = 0
for i in range(10):
j += i
A.j
###Output
_____no_output_____
###Markdown
Speed
###Code
import math
import random
class Pole(object):
def __init__(self, r=2.):
self.r = r
def oblicz(self):
return math.pi * (self.r**2)
n = 1000
def get_mean_cls(n=n):
return sum([Pole(random.random()).oblicz() for i in range(n)])/float(n)
def get_mean(n=n):
return sum([math.pi * (random.random()**2) for r in range(n)])/float(n)
%timeit get_mean_cls()
%timeit get_mean()
###Output
1000 loops, best of 3: 286 ยตs per loop
###Markdown
Property
###Code
class Kaczka(object):
def __init__(self, dl_skrzydla):
self.dl_skrzydla = dl_skrzydla
def plyn(self):
print "Chlup chlup"
k = Kaczka(124)
k.plyn()
k.dl_skrzydla
class Kaczuszka(Kaczka):
def __init__(self, dl_skrzydla):
self._dl_skrzydla = dl_skrzydla
@property
def dl_skrzydla(self):
return self._dl_skrzydla / 2.
@dl_skrzydla.setter
def dl_skrzydla(self, value):
self._dl_skrzydla = value
k = Kaczuszka(124)
k.plyn()
k.dl_skrzydla
k.dl_skrzydla = 100
k.dl_skrzydla
k.dl_skrzydla += 50
k.dl_skrzydla
###Output
_____no_output_____ |
notebooks/Plot_AxionLPlasmonRates.ipynb | ###Markdown
Notebook to calculate the photon spectra in IAXO for the Primakoff and LPlasmon fluxes
###Code
import sys
sys.path.append('../src')
from Params import *
from PlotFuncs import *
from Like import *
from AxionFuncs import *
import matplotlib.patheffects as pe
path_effects=[pe.Stroke(linewidth=7, foreground='k'), pe.Normal()]
###Output
_____no_output_____
###Markdown
First we just look at the LPlasmon flux from the radiative zone (RZ) by taking E_res = 10e-3 and taking a range of interesting masses
###Code
fig,ax = MySquarePlot(r"$E_\gamma$ [eV]",r" ${\rm d}N_\gamma/{\rm d}E_\gamma$ [eV$^{-1}$]",lfs=37)
# Masses we are interested
m_a_vals = [1e-3,2e-3,3e-3,4e-3,5e-3,6e-3]
cols = cm.rainbow(linspace(0,1,size(m_a_vals)))
# Initialise binning:
E_res = 10e-3
E_max = 20.0
nfine = 10
nE_bins = 1000
Ei,E_bins = EnergyBins(E_res,E_max,nfine,nE_bins)
# Fluxes for g = 1e-10
Flux10_0 = AxionFlux_Primakoff_PlasmonCorrection(1e-10,Ei)
Flux10_1 = AxionFlux_Lplasmon(1e-10,Ei,B_model_seismic())
# Loop over masses of interest and plot each signal
for m_a,col in zip(m_a_vals,cols):
dN0 = PhotonNumber_gag(Ei,Flux10_0,m_a,g=5e-11,Eres=Ei[0])
plt.plot(Ei*1000,dN0/1000,'-',label=str(int(m_a*1000)),lw=3,color=col,path_effects=path_effects)
dN = PhotonNumber_gag(Ei,Flux10_0+Flux10_1,m_a,g=5e-11,Eres=Ei[0])
plt.plot(Ei*1000,dN/1000,'--',lw=3,color=col)
print('m_a = ',m_a,'Number of LPlasmon events = ',trapz(dN,Ei)-trapz(dN0,Ei))
# Tweaking:
plt.xlim(left=Ei[0]*1000)
plt.xlim(right=Ei[-1]*1000)
plt.ylim(bottom=5e-9,top=1e4)
plt.xscale('log')
plt.yscale('log')
leg = plt.legend(fontsize=30,frameon=True,title=r'$m_a$ [meV]',loc="lower right",framealpha=1,edgecolor='k',labelspacing=0.2)
plt.setp(leg.get_title(),fontsize=30)
leg.get_frame().set_linewidth(2.5)
plt.gcf().text(0.7,0.21,r'$E_{\rm res} = 10$ eV',horizontalalignment='right',fontsize=30)
plt.gcf().text(0.7,0.16,r'$g_{a\gamma} = 5\times10^{-11}$ GeV$^{-1}$',horizontalalignment='right',fontsize=30)
dN1_ref = PhotonNumber_gag(Ei,Flux10_0,1e-6,g=5e-11,Eres=Ei[0])
xtxt = Ei[500:3000]*1000
ytxt = 1.18*dN1_ref[500:3000]/1000
txt = CurvedText(xtxt,ytxt,text=r'Primakoff',va = 'bottom',axes = ax,fontsize=30)
plt.gcf().text(0.18,0.64,r'LPlasmon',fontsize=30,rotation=46)
plt.gcf().text(0.15,0.83,r'{\bf Vacuum mode}',fontsize=35)
# Save figure
MySaveFig(fig,'XraySpectrum_lowmasses')
###Output
m_a = 0.001 Number of LPlasmon events = 24783.86446893381
m_a = 0.002 Number of LPlasmon events = 19011.863561040605
m_a = 0.003 Number of LPlasmon events = 7382.401341175646
m_a = 0.004 Number of LPlasmon events = 1248.7896789739898
m_a = 0.005 Number of LPlasmon events = 506.2437878559722
m_a = 0.006 Number of LPlasmon events = 303.5823162651359
###Markdown
Now do a similar thing for the buffer gas phase but change the pressure values
###Code
m_a = 1.0e-1
pos = 1/array([1e100,10,5,2,1.1,1.0000001]) # Pressure offsets from p_max
T_operating = 1.8
labs = array(['1000','10','5','2','1.1','1'])
# KSVZ axion
g = 2e-10*m_a*1.92
fig,ax = MySquarePlot(r"$E_\gamma$ [eV]",r" ${\rm d}N_\gamma/{\rm d}E_\gamma$ [eV$^{-1}$]",lfs=37)
cols = cm.rainbow(linspace(0,1,size(pos)))
# Finer binning that before:
E_res = 1e-3
E_max = 40.0
nfine = 10
nE_bins = 1000
Ei = logspace(log10(E_res),log10(E_max),1000)
# Fluxes again:
Flux10_0 = AxionFlux_Primakoff_PlasmonCorrection(1e-10,Ei)
Flux10_1 = AxionFlux_Lplasmon(1e-10,Ei,B_model_seismic())
dN1_0 = PhotonNumber_gag_BufferGas(Ei,Flux10_0,m_a,0.99999999*(m_a)**2.0*T_operating/0.02,g=g,Eres=Ei[0])
dN2_0 = PhotonNumber_gag_BufferGas(Ei,Flux10_0+Flux10_1,m_a,0.999999*(m_a)**2.0*T_operating/0.02,g=5e-11,Eres=Ei[0])
for po,col,label in zip(pos,cols,labs):
pressure = po*(m_a)**2.0*T_operating/0.02
lab = r'$m_a/$'+label
if po<1e-10:
lab = '0'
if lab==r'$m_a/$1':
lab = r'$m_a$'
dN0 = PhotonNumber_gag_BufferGas(Ei,Flux10_0,m_a,pressure,g=g,Eres=Ei[0])
plt.plot(Ei*1000,dN0/1000,'-',label=lab,lw=3,color=col,path_effects=path_effects)
dN = PhotonNumber_gag_BufferGas(Ei,Flux10_0+Flux10_1,m_a,pressure,g=g,Eres=Ei[0])
plt.plot(Ei*1000,dN/1000,'--',lw=3,color=col)
print('m_a = ',m_a,'pressure_offset = ',po,'Number of LPlasmon events = ',trapz(dN,Ei)-trapz(dN0,Ei))
plt.xlim(left=Ei[0]*1000)
plt.xlim(right=Ei[-1]*1000)
plt.ylim(bottom=1e-24,top=5e3)
plt.xscale('log')
plt.yscale('log')
leg = plt.legend(fontsize=30,frameon=True,title=r'$m_\gamma = $',loc="lower right",framealpha=1,edgecolor='k',labelspacing=0.2)
plt.setp(leg.get_title(),fontsize=30)
leg.get_frame().set_linewidth(2.5)
plt.gcf().text(0.65,0.26,r'$E_{\rm res} = 1$ eV',horizontalalignment='right',fontsize=30)
plt.gcf().text(0.65,0.21,r'$m_a = 10^{-1}$ eV',horizontalalignment='right',fontsize=30)
plt.gcf().text(0.65,0.16,r'$g_{a\gamma} = 3.84\times10^{-11}$ GeV$^{-1}$',horizontalalignment='right',fontsize=30)
txt = CurvedText(x = Ei[650:]*1000,y = 2*dN1_0[650:]/1000,text=r'Primakoff',va = 'bottom',axes = ax,fontsize=30)
#plt.gcf().text(0.18,0.7,r'LPlasmon',fontsize=30,rotation=46)
fs = 20
plt.gcf().text(0.13,0.41,'Upper layers',fontsize=fs)
plt.gcf().text(0.2,0.50,'Tachocline',fontsize=fs)
plt.gcf().text(0.3,0.68,'Radiative zone',fontsize=fs)
plt.plot([2,2.5],[1.8*0.7e-14,1.8*1e-16],'k--')
plt.plot([5,8],[1.8*0.2e-10,1.8*0.1e-12],'k--')
plt.plot([20,40],[1.8*0.6e-4,1.8*0.1e-6],'k--')
plt.gcf().text(0.15,0.83,r'{\bf $^4$He Buffer gas mode}',fontsize=35)
MySaveFig(fig,'XraySpectrum_BufferGas')
###Output
m_a = 0.1 pressure_offset = 1e-100 Number of LPlasmon events = 0.0017295782311865793
m_a = 0.1 pressure_offset = 0.1 Number of LPlasmon events = 0.0010267363345910496
m_a = 0.1 pressure_offset = 0.2 Number of LPlasmon events = 0.001295534874845572
m_a = 0.1 pressure_offset = 0.5 Number of LPlasmon events = 0.003171707501060439
m_a = 0.1 pressure_offset = 0.9090909090909091 Number of LPlasmon events = 0.03418154382575267
m_a = 0.1 pressure_offset = 0.9999999000000099 Number of LPlasmon events = 0.06042514048749581
|
courses/C2_ Build Your Model/SOLUTIONS/DSE C2 L2_ Practice with OOP and Pythonic Package Development.ipynb | ###Markdown
DSE Course 2, Lab 2: Practice with OOP and Pythonic Package Development**Instructor**: Wesley Beckner**Contact**: [email protected]. In this lab we will practice object oriented programming and creating packages in python. We will also demonstrate what classes, objects, methods, and attributes are.--- Part 1: Classes, Instances, Methods, and Attributes. A class is created with the reserved word `class`. A class can have attributes:
###Code
# define a class
class MyClass:
some_attribute = 5
###Output
_____no_output_____
###Markdown
We use the **_class blueprint_** _MyClass_ to create an **_instance_**. We can now access attributes belonging to that class:
###Code
# create instance
instance = MyClass()
# access attributes of the instance of MyClass
instance.some_attribute
###Output
_____no_output_____
###Markdown
attributes can be changed:
###Code
instance.some_attribute = 50
instance.some_attribute
###Output
_____no_output_____
###Markdown
In practice we always use the `__init__()` function, which is executed when the class is instantiated.
###Code
class Pokeball:
def __init__(self, contains=None, type_name="poke ball"):
self.contains = contains
self.type_name = type_name
self.catch_rate = 0.50 # note this attribute is not accessible upon init
# empty pokeball
pokeball1 = Pokeball()
# used pokeball of a different type
pokeball1 = Pokeball("Pikachu", "master ball")
###Output
_____no_output_____
###Markdown
> What is the special keyword [`self`](http://neopythonic.blogspot.com/2008/10/why-explicit-self-has-to-stay.html) doing? The `self` parameter is a reference to the current instance of the class and is used to access variables belonging to the class. Classes can also contain methods:
###Code
import random
class Pokeball:
def __init__(self, contains=None, type_name="poke ball"):
self.contains = contains
self.type_name = type_name
self.catch_rate = 0.50 # note this attribute is not accessible upon init
# the method catch, will update self.contains, if a catch is successful
# it will also use self.catch_rate to set the performance of the catch
def catch(self, pokemon):
if self.contains == None:
if random.random() < self.catch_rate:
self.contains = pokemon
print(f"{pokemon} captured!")
else:
print(f"{pokemon} escaped!")
pass
else:
print("pokeball is not empty!")
pokeball = Pokeball()
pokeball.catch("picachu")
pokeball.contains
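# (illustrative note, not part of the original lab) `self` is simply the instance
# passed in automatically: the call above is equivalent to calling through the
# class and passing the instance explicitly, e.g.
#   Pokeball.catch(pokeball, "picachu")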
###Output
_____no_output_____
###Markdown
L2 Q1: Create a release method for the class Pokeball:
###Code
class Pokeball:
def __init__(self, contains=None, type_name="poke ball"):
self.contains = contains
self.type_name = type_name
self.catch_rate = 0.50 # note this attribute is not accessible upon init
# the method catch, will update self.contains, if a catch is successful
# it will also use self.catch_rate to set the performance of the catch
def catch(self, pokemon):
if self.contains == None:
if random.random() < self.catch_rate:
self.contains = pokemon
print(f"{pokemon} captured!")
else:
print(f"{pokemon} escaped!")
pass
else:
print("pokeball is not empty!")
def release(self):
if self.contains ==None:
print("Pokeball is already empty")
else:
print(self.contains, "has been released")
self.contains = None
pokeball = Pokeball()
pokeball.catch("picachu")
pokeball.contains
pokeball.release()
###Output
Pokeball is already empty
###Markdown
Inheritance: Inheritance allows a child class to adopt the methods/attributes of a parent class
###Code
class MasterBall(Pokeball):
pass
masterball = MasterBall()
masterball.type_name
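# (illustrative note, not part of the original lab) the inherited relationship and
# methods can be checked directly:
#   isinstance(masterball, Pokeball)  # -> True
#   masterball.catch("pikachu")       # catch() comes from Pokeball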
###Output
_____no_output_____
###Markdown
HMMM we don't like that type name. Let's make sure we change some of the inherited attributes! We'll do this again with the `__init__` function
###Code
class MasterBall(Pokeball):
def __init__(self, contains=None, type_name="Masterball", catch_rate=0.8):
self.contains = contains
self.type_name = type_name
self.catch_rate = catch_rate
masterball = MasterBall()
masterball.type_name
masterball.catch("charmander")
###Output
charmander captured!
###Markdown
We can also write this another way:
###Code
class MasterBall(Pokeball):
def __init__(self, contains=None, type_name="Masterball"):
Pokeball.__init__(self, contains, type_name)
self.catch_rate = 0.8
masterball = MasterBall()
masterball.type_name
masterball = MasterBall()
masterball.catch("charmander")
###Output
charmander captured!
###Markdown
The keyword `super` will let us write even more succinctly:
###Code
class MasterBall(Pokeball):
def __init__(self, contains=None, type_name="Masterball"):
super().__init__(contains, type_name)
self.catch_rate = 0.8
masterball = MasterBall()
masterball.catch("charmander")
###Output
charmander captured!
###Markdown
L2 Q2: Write another class object called `GreatBall` that inherits the properties of `Pokeball`, has a `catch_rate` of 0.6, and a `type_name` of Greatball
###Code
# Code Cell for L2 Q2
class GreatBall(Pokeball):
def __init__(self, contains=None, type_name="Greatball"):
Pokeball.__init__(self, contains, type_name)
self.catch_rate = 0.6
great = GreatBall()
###Output
_____no_output_____
###Markdown
Interacting Objects L2 Q3Write another class object called `Pokemon`. It has the [attributes](https://bulbapedia.bulbagarden.net/wiki/Type):* name* weight* speed* typeNow create a class object called `FastBall`, it inherits the properties of `Pokeball` but has a new condition on `catch` method: if pokemon.speed > 100 then there is 100% chance of catch success.> what changes do you have to make to the way we've been interacting with pokeball to make this new requirement work?
###Code
class Pokeball:
def __init__(self, contains=None, type_name="poke ball"):
self.contains = contains
self.type_name = type_name
self.catch_rate = 0.50 # note this attribute is not accessible upon init
# the method catch, will update self.contains, if a catch is successful
# it will also use self.catch_rate to set the performance of the catch
def catch(self, pokemon):
if self.contains == None:
if random.random() < self.catch_rate:
self.contains = pokemon
print(f"{pokemon} captured!")
else:
print(f"{pokemon} escaped!")
pass
else:
print("pokeball is not empty!")
def release(self):
if self.contains ==None:
print("Pokeball is already empty")
else:
print(self.contains, "has been released")
self.contains = None
class Pokemon():
def __init__(self, name, weight, speed, type_):
self.name = name
self.weight = weight
self.speed = speed
self.type_ = type_
class FastBall(Pokeball):
def __init__(self, contains=None, type_name="Fastball"):
Pokeball.__init__(self, contains, type_name)
self.catch_rate = 0.6
def catch_fast(self, pokemon):
if pokemon.speed > 100:
if self.contains == None:
self.contains = pokemon.name
print(pokemon.name, "has been captured")
else:
print("Pokeball is not empty")
else:
self.catch(pokemon.name)
fast = FastBall()
mewtwo = Pokemon('Mewtwo',18,110,'Psychic')
print(fast.contains)
fast.catch_fast(mewtwo)
print(fast.contains)
fast.catch_fast(mewtwo)
type(mewtwo) == Pokemon
###Output
_____no_output_____
###Markdown
L2 Q4: In the above task, did you have to write any code to test that your new classes worked?! We will talk about that more at a later time, but for now, wrap any testing that you did into a new function called `test_classes` in the code cell below
###Code
# Code Cell for L2 Q4
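# One possible sketch (not part of the original solutions). It assumes the
# Pokeball, MasterBall, GreatBall, FastBall and Pokemon classes defined in the
# cells above, and only checks deterministic behaviour (no random catches).
def test_classes():
    # subclass attributes
    assert MasterBall().catch_rate == 0.8
    assert GreatBall().catch_rate == 0.6
    # inheritance: a FastBall is still a Pokeball
    assert isinstance(FastBall(), Pokeball)
    # a pokemon with speed > 100 is always caught by a FastBall
    fast_ball = FastBall()
    speedy = Pokemon('Mewtwo', 18, 110, 'Psychic')
    fast_ball.catch_fast(speedy)
    assert fast_ball.contains == 'Mewtwo'
    # releasing empties the ball again
    fast_ball.release()
    assert fast_ball.contains is None
    print("all class tests passed")

test_classes()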
###Output
_____no_output_____
###Markdown
Part 2: Our next objective is to organize our project code into objects and methods. Let's think and discuss: how could your work be organized into OOP? L2 Q5: Paste the functions used so far in your project in the code cell below. Then, in the markdown cell below that, list your ideas for objects vs. methods and attributes
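For illustration only (the names below are hypothetical placeholders, not tied to any particular project), the usual pattern is to turn shared arguments into attributes and related functions into methods:
```python
# before: free functions that keep passing the same data around
# def load_data(path): ...
# def clean_data(df): ...

# after: one object owns the shared state and the steps become methods
class Pipeline:
    def __init__(self, path):
        self.path = path        # shared argument becomes an attribute
        self.df = None

    def load_data(self):        # former free function, now a method
        self.df = read_file(self.path)   # read_file is a placeholder helper
        return self.df
```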
###Code
# Code Cell for L2 Q5
###Output
_____no_output_____
###Markdown
Text cell for L2 Q5 L2 Q6: Write your functions into classes and methods
###Code
# Code Cell for L2 Q6
###Output
_____no_output_____ |
005_Test_Internet_Speed/005_Test_Internet_Speed.ipynb | ###Markdown
All the IPython Notebooks in python tips series by Dr. Milan Parmar are available @ **[GitHub](https://github.com/milaan9/91_Python_Tips/blob/main/000_Convert_Jupyter_Notebook_to_PDF.ipynb)** Python Program to Test Internet Speed
###Code
pip install speedtest-cli
'''
Python Program to Test internet speed
'''
# Import the necessary module!
import speedtest
# Create an instance of Speedtest and call it st
st = speedtest.Speedtest()
# Fetch the download speed
# Use of download method to fetch the speed and store in d_st
download = st.download()
# Fetch the upload speed
# Use of upload method to fetch the speed and store in u_st
upload = st.upload()
# Converting into Mbps
download = download/1000000
upload = upload/1000000
# Display the result
print("Your โฌ Download speed is", round(download, 3), 'Mbps')
print("Your โซ Upload speed is", round(upload, 3), 'Mbps')
# Fetch the ping
st.get_servers([])
ping = st.results.ping
# Display the result
print("Your Ping is", ping)
'''
Python Program to Test internet speed using Tkinter GUI
'''
# Import the necessary modules!
import speedtest
from tkinter.ttk import *
from tkinter import *
import threading
root = Tk()
root.title("Test Internet Speed")
root.geometry('380x260')
root.resizable(False, False)
root.configure(bg="#ffffff")
root.iconbitmap('speed.ico')
# design Label
Label(root, text ='TEST INTERNET SPEED', bg='#ffffff', fg='#404042', font = 'arial 23 bold').pack()
Label(root, text ='by @milaan9', bg='#fff', fg='#404042', font = 'arial 15 bold').pack(side =BOTTOM)
# making label for show internet speed
down_label = Label(root, text="โฌ Download Speed - ", bg='#fff', font = 'arial 10 bold')
down_label.place(x = 90, y= 50)
up_label = Label(root, text="โซ Upload Speed - ", bg='#fff', font = 'arial 10 bold')
up_label.place(x = 90, y= 80)
ping_label = Label(root, text="Your Ping - ", bg='#fff', font = 'arial 10 bold')
ping_label.place(x = 90, y= 110)
# function for check speed
def check_speed():
global download_speed, upload_speed
speed_test= speedtest.Speedtest()
download= speed_test.download()
upload = speed_test.upload()
download_speed = round(download / (10 ** 6), 2)
upload_speed = round(upload / (10 ** 6), 2)
# function for progress bar and update text
def update_text():
thread=threading.Thread(target=check_speed, args=())
thread.start()
progress=Progressbar(root, orient=HORIZONTAL,
length=210, mode='indeterminate')
progress.place(x = 85, y = 140)
progress.start()
while thread.is_alive():
root.update()
pass
down_label.config(text="โฌ Download Speed - "+str(download_speed)+"Mbps")
up_label.config(text="โซ Upload Speed - "+str(upload_speed)+"Mbps")
# Fetch the ping
st.get_servers([])
ping = st.results.ping
ping_label.config(text="Your Ping is - "+str(ping))
progress.stop()
progress.destroy()
# button for call to function
button = Button(root, text="Check Speed โถ", width=30, bd = 0, bg = '#404042', fg='#fff', pady = 5, command=update_text)
button.place(x=85, y = 170)
root.mainloop()
###Output
_____no_output_____
###Markdown
All the IPython Notebooks in **Python Mini-Projects** lecture series by **[Dr. Milaan Parmar](https://www.linkedin.com/in/milaanparmar/)** are available @ **[GitHub](https://github.com/milaan9/91_Python_Mini_Projects)** Python Program to Test Internet Speed
###Code
pip install speedtest-cli
'''
Python Program to Test internet speed
'''
# Import the necessary module!
import speedtest
# Create an instance of Speedtest and call it st
st = speedtest.Speedtest()
# Fetch the download speed
# Use of download method to fetch the speed and store in d_st
download = st.download()
# Fetch the upload speed
# Use of upload method to fetch the speed and store in u_st
upload = st.upload()
# Converting into Mbps
download = download/1000000
upload = upload/1000000
# Display the result
print("Your โฌ Download speed is", round(download, 3), 'Mbps')
print("Your โซ Upload speed is", round(upload, 3), 'Mbps')
# Fetch the ping
st.get_servers([])
ping = st.results.ping
# Display the result
print("Your Ping is", ping)
'''
Python Program to Test internet speed using Tkinter GUI
'''
# Import the necessary modules!
import speedtest
from tkinter.ttk import *
from tkinter import *
import threading
root = Tk()
root.title("Test Internet Speed")
root.geometry('380x260')
root.resizable(False, False)
root.configure(bg="#ffffff")
root.iconbitmap('speed.ico')
# design Label
Label(root, text ='TEST INTERNET SPEED', bg='#ffffff', fg='#404042', font = 'arial 23 bold').pack()
Label(root, text ='by @milaan9', bg='#fff', fg='#404042', font = 'arial 15 bold').pack(side =BOTTOM)
# making label for show internet speed
down_label = Label(root, text="โฌ Download Speed - ", bg='#fff', font = 'arial 10 bold')
down_label.place(x = 90, y= 50)
up_label = Label(root, text="โซ Upload Speed - ", bg='#fff', font = 'arial 10 bold')
up_label.place(x = 90, y= 80)
ping_label = Label(root, text="Your Ping - ", bg='#fff', font = 'arial 10 bold')
ping_label.place(x = 90, y= 110)
# function for check speed
def check_speed():
global download_speed, upload_speed
speed_test= speedtest.Speedtest()
download= speed_test.download()
upload = speed_test.upload()
download_speed = round(download / (10 ** 6), 2)
upload_speed = round(upload / (10 ** 6), 2)
# function for progress bar and update text
def update_text():
thread=threading.Thread(target=check_speed, args=())
thread.start()
progress=Progressbar(root, orient=HORIZONTAL,
length=210, mode='indeterminate')
progress.place(x = 85, y = 140)
progress.start()
while thread.is_alive():
root.update()
pass
down_label.config(text="โฌ Download Speed - "+str(download_speed)+"Mbps")
up_label.config(text="โซ Upload Speed - "+str(upload_speed)+"Mbps")
# Fetch the ping
st.get_servers([])
ping = st.results.ping
ping_label.config(text="Your Ping is - "+str(ping))
progress.stop()
progress.destroy()
# button for call to function
button = Button(root, text="Check Speed โถ", width=30, bd = 0, bg = '#404042', fg='#fff', pady = 5, command=update_text)
button.place(x=85, y = 170)
root.mainloop()
'''
Python Program to Find IP Address of website
'''
# importing socket library
import socket
def get_hostname_IP():
hostname = input("Please enter website address(URL):")
try:
print (f'Hostname: {hostname}')
print (f'IP: {socket.gethostbyname(hostname)}')
except socket.gaierror as error:
print (f'Invalid Hostname, error raised is {error}')
get_hostname_IP()
###Output
Please enter website address(URL):www.facebook.com
Hostname: www.facebook.com
IP: 31.13.79.35
|
courses/modsim2018/matheuspiquini/Task17-1.ipynb | ###Markdown
Lucas_task 15 - Motor Control Introduction to modeling and simulation of human movement: https://github.com/BMClab/bmc/blob/master/courses/ModSim2018.md Implement a simulation of the ankle joint model using the parameters from Thelen (2003) and Elias (2014)
###Code
import numpy as np
import pandas as pd
%matplotlib notebook
import matplotlib.pyplot as plt
import math
from Muscle import Muscle
Lslack = 2.4*0.09 # tendon slack length
Lce_o = 0.09 # optimal muscle fiber length
Fmax = 1400 #maximal isometric DF force
alpha = 7*math.pi/180 # DF muscle fiber pennation angle
dt = 0.001
dorsiflexor = Muscle(Lce_o=Lce_o, Fmax=Fmax, Lslack=Lslack, alpha=alpha, dt = dt)
soleus = Muscle(Lce_o=0.049, Fmax=8050, Lslack=0.289, alpha=25*np.pi/180, dt = dt)
soleus.Fmax
###Output
_____no_output_____
###Markdown
Muscle properties Parameters from Nigg & Herzog (2006).
###Code
Umax = 0.04 # SEE strain at Fmax
width = 0.63 # Max relative length change of CE
###Output
_____no_output_____
###Markdown
Activation dynamics parameters
###Code
a = 1
u = 1 #Initial conditional for Brain's activation
#b = .25*10#*Lce_o
###Output
_____no_output_____
###Markdown
Subject's anthropometricsParameters obtained experimentally or from Winter's book.
###Code
Mass = 75 #total body mass (kg)
Lseg = 0.26 #segment length (m)
m = 1*Mass #foot mass (kg)
g = 9.81 #acceleration of gravity (m/s2)
hcm = 0.85 #distance from ankle joint to center of mass of the body(m)
I = 4/3*m*hcm**2#moment of inertia
legAng = math.pi/2 #angle of the leg with horizontal (90 deg)
As_TA = np.array([30.6, -7.44e-2, -1.41e-4, 2.42e-6, 1.5e-8]) / 100 # at [m] instead of [cm]
# Coefs for moment arm for ankle angle
Bs_TA = np.array([4.3, 1.66e-2, -3.89e-4, -4.45e-6, -4.34e-8]) / 100 # at [m] instead of [cm]
As_SOL = np.array([32.3, 7.22e-2, -2.24e-4, -3.15e-6, 9.27e-9]) / 100 # at [m] instead of [cm]
Bs_SOL = np.array([-4.1, 2.57e-2, 5.45e-4, -2.22e-6, -5.5e-9]) / 100 # at [m] instead of [cm]
###Output
_____no_output_____
###Markdown
Initial conditions
###Code
phi = 5*np.pi/180
phid = 0 #zero velocity
Lm0 = 0.31 #initial total length of the muscle
dorsiflexor.Lnorm_ce = 1 #norm
soleus.Lnorm_ce = 1 #norm
t0 = 0 #Initial time
tf = 60 #Final Time
t = np.arange(t0,tf,dt) # time array
# preallocating
F = np.empty((t.shape[0],2))
phivec = np.empty(t.shape)
Fkpe = np.empty(t.shape)
FiberLen = np.empty(t.shape)
TendonLen = np.empty(t.shape)
a_dynamics = np.empty(t.shape)
Moment = np.empty(t.shape)
x = np.empty(t.shape)
y = np.empty(t.shape)
z = np.empty(t.shape)
q = np.empty(t.shape)
###Output
_____no_output_____
###Markdown
Simulation - Series
###Code
def momentArmDF(phi):
'''
Calculate the tibialis anterior moment arm according to Elias et al (2014)
Input:
phi: Ankle joint angle in radians
Output:
Rarm: TA moment arm
'''
# Consider neutral ankle position as zero degrees
phi = phi*180/np.pi # converting to degrees
Rf = 4.3 + 1.66E-2*phi + -3.89E-4*phi**2 + -4.45E-6*phi**3 + -4.34E-8*phi**4
Rf = Rf/100 # converting to meters
return Rf
def ComputeTotalLengthSizeTA(phi):
'''
Calculate TA MTU length size according to Elias et al (2014)
Input:
phi: ankle angle
'''
phi = phi*180/math.pi # converting to degrees
Lm = 30.6 + -7.44E-2*phi + -1.41E-4*phi**2 + 2.42E-6*phi**3 + 1.5E-8*phi**4
Lm = Lm/100
return Lm
def ComputeMomentJoint(Rf_TA, Fnorm_tendon_TA, Fmax_TA, Rf_SOL, Fnorm_tendon_SOL, Fmax_SOL, m, g, phi):
'''
Inputs:
RF = Moment arm
Fnorm_tendon = Normalized tendon force
m = Segment Mass
g = Acceleration of gravity
Fmax= maximal isometric force
Output:
M = Total moment with respect to joint
'''
M = (-0.65*m*g*hcm*phi + Rf_TA*Fnorm_tendon_TA*Fmax_TA
+ Rf_SOL*Fnorm_tendon_SOL*Fmax_SOL
+ m*g*hcm*np.sin(phi))
return M
def ComputeAngularAcelerationJoint(M, I):
'''
Inputs:
M = Total moment with respect to joint
I = Moment of Inertia
Output:
phidd = angular acceleration of the joint
'''
phidd = M/I
return phidd
def computeMomentArmJoint(theta, Bs):
# theta - joint angle (degrees)
# Bs - coefficients of the polynomial
auxBmultp = np.empty(Bs.shape);
for i in range (len(Bs)):
auxBmultp[i] = Bs[i] * (theta**i)
Rf = sum(auxBmultp)
return Rf
def ComputeTotalLenghtSize(theta, As):
# theta = joint angle(degrees)
# As - coefficients of the polynomial
auxAmultp = np.empty(As.shape);
for i in range (len(As)):
auxAmultp[i] = As[i] * (theta**i)
Lm = sum(auxAmultp)
return Lm
noise = 0.1*np.random.randn(len(t))*1/dt
phiRef = 5*np.pi/180
LceRef_TA = 0.089
LceRef_SOL = 0.037
Kp = 200000
Kd = 100
for i in range (len(t)):
Lm_TA = ComputeTotalLenghtSize(phi*180/np.pi, As_TA)
Rf_TA = computeMomentArmJoint(phi*180/np.pi, Bs_TA)
Lm_SOL = ComputeTotalLenghtSize(phi*180/np.pi, As_SOL)
Rf_SOL = computeMomentArmJoint(phi*180/np.pi, Bs_SOL)
##############################################################
e_TA = LceRef_TA - dorsiflexor.Lnorm_ce*dorsiflexor.Lce_o
if e_TA > 0:
u_TA = max(min(1,-Kp*e_TA + Kd*dorsiflexor.Lnorm_cedot*dorsiflexor.Lce_o),0.01)
else:
u_TA = 0.01
e_SOL = LceRef_SOL - soleus.Lnorm_ce*soleus.Lce_o
if e_SOL > 0:
u_SOL = 0.01
else:
u_SOL = max(min(1,-Kp*e_SOL + Kd*soleus.Lnorm_cedot*soleus.Lce_o),0.01)
#e = phiRef - phi
#if e > 0:
# u_TA = max(min(1,Kp*e - Kd*phid),0.01)
# u_SOL = 0.01
#else:
# u_TA = 0.01
# u_SOL = max(min(1,-Kp*e + Kd*phid),0.01)
################################################################
dorsiflexor.updateMuscle(Lm=Lm_TA, u=u_TA)
soleus.updateMuscle(Lm=Lm_SOL, u=u_SOL)
################################################################
#Compute MomentJoint
M = ComputeMomentJoint(Rf_TA,dorsiflexor.Fnorm_tendon,
dorsiflexor.Fmax,
Rf_SOL, soleus.Fnorm_tendon,
soleus.Fmax,
m,g,phi)
#Compute Angular Aceleration Joint
torqueWithNoise = M + noise[i]
phidd = ComputeAngularAcelerationJoint (torqueWithNoise,I)
# Euler integration steps
phid= phid + dt*phidd
phi = phi + dt*phid
phideg= (phi*180)/math.pi #convert joint angle from radians to degree
# Store variables in vectors
F[i,0] = dorsiflexor.Fnorm_tendon*dorsiflexor.Fmax
F[i,1] = soleus.Fnorm_tendon*soleus.Fmax
Fkpe[i] = dorsiflexor.Fnorm_kpe*dorsiflexor.Fmax
FiberLen[i] = dorsiflexor.Lnorm_ce*dorsiflexor.Lce_o
TendonLen[i] = dorsiflexor.Lnorm_see*dorsiflexor.Lce_o
a_dynamics[i] = dorsiflexor.a
phivec[i] = phideg
Moment[i] = M
x[i] = e_TA
y[i] = e_SOL
z[i] = u_TA
q[i] = u_SOL
print(LceRef_SOL)
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.plot(t,x,c='magenta')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('error TA')
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.plot(t,z,c='magenta')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('u TA (activation)')
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.plot(t,y,c='magenta')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('error SOL')
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.plot(t,q,c='magenta')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('u SOL (activation)')
plt.show()
###Output
_____no_output_____
###Markdown
Plots
###Code
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.plot(t,a_dynamics,c='magenta')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Activation dynamics')
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.plot(t, Moment)
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('joint moment')
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.plot(t, F[:,1], c='red')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Force (N)')
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.plot(t,phivec,c='red')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Joint angle (deg)')
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.plot(t,FiberLen, label = 'fiber')
ax.plot(t,TendonLen, label = 'tendon')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Length (m)')
ax.legend(loc='best')
fig, ax = plt.subplots(1, 3, figsize=(9,4), sharex=True, sharey=True)
ax[0].plot(t,FiberLen, label = 'fiber')
ax[1].plot(t,TendonLen, label = 'tendon')
ax[2].plot(t,FiberLen + TendonLen, label = 'muscle (tendon + fiber)')
ax[1].set_xlabel('time (s)')
ax[0].set_ylabel('Length (m)')
ax[0].legend(loc='best')
ax[1].legend(loc='best')
ax[2].legend(loc='best')
plt.show()
###Output
_____no_output_____ |
plant_pathelogy.ipynb | ###Markdown
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from zipfile import ZipFile
import os
from sklearn.model_selection import train_test_split
from sklearn.utils.class_weight import compute_class_weight
import tensorflow as tf
import tensorflow.keras.backend as K
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
!pip install efficientnet
from efficientnet.tfkeras import EfficientNetB7
###Output
_____no_output_____
###Markdown
**TPU preparation**
###Code
AUTO = tf.data.experimental.AUTOTUNE
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
except ValueError:
raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
tpu_strategy = tf.distribute.experimental.TPUStrategy(tpu)
print("REPLICAS: ", tpu_strategy.num_replicas_in_sync)
IMG_SIZE = 800
BATCH_SIZE = 8* tpu_strategy.num_replicas_in_sync
classes = 4
###Output
_____no_output_____
###Markdown
**Loading data**
###Code
with ZipFile('/content/drive/My Drive/Plant/plant-pathology-2020-fgvc7.zip') as f:
print('Extracting')
f.extractall()
print('Done!!')
gcs_path = 'gs://kds-4d598c666e2db12886904a0a2d808a1259db3c0910143721bab174d1'
img_path = '/images/'
train_csv = pd.read_csv('train.csv')
labels = train_csv.iloc[:,1:].values
images_path = np.array([f'{gcs_path}{img_path}{image_id}.jpg' for image_id in train_csv['image_id']])
###Output
_____no_output_____
###Markdown
**Split data into train and validation set**
###Code
train_images, val_images, train_labels, val_labels = train_test_split(images_path ,labels , test_size=0.2, shuffle=True, random_state = 200)
###Output
_____no_output_____
###Markdown
**Class weights**
###Code
class_weights = compute_class_weight('balanced', np.unique(np.argmax(labels, axis = 1)), np.argmax(labels, axis = 1))
###Output
_____no_output_____
###Markdown
functions to image preprocessing
###Code
def decode_image(filename, label=None):
bits = tf.io.read_file(filename)
image = tf.image.decode_jpeg(bits, channels=3)
image = tf.cast(image, tf.float32) / 255.0
image = tf.image.resize(image, (IMG_SIZE,IMG_SIZE))
if label is None:
return image
else:
return image, label
def data_augment(filename, label=None, seed=200):
image, label = decode_image(filename, label)
image = tf.image.random_flip_left_right(image, seed=seed)
image = tf.image.random_flip_up_down(image, seed=seed)
image = tf.image.rot90(image)
if label is None:
return image
else:
return image, label
###Output
_____no_output_____
###Markdown
**Preparing train and validation sets**
###Code
train_dataset = (
tf.data.Dataset
.from_tensor_slices((train_images, train_labels))
.map(data_augment, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.repeat()
.prefetch(AUTO)
)
val_dataset = (
tf.data.Dataset
.from_tensor_slices((val_images,val_labels))
.map(decode_image, num_parallel_calls=AUTO)
.batch(val_images.shape[0])
.cache()
.prefetch(AUTO)
)
###Output
_____no_output_____
###Markdown
**Model architecture**
###Code
def create_model(trainable = True):
#Model structure
efficientnet = EfficientNetB7(weights = 'noisy-student', include_top=False, input_shape=(IMG_SIZE, IMG_SIZE, 3), pooling = 'avg')
output = Dense(classes, activation="softmax")(efficientnet.output)
model = Model(inputs=efficientnet.input, outputs=output)
if trainable == False:
model.trainable = False
print(model.summary())
return model
with tpu_strategy.scope():
    model = create_model()  # the model factory defined above is create_model(), not convnet()
    #Compilation of model
    model.compile(optimizer= Adam(0.0005), loss= 'categorical_crossentropy', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
**Callbacks**
###Code
early_stopping = EarlyStopping(monitor = 'val_loss', patience = 5, mode = 'min')
reduce_lr = ReduceLROnPlateau(monitor = 'val_loss', factor = 0.6, patience = 2, mode = 'min', min_lr= 0.0000001)
checkpoint_name = 'effnet_b7_best.h5'  # assumed filename; checkpoint_name was not defined elsewhere in this notebook
checkpoint = ModelCheckpoint(checkpoint_name, save_best_only= True, save_weights_only= True ,mode = 'min', monitor= 'val_loss', verbose = 1)
#lr_schedule = LearningRateScheduler(schedule= lrschedule, verbose = 1)
STEPS_PER_EPOCH = train_images.shape[0] // BATCH_SIZE
EPOCHS = 20
class_dict = {i:val for i, val in enumerate(list(class_weights))}
history = model.fit(train_dataset,
steps_per_epoch=STEPS_PER_EPOCH,
epochs=EPOCHS,
verbose=1,
validation_data=val_dataset,
class_weight = class_dict,
callbacks = [early_stopping, reduce_lr, checkpoint]
)
def loss_acc_plot(history, accuracy = False):
data = pd.DataFrame(history.history)
plt.title('Training Loss vs Validation Loss')
plt.plot(data['loss'], c = 'b', label = 'loss', )
plt.plot(data['val_loss'], c = 'orange', label = 'val_loss')
plt.legend()
plt.show()
if accuracy == True:
plt.title('Training Accuracy vs Validation Accuracy')
plt.plot(data['accuracy'], c = 'b', label = 'accuracy')
plt.plot(data['val_accuracy'], c = 'orange', label = 'val_accuracy')
plt.legend()
plt.show()
loss_acc_plot(history, accuracy= True)
dev_pred = model.predict(val_dataset)
def make_prediction_label(label_data):
pred_label = np.zeros(shape = label_data.shape, dtype = 'int')
argmax = np.argmax(label_data, axis = 1)
for idx in range(label_data.shape[0]):
max_col = argmax[idx]
pred_label[idx][max_col] = int(1)
return pred_label
pred_label = make_prediction_label(dev_pred)
def plot_cm(true_labels, pred_labels, label_name):
max_true = np.argmax(true_labels, axis = 1)
max_pred = np.argmax(pred_labels, axis = 1)
assert true_labels.shape == pred_labels.shape
matrix = np.zeros(shape = (4,4), dtype = 'int')
for idx in range(true_labels.shape[0]):
matrix[max_true[idx]][max_pred[idx]] = matrix[max_true[idx]][max_pred[idx]] + 1
matrix = pd.DataFrame(matrix, index = label_name, columns= label_name)
return matrix
cm_matrix = plot_cm(val_labels, pred_label, ['h', 'm', 'r', 's'])
cm_matrix
###Output
_____no_output_____ |
scripts/terraclimate/01_terraclimate_to_zarr3.ipynb | ###Markdown
TERRACLIMATE to Zarr_by Joe Hamman (CarbonPlan), June 29, 2020_This notebook converts the raw TERRACLIMATE dataset to Zarr format.**Inputs:**- intake catalog: `climate.gridmet_opendap`**Outputs:**- Cloud copy of TERRACLIMATE**Notes:**- No reprojection or processing of the data is done in this notebook.
###Code
import os
import fsspec
import xarray as xr
import dask
from dask.distributed import Client
from dask_gateway import Gateway
from typing import List
import urlpath
from tqdm import tqdm
from numcodecs import Blosc  # used by get_encoding() below; not imported elsewhere in this notebook
# options
name = "terraclimate"
chunks = {"lat": 1440, "lon": 1440, "time": 12}
years = list(range(1958, 2020))
cache_location = f"gs://carbonplan-scratch/{name}-cache/"
target_location = f"gs://carbonplan-data/raw/{name}/4000m/raster.zarr"
gateway = Gateway()
options = gateway.cluster_options()
options.worker_cores = 1
options.worker_memory = 42
cluster = gateway.new_cluster(cluster_options=options)
cluster.adapt(minimum=0, maximum=40)
client = cluster.get_client()
cluster
# client = Client(n_workers=2)
client
import gcsfs
fs = gcsfs.GCSFileSystem()
try:
_ = fs.rm(target_location, recursive=True)
except FileNotFoundError:
pass
# # uncomment to remove all temporary zarr stores
zarrs = [
fn + ".zarr" for fn in fs.glob("carbonplan-scratch/terraclimate-cache/*nc")
]
fs.rm(zarrs, recursive=True)
variables = [
"aet",
"def",
"pet",
"ppt",
"q",
"soil",
"srad",
"swe",
"tmax",
"tmin",
"vap",
"ws",
"vpd",
"PDSI",
]
rename_vars = {"PDSI": "pdsi"}
mask_opts = {
"PDSI": ("lt", 10),
"aet": ("lt", 32767),
"def": ("lt", 32767),
"pet": ("lt", 32767),
"ppt": ("lt", 32767),
"ppt_station_influence": None,
"q": ("lt", 2147483647),
"soil": ("lt", 32767),
"srad": ("lt", 32767),
"swe": ("lt", 10000),
"tmax": ("lt", 200),
"tmax_station_influence": None,
"tmin": ("lt", 200),
"tmin_station_influence": None,
"vap": ("lt", 300),
"vap_station_influence": None,
"vpd": ("lt", 300),
"ws": ("lt", 200),
}
def apply_mask(key, da):
"""helper function to mask DataArrays based on a threshold value"""
if mask_opts.get(key, None):
op, val = mask_opts[key]
if op == "lt":
da = da.where(da < val)
elif op == "neq":
da = da.where(da != val)
return da
def preproc(ds):
"""custom preprocessing function for terraclimate data"""
rename = {}
station_influence = ds.get("station_influence", None)
if station_influence is not None:
ds = ds.drop_vars("station_influence")
var = list(ds.data_vars)[0]
if var in rename_vars:
rename[var] = rename_vars[var]
if "day" in ds.coords:
rename["day"] = "time"
if station_influence is not None:
ds[f"{var}_station_influence"] = station_influence
if rename:
ds = ds.rename(rename)
return ds
def postproc(ds):
"""custom post processing function to clean up terraclimate data"""
drop_encoding = [
"chunksizes",
"fletcher32",
"shuffle",
"zlib",
"complevel",
"dtype",
"_Unsigned",
"missing_value",
"_FillValue",
"scale_factor",
"add_offset",
]
for v in ds.data_vars.keys():
with xr.set_options(keep_attrs=True):
ds[v] = apply_mask(v, ds[v])
for k in drop_encoding:
ds[v].encoding.pop(k, None)
return ds
def get_encoding(ds):
compressor = Blosc()
encoding = {key: {"compressor": compressor} for key in ds.data_vars}
return encoding
@dask.delayed
def download(source_url: str, cache_location: str) -> str:
"""
Download a remote file to a cache.
Parameters
----------
source_url : str
Path or url to the source file.
cache_location : str
Path or url to the target location for the source file.
Returns
-------
target_url : str
Path or url in the form of `{cache_location}/hash({source_url})`.
"""
fs = fsspec.get_filesystem_class(cache_location.split(":")[0])(
token="cloud"
)
name = urlpath.URL(source_url).name
target_url = os.path.join(cache_location, name)
# there is probably a better way to do caching!
try:
fs.open(target_url)
return target_url
except FileNotFoundError:
pass
with fsspec.open(source_url, mode="rb") as source:
with fs.open(target_url, mode="wb") as target:
target.write(source.read())
return target_url
@dask.delayed(pure=True, traverse=False)
def nc2zarr(source_url: str, cache_location: str) -> str:
"""convert netcdf data to zarr"""
fs = fsspec.get_filesystem_class(source_url.split(":")[0])(token="cloud")
print(source_url)
target_url = source_url + ".zarr"
if fs.exists(urlpath.URL(target_url) / ".zmetadata"):
return target_url
with dask.config.set(scheduler="single-threaded"):
try:
ds = (
xr.open_dataset(fs.open(source_url), engine="h5netcdf")
.pipe(preproc)
.pipe(postproc)
.load()
.chunk(chunks)
)
except Exception as e:
raise ValueError(source_url)
mapper = fs.get_mapper(target_url)
ds.to_zarr(mapper, mode="w", consolidated=True)
return target_url
source_url_pattern = "https://climate.northwestknowledge.net/TERRACLIMATE-DATA/TerraClimate_{var}_{year}.nc"
source_urls = []
for var in variables:
for year in years:
source_urls.append(source_url_pattern.format(var=var, year=year))
source_urls[:4]
downloads = [download(s, cache_location) for s in source_urls]
download_futures = client.compute(downloads, retries=1)
downloaded_files = [d.result() for d in download_futures]
downloaded_files[:4]
zarrs = [nc2zarr(s, cache_location) for s in downloaded_files]
zarr_urls = dask.compute(zarrs, retries=1, scheduler="single-threaded")
zarr_urls[:4]
ds_list = []
for var in variables:
temp = []
for year in tqdm(years):
mapper = fsspec.get_mapper(
f"gs://carbonplan-scratch/terraclimate-cache/TerraClimate_{var}_{year}.nc.zarr"
)
temp.append(xr.open_zarr(mapper, consolidated=True))
print(f"concat {var}")
ds_list.append(
xr.concat(temp, dim="time", coords="minimal", compat="override")
)
client.close()
cluster.close()
options.worker_cores = 4
options.worker_memory = 16
cluster = gateway.new_cluster(cluster_options=options)
cluster.adapt(minimum=1, maximum=40)
client = cluster.get_client()
cluster
import zarr
ds = xr.merge(ds_list, compat="override").chunk(chunks)
ds
mapper = fsspec.get_mapper(target_location)
task = ds.to_zarr(mapper, mode="w", compute=False)
dask.compute(task, retries=4)
zarr.consolidate_metadata(mapper)
client.close()
cluster.close()
###Output
_____no_output_____ |
mem_mem/tests/3cke/3cke.ipynb | ###Markdown
scheme:* 1) for data transfer, pick the 1st sleep api (h2d) for stream-0, current cc = 1 (concurrency)* 2) check whether there is overlap with stream-1* 3) if there is overlap, finish cc=1, start from cc++ (cc=2), predict the future ending time* 4) during the predicted ending time, check whether there is overlap with stream-2* 5) if there is overlap, finish cc=2, start from cc++ (cc=3), predict the future ending time* 6) go to step 4), search through all the cuda streams* 7) for each time range, we need to find out how many apis have overlap and which pairs have conflicts or not
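A rough sketch of that search loop (illustrative only — the helper names below are hypothetical and are not the `cke` functions used in this notebook):
```python
# start with the first sleeping H2D call on stream-0, then walk the other streams,
# bumping the concurrency level cc whenever the next stream's first call overlaps
cc = 1
active = [first_sleep_call(stream=0)]
pred_end = predict_end(active, ways=cc)          # predicted finish at cc-way sharing
for sid in range(1, num_streams):
    nxt = first_sleep_call(stream=sid)
    if nxt.start < pred_end:                     # overlap with the current window
        cc += 1
        active.append(nxt)
        pred_end = predict_end(active, ways=cc)  # re-predict the ending time
    else:                                        # no overlap: finish the current window
        break
```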
###Code
%load_ext autoreload
%autoreload 2
import warnings
import pandas as pd
import numpy as np
import os
import sys # error msg, add the modules
import operator # sorting
from math import *
import matplotlib.pyplot as plt
sys.path.append('../../')
import cuda_timeline
import read_trace
import avgblk
import cke
from model_param import *
#from df_util import *
warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning)
###Output
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
gpu info
###Code
gtx950 = DeviceInfo()
gtx950.sm_num = 6
gtx950.sharedmem_per_sm = 49152
gtx950.reg_per_sm = 65536
gtx950.maxthreads_per_sm = 2048
# init SM resources
SM_resList, SM_traceList = init_gpu(gtx950)
#SM_resList[0]
SM_traceList[0]
###Output
_____no_output_____
###Markdown
Understand the input
###Code
trace_s1 = 'trace_s1_5m.csv'
df_trace_s1 = read_trace.Trace2dataframe(trace_s1)
trace_s2 = 'trace_s2_5m.csv'
df_trace_s2 = read_trace.Trace2dataframe(trace_s2)
trace_s3 = 'trace_s3_5m.csv'
df_trace_s3 = read_trace.Trace2dataframe(trace_s3)
df_trace_s1
cuda_timeline.plot_trace(df_trace_s1)
cuda_timeline.plot_trace(df_trace_s2)
cuda_timeline.plot_trace(df_trace_s3)
###Output
_____no_output_____
###Markdown
Kernel Info from the single stream
###Code
# extract kernel info from trace
# warning: currently limited to one kernel
kernel = read_trace.GetKernelInfo(df_trace_s1, gtx950)
Dump_kernel_info(kernel)
###Output
Kernel Info
blockDim 256.0
gridkDim 19532.0
regs 28.0
shared memory 0.0
runtime (ms) 11.914429
average block execution time (ms) 0.0292737813268
start time (ms) 0
###Markdown
model 3 cuda streams
###Code
# for each stream, have a dd for each kernel
stream_kernel_list = []
stream_num = 3
for sid in range(stream_num):
#print sid
# key will be the kernel order
# value will be the kernel info
kern_dd = {}
kern_dd[0] = Copy_kernel_info(kernel)
stream_kernel_list.append(kern_dd)
Dump_kernel_info(stream_kernel_list[0][0])
###Output
Kernel Info
blockDim 256.0
gridkDim 19532.0
regs 28.0
shared memory 0.0
runtime (ms) 11.914429
average block execution time (ms) 0.0292737813268
start time (ms) 0
###Markdown
start kernel from beginning
###Code
df_s1_trace_timing = read_trace.Get_timing_from_trace(df_trace_s1)
df_s1 = read_trace.Reset_starting(df_s1_trace_timing)
df_s1
###Output
_____no_output_____
###Markdown
set the h2d start for all the cuda streams
###Code
# find when to start the stream and update the starting pos for the trace
H2D_H2D_OVLP_TH = 3.158431
df_cke_list = cke.init_trace_list(df_s1, stream_num = stream_num, h2d_ovlp_th = H2D_H2D_OVLP_TH)
df_cke_list[0]
df_cke_list[1]
df_cke_list[2]
###Output
_____no_output_____
###Markdown
merge all the cuda stream trace together
###Code
df_all_api = cke.init_sort_api_with_extra_cols(df_cke_list)
df_all_api
###Output
_____no_output_____
###Markdown
start algorithm
###Code
# stream_id list
stream_list = [float(x) for x in range(stream_num)]
# pick the 1st sleep api
df_all_api, r1, r1_stream = cke.pick_first_sleep(df_all_api)
df_all_api = SetWake(df_all_api, r1)
df_all_api = UpdateCell(df_all_api, r1, 'current_pos', get_rowinfo(df_all_api, r1)['start'])
df_all_api = UpdateCell(df_all_api, r1, 'pred_end', get_rowinfo(df_all_api, r1)['end'])
print('row {}, stream-id {}'.format(r1, r1_stream))
stream_queue = []
stream_queue.append(r1_stream)
## conconcurrency
cc = 1.0
# extract api calls from other streams
df_other = df_all_api.loc[df_all_api.stream_id <> r1_stream]
other_stream_ids = list(df_other.stream_id.unique())
other_stream_num = len(other_stream_ids)
for i in range(other_stream_num):
df_other, r2, r2_stream = cke.pick_first_sleep(df_other)
print('row {}, stream-id {}'.format(r2, r2_stream))
df_all_api = SetWake(df_all_api, r2)
df_all_api = UpdateCell(df_all_api, r2, 'current_pos', get_rowinfo(df_all_api, r2)['start'])
df_all_api = UpdateCell(df_all_api, r2, 'pred_end', get_rowinfo(df_all_api, r2)['end'])
#---------------
# if r1 and r2 are from the same stream, break the iteration, and finish r1
#---------------
if r1_stream == r2_stream:
break
# when they are not the same stream, check whether there is concurrency
#-----------------------
# move the current_pos to the starting of coming api r2, and update r1 status
#-----------------------
df_all_api = cke.StartNext_byType(df_all_api, [r1, r2])
#-----------------------------
# if one call is done, continue the next round
#-----------------------------
if cke.CheckRowDone(df_all_api, [r1, r2]):
continue
whichType = cke.CheckType(df_all_api, r1, r2) # check whether the same api
print whichType
if whichType == None:
# run noconflict
pass
elif whichType in ['h2d', 'd2h']: # data transfer in the same direction
cc = cc + 1
df_all_api = cke.Predict_transferOvlp(df_all_api, [r1, r2], ways = cc)
break
else:
# concurrent kernel: todo
pass
break
# other_stream_list = cke.find_unique_streams(df_other)
# find the 1st sleep api that is other stream
# if there is overlapping, we start ovlp mode, if not finish r1, start current
# go through each
# rest_stream_list = [x for x in stream_list if x <> r1_stream]
# print rest_stream_list
# for sid in rest_stream_list:
# df_stream = df_all_api.loc[df_all_api.stream_id == sid]
df_all_api
#
#
# run above
###Output
_____no_output_____
###Markdown
start algo
###Code
count = 0
# break_count = 7
break_count = 7
while not cke.AllDone(df_all_api):
count = count + 1
#if count == break_count: break
#-----------------------
# pick two api to model
#-----------------------
df_all_api, r1, r2 = cke.PickTwo(df_all_api)
#if count == break_count: break
#-----------------------
# check the last api or not
#-----------------------
last_api = False
if r1 == None and r2 == None:
last_api = True
if last_api == True: # go directly updating the last wake api
df_all_api = cke.UpdateStream_lastapi(df_all_api)
break
#-----------------------
# move the current_pos to the starting of coming api r2, and update r1 status
#-----------------------
df_all_api = cke.StartNext_byType(df_all_api, [r1, r2])
#if count == break_count: break
#-----------------------------
# if one call is done, continue the next round
#-----------------------------
if cke.CheckRowDone(df_all_api, r1, r2):
continue
#if count == break_count: break
#-----------------------------
# when all calls are active
#-----------------------------
#-----------------------------
# check whether the two calls are kerns, if yes
#-----------------------------
whichType = cke.CheckType(df_all_api, r1, r2) # check whether the same api
if whichType == None:
df_all_api = cke.Predict_noConflict(df_all_api, r1, r2)
elif whichType in ['h2d', 'd2h']: # data transfer in the same direction
df_all_api = cke.Predict_transferOvlp(df_all_api, r1, r2, ways = 2.0)
else: # concurrent kernel: todo
print('run cke model')
#cke.model_2cke(df_all_api, r1, r2)
#if count == break_count: break
r1_sid, r1_kid =cke.FindStreamAndKernID(df_all_api, r1)
#print('r1_stream_id {} , r1_kernel_id {}'.format(r1_sid, r1_kid))
r2_sid, r2_kid =cke.FindStreamAndKernID(df_all_api, r2)
#print('r2_stream_id {} , r2_kernel_id {}'.format(r2_sid, r2_kid))
r1_start_ms = cke.GetStartTime(df_all_api, r1)
r2_start_ms = cke.GetStartTime(df_all_api, r2)
#print r1_start_ms
#print r2_start_ms
#print('before:')
#print('r1 start :{} r2 start : {}'.format(stream_kernel_list[r1_sid][r1_kid].start_ms,
# stream_kernel_list[r2_sid][r2_kid].start_ms))
stream_kernel_list[0][0].start_ms = r1_start_ms
stream_kernel_list[1][0].start_ms = r2_start_ms
#print('after:')
#print('r1 start :{} r2 start : {}'.format(stream_kernel_list[r1_sid][r1_kid].start_ms,
# stream_kernel_list[r2_sid][r2_kid].start_ms))
#Dump_kern_info(stream_kernel_list[r1_sid][r1_kid])
#Dump_kern_info(stream_kernel_list[r2_sid][r2_kid])
kernels_ = []
kernels_.append(stream_kernel_list[r1_sid][r1_kid])
kernels_.append(stream_kernel_list[r2_sid][r2_kid])
SM_resList, SM_traceList = avgblk.cke_model(gtx950, SM_resList, SM_traceList, kernels_)
# find the kernel execution time from the sm trace table
result_kernel_runtime_dd = avgblk.Get_KernTime(SM_traceList)
#print result_kernel_runtime_dd
result_r1_start = result_kernel_runtime_dd[0][0]
result_r1_end = result_kernel_runtime_dd[0][1]
result_r2_start = result_kernel_runtime_dd[1][0]
result_r2_end = result_kernel_runtime_dd[1][1]
# r1 will be the 1st in dd, r2 will be the 2nd
df_all_api.set_value(r1, 'pred_end', result_r1_end)
df_all_api.set_value(r2, 'pred_end', result_r2_end) # Warning: it is better to have a pred_start
# Warning: but we care about the end timing for now
#if count == break_count: break
# check any of r1 and r2 has status done. if done, go to next
rangeT = cke.Get_pred_range(df_all_api)
print rangeT
#if count == break_count: break
extra_conc = cke.Check_cc_by_time(df_all_api, rangeT) # check whether there is conc during the rangeT
print('extra_conc {}'.format(extra_conc))
#if count == break_count: break
if extra_conc == 0:
if whichType in ['h2d', 'd2h']:
df_all_api = cke.Update_wake_transferOvlp(df_all_api, rangeT, ways = 2.0)
elif whichType == 'kern':
df_all_api = cke.Update_wake_kernOvlp(df_all_api)
else: # no overlapping
df_all_api = cke.Update_wake_noConflict(df_all_api, rangeT)
#if count == break_count: break
# check if any api is done, and update the timing for the other apis in that stream
df_all_api = cke.UpdateStreamTime(df_all_api)
#if count == break_count: break
else: # todo : when there is additional overlapping
pass
# if count == break_count:
# break
df_all_api
df_2stream_trace
df_s1
#
# run above
#
###Output
_____no_output_____ |
Week06/GHCN_data.ipynb | ###Markdown
We have been working with data obtained from [GHCN (Global Historical Climatology Network)-Daily](http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/) data. A convenient way to select data from there is to use the [KNMI Climatological Service](http://climexp.knmi.nl/selectdailyseries.cgi?id)
###Code
%matplotlib inline
import pandas as pd
import numpy as np
msk = pd.read_table('xgdcnRSM00027612.dat.txt', sep='\s*', skiprows=5, \
parse_dates={'dates':[0, 1, 2]}, header=None, index_col=0, squeeze=True )
msk.plot()
###Output
_____no_output_____
###Markdown
Exercise- Select TMAX data set for your home city or nearby place- Open it with pandas- Plot data for 2000-2010- Find maximum and minimum TMAX for all observational period- Find mean temperature- Plot monthly means- Plot maximum/minimum temperatures for each month- Plot seasonal mean for one of the seasons- Plot overall monthly means (use groupby(msk.index.month))- Plot daily season cycle ( use index.dayofyear )- Plot daily seasonal cycle and +- standard deviation
###Code
msk['2000':'2010'].plot()
msk.min()
msk.max()
msk.mean()
msk.resample('M').plot()
msk.resample('M',how=['max','min']).plot()
msk_s = msk.resample('Q-NOV')
msk_s[msk_s.index.quarter==1].mean()
msk.groupby(msk.index.month).mean().plot(kind='bar')
msk.groupby(msk.index.dayofyear).mean().plot()
seas = msk.groupby(msk.index.dayofyear).mean()
seas_plus = seas + msk.groupby(msk.index.dayofyear).std()
seas_minus = seas - msk.groupby(msk.index.dayofyear).std()
seas.plot()
seas_plus.plot()
seas_minus.plot()
###Output
_____no_output_____ |
notebookcode/new/bokeh.ipynb | ###Markdown
200143
###Code
ce1 = pd.read_csv('../data/200143/ce.csv')
le1 = pd.read_csv('../data/200143/le.csv')
vd1 = pd.read_csv('../data/200143/vend.csv')
va1 = pd.read_csv('../data/200143/vaso.csv')
# create a new plot
og_starttime = 29
starttime = og_starttime - 24
og_endtime = 29+24
c11 = figure(width=900, plot_height=250, x_range=[-1,og_endtime+2])
c11.circle(ce1['charttime_h'], ce1['heartrate'], size=10, color="navy", alpha=0.5)
# s1.xaxis.formatter.hour
c11.xaxis.axis_label = 'Hour/h'
c11.yaxis.axis_label = 'Heart rate'
daylight_savings_start0 = Span(location=starttime,
dimension='height', line_color='green',
line_dash='dashed', line_width=3)
c11.add_layout(daylight_savings_start0)
daylight_savings_start = Span(location=og_starttime,
dimension='height', line_color='red',
line_dash='dashed', line_width=3)
c11.add_layout(daylight_savings_start)
daylight_savings_end = Span(location=og_endtime,
dimension='height', line_color='black',
line_dash='dashed', line_width=3)
c11.add_layout(daylight_savings_end)
# create another one
c12 = figure(width=900, height=250, x_range=[-1,og_endtime+2])
c12.circle(ce1['charttime_h'], ce1['sysbp'], size=10, color="firebrick", alpha=0.5)
c12.xaxis.axis_label = 'Hour/h'
c12.yaxis.axis_label = 'Systolic pressure'
daylight_savings_start0 = Span(location=starttime,
dimension='height', line_color='green',
line_dash='dashed', line_width=3)
c12.add_layout(daylight_savings_start0)
daylight_savings_start = Span(location=og_starttime,
dimension='height', line_color='red',
line_dash='dashed', line_width=3)
c12.add_layout(daylight_savings_start)
daylight_savings_end = Span(location=og_endtime,
dimension='height', line_color='black',
line_dash='dashed', line_width=3)
c12.add_layout(daylight_savings_end)
# create and another
c13 = figure(width=900, height=250, x_range=[-1,og_endtime+2])
c13.circle(ce1['charttime_h'], ce1['diasbp'], size=10, color="olive", alpha=0.5)
c13.xaxis.axis_label = 'Hour/h'
c13.yaxis.axis_label = 'Diastolic pressure'
daylight_savings_start0 = Span(location=starttime,
dimension='height', line_color='green',
line_dash='dashed', line_width=3)
c13.add_layout(daylight_savings_start0)
daylight_savings_start = Span(location=og_starttime,
dimension='height', line_color='red',
line_dash='dashed', line_width=3)
c13.add_layout(daylight_savings_start)
daylight_savings_end = Span(location=og_endtime,
dimension='height', line_color='black',
line_dash='dashed', line_width=3)
c13.add_layout(daylight_savings_end)
# create another one
c14 = figure(width=900, height=250, x_range=[-1,og_endtime+2])
c14.circle(ce1['charttime_h'], ce1['meanbp'], size=10, color="blue", alpha=0.5)
c14.xaxis.axis_label = 'Hour/h'
c14.yaxis.axis_label = 'Mean arterial pressure'
daylight_savings_start0 = Span(location=starttime,
dimension='height', line_color='green',
line_dash='dashed', line_width=3)
c14.add_layout(daylight_savings_start0)
daylight_savings_start = Span(location=og_starttime,
dimension='height', line_color='red',
line_dash='dashed', line_width=3)
c14.add_layout(daylight_savings_start)
daylight_savings_end = Span(location=og_endtime,
dimension='height', line_color='black',
line_dash='dashed', line_width=3)
c14.add_layout(daylight_savings_end)
# create another one
c15 = figure(width=900, height=250, x_range=[-1,og_endtime+2])
c15.circle(ce1['charttime_h'], ce1['resprate'], size=10, color="red", alpha=0.5)
c15.xaxis.axis_label = 'Hour/h'
c15.yaxis.axis_label = 'Respiratory rate'
# create span
daylight_savings_start0 = Span(location=starttime,
dimension='height', line_color='green',
line_dash='dashed', line_width=3)
c15.add_layout(daylight_savings_start0)
daylight_savings_start = Span(location=og_starttime,
dimension='height', line_color='red',
line_dash='dashed', line_width=3)
c15.add_layout(daylight_savings_start)
daylight_savings_end = Span(location=og_endtime,
dimension='height', line_color='black',
line_dash='dashed', line_width=3)
c15.add_layout(daylight_savings_end)
# create another one
c16 = figure(width=900, height=250, x_range=[-1,og_endtime+2])
c16.circle(ce1['charttime_h'], ce1['tempc'], size=10, color="green", alpha=0.5)
c16.xaxis.axis_label = 'Hour/h'
c16.yaxis.axis_label = 'Temperature'
# create span
daylight_savings_start0 = Span(location=starttime,
dimension='height', line_color='green',
line_dash='dashed', line_width=3)
c16.add_layout(daylight_savings_start0)
daylight_savings_start = Span(location=og_starttime,
dimension='height', line_color='red',
line_dash='dashed', line_width=3)
c16.add_layout(daylight_savings_start)
daylight_savings_end = Span(location=og_endtime,
dimension='height', line_color='black',
line_dash='dashed', line_width=3)
c16.add_layout(daylight_savings_end)
# create another one
c17 = figure(width=900, height=250, x_range=[-1,og_endtime+2])
c17.circle(ce1['charttime_h'], ce1['spo2'], size=10, color="yellow", alpha=0.5)
c17.xaxis.axis_label = 'Hour/h'
c17.yaxis.axis_label = 'Spo2'
# create span
daylight_savings_start0 = Span(location=starttime,
dimension='height', line_color='green',
line_dash='dashed', line_width=3)
c17.add_layout(daylight_savings_start0)
daylight_savings_start = Span(location=og_starttime,
dimension='height', line_color='red',
line_dash='dashed', line_width=3)
c17.add_layout(daylight_savings_start)
daylight_savings_end = Span(location=og_endtime,
dimension='height', line_color='black',
line_dash='dashed', line_width=3)
c17.add_layout(daylight_savings_end)
lab11 = figure(width=900, plot_height=250, x_range=[-1,og_endtime+2])
lab11.circle(le1['charttime_h'], le1['po2'], size=10, color="gray", alpha=0.5)
# s1.xaxis.formatter.hour
lab11.xaxis.axis_label = 'Hour/h'
lab11.yaxis.axis_label = 'PO2'
daylight_savings_start0 = Span(location=starttime,
dimension='height', line_color='green',
line_dash='dashed', line_width=3)
lab11.add_layout(daylight_savings_start0)
daylight_savings_start = Span(location=og_starttime,
dimension='height', line_color='red',
line_dash='dashed', line_width=3)
lab11.add_layout(daylight_savings_start)
daylight_savings_end = Span(location=og_endtime,
dimension='height', line_color='black',
line_dash='dashed', line_width=3)
lab11.add_layout(daylight_savings_end)
lab12 = figure(width=900, plot_height=250, x_range=[-1,og_endtime+2])
lab12.circle(le1['charttime_h'], le1['pco2'], size=10, color="pink", alpha=0.5)
# s1.xaxis.formatter.hour
lab12.xaxis.axis_label = 'Hour/h'
lab12.yaxis.axis_label = 'PCO2'
daylight_savings_start0 = Span(location=starttime,
dimension='height', line_color='green',
line_dash='dashed', line_width=3)
lab12.add_layout(daylight_savings_start0)
daylight_savings_start = Span(location=og_starttime,
dimension='height', line_color='red',
line_dash='dashed', line_width=3)
lab12.add_layout(daylight_savings_start)
daylight_savings_end = Span(location=og_endtime,
dimension='height', line_color='black',
line_dash='dashed', line_width=3)
lab12.add_layout(daylight_savings_end)
# show the results in a row
show(column(c11, c12, c13, c14, c15, c16, c17, lab11, lab12))
tmp = le1.drop(['subject_id','hadm_id','icustay_id','og_label','intime','og_starttime','charttime','charttime_h'],axis=1)
print(tmp.isnull().sum()/len(tmp))
# create a new plot
og_starttime = 29
og_endtime = 29+24
l_range = ['po2','pco2']
lab1 = figure(width=900, plot_height=250, x_range=[-1,og_endtime+2],y_range=l_range)
lab1.circle(le1['charttime_h'], le1['po2'], size=10, color="navy", alpha=0.5)
# s1.xaxis.formatter.hour
# c11.xaxis.axis_label = 'Hour/h'
# c11.yaxis.axis_label = 'Heart rate'
# daylight_savings_start = Span(location=og_starttime,
# dimension='height', line_color='red',
# line_dash='dashed', line_width=3)
# c11.add_layout(daylight_savings_start)
# daylight_savings_end = Span(location=og_endtime,
# dimension='height', line_color='black',
# line_dash='dashed', line_width=3)
# c11.add_layout(daylight_savings_end)
lab1.circle(le1['charttime_h'], le1['pco2'], size=10, color="red", alpha=0.5)
show(lab1)
# p.y_range.range_padding = 0.1
# p.ygrid.grid_line_color = None
# p.legend.location = "center_left"
import numpy as np
import bokeh.plotting as bp
from bokeh.models import HoverTool  # bokeh.objects no longer exists; HoverTool lives in bokeh.models
# bp.output_file('test.html')
fig = bp.figure(tools="reset,hover")
x = np.linspace(0,2*np.pi)
y1 = np.sin(x)
y2 = np.cos(x)
s1 = fig.scatter(x=x,y=y1,color='#0000ff',size=10,legend='sine')
s2 = fig.scatter(x=x,y=y2,color='#ff0000',size=10,legend='cosine')
# attach the tooltips to the figure's hover tool (glyph renderers have no .select method)
fig.select(dict(type=HoverTool)).tooltips = [("x", "$x"), ("y", "$y")]
bp.show(fig)
###Output
_____no_output_____ |
getting-started-guides/csp/databricks/init-notebook-for-rapids-spark-xgboost-on-databricks-gpu-7.0-ml.ipynb | ###Markdown
Download latest Jars
###Code
dbutils.fs.mkdirs("dbfs:/FileStore/jars/")
%sh
cd ../../dbfs/FileStore/jars/
wget -O cudf-0.14.jar https://search.maven.org/remotecontent?filepath=ai/rapids/cudf/0.14/cudf-0.14.jar
wget -O rapids-4-spark_2.12-0.1.0-databricks.jar https://search.maven.org/remotecontent?filepath=com/nvidia/rapids-4-spark_2.12/0.1.0-databricks/rapids-4-spark_2.12-0.1.0-databricks.jar
wget -O xgboost4j_3.0-1.0.0-0.1.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j_3.0/1.0.0-0.1.0/xgboost4j_3.0-1.0.0-0.1.0.jar
wget -O xgboost4j-spark_3.0-1.0.0-0.1.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j-spark_3.0/1.0.0-0.1.0/xgboost4j-spark_3.0-1.0.0-0.1.0.jar
ls -ltr
# Your Jars are downloaded in dbfs:/FileStore/jars directory
###Output
_____no_output_____
###Markdown
Create a Directory for your init script
###Code
dbutils.fs.mkdirs("dbfs:/databricks/init_scripts/")
dbutils.fs.put("/databricks/init_scripts/init.sh","""
#!/bin/bash
sudo cp /dbfs/FileStore/jars/xgboost4j_3.0-1.0.0-0.1.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j_2.12--ml.dmlc__xgboost4j_2.12__1.0.0.jar
sudo cp /dbfs/FileStore/jars/cudf-0.14.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/rapids-4-spark_2.12-0.1.0-databricks.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/xgboost4j-spark_3.0-1.0.0-0.1.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j-spark_2.12--ml.dmlc__xgboost4j-spark_2.12__1.0.0.jar""", True)
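# (Optional, illustrative) Preview the script that was just written to DBFS to catch
# copy/paste mistakes early; dbutils.fs.head returns the first bytes of a file.
# dbutils.fs.head("dbfs:/databricks/init_scripts/init.sh")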
###Output
_____no_output_____
###Markdown
Confirm your init script is in the new directory
###Code
%sh
cd ../../dbfs/databricks/init_scripts
pwd
ls -ltr
###Output
_____no_output_____
###Markdown
Download the Mortgage Dataset into DBFS (dbfs:/FileStore/tables) and unzip it
###Code
dbutils.fs.mkdirs("dbfs:/FileStore/tables/")
%sh
cd /dbfs/FileStore/tables/
wget -O mortgage.zip https://rapidsai-data.s3.us-east-2.amazonaws.com/spark/mortgage.zip
ls
unzip mortgage.zip
%sh
pwd
cd ../../dbfs/FileStore/tables
ls -ltr mortgage/csv/*
###Output
_____no_output_____
###Markdown
Download latest Jars
###Code
dbutils.fs.mkdirs("dbfs:/FileStore/jars/")
%sh
cd ../../dbfs/FileStore/jars/
wget -O cudf-0.15.jar https://search.maven.org/remotecontent?filepath=ai/rapids/cudf/0.15/cudf-0.15.jar
wget -O rapids-4-spark_2.12-0.2.0-databricks.jar https://search.maven.org/remotecontent?filepath=com/nvidia/rapids-4-spark_2.12/0.2.0-databricks/rapids-4-spark_2.12-0.2.0-databricks.jar
wget -O xgboost4j_3.0-1.0.0-0.2.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j_3.0/1.0.0-0.2.0/xgboost4j_3.0-1.0.0-0.2.0.jar
wget -O xgboost4j-spark_3.0-1.0.0-0.2.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j-spark_3.0/1.0.0-0.2.0/xgboost4j-spark_3.0-1.0.0-0.2.0.jar
ls -ltr
# Your Jars are downloaded in dbfs:/FileStore/jars directory
###Output
_____no_output_____
###Markdown
Create a Directory for your init script
###Code
dbutils.fs.mkdirs("dbfs:/databricks/init_scripts/")
dbutils.fs.put("/databricks/init_scripts/init.sh","""
#!/bin/bash
sudo cp /dbfs/FileStore/jars/xgboost4j_3.0-1.0.0-0.2.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j_2.12--ml.dmlc__xgboost4j_2.12__1.0.0.jar
sudo cp /dbfs/FileStore/jars/cudf-0.15.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/rapids-4-spark_2.12-0.2.0-databricks.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/xgboost4j-spark_3.0-1.0.0-0.2.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j-spark_2.12--ml.dmlc__xgboost4j-spark_2.12__1.0.0.jar""", True)
###Output
_____no_output_____
###Markdown
Confirm your init script is in the new directory
###Code
%sh
cd ../../dbfs/databricks/init_scripts
pwd
ls -ltr
###Output
_____no_output_____
###Markdown
Download the Mortgage Dataset into DBFS (dbfs:/FileStore/tables) and unzip it
###Code
dbutils.fs.mkdirs("dbfs:/FileStore/tables/")
%sh
cd /dbfs/FileStore/tables/
wget -O mortgage.zip https://rapidsai-data.s3.us-east-2.amazonaws.com/spark/mortgage.zip
ls
unzip mortgage.zip
%sh
pwd
cd ../../dbfs/FileStore/tables
ls -ltr mortgage/csv/*
###Output
_____no_output_____
###Markdown
Download latest Jars
###Code
dbutils.fs.mkdirs("dbfs:/FileStore/jars/")
%sh
cd ../../dbfs/FileStore/jars/
wget -O cudf-0.18.1.jar https://search.maven.org/remotecontent?filepath=ai/rapids/cudf/0.18.1/cudf-0.18.1.jar
wget -O rapids-4-spark_2.12-0.4.1.jar https://search.maven.org/remotecontent?filepath=com/nvidia/rapids-4-spark_2.12/0.4.1/rapids-4-spark_2.12-0.4.1.jar
wget -O xgboost4j_3.0-1.3.0-0.1.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j_3.0/1.3.0-0.1.0/xgboost4j_3.0-1.3.0-0.1.0.jar
wget -O xgboost4j-spark_3.0-1.3.0-0.1.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j-spark_3.0/1.3.0-0.1.0/xgboost4j-spark_3.0-1.3.0-0.1.0.jar
ls -ltr
# Your Jars are downloaded in dbfs:/FileStore/jars directory
###Output
_____no_output_____
###Markdown
Create a Directory for your init script
###Code
dbutils.fs.mkdirs("dbfs:/databricks/init_scripts/")
dbutils.fs.put("/databricks/init_scripts/init.sh","""
#!/bin/bash
sudo cp /dbfs/FileStore/jars/xgboost4j_3.0-1.3.0-0.1.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j_2.12--ml.dmlc__xgboost4j_2.12__1.0.0.jar
sudo cp /dbfs/FileStore/jars/cudf-0.18.1.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/rapids-4-spark_2.12-0.4.1.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/xgboost4j-spark_3.0-1.3.0-0.1.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j-spark_2.12--ml.dmlc__xgboost4j-spark_2.12__1.0.0.jar""", True)
###Output
_____no_output_____
###Markdown
Confirm your init script is in the new directory
###Code
%sh
cd ../../dbfs/databricks/init_scripts
pwd
ls -ltr
###Output
_____no_output_____
###Markdown
Download the Mortgage Dataset into DBFS (dbfs:/FileStore/tables) and unzip it
###Code
dbutils.fs.mkdirs("dbfs:/FileStore/tables/")
%sh
cd /dbfs/FileStore/tables/
wget -O mortgage.zip https://rapidsai-data.s3.us-east-2.amazonaws.com/spark/mortgage.zip
ls
unzip mortgage.zip
%sh
pwd
cd ../../dbfs/FileStore/tables
ls -ltr mortgage/csv/*
###Output
_____no_output_____
###Markdown
Download latest Jars
###Code
dbutils.fs.mkdirs("dbfs:/FileStore/jars/")
%sh
cd ../../dbfs/FileStore/jars/
wget -O cudf-21.06.1.jar https://search.maven.org/remotecontent?filepath=ai/rapids/cudf/21.06.1/cudf-21.06.1.jar
wget -O rapids-4-spark_2.12-21.06.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/rapids-4-spark_2.12/21.06.0/rapids-4-spark_2.12-21.06.0.jar
wget -O xgboost4j_3.0-1.3.0-0.1.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j_3.0/1.3.0-0.1.0/xgboost4j_3.0-1.3.0-0.1.0.jar
wget -O xgboost4j-spark_3.0-1.3.0-0.1.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j-spark_3.0/1.3.0-0.1.0/xgboost4j-spark_3.0-1.3.0-0.1.0.jar
ls -ltr
# Your Jars are downloaded in dbfs:/FileStore/jars directory
###Output
_____no_output_____
###Markdown
Create a Directory for your init script
###Code
dbutils.fs.mkdirs("dbfs:/databricks/init_scripts/")
dbutils.fs.put("/databricks/init_scripts/init.sh","""
#!/bin/bash
sudo cp /dbfs/FileStore/jars/xgboost4j_3.0-1.3.0-0.1.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j_2.12--ml.dmlc__xgboost4j_2.12__1.0.0.jar
sudo cp /dbfs/FileStore/jars/cudf-21.06.1.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/rapids-4-spark_2.12-21.06.0.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/xgboost4j-spark_3.0-1.3.0-0.1.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j-spark_2.12--ml.dmlc__xgboost4j-spark_2.12__1.0.0.jar""", True)
###Output
_____no_output_____
###Markdown
Confirm your init script is in the new directory
###Code
%sh
cd ../../dbfs/databricks/init_scripts
pwd
ls -ltr
###Output
_____no_output_____
###Markdown
Download the Mortgage Dataset into DBFS (dbfs:/FileStore/tables) and unzip it
###Code
dbutils.fs.mkdirs("dbfs:/FileStore/tables/")
%sh
cd /dbfs/FileStore/tables/
wget -O mortgage.zip https://rapidsai-data.s3.us-east-2.amazonaws.com/spark/mortgage.zip
ls
unzip mortgage.zip
%sh
pwd
cd ../../dbfs/FileStore/tables
ls -ltr mortgage/csv/*
###Output
_____no_output_____
###Markdown
Download latest Jars
###Code
dbutils.fs.mkdirs("dbfs:/FileStore/jars/")
%sh
cd ../../dbfs/FileStore/jars/
wget -O cudf-0.17.jar https://search.maven.org/remotecontent?filepath=ai/rapids/cudf/0.17/cudf-0.17.jar
wget -O rapids-4-spark_2.12-0.3.0-databricks.jar https://search.maven.org/remotecontent?filepath=com/nvidia/rapids-4-spark_2.12/0.3.0-databricks/rapids-4-spark_2.12-0.3.0-databricks.jar
wget -O xgboost4j_3.0-1.3.0-0.1.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j_3.0/1.3.0-0.1.0/xgboost4j_3.0-1.3.0-0.1.0.jar
wget -O xgboost4j-spark_3.0-1.3.0-0.1.0.jar https://search.maven.org/remotecontent?filepath=com/nvidia/xgboost4j-spark_3.0/1.3.0-0.1.0/xgboost4j-spark_3.0-1.3.0-0.1.0.jar
ls -ltr
# Your Jars are downloaded in dbfs:/FileStore/jars directory
###Output
_____no_output_____
###Markdown
Create a Directory for your init script
###Code
dbutils.fs.mkdirs("dbfs:/databricks/init_scripts/")
dbutils.fs.put("/databricks/init_scripts/init.sh","""
#!/bin/bash
sudo cp /dbfs/FileStore/jars/xgboost4j_3.0-1.3.0-0.1.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j_2.12--ml.dmlc__xgboost4j_2.12__1.0.0.jar
sudo cp /dbfs/FileStore/jars/cudf-0.17.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/rapids-4-spark_2.12-0.3.0-databricks.jar /databricks/jars/
sudo cp /dbfs/FileStore/jars/xgboost4j-spark_3.0-1.3.0-0.1.0.jar /databricks/jars/spark--maven-trees--ml--7.x--xgboost--ml.dmlc--xgboost4j-spark_2.12--ml.dmlc__xgboost4j-spark_2.12__1.0.0.jar""", True)
###Output
_____no_output_____
###Markdown
Confirm your init script is in the new directory
###Code
%sh
cd ../../dbfs/databricks/init_scripts
pwd
ls -ltr
###Output
_____no_output_____
###Markdown
Download the Mortgage Dataset into DBFS (dbfs:/FileStore/tables) and unzip it
###Code
dbutils.fs.mkdirs("dbfs:/FileStore/tables/")
%sh
cd /dbfs/FileStore/tables/
wget -O mortgage.zip https://rapidsai-data.s3.us-east-2.amazonaws.com/spark/mortgage.zip
ls
unzip mortgage.zip
%sh
pwd
cd ../../dbfs/FileStore/tables
ls -ltr mortgage/csv/*
###Output
_____no_output_____ |
2. Machine_Learning_Regression/week 2 - multiple regression - Assignment 2.ipynb | ###Markdown
Next write a function that takes a data set, a list of features (e.g. ['sqft_living', 'bedrooms']) to be used as inputs, and a name of the output (e.g. 'price'). This function should return a features_matrix (2D array) consisting of first a column of ones followed by columns containing the values of the input features in the data set in the same order as the input list. It should also return an output_array which is an array of the values of the output in the data set (e.g. 'price').
###Code
def get_numpy_data(data, features, output):
data['constant'] = 1 # add a constant column to a dataframe
# prepend variable 'constant' to the features list
features = ['constant'] + features
    # select the columns of dataframe given by the 'features' list into the SFrame 'features_sframe'
# this will convert the features_sframe into a numpy matrix with GraphLab Create >= 1.7!!
features_matrix = data[features].as_matrix(columns=None)
    # assign the column of data_sframe associated with the target to the variable 'output_sarray'
# this will convert the SArray into a numpy array:
output_array = data[output].as_matrix(columns=None) # GraphLab Create>= 1.7!!
return(features_matrix, output_array)
###Output
_____no_output_____
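###Markdown
A tiny illustrative check of get_numpy_data on a hand-built frame (this sketch assumes the older pandas used in this course, since as_matrix() was removed in later pandas releases).
###Code
import pandas as pd
toy = pd.DataFrame({'sqft_living': [1000., 2000.], 'price': [300000., 600000.]})
(toy_matrix, toy_output) = get_numpy_data(toy, ['sqft_living'], 'price')
print(toy_matrix)   # the first column should be all ones (the constant/intercept column)
print(toy_output)   # the raw price values
###Output
_____no_output_____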
###Markdown
If the features matrix (including a column of 1s for the constant) is stored as a 2D array (or matrix) and the regression weights are stored as a 1D array then the predicted output is just the dot product between the features matrix and the weights (with the weights on the right). Write a function 'predict_output' which accepts a 2D array 'feature_matrix' and a 1D array 'weights' and returns a 1D array 'predictions'. e.g. in python:
###Code
def predict_outcome(feature_matrix, weights):
predictions = np.dot(feature_matrix, weights)
return(predictions)
###Output
_____no_output_____
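###Markdown
A quick, hand-checkable example of predict_outcome (illustrative only): with intercept 10 and slope 5, feature values 2 and 3 should give predictions 20 and 25.
###Code
import numpy as np
test_matrix = np.array([[1., 2.], [1., 3.]])   # constant column plus a single feature
test_weights = np.array([10., 5.])             # intercept, slope
print(predict_outcome(test_matrix, test_weights))   # expected: [ 20.  25.]
###Output
_____no_output_____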
###Markdown
If we have the values of a single input feature in an array 'feature' and the prediction 'errors' (predictions - output) then the derivative of the regression cost function with respect to the weight of 'feature' is just twice the dot product between 'feature' and 'errors'. Write a function that accepts a 'feature' array and an 'errors' array and returns the 'derivative' (a single number). e.g. in python:
###Code
def feature_derivative(errors, feature):
    # 'errors' is already (predictions - output), supplied by the caller
    derivative = 2 * np.dot(errors, feature)
    return(derivative)
###Output
_____no_output_____
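###Markdown
A matching sanity check for feature_derivative (illustrative): with errors [1, 1] and feature values [2, 3], the derivative should be 2 * (1*2 + 1*3) = 10.
###Code
import numpy as np
print(feature_derivative(np.array([1., 1.]), np.array([2., 3.])))   # expected: 10.0
###Output
_____no_output_____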
###Markdown
Now we will use our predict_output and feature_derivative to write a gradient descent function. Although we can compute the derivative for all the features simultaneously (the gradient) we will explicitly loop over the features individually for simplicity. Write a gradient descent function that does the following: (1) accepts a numpy feature_matrix 2D array, a 1D output array, an array of initial weights, a step size and a convergence tolerance; (2) while not converged, updates each feature weight by subtracting the step size times the derivative for that feature given the current weights; (3) at each step computes the magnitude/length of the gradient (square root of the sum of squared components); (4) when the magnitude of the gradient is smaller than the input tolerance, returns the final weight vector.
###Code
def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
converged = False
weights = np.array(initial_weights)
while not converged:
# compute the predictions based on feature_matrix and weights:
predictions = np.dot(feature_matrix, weights)
# compute the errors as predictions - output:
errors = predictions - output
gradient_sum_squares = 0 # initialize the gradient
# while not converged, update each weight individually:
for i in range(len(weights)):
# Recall that feature_matrix[:, i] is the feature column associated with weights[i]
# compute the derivative for weight[i]:
derivative = 2* np.dot(errors, feature_matrix[:,i])
# add the squared derivative to the gradient magnitude
gradient_sum_squares += derivative**2
# update the weight based on step size and derivative:
weights[i] = weights[i] - step_size * derivative
        gradient_magnitude = np.sqrt(gradient_sum_squares)
if gradient_magnitude < tolerance:
converged = True
return(weights)
###Output
_____no_output_____
###Markdown
Now we will run the regression_gradient_descent function on some actual data. In particular we will use the gradient descent to estimate the model from Week 1 using just an intercept and slope. Use the following parameters: features: 'sqft_living'; output: 'price'; initial weights: -47000, 1 (intercept, sqft_living respectively); step_size = 7e-12; tolerance = 2.5e7
###Code
simple_features = ['sqft_living']
my_output= 'price'
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
initial_weights = np.array([-47000., 1.])
step_size = 7e-12
tolerance = 2.5e7
###Output
_____no_output_____
###Markdown
Use these parameters to estimate the slope and intercept for predicting prices based only on 'sqft_living'.
###Code
simple_weights = regression_gradient_descent(simple_feature_matrix, output,initial_weights, step_size, tolerance)
###Output
_____no_output_____
###Markdown
Quiz Question: What is the value of the weight for sqft_living -- the second element of 'simple_weights' (rounded to 1 decimal place)?
###Code
simple_weights
###Output
_____no_output_____
###Markdown
Now build a corresponding 'test_simple_feature_matrix' and 'test_output' using test_data. Using 'test_simple_feature_matrix' and 'simple_weights' compute the predicted house prices on all the test data.
###Code
(test_simple_feature_matrix, test_output) = get_numpy_data(test_data,simple_features,my_output)
predicted_house_prices = predict_outcome(test_simple_feature_matrix, simple_weights)
###Output
_____no_output_____
###Markdown
Quiz Question: What is the predicted price for the 1st house in the Test data set for model 1 (round to nearest dollar)?
###Code
predicted_house_prices[0]
###Output
_____no_output_____
###Markdown
Now compute RSS on all test data for this model. Record the value and store it for later
###Code
RSS_model1 = np.sum((predicted_house_prices - test_output)**2)
RSS_model1
###Output
_____no_output_____
###Markdown
Now we will use the gradient descent to fit a model with more than 1 predictor variable (and an intercept). Use the following parameters:
###Code
model_features = ['sqft_living', 'sqft_living15']
my_output = 'price'
(feature_matrix, output) = get_numpy_data(train_data, model_features,my_output)
initial_weights = np.array([-100000., 1., 1.])
step_size = 4e-12
tolerance = 1e9
###Output
_____no_output_____
###Markdown
Note that sqft_living_15 is the average square feet of the nearest 15 neighbouring houses. Run gradient descent on a model with 'sqft_living' and 'sqft_living_15' as well as an intercept with the above parameters. Save the resulting regression weights.
###Code
regression_weights = regression_gradient_descent(feature_matrix, output,initial_weights, step_size, tolerance)
regression_weights
###Output
_____no_output_____
###Markdown
Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 2 (round to nearest dollar)
###Code
(test_feature_matrix, test_output) = get_numpy_data(test_data,model_features,my_output)
predicted_house_prices_model2 = predict_outcome(test_feature_matrix, regression_weights)
predicted_house_prices_model2[0]
###Output
_____no_output_____
###Markdown
What is the actual price for the 1st house in the Test data set
###Code
test_data['price'][0]
###Output
_____no_output_____
###Markdown
Quiz Question: Which estimate was closer to the true price for the 1st house on the TEST data set, model 1 or model 2? Now compute RSS on all test data for the second model. Record the value and store it for later.
###Code
RSS_model2 = np.sum((predicted_house_prices_model2 - test_output)**2)
RSS_model2
###Output
_____no_output_____
###Markdown
Quiz Question: Which model (1 or 2) has lowest RSS on all of the TEST data?
###Code
RSS_model1 > RSS_model2
###Output
_____no_output_____ |
10_pipeline/kubeflow/02_Kubeflow_Pipeline_Simple.ipynb | ###Markdown
Kubeflow Pipelines The [Kubeflow Pipelines SDK](https://github.com/kubeflow/pipelines/tree/master/sdk) provides a set of Python packages that you can use to specify and run your machine learning (ML) workflows. A pipeline is a description of an ML workflow, including all of the components that make up the steps in the workflow and how the components interact with each other. The Kubeflow website has a very detailed explanation of Kubeflow components; please go to [Introduction to the Pipelines SDK](https://www.kubeflow.org/docs/pipelines/sdk/sdk-overview/) for details. Install the Kubeflow Pipelines SDK This guide tells you how to install the [Kubeflow Pipelines SDK](https://github.com/kubeflow/pipelines/tree/master/sdk), which you can use to build machine learning pipelines. You can use the SDK to execute your pipeline, or alternatively you can upload the pipeline to the Kubeflow Pipelines UI for execution. All of the SDK's classes and methods are described in the auto-generated [SDK reference docs](https://kubeflow-pipelines.readthedocs.io/en/latest/). Run the following command to install the Kubeflow Pipelines SDK
###Code
!PYTHONWARNINGS=ignore::yaml.YAMLLoadWarning
!pip install https://storage.googleapis.com/ml-pipeline/release/0.1.29/kfp.tar.gz --upgrade --user
# Restart the kernel to pick up pip installed libraries
from IPython.core.display import HTML
HTML("<script>Jupyter.notebook.kernel.restart()</script>")
###Output
_____no_output_____
###Markdown
Build a Simple Pipeline In this example, we want to calculate the sum of four numbers. 1. Let's assume we have a Python image to use. It accepts two arguments and returns their sum. 2. The sum of a and b is then added to the sum of c and d to produce the final result. In total, we will have three addition operations. Then we use another echo operation to print the result. Create a Container Image for Each Component This assumes that you have already created a program to perform the task required in a particular step of your ML workflow. For example, if the task is to train an ML model, then you must have a program that does the training. Your component can create `outputs` that the downstream components can use as `inputs`. This will be used to build the job's Directed Acyclic Graph (DAG). > In this case, we will use a Python base image to do the calculation, so we skip building our own image. Create a Python Function to Wrap Your Component Define a Python function to describe the interactions with the Docker container image that contains your pipeline component. Here, in order to simplify the process, we use a simple way to calculate the sum. Ideally, you would build a new container image for each code change.
###Code
import kfp
from kfp import dsl
def add_two_numbers(a, b):
return dsl.ContainerOp(
name="calculate_sum",
image="python:3.6.8",
command=["python", "-c"],
arguments=['with open("/tmp/results.txt", "a") as file: file.write(str({} + {}))'.format(a, b)],
file_outputs={
"data": "/tmp/results.txt",
},
)
def echo_op(text):
return dsl.ContainerOp(
name="echo", image="library/bash:4.4.23", command=["sh", "-c"], arguments=['echo "Result: {}"'.format(text)]
)
###Output
_____no_output_____
###Markdown
Define Your Pipeline as a Python Function Describe each pipeline as a Python function.
###Code
@dsl.pipeline(name="Calculate sum pipeline", description="Calculate sum of numbers and prints the result.")
def calculate_sum(a=7, b=10, c=4, d=7):
"""A four-step pipeline with first two running in parallel."""
sum1 = add_two_numbers(a, b)
sum2 = add_two_numbers(c, d)
sum = add_two_numbers(sum1.output, sum2.output)
echo_task = echo_op(sum.output)
###Output
_____no_output_____
###Markdown
Compile the Pipeline Compile the pipeline to generate a compressed YAML definition of the pipeline. The Kubeflow Pipelines service converts the static configuration into a set of Kubernetes resources for execution.
###Code
kfp.compiler.Compiler().compile(calculate_sum, "calculate-sum-pipeline.zip")
!ls -al ./calculate-sum-pipeline.zip
!unzip -o ./calculate-sum-pipeline.zip
!pygmentize pipeline.yaml
###Output
_____no_output_____
###Markdown
Deploy Pipeline There are two ways to deploy the pipeline: either upload the generated pipeline package (here a `.zip` archive) through the `Kubeflow Pipelines UI`, or use the `Kubeflow Pipelines SDK` to deploy it. We will only show SDK usage here.
###Code
client = kfp.Client()
experiment = client.create_experiment(name="kubeflow")
my_run = client.run_pipeline(experiment.id, "calculate-sum-pipeline", "calculate-sum-pipeline.zip")
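# (Optional, illustrative) Block until the run finishes and report its status; the
# wait_for_run_completion() helper is part of the kfp client, and the 600-second
# timeout here is an arbitrary choice for this example.
# run_result = client.wait_for_run_completion(my_run.id, timeout=600)
# print(run_result.run.status)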
###Output
_____no_output_____ |
Diabetes_Early_Detection.ipynb | ###Markdown
Project Title: Diabetes Early Detection Abstract The menace of diabetes cannot be overemphasized, as it is a prevalent disease. It is often not evident in young people, and as such, extensive research is needed to gather facts that can assist in the early detection of diabetes. The main goal of this project is to collect anonymous data on diabetes patients and analyze some of the persistent health conditions observed in their bodies before they were eventually diagnosed with diabetes. The advantage of this model is that it can examine a few features and accurately tell whether a patient is likely to develop diabetes. Introduction Diabetes is a group of diseases that affect how the body uses blood sugar (glucose). Glucose is a vital substance for healthy living, since it is a very important source of energy for the cells that make up the muscles and tissues. In fact, the brain is highly dependent on it as a source of fuel for its functionality. There are numerous underlying causes of diabetes, and they vary by type. However, every type of diabetes has one common effect: it increases the amount of sugar in the body, leading to an excess of it in the blood, which can cause serious health complications. Chronic diabetes conditions include type 1 and type 2 diabetes. There are also potentially reversible forms: prediabetes, which occurs when blood sugar is higher than normal but is not yet classified as diabetes, and gestational diabetes, which occurs during pregnancy but may resolve after the baby is delivered. **Importing necessary libraries for EDA and Data Cleaning**
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='darkgrid')
###Output
_____no_output_____
###Markdown
Load data
###Code
data = pd.read_csv('diabetes.csv')
#Data preview
data.head(5)
data.tail(5)
###Output
_____no_output_____
###Markdown
Check for missing values. There are several methods for checking for missing values: 1. We can start by showing the unique values in categorical data. 2. Use the isnull function to count the number of NaN values, if any exist. 3. We can equally replace missing values with np.nan.
###Code
for i in data.columns:
print(data[i].unique())
data.isnull().sum()
###Output
_____no_output_____
###Markdown
The above result indicates that there are no missing values or data represented by '?' or any other unexpected character. Data summary
###Code
data.describe(include='all')
###Output
_____no_output_____
###Markdown
There are 530 observations, with the average age around 48 years. EDA
###Code
# This function plots the varation of features as against the severity of having diabetes
#Can give us better insights on the powerful deciding features on the the possibility of developing diabetes.
def barPlot(column):
'''
column takes the value of the current column feature which is used to plot against the class of diabetes.
'''
fig = plt.figure(figsize = (15,6))
sns.barplot(column, 'Class', data=data)
plt.show()
###Output
_____no_output_____
###Markdown
Creating a list containing the predictor variables and a separate list which is the target variable. The age is not a factor here since we are more concerned about the features that were recorded and whether the patients eventually came out positive for the diabetes diagnosis. However, we can assign the age to a class of Young, Adult/Middle Age, or Old Age.
###Code
age = ['Age']
targ = ['Class', 'class']
pred_col = [x for x in data.columns if x not in age + targ]
data['Class'] = data['class'].map({'Positive':1, 'Negative':0})
data['Gender'].value_counts()
def ageGroup(x):
if x <= 25:
return 'Young'
elif x > 25 and x < 60:
return 'Middle Age'
else:
return 'Old'
data['Age Group'] = data['Age'].apply(ageGroup)
data['Age Group'].value_counts()
###Output
_____no_output_____
###Markdown
From the above observation, it is safe to ignore the age distribution and focus on other predictors.
###Code
pred_col1 = pred_col + ['Age Group']
for i in pred_col1:
barPlot(i)
#Further investigation on gender
sns.catplot('class', hue='Gender', data=data, kind='count')
###Output
/usr/local/lib/python3.6/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
###Markdown
With 1 representing Positive and 0 representing Negative, the above graphs can be further explained below: Obesity: Obesity does not clearly indicate that the patient is at risk of having diabetes; there is not enough evidence to either dismiss it or accept it as a factor that can lead to diabetes. Alopecia: Having a condition called alopecia slightly suggests that a patient is not at risk of having diabetes; it appears to have little to do with diabetes. Muscle Stiffness: Muscle stiffness might help tell whether a patient is at risk of having diabetes; more patients diagnosed with diabetes showed symptoms of muscle stiffness. Partial Paralysis: This is clear evidence that a patient suffering from this condition is at high risk of having diabetes; more than 2/3 of the patients with this condition eventually had diabetes. Delayed Healing: Delayed healing does not give a clear indication that a patient might be at risk of having diabetes, but it does not rule out that diabetes is present either. Irritability: Irritability is a good indicator of the presence of diabetes in a patient's body, though more conditions would have to be tested to fully ascertain whether the patient is on the verge of having diabetes. Itching: Patients with or without itching showed a similar possibility of having or not having diabetes, so itching on its own is not a good condition for concluding whether a patient has diabetes; other conditions would have to be taken into consideration. Visual Blurring: This is a good indicator of whether a patient might be at risk; more patients with blurry sight proved to be diabetic than those without the condition. Genital Thrush: This is also a good indicator, and with this condition we can tick one of the boxes of underlying health conditions that most likely lead to diabetes. Polyphagia: As seen from the graph, it is a good indicator that highlights the presence of diabetes in a patient. Weakness: Frequent weakness in the body is not a good sign, as more diabetic patients were shown to suffer from this underlying condition. Obesity: Obesity does not clearly indicate the presence of diabetes in a patient. Sudden Weight Loss: Sudden weight loss is clearly a sign that a patient might be at risk of having diabetes. Polydipsia: Polydipsia is a condition whereby a patient has excessive or abnormal thirst; its presence adds clarity to the presence of diabetes in the body. Polyuria: Polyuria is excessive or an abnormally large production or passage of urine. Increased production and passage of urine may also be termed diuresis. Polyuria often appears in conjunction with polydipsia, though it is possible to have one without the other, and the latter may be a cause or an effect. The polyuria chart above further confirms that a patient may be suffering from diabetes. Model Development The problem above, as indicated, is a binary classification and as such will require a binary classification model. I will be testing 3 top machine learning binary models on the dataset and comparing their accuracies, true positives and false negatives. 
In this kind of health-related problem, the true positive and true negative rates are highly important and need to be highly accurate. It would be disastrous to have someone who truly has diabetes predicted as negative (a false negative) and deprived of treatment, or to have someone who does not have diabetes diagnosed as positive and placed on diabetes drugs. The following models will be extensively tested, with different amounts of feature engineering performed on the dataset for each model if need be: Random Forest, Logistic Regression, Neural Network. Random Forest Classification
###Code
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, roc_auc_score, roc_curve, auc
'''
The class below performs operations of split train test, transformation
using either Hotencoder or Labelencoder, conversion of data into numpy array and returns the X_train, y_train, X_test and y_test
'''
class ModDiabetes:
def __init__(self, data, transform, X_cols, y_col):
"""
data: Takes in the object of the dataframe used for analysis
transform: Takes in input string of 'LabelEncoder' for label encoding transformation
or 'Hotencoder' for one hot encoding transformation
X_cols: This is a list containing predictor variables headers. Helps in grabbing
the desired predictors from the dataframe.
y_col: Target variable column inputed as a string.
"""
X = data[X_cols]
y = data[y_col]
X = np.array(X)
y = np.array(y)
#One hot encoder transformation
if transform == 'HotEncode':
enc = OneHotEncoder(sparse=False)
X = enc.fit_transform(X)
#Label encoder transformation
elif transform == 'LabelEncoder':
X = data[X_cols].apply(LabelEncoder().fit_transform)
X = X.values
self.X = X
self.y = y
#Function to preview the X and y arrays.
def preLoad(self):
return self.X, self.y
#Function splits the array into X_train, y_train, X_test and y_test taking into consideration test size and random state
def splitter(self, size, r_s):
"""
r_s: Takes in an integer value specifying a random state value for the train test split
size: Takes in a float between 0.0 and 1.0 specifying the desired size of the test.
"""
X_train, X_test, y_train, y_test = train_test_split(self.X, self.y, random_state=r_s, test_size=size)
return X_train, X_test, y_train, y_test
def dataSet(self):
'''
Function returns an array consisting of X predictors and y target variable.
'''
return self.X, self.y
###Output
_____no_output_____
###Markdown
Using Hot Encoder for comparison of accuracy score, f1 score and precision score
###Code
Model = ModDiabetes(data, 'HotEncode', pred_col, 'Class')
Model.preLoad()
trainX, testX, trainy, testy = Model.splitter(0.3, 0)
mod = RandomForestClassifier(random_state=0)
mod_ = mod.fit(trainX, trainy)
mod_.predict(testX)
print(classification_report(testy, mod.predict(testX)))
###Output
precision recall f1-score support
0 0.97 0.95 0.96 62
1 0.97 0.98 0.97 94
accuracy 0.97 156
macro avg 0.97 0.97 0.97 156
weighted avg 0.97 0.97 0.97 156
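###Markdown
A minimal sketch (assuming scikit-learn's confusion_matrix, imported here for illustration) to see the raw true/false positive and negative counts behind the report above, since those error types matter most in this use case.
###Code
from sklearn.metrics import confusion_matrix
# Rows are actual classes (0 = Negative, 1 = Positive); columns are predicted classes.
print(confusion_matrix(testy, mod.predict(testX)))
###Output
_____no_output_____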
###Markdown
Finding ROC AUC Score
###Code
print(roc_auc_score(testy, mod.predict(testX)))
fpr, tpr, thresholds = roc_curve(testy, mod.predict(testX))
###Output
_____no_output_____
###Markdown
Receiver Operating Characteristic (ROC) Curve
###Code
plt.figure()
lw = 2
plt.plot(fpr, tpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc_score(testy, mod.predict(testX)))
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
Using Label encoder for comparison of f1 score, precision and accuracy
###Code
Model_ = ModDiabetes(data, 'LabelEncoder', pred_col, 'Class')
###Output
_____no_output_____
###Markdown
Splitting the dataset
###Code
trainX_, testX_, trainy_, testy_ = Model_.splitter(0.3, 0)
###Output
_____no_output_____
###Markdown
Model building
###Code
mod_ = RandomForestClassifier(random_state=0)
mod__ = mod_.fit(trainX_, trainy_)
mod__.predict(testX_)
###Output
_____no_output_____
###Markdown
Classification report
###Code
print(classification_report(testy_, mod_.predict(testX_)))
print(roc_auc_score(testy_, mod_.predict(testX_)))
fpr_, tpr_, thresholds_ = roc_curve(testy_, mod_.predict(testX_))
print(auc(fpr_, tpr_))
plt.figure()
lw = 2
plt.plot(fpr_, tpr_, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc_score(testy_, mod_.predict(testX_)))
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
Logistic Regression Generating the class model for logistic classification. The model returns the data split into X arrays and y arrays. The function that splits the data into train and test sets will be called, and the values of Xtrain, Xtest, ytrain and ytest will be stored as variables.
###Code
LogModel = ModDiabetes(data, 'LabelEncoder', pred_col, 'Class' )
#Splitting data into train and test set using 20% as test size
Xtrain, Xtest, ytrain, ytest = LogModel.splitter(0.2, 0)
###Output
_____no_output_____
###Markdown
Xtrain
###Code
Log = LogisticRegression()
Log.fit(Xtrain, ytrain)
Log.predict(Xtest)
###Output
_____no_output_____
###Markdown
Classification Report of the Logistic Regression Model
###Code
print(classification_report(ytest, Log.predict(Xtest)))
###Output
precision recall f1-score support
0 0.95 0.93 0.94 40
1 0.95 0.97 0.96 64
accuracy 0.95 104
macro avg 0.95 0.95 0.95 104
weighted avg 0.95 0.95 0.95 104
###Markdown
Receiver Operating Characteristic (ROC) Curve and Score
###Code
print(roc_auc_score(ytest, Log.predict(Xtest)))
#Lfpr, Ltpr and Lthresholds hold the ROC curve points for the logistic regression model
Lfpr, Ltpr, Lthresholds = roc_curve(ytest, Log.predict(Xtest))
plt.figure()
lw = 2
plt.plot(Lfpr, Ltpr, color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc_score(ytest, Log.predict(Xtest)))
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
Neural Network (Keras)
###Code
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense
from keras import Input
###Output
_____no_output_____
###Markdown
Parsing Data into the Class Function Feature engineering is performed on the dataset; the predictor variables are specified, and the target variable is equally specified.
###Code
NN = ModDiabetes(data, 'HotEncode', pred_col, 'Class')
###Output
_____no_output_____
###Markdown
Grabbing the X and y array dataset.
###Code
X, y = NN.dataSet()
###Output
_____no_output_____
###Markdown
Model development
###Code
model = Sequential()
X.shape
#Create model, add dense layers each by specifying activation function
model.add(Input(shape=(30,)))
model.add(Dense(32, activation='relu'))
model.add(Dense(38, activation='relu'))
model.add(Dense(30, activation='relu'))
model.add(Dense(35, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
#Compile the model using adam gradient descent(optimized)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X, y, epochs=100, batch_size=1)
scores = model.evaluate(X, y)
print("\n%s: %.2f%%" %(model.metrics_names[1], scores[1]*100))
scores
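# Note (illustrative addition): the evaluation above reuses the full training data, so the
# reported accuracy is optimistic. A held-out check could look like the commented sketch
# below (train_test_split is already imported earlier in the notebook):
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
# model.fit(X_tr, y_tr, epochs=100, batch_size=1, verbose=0)
# print(model.evaluate(X_te, y_te))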
###Output
_____no_output_____
###Markdown
Saving the NN Model
###Code
model.save('drive/MyDrive/myModel')
testX
answer = model.predict(np.array([[1,0,1,0,0,1,1,0,0,1,1,0,1,0,1,0,1,0,1,0,0,1,0,1,0,1,1,0,1,0]]))
# The sigmoid output is a probability in (0, 1), so threshold at 0.5 rather than 1.
if answer < 0.5:
    print('Negative')
else:
    print('Positive')
data[pred_col].iloc[[0]]
# NOTE: gender_yes, Polyuria_yes, etc. are assumed to be one-hot encoded feature segments
# defined elsewhere; concatenated, they must match the 30-column layout used for training.
ans2 = model.predict(np.array([np.concatenate((gender_yes, Polyuria_yes, polydi, suddenwe, weak, polypha, genit, visual, itch, irrit, delayed,
                                               partial, muscle, alope, obe), axis=0)]))
if ans2 < 0.5:
    print('Negative')
else:
    print('Positive')
###Output
Negative
|
TBv2_Py-3-Modeling-Cataloging.ipynb | ###Markdown
TechBytes: Using Python with Teradata Vantage Part 3: Modeling with Vantage Analytic Functions - Model CatalogingThe contents of this file are Teradata Public Content and have been released to the Public Domain.Please see _license.txt_ file in the package for more information.Alexander Kolovos and Tim Miller - May 2021 - v.2.0 \Copyright (c) 2021 by Teradata \Licensed under BSDThis TechByte demonstrates how to* invoke and use Vantage analytic functions through their teradataml Python wrapper functions.* use options to display the actual SQL query submitted by teradataml to the Database.* persist analytical results in teradataml DataFrames as Database tables.* train and score models in-Database with Vantage analytic functions. A use case is shown with XGBoost and Decision Forest analyses, where we employ Vantage Machine Learning (ML) Engine analytic functions to predict the propensity of bank customers to open a new credit card account. The example further demonstrates a comparison of the 2 models via confusion matrix analysis.* save, inspect, retrieve, and reuse models created with Vantage analytic functions by means of the teradataml Model Cataloging feature._Note_: To use Model Cataloging on your target Advanced SQL Engine, visit first the teradataml page on the website downloads.teradata.com, and ask your Database administrator to install and enable this feature on your Vantage system.Contributions by:- Alexander Kolovos, Sr Staff Software Architect, Teradata Product Engineering / Vantage Cloud and Applications.- Tim Miller, Principal Software Architect, Teradata Product Management / Advanced Analytics. Initial Steps: Load libraries and create a Vantage connection
###Code
# Load teradataml and dependency packages.
#
import os
import getpass as gp
from teradataml import create_context, remove_context, get_context
from teradataml import DataFrame, copy_to_sql, in_schema
from teradataml.options.display import display
from teradataml import XGBoost, XGBoostPredict, ConfusionMatrix
from teradataml import DecisionForest, DecisionForestEvaluator, DecisionForestPredict
from teradataml import save_model, list_models, describe_model, retrieve_model
from teradataml import publish_model, delete_model
from teradataml import db_drop_table
import pandas as pd
import numpy as np
# Specify a Teradata Vantage server to connect to. In the following statement,
# replace the following argument values with strings as follows:
# <HOST> : Specify your target Vantage system hostname (or IP address).
# <UID> : Specify your Database username.
# <PWD> : Specify your password. You can also use encrypted passwords via
# the Stored Password Protection feature.
#con = create_context(host = <HOST>, username = <UID>, password = <PWD>,
# database = <DB_Name>, "temp_database_name" = <Temp_DB_Name>)
#
con = create_context(host = "<Host_Name>", username = "<Username>",
password = gp.getpass(prompt='Password:'),
logmech = "LDAP", database = "TRNG_TECHBYTES",
temp_database_name = "<Database_Name>")
# Create a teradataml DataFrame from the ADS we need, and take a glimpse at it.
#
td_ADS_Py = DataFrame("ak_TBv2_ADS_Py")
td_ADS_Py.to_pandas().head(5)
# Split the ADS into 2 samples, each with 60% and 40% of total rows.
# Use the 60% sample to train, and the 40% sample to test/score.
# Persist the samples as tables in the Database, and create DataFrames.
#
td_Train_Test_ADS = td_ADS_Py.sample(frac = [0.6, 0.4])
Train_ADS = td_Train_Test_ADS[td_Train_Test_ADS.sampleid == "1"]
copy_to_sql(Train_ADS, table_name="ak_TBv2_Train_ADS_Py", if_exists="replace")
td_Train_ADS = DataFrame("ak_TBv2_Train_ADS_Py")
Test_ADS = td_Train_Test_ADS[td_Train_Test_ADS.sampleid == "2"]
copy_to_sql(Test_ADS, table_name="ak_TBv2_Test_ADS_Py", if_exists="replace")
td_Test_ADS = DataFrame("ak_TBv2_Test_ADS_Py")
###Output
_____no_output_____
###Markdown
1. Using the ML Engine analytic functionsAssume the use case of predicting credit card account ownership based on independent variables of interest. We will be training models, scoring the test data with them, comparing models and storing them for retrieval.
###Code
# Use the teradataml option to print the SQL code of calls to Advanced SQL
# or ML Engines analytic functions.
#
display.print_sqlmr_query = True
###Output
_____no_output_____
###Markdown
1.1. Model training and scoring with XGBoost
###Code
# First, construct a formula to predict Credit Card account ownership based on
# the following independent variables of interest:
#
formula = "cc_acct_ind ~ income + age + tot_cust_years + tot_children + female_ind + single_ind " \
"+ married_ind + separated_ind + ca_resident_ind + ny_resident_ind + tx_resident_ind " \
"+ il_resident_ind + az_resident_ind + oh_resident_ind + ck_acct_ind + sv_acct_ind " \
"+ ck_avg_bal + sv_avg_bal + ck_avg_tran_amt + sv_avg_tran_amt"
# Then, train an XGBoost model to predict Credit Card account ownership on the
# basis of the above formula.
#
td_xgboost_model = XGBoost(data = td_Train_ADS,
id_column = 'cust_id',
formula = formula,
num_boosted_trees = 4,
loss_function = 'binomial',
prediction_type = 'classification',
reg_lambda =1.0,
shrinkage_factor = 0.1,
iter_num = 10,
min_node_size = 1,
max_depth = 6
)
#print(td_xgboost_model)
print("Training complete.")
# Score the XGBoost model against the holdout and compare actuals to predicted.
#
td_xgboost_predict = XGBoostPredict(td_xgboost_model,
newdata = td_Test_ADS,
object_order_column = ['tree_id','iter','class_num'],
id_column = 'cust_id',
terms = 'cc_acct_ind',
num_boosted_trees = 4
)
# Persist the XGBoostPredict output
#
try:
db_drop_table("ak_TBv2_Py_XGBoost_Scores")
except:
pass
td_xgboost_predict.result.to_sql(if_exists = "replace", table_name = "ak_TBv2_Py_XGBoost_Scores")
td_XGBoost_Scores = DataFrame("ak_TBv2_Py_XGBoost_Scores")
td_XGBoost_Scores.head(5)
###Output
_____no_output_____
###Markdown
1.2. Model training and scoring with Decision Forests
###Code
# In a different approach, train a Decision Forest model to predict the same
# target, so we can compare and see which algorithm best fits the data.
#
td_decisionforest_model = DecisionForest(formula = formula,
data = td_Train_ADS,
tree_type = "classification",
ntree = 500,
nodesize = 1,
variance = 0.0,
max_depth = 12,
mtry = 5,
mtry_seed = 100,
seed = 100
)
#print(td_decisionforest_model)
print("Training complete.")
# Call the DecisionForestEvaluator() function to determine the most important
# variables in the Decision Forest model.
#
td_decisionforest_model_evaluator = DecisionForestEvaluator(object = td_decisionforest_model,
num_levels = 5)
# In the following, the describe() method provides summary statistics across
# trees over grouping by each variable. One can consider the mean importance
# across all trees as the importance for each variable.
#
td_variable_importance = td_decisionforest_model_evaluator.result.select(["variable_col", "importance"]).groupby("variable_col").describe()
print(td_variable_importance)
#print("Variable importance analysis complete.")
# Score the Decision Forest model
#
td_decisionforest_predict = DecisionForestPredict(td_decisionforest_model,
newdata = td_Test_ADS,
id_column = "cust_id",
detailed = False,
terms = ["cc_acct_ind"]
)
# Persist the DecisionForestPredict output
try:
db_drop_table("ak_TBv2_Py_DecisionForest_Scores")
except:
pass
copy_to_sql(td_decisionforest_predict.result, if_exists = "replace",
table_name="ak_TBv2_Py_DecisionForest_Scores")
td_DecisionForest_Scores = DataFrame("ak_TBv2_Py_DecisionForest_Scores")
td_DecisionForest_Scores.head(5)
###Output
_____no_output_____
###Markdown
1.3. Inspect the 2 modeling approaches through their Confusion Matrix
###Code
# Look at the confusion matrix for the XGBoost model.
#
confusion_matrix_XGB = ConfusionMatrix(data = td_XGBoost_Scores,
reference = "cc_acct_ind",
prediction = "prediction"
)
print(confusion_matrix_XGB)
# Look at the confusion matrix for the Decision Forest model.
#
confusion_matrix_DF = ConfusionMatrix(data = td_DecisionForest_Scores,
reference = "cc_acct_ind",
prediction = "prediction"
)
print(confusion_matrix_DF)
###Output
_____no_output_____
###Markdown
2. Model CatalogingTools to save, inspect, retrieve, and reuse models created either in the Advanced SQL Engine or the ML Engine.
###Code
# Save the XGBoost and Decision Forest models.
#
save_model(model = td_xgboost_model, name = "ak_TBv2_Py_CC_XGB_model",
description = "TechBytes (Python): XGBoost for credit card analysis")
save_model(model = td_decisionforest_model, name = "ak_TBv2_Py_CC_DF_model",
description = "TechBytes (Python): DF for credit card analysis")
# Inspect presently saved models.
#
list_models()
# Print details about a specific model.
#
describe_model(name = "ak_TBv2_Py_CC_DF_model")
# Recreate a teradataml Analytic Function object from the information saved
# with the Model Catalog
td_retrieved_DF_model = retrieve_model("ak_TBv2_Py_CC_DF_model")
# Assume that on the basis of the earlier model comparison, we choose to keep
# the Decision Forests model and discard the XGBoost one.
#
# The publish_model() function enables sharing the selected models with
# other users, and specifying a status among the available options
# of "In-Development", "Candidate", "Active", "Production", and "Retired".
#
publish_model("ak_TBv2_Py_CC_DF_model", grantee = "public", status = "Active")
# Discarding a model no longer needed.
#
delete_model("ak_TBv2_Py_CC_DF_model")
delete_model("ak_TBv2_Py_CC_XGB_model")
###Output
_____no_output_____
###Markdown
End of session
###Code
# Remove the context of present teradataml session and terminate the Python
# session. It is recommended to call the remove_context() function for session
# cleanup. Temporary objects are removed at the end of the session.
#
remove_context()
###Output
_____no_output_____ |
_notebooks/2022-01-15-spirals.ipynb | ###Markdown
Generating Spirals> Using polar co-ordinates to better understand circles and spirals- toc:true- badges: true- comments: true- author: Ishaan- categories: [maths, curves]
###Code
#hide
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation, rc
%matplotlib inline
rc('animation', html='html5')
###Output
_____no_output_____
###Markdown
CircleIf a line is rotated about a point, any fixed point on the line traces out a circle. This is illustrated in the animation below where a green point and a red point on the blue stick are seen to trace out their circles when the stick completes a full rotation.
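In polar co-ordinates this is simply the statement that the radius stays fixed while the angle sweeps through a full turn: $$r(\theta) = a, \qquad 0 \le \theta \le 2\pi,$$ or equivalently $x = a\cos\theta$, $y = a\sin\theta$ in Cartesian co-ordinates. In the animation below the two fixed radii are $a_1 = 5$ for the green point and $a_2 = 10$ for the red point.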
###Code
#hide_input
a1 = 5
a2 = 10
fig = plt.figure()
ax = fig.add_subplot(111, projection='polar')
ax.set_aspect('equal')
line1, = ax.plot([0, 0],[0,a2], 'b', lw=2)
line2, = ax.plot([],[], 'g.', lw=1)
line3, = ax.plot([],[], 'r.', lw=1)
rs2 = []
rs3 = []
thetas = []
ax.set_ylim(0, a2)
def animate(theta):
line1.set_data([theta, theta],[0,a2])
rs2.append(a1)
rs3.append(a2)
thetas.append(theta)
line2.set_data(thetas, rs2)
line3.set_data(thetas, rs3)
return line1,line2,line3
# create animation using the animate() function
frames = np.linspace(0,2*np.pi,60)
anim = animation.FuncAnimation(fig, animate, frames=frames, \
interval=50, blit=True, repeat=False)
plt.close() # Important: Gives an additional figure if omitted
anim
###Output
_____no_output_____
###Markdown
SpiralsBut if a point is allowed to be moved outwards along the stick while the stick is being rotated, it traces out a spiral. If the outward movement is directly proportional to the angle of rotation, we get a *Linear spiral* or *Archimedean spiral* (blue). In other words, the point's linear velocity along the stick is constant just like the angular velocity of the stick's rotation. In polar co-ordinates $$r = a\theta$$ If the linear velocity along the stick increases exponentially with the angle of rotation, we get a *Logarithmic spiral* (red). In polar co-ordinates $$ r = ae^{b\theta}$$
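A quick way to tell the two apart: for the Archimedean spiral the gap between successive turns is constant, $r(\theta + 2\pi) - r(\theta) = 2\pi a$, whereas for the logarithmic spiral every full turn multiplies the radius by the same factor, $r(\theta + 2\pi)/r(\theta) = e^{2\pi b}$ (roughly $3.5$ for the $b = 0.2$ used below), which is why the red curve pulls away from the origin so much faster.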
###Code
#hide_input
fig = plt.figure()
ax = fig.add_subplot(111, projection='polar')
ax.set_aspect('equal')
line1, = ax.plot([],[],'b.', lw=1)
line2, = ax.plot([],[],'r.', lw=1)
# Params
# Note N = 6, a = 1.00, b = 0.20, ax.set_ylim(0,50) works well for illustration
N = 6
a = 1.00
b = 0.20
R_MIN = 0
R_MAX = 50
N_PI = N * np.pi
rs1 = []
rs2 = []
thetas = []
ax.set_ylim(R_MIN,R_MAX)
def animate(theta):
rs1.append(a*theta)
rs2.append(a*np.exp(b*theta))
thetas.append(theta)
line1.set_data(thetas, rs1)
line2.set_data(thetas, rs2)
return line1,line2
# create animation using the animate() function
anim_rt_spirals = animation.FuncAnimation(fig, animate, frames=np.arange(0.0, N_PI, 0.1), \
interval=20, blit=True, repeat=False)
plt.close()
anim_rt_spirals
###Output
_____no_output_____ |
exercises/09-Yelp_reviews.ipynb | ###Markdown
Exercise 9 Naive Bayes with Yelp review text Using the yelp reviews database create a Naive Bayes model to predict the star rating for reviewsRead `yelp.csv` into a DataFrame.
###Code
# access yelp.csv using a relative path
import pandas as pd
yelp = pd.read_csv('yelp.csv')
yelp.head(1)
###Output
_____no_output_____
###Markdown
Create a new DataFrame that only contains the 5-star and 1-star reviews.
###Code
# filter the DataFrame using an OR condition
yelp_best_worst = yelp[(yelp.stars==5) | (yelp.stars==1)]
###Output
_____no_output_____
###Markdown
Split the new DataFrame into training and testing sets, using the review text as the only feature and the star rating as the response.
###Code
# define X and y
X = yelp_best_worst.text
y = yelp_best_worst.stars
# split into training and testing sets
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn releases
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
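# --- Added sketch (not part of the original exercise solution) ---
# One plausible way to finish the exercise: turn the review text into token
# counts and fit a Multinomial Naive Bayes classifier, then evaluate it on the
# held-out reviews. The names vect, nb and y_pred_class are arbitrary.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics

# learn the training vocabulary and transform both splits into document-term matrices
vect = CountVectorizer()
X_train_dtm = vect.fit_transform(X_train)
X_test_dtm = vect.transform(X_test)

# fit the model and predict the star rating (1 or 5) for the test reviews
nb = MultinomialNB()
nb.fit(X_train_dtm, y_train)
y_pred_class = nb.predict(X_test_dtm)

# evaluate the predictions
print(metrics.accuracy_score(y_test, y_pred_class))
print(metrics.confusion_matrix(y_test, y_pred_class))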
###Output
_____no_output_____ |
templates/template3_rename_parts_in_plan.ipynb | ###Markdown
Rename parts This template is used for cases where we have an assembly plan with undomesticated and unprefixed part names.1. a. Create an assembly plan using `template3_plan_template.ods`. Use the `original_names` sheet and specify the position prefixes used in GeneDom in the header, to create a final plan in the `plan` sheet. b. Alternatively, use the first section below to add prefixes, based on a column: prefix lookup list.2. Export the final plan to csv (`template3_plan.csv`).3. Run the below code to create new sequence files that are named according to the plan. These can be domesticated with GeneDom. --- Optional section (1b): prefix part names in planParameters:
###Code
# Note that the first prefix is empty, for the backbone, but may be utilised in other cases:
column_prefixes = ["", "e1e2", "e2e3", "e3e4", "e4e5", "e5e0"]
path_to_plan_csv = "template3_plan_noprefix.csv"
prefixed_plan_path = "template3_plan_prefixed.csv"
import pandas as pd
plan = pd.read_csv(path_to_plan_csv, header=None)
prefixes = [""]
for prefix in column_prefixes:
if prefix == "":
prefixes += [prefix]
else: # not empty, need separator character
prefixes += [prefix + "_"]
for col in plan.columns:
prefix = prefixes[col]
plan[col] = prefix + plan[col].astype(str)
plan.to_csv(prefixed_plan_path, header=None, index=None)
###Output
_____no_output_____
###Markdown
--- Section 3:Parameters:
###Code
dir_to_process = "original_parts/"
assembly_plan_path = "template3_plan.csv"
export_dir = "prefixed_sequences/"
###Output
_____no_output_____
###Markdown
Load in the part sequence files. This assumes that the file names are the sequence IDs:
###Code
import dnacauldron as dc
seq_records = dc.biotools.load_records_from_files(folder=dir_to_process, use_file_names_as_ids=True)
seq_records_names = [record.id for record in seq_records]
print(len(seq_records))
###Output
_____no_output_____
###Markdown
Read plan and obtain the part names:
###Code
import pandas as pd
plan = pd.read_csv(assembly_plan_path, header=None)
plan
l = plan.iloc[:, 2:].values.tolist() # first column is construct name, second column is backbone
flat_list = [item for sublist in l for item in sublist if str(item) != 'nan']
parts_in_plan = list(set(flat_list))
parts_in_plan
###Output
_____no_output_____
###Markdown
Make a dictionary, find a record with matching name, save with new name in another list(some records may be exported into multiple variants, if the same part is used in multiple positions):
###Code
dict_pos_name = {}
for part in parts_in_plan:
part_cut = part.split('_', 1)[1] # we split at the first underscore
dict_pos_name[part] = part_cut
dict_pos_name
import copy
pos_records = [] # collects records with position prefix added
for pos_name, old_name in dict_pos_name.items():
for record in seq_records:
if record.id == old_name:
new_record = copy.deepcopy(record)
new_record.name = pos_name
new_record.id = pos_name
pos_records.append(new_record)
break
###Output
_____no_output_____
###Markdown
Save sequences
###Code
import os
from Bio import SeqIO  # needed for SeqIO.write below
os.makedirs(export_dir, exist_ok=True)  # create the export folder if it does not already exist
for record in pos_records:
filepath = os.path.join(export_dir, (record.name + ".gb"))
with open(filepath, "w") as output_handle:
SeqIO.write(record, output_handle, "genbank")
###Output
_____no_output_____ |
Python/data_science/data_analysis/06-Data-Visualization-with-Seaborn/07-Seaborn Exercises.ipynb | ###Markdown
The DataWe will be working with a famous titanic data set for these exercises. Later on in the Machine Learning section of the course, we will revisit this data, and use it to predict survival rates of passengers. For now, we'll just focus on the visualization of the data with seaborn:
###Code
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_style('whitegrid')
titanic = sns.load_dataset('titanic')
titanic.head()
###Output
_____no_output_____
###Markdown
Exercises** Recreate the plots below using the titanic dataframe. There are very few hints since most of the plots can be done with just one or two lines of code and a hint would basically give away the solution. Keep careful attention to the x and y labels for hints.**** *Note! In order to not lose the plot image, make sure you don't code in the cell that is directly above the plot, there is an extra cell above that one which won't overwrite that plot!* **
###Code
# CODE HERE
# REPLICATE EXERCISE PLOT IMAGE BELOW
# BE CAREFUL NOT TO OVERWRITE CELL BELOW
# THAT WOULD REMOVE THE EXERCISE PLOT IMAGE!
sns.jointplot(x='fare', y='age', data=titanic, kind='scatter', xlim=(-100, 600))
# CODE HERE
# REPLICATE EXERCISE PLOT IMAGE BELOW
# BE CAREFUL NOT TO OVERWRITE CELL BELOW
# THAT WOULD REMOVE THE EXERCISE PLOT IMAGE!
sns.distplot(titanic['fare'],kde=False, bins=30, color='red')
# CODE HERE
# REPLICATE EXERCISE PLOT IMAGE BELOW
# BE CAREFUL NOT TO OVERWRITE CELL BELOW
# THAT WOULD REMOVE THE EXERCISE PLOT IMAGE!
sns.boxplot(x='class', y='age', data=titanic, palette='rainbow')
# CODE HERE
# REPLICATE EXERCISE PLOT IMAGE BELOW
# BE CAREFUL NOT TO OVERWRITE CELL BELOW
# THAT WOULD REMOVE THE EXERCISE PLOT IMAGE!
sns.swarmplot(x='class', y='age', data=titanic, palette='Set2')
# CODE HERE
# REPLICATE EXERCISE PLOT IMAGE BELOW
# BE CAREFUL NOT TO OVERWRITE CELL BELOW
# THAT WOULD REMOVE THE EXERCISE PLOT IMAGE!
sns.countplot(x='sex', data=titanic)
# CODE HERE
# REPLICATE EXERCISE PLOT IMAGE BELOW
# BE CAREFUL NOT TO OVERWRITE CELL BELOW
# THAT WOULD REMOVE THE EXERCISE PLOT IMAGE!
sns.heatmap(titanic.corr(), cmap='coolwarm')
plt.title('titanic.corr()')
titanic.head()
# CODE HERE
# REPLICATE EXERCISE PLOT IMAGE BELOW
# BE CAREFUL NOT TO OVERWRITE CELL BELOW
# THAT WOULD REMOVE THE EXERCISE PLOT IMAGE!
g = sns.FacetGrid(titanic, col='sex')
g.map(plt.hist, 'age') # or g.map(sns.distplot, 'age')
###Output
_____no_output_____ |
rms_titanic_eda.ipynb | ###Markdown
PREAMBLE PREAMBLEIn this project, I will analyze The Titanic dataset and then communicate my findings about it, using the Python libraries NumPy, Pandas, and Matplotlib to make my analysis easier.**What do I need to install?**I need an installation of _Python_, plus the following libraries:pandasnumpymatplotlibcsv or unicodecsvinstalling _Anaconda_ is the best option, which comes with all of the necessary packages, as well as IPython notebook. **Why this Project?**This project will introduce me to the data analysis process. In this project, I will go through the entire process so that I know how all the pieces fit together. In this project, I will also gain experience using the Python libraries NumPy, Pandas, and Matplotlib, which make writing data analysis code in Python a lot easier!**What will I learn?**After completing the project, I will:Know all the steps involved in a typical data analysis process,Be comfortable posing questions that can be answered with a given dataset and then answering those questions,Know how to investigate problems in a dataset and wrangle the data into a format that can be usedHave practice communicating the results of my analysisBe able to use vectorized operations in NumPy and Pandas to speed up data analysis codeBe familiar with Pandas' Series and DataFrame objects, which us access data more convenientlyKnow how to use Matplotlib to produce plots showing my findings**Why is this Important to my Career?**This project will show off a variety of data analysis skills, as well as showing everyone that I know how to go through the entire data analysis process. RMS TitanicThe RMS Titanic was a British passenger liner that sank in the North Atlantic Ocean in the early morning hours of _15 April 1912_, after it collided with an iceberg during its maiden voyage from _Southampton to New York City_. There were an estimated _2,224_ passengers and crew aboard the ship, and more than _1,500_ died, making it one of the deadliest commercial peacetime maritime disasters in modern history. The RMS Titanic was the largest ship afloat at the time it entered service and was the second of three Olympic-class ocean liners operated by the _White Star Line_.The Titanic was built by the _Harland and Wolff shipyard in Belfast_. Thomas Andrews, her architect, died in the disaster._The Titanic sits near the dock at Belfast, Northern Ireland soon before starting its maiden voyage. Circa April 1912_. EDA _img courtesy of [Data Camp](https://www.datacamp.com)_ Exploratory Data Analysis of The Titanic Data SetThis Data set consists of passengers of the Titanic.The essence of this analysis is to provide more insights about the Titanic data set. I would go through this analysis with an open mind, looking at the data, Asking relevant questions, evaluating metrics and displaying similarities and or differences in data variables that may have been consequential in affecting : * _**Passengers that survived**_, * _**Passengers that died**_, * _**Any other valuable insights from the data**_. Let's begin by importing some Libraries for data analysis and visualization
###Code
import numpy as np # for numerical analysis
import pandas as pd # for a tabular display of the data
import matplotlib as mpl # for visualization
import matplotlib.pyplot as plt # for visualization using the scripting layer
import seaborn as sns # for advanced visualization
import sklearn # for prediction or machine learning
import folium # for creating interactive maps
from PIL import Image # converting images into arrays
from wordcloud import WordCloud, STOPWORDS # for word cloud creation
!pip install pywaffle
from pywaffle import Waffle # for waffle charts creation
print('All modules imported successfully')
###Output
Requirement already satisfied: pywaffle in /usr/local/lib/python3.6/dist-packages (0.2.1)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.6/dist-packages (from pywaffle) (3.0.3)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.6/dist-packages (from matplotlib->pywaffle) (0.10.0)
Requirement already satisfied: python-dateutil>=2.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->pywaffle) (2.5.3)
Requirement already satisfied: numpy>=1.10.0 in /usr/local/lib/python3.6/dist-packages (from matplotlib->pywaffle) (1.16.3)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->pywaffle) (1.1.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /usr/local/lib/python3.6/dist-packages (from matplotlib->pywaffle) (2.4.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from cycler>=0.10->matplotlib->pywaffle) (1.12.0)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib->pywaffle) (41.0.1)
All modules imported successfully
###Markdown
Loading The Titanic Data set to a pandas Data Frame: Note that the Titanic Data set we would use for this analysis can be downloaded from the project lesson of _**Udacity- Intro to Data Analysis course**_,Which is a free course available at [Udacity](https://www.udacity.com/course/intro-to-data-analysis--ud170) Another way to directly load a copy of this data set to a Data Frame, is from the Seaborn Library data sets, just like this:(_**although for this project we would stick to the data set from Udacity**_)
###Code
# Loading Titanic data set into a pandas dataframe from seaborn library in just one line of code
titanic_df = sns.load_dataset('titanic')
# Visualizing the first 3 rows of the data frame
titanic_df.head(3)
###Output
_____no_output_____
###Markdown
So let's get to our Titanic Data set from Udacity. I preloaded it in github for easy access to colab. So we would import the data set from github.Features of The Titanic Data Set.**PassengerId** - Numeric Id for each passenger onboard**survived** - Survival (0 = No; 1 = Yes)**Pclass** - Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)**name** - Name**sex** - Sex**age** - Age**sibsp** - Number of Siblings/Spouses Aboard**parch** - Number of Parents/Children Aboard**ticket** - Ticket Number**fare** - Passenger Fare**cabin** - Cabin**embarked** - Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton) Importing The raw Titanic Data set from github
###Code
titanic_data = 'https://raw.githubusercontent.com/Blackman9t/EDA/master/titanic_data.csv'
###Output
_____no_output_____
###Markdown
Reading it into a Pandas Data Frame Although pandas has a robust missing-values detection algorithm, experience has taught us that some missing value types may go undetected, unless we hard-code them.Let's add some more possible missing value types to the default pandas collection
###Code
# Making a list of additional missing value types added to the default NA type that pandas can detect
missing_values = ["n/a", "na", "--",'?']
titanic_df = pd.read_csv(titanic_data, na_values = missing_values)
# Let's view the first 10 entries of the data set
titanic_df.head(10)
###Output
_____no_output_____
###Markdown
Let's check the shape to know how many total rows and columns are involved
###Code
titanic_df.shape
###Output
_____no_output_____
###Markdown
This tells us there are 891 passenger entries in the Titanic and 12 passenger features... Let's see the default summary statistics of the Data set
###Code
titanic_df.describe(include='all')
# By default only numeric columns are computed.
# If we want to view summary statistics for all columns then run; titanic_df.describe(include='all')
###Output
_____no_output_____
###Markdown
We can see from the summary statistics that: the average survival rate when The Titanic sank was only 38%; the Age column has only 714 numeric entries, as against 891 for the rest of the numeric columns; the average (mean) age of passengers aboard the Titanic was about 30 years; the oldest passenger was 80 years old, while the minimum age was less than a year... we can investigate that further. Also, the average passenger fare was about 32 pounds, while the most expensive tickets sold for slightly above 500 pounds... interesting. Let's look at the data types of all the columns to confirm that the right data types are in place before we start the analysis:
###Code
titanic_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId 891 non-null int64
Survived 891 non-null int64
Pclass 891 non-null int64
Name 891 non-null object
Sex 891 non-null object
Age 714 non-null float64
SibSp 891 non-null int64
Parch 891 non-null int64
Ticket 891 non-null object
Fare 891 non-null float64
Cabin 204 non-null object
Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.6+ KB
###Markdown
Okay, all numerical columns have the right int or float type to make vectorized computations easy, the rest are also in good order. **Next let's check for the number of NaN or unknown values per column.**
###Code
titanic_df.isna().sum()
###Output
_____no_output_____
###Markdown
We can clearly see that almost all columns are clean except: Age, with 177 missing values; Cabin, with 687 missing values; and Embarked, with 2 missing values. We shall deal with these soon. To find the total number of NaN values
###Code
titanic_df.isna().sum().sum()
###Output
_____no_output_____
###Markdown
To check if missing values in one column
###Code
titanic_df.Age.isna().values.any()
###Output
_____no_output_____
###Markdown
Visualizing missing values
###Code
import missingno as msno
msno.bar(titanic_df, figsize=(12, 6), fontsize=12, color='steelblue')
plt.show()
###Output
_____no_output_____
###Markdown
Fixing the missing Age Column: Let's visualize the age column with a histogram, to see the distribution of ages
###Code
x = titanic_df.Age.copy(deep=True)
x.dropna(axis=0, inplace=True)
len(x)
count, bin_edges = np.histogram(x, bins=10, range=(0,100))
print('count is',count,'\nBin edges are:',bin_edges)
plt.hist(x, bins=bin_edges, edgecolor='black')
plt.title('Histogram of Age Distribution.')
plt.xlabel('Age-Range')
plt.ylabel('Frequency')
plt
plt.show()
###Output
_____no_output_____
###Markdown
Let's print out the measures of central tendency of the distribution
###Code
mode_age = (x.mode())[0]
mean_age = x.mean()
median_age = x.median()
print('Mean age is',mean_age)
print('Median age is',median_age)
print('Mode age is',mode_age)
###Output
Mean age is 29.69911764705882
Median age is 28.0
Mode age is 24.0
###Markdown
We will now summarize the main features of the distribution of ages as it appears from the histogram:**Shape:** The distribution of ages is skewed right. This means we have a concentration of data of young people in The Titanic, And a progressively fewer number of older people, making the histogram to skew to the right.It is also **unimodal** in shape, with just one dominant mode range of passenger ages between 20 - 30 years.**Center:** The data seem to be centered around 28 years old. Note that this implies that roughly half the passengers in the Titanic are less than 30 years old.This is also reflected by a mean, median and modal age of 30 years.**Spread:** The data range is from about 0 to about 80, so the approximate range equals 80 - 0 = 80.**Outliers:** There are no outliers in the Age data as all values seem evenly distributed, with a steady decrease of the number of passengers above the 30 - 40 age group.We can conclude that The Titanic had more passengers in the age range 0 to 30 years,And the most frequent age-range of all Titanic passengers was 20 - 30 years of age. finally we shall define a method that randomly replaces the missing age values with either the mean, median or mode values.
###Code
def rand_age(x):
""" Takes a value x, and returns the float form of x.
If x gives an error, then return either the mode, median or mean age"""
try:
int(x)
return float(x)
except:
i = [mean_age, mode_age, median_age]
y = np.random.randint(0, len(i))
return i[y]
###Output
_____no_output_____
###Markdown
Next, lets apply that method to the age column
###Code
titanic_df.Age = titanic_df.Age.apply(rand_age)
###Output
_____no_output_____
###Markdown
Let's confirm the changes
###Code
titanic_df.Age.isna().any()
###Output
_____no_output_____
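###Markdown
Added aside: earlier we noted that the Embarked and Cabin columns also contain missing values. One simple way to handle them, sketched below, is to fill the two missing Embarked entries with the most common port and to treat Cabin only as a known/unknown indicator, since 687 of its values are missing.
###Code
# Fill the 2 missing Embarked values with the most frequent port of embarkation.
most_common_port = titanic_df.Embarked.mode()[0]
titanic_df.Embarked = titanic_df.Embarked.fillna(most_common_port)

# Cabin is mostly unknown, so rather than imputing it we can simply count
# how many passengers have a recorded cabin at all.
titanic_df.Cabin.notna().value_counts()
###Output
_____no_output_____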
###Markdown
Let's look at the 3 different classes of passengers: First let's check the distribution of passengers in each class
###Code
# we need to be sure that there are only 3 classses (3, 2, 1) in the data set.
# let's use the unique method of pandas to verify
titanic_df.Pclass.unique()
# next let's check the distribution size of each class of passengers
# we can easily do this with pandas groupby function.
# Let's group Pclasss by size and cast to a Data Frame.
classes = titanic_df.groupby('Pclass').size().to_frame()
# Let's rename the column
classes.rename(columns={0:'total'}, inplace=True)
# Let's customize the index
classes.index = ['1st Class','2nd Class','3rd Class']
# and display the result
classes
###Output
_____no_output_____
###Markdown
We can see that out of `891 passengers` aboard the Titanic, `216` were in 1st Class, `184` in 2nd class and a whopping `491` in 3rd Class. Visualizing Passenger Distribution of The Titanic Using Bar and Pie plots.
###Code
plt.figure(figsize=(18, 6))
sns.set(font_scale=1.2)
sns.set_style('ticks') # change background to white background
plt.suptitle('Visualizing Passenger Distribution of The Titanic using a Bar and Pie plot', y=1.05)
# For The Bar chart
plt.subplot(121)
color_list = ['gold','purple','brown']
plt.bar(x=classes.index, height=classes.total, data=classes, color= color_list, width=0.5)
plt.title('Bar Chart showing Distribution of Passengers in The Titanic')
plt.xlabel('Classes')
plt.ylabel('Number of Passengers')
for x,y in zip(classes.index, classes.total):
label = round(y,2) # could also be written as:- "{:.2f}".format(y)
plt.annotate(label, # this is the text
(x,y), # this is the point to label
textcoords="offset points", # how to position the text
xytext=(0,4), # distance from text to points (x,y)
ha='center',) # horizontal alignment can be left, right or center
# For The Pie chart
plt.subplot(122)
plt.pie(classes.total,
data=classes,
autopct='%1.1f%%',
colors=color_list,
startangle=90,
shadow=True,
pctdistance=1.15)
plt.title('Pie Chart showing Percentage Distribution of Passengers in The Titanic')
plt.axis('equal')
plt.legend(labels=classes.index, loc='upper right')
plt.show()
###Output
_____no_output_____
###Markdown
Average Ticket Fares to The Titanic Per Passenger Class The above plots represent the norm in most human activities, regular tickets often sell more than VIP tickets.Let's find out the average and max ticket fares for 1st Class, 2nd Class and 3rd Class passengers
###Code
# first we form two groups of average and max ticket fares per class
ave_ticket_per_class = titanic_df[['Pclass','Fare']].groupby('Pclass').mean()
max_ticket_per_class = titanic_df[['Pclass','Fare']].groupby('Pclass').max()
# Next we group them together
ave_ticket_per_class['Max_Fare'] = max_ticket_per_class['Fare']
ave_ticket_per_class.rename(columns={'Fare':'Ave_Fare'},inplace=True)
# Finally rename it
ticket_range = ave_ticket_per_class
# And display
#ticket_range.sort_values(['Pclass'],ascending=True, inplace=True)
ticket_range
ticket_range.sort_index(ascending=False, inplace=True)
ticket_range = ticket_range.transpose()
ticket_range
###Output
_____no_output_____
###Markdown
With the average price of a 3rd class ticket going for about 14 pounds and that of a 1st class ticket going for 84 pounds on average;It can be inferred that ticket price could be one of the reasons why more than half the passengers aboard the Titanic were in 3rd class.Notice that the price difference between 3rd and 2nd Class tickets is minimal Visualizing average to max range of Ticket Fares per Passenger Class of The Titanic Using a Box Plot.
###Code
plt.figure(figsize=(10,6))
sns.boxplot('Pclass','Fare', data=titanic_df)
sns.set(font_scale=1.3)
plt.title('Boxplot showing range of Ticket Fares for The Titanic')
plt.xlabel('Passenger Classes')
plt.ylabel('Ticket Fare')
plt.show()
###Output
_____no_output_____
###Markdown
Once again the boxplot shows striking similarities between the ticket fares for 3rd and 2nd class passengers of The Titanic Let's see the correlation between fares and passenger classes.
###Code
titanic_df['Pclass'].corr(titanic_df['Fare'])
###Output
_____no_output_____
###Markdown
A correlation figure of -0.55 indicates a moderately strong negative relationship between passenger class and ticket fare. Since Pclass is coded 1 to 3 (1st to 3rd class), the negative sign means that the higher fares tend to go with the lower class numbers, i.e. the most expensive tickets are concentrated in 1st class. Note that correlation does not imply causation... The fact that two variables seem to have a negative, positive or no relationship does not imply that one variable causes the other to occur or not. See the visualization below
###Code
corr_data= titanic_df.corr()
plt.figure(figsize=(12,8))
sns.heatmap(corr_data, annot=True)
plt.title('Heat-Map showing The correlation of variables in Titanic Data Set')
plt.show()
###Output
_____no_output_____
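###Markdown
Added aside: since the whole point of this analysis is to understand who survived, it is worth pulling the Survived column out of that correlation matrix on its own.
###Code
# Correlation of every numeric feature with survival, from most negative to most positive.
corr_data['Survived'].drop('Survived').sort_values()
###Output
_____no_output_____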
###Markdown
Let's look at the age distribution of passengers aboard The Titanic: Earlier we saw that: The average age of all passengers was about 30 years.The minimum age was about 5 months (0.42 * 12).The maximum was 80 years. **Let's check the minimum age again**
###Code
# First lets investigate the minimum age again
titanic_df.Age.min()
###Output
_____no_output_____
###Markdown
It's possible that there were babies just a few months old aboard the Titanic.This could be the reason why we have some age less than one year old,Let's look at the distribution of passengers below one year old.
###Code
titanic_df[(titanic_df.Age<1)]
###Output
_____no_output_____
###Markdown
We can see 7 passengers below one year old. Made up of 5 little boys and 2 pretty little girls.And from their titles (`Master` or `Miss`), we can safely assume they were indeed just a few months old when the Titanic crashed.On a positive note, it is good that all these infants survived as their `Survived` status is 1. The details of the youngest passenger aboard The Titanic
###Code
titanic_df[np.logical_and(titanic_df.Name, titanic_df.Age==titanic_df.Age.min())]
###Output
_____no_output_____
###Markdown
The details of the oldest passenger aboard The Titanic
###Code
titanic_df[np.logical_and(titanic_df.Name, titanic_df.Age==titanic_df.Age.max())]
###Output
_____no_output_____
###Markdown
The Wealthiest passengerJohn Jacob Astor IV. Did you know?The Titanic was over 882 feet long (almost 3 football fields)... And it weighed 52,310 tonscourtesy [goodhousekeeping](https://www.goodhousekeeping.com/life/g19809308/titanic-facts/?slide=3) So we have 177 NaN values. Add that to 714 and total is 891 entries as expected.When it comes to dealing with NaN values, we usually have the following options:-1. We can leave it the way it is, if this option would not affect our computational or visual analysis2. We can replace NaN values with either the mean or mode of the distribution3. We can delete NaN values from the Data set, this would mean reducing the Data size and is best suited for large data sets with a few NaN values. **Let's see the Class and Sex summary of passengers with NaN Age values.** There is no given pattern, but clearly more passengers in 3rd class do not have their age values.First class has 30 entries and 2nd class about a dozen entries. **Let's replace all NaN Age values with the average age of passengers in the Data set** Visualizing Age Distribution of Passengers using a Hist and Dist Plot
###Code
fig = plt.figure(figsize=(18, 6))
sns.set_style('ticks')
ax0 = fig.add_subplot(121)
ax1 = fig.add_subplot(122)
plt.suptitle('Visualizing Age Distribution of Passengers using a Hist and Dist Plot', y=1.05)
#Histogram
titanic_df.Age.plot(kind='hist', edgecolor='navy', ax=ax0, facecolor='peru')
ax0.set_title('Histogram of Age Distribution of Titanic Passengers')
ax0.set_xlabel('Age')
#Distplot
sns.distplot(titanic_df.Age, hist=False, color='r', ax=ax1, label='Age Distribution')
ax1.set_title('Distplot of Age Distribution of Titanic Passengers')
plt.show()
###Output
_____no_output_____
###Markdown
We will now summarize the main features of the distribution of ages as it appears from the histogram:**Shape:** The distribution of ages is skewed right. This means we have a concentration of data of young people in The Titanic, And a progressively fewer number of older people, making the histogram to skew to the right.It is also **unimodal** in shape, with just one dominant mode range of passenger ages between 20 - 30 years.**Center:** The data seem to be centered around 30 years old. Note that this implies that roughly half the passengers in the Titanic are less than 30 years old.This is also reflected by a mean, median and modal age of 30 years.**Spread:** The data range is from about 0 to about 80, so the approximate range equals 80 - 0 = 80.**Outliers:** There are no outliers in the Age data as all values seem evenly distributed, with a steady decrease of the number of passengers above the 30 - 40 age group.We can conclude that The Titanic had more passengers in the age range 0 to 30 years,And the most frequent age-range of all Titanic passengers was 20 - 30 years of age.
###Code
# further proof of the above assertion can be seen below
# Titanic had 586 passengers below age 30 and 305 passengers above 30 years.
titanic_df.groupby(titanic_df['Age'] <= 30).size()
###Output
_____no_output_____
###Markdown
Code above shows that out of the `891` passengers, `586` `(66%)` were less than 30 years old and `305` `(34%)` were 30 years and above. Let's compare the age distribution of Males and Females aboard The Titanic First Let's create two separate Data Frames for Males and Females.
###Code
males = titanic_df[titanic_df.Sex=='male']
females = titanic_df[titanic_df.Sex=='female']
###Output
_____no_output_____
###Markdown
Let's see the summary statistics for males and females...
###Code
males.describe()
females.describe()
###Output
_____no_output_____
###Markdown
Summary Statistics From the summary above, we can see that:- 1. There were 577 males and 314 female passengers on The Titanic. 2. The average age for males was about 30 and about 28 for women. 3. Interestingly women paid 45 pounds for a ticket, while men paid 26 pounds on average... We would investigate why, but I'm thinking the difference may be as a result of more women in 1st class and 2nd class than men.Or a greater proportion of women in 1st and 2nd class seats than men4. The maximum age for women was 63 and max age for men was 80 years. Visualizing Age-Group Distribution of Male and Female Passengers using a Horizontal Bar Plot
###Code
age_range = ['[0.0 - 10]','[10 - 20]','[20 - 30]','[30 - 40]','[40 - 50]','[50 - 60]','[60 - 70]','[70 - 80]']
age_dict = {}
def age_grades(dataframe):
start = 0
for i in age_range:
stop = start + 10
x = dataframe[np.logical_and(dataframe.Age>start, dataframe.Age<=stop)].shape[0]
age_dict[i] = [x]
start += 10
return age_dict
males_dict = age_grades(males)
males_df = pd.DataFrame(males_dict)
males_df = males_df.transpose()
females_dict = age_grades(females)
females_df = pd.DataFrame(females_dict)
females_df = females_df.transpose()
maleFemaleAgeRange = pd.concat([males_df, females_df], axis=1)
maleFemaleAgeRange.columns = ['Male_Age_Range', 'Female_Age_Range']
maleFemaleAgeRange
sns.set_style('ticks')
ax = maleFemaleAgeRange.plot(kind='barh', color=['blue','red'], figsize=(14,10), width=0.75)
sns.set(font_scale=1.5)
ax.set_title('Age-Group Distribution for Males and Females aboard The Titanic')
ax.set_xlabel('Number of Passengers')
ax.set_ylabel('Age-Group')
for i in ax.patches:
# get_width pulls left or right; get_y pushes up or down
ax.text(i.get_width()+3, i.get_y()+.3, \
str(round((i.get_width()), 2)), fontsize=15, color='black')
# invert to set y-axis in ascending order
ax.invert_yaxis()
plt.show()
###Output
_____no_output_____
###Markdown
From the Horizontal Bar plot above, we can easily see that:- 1. The `20 - 30` age group has the highest concentration of passengers. 273 males and 134 females with a total count of 407 passengers. ``` (407 divided by 891) * 100 = 46% of passengers. ``` 2. The next most populous age-group is the `30 - 40` group consisting of a total of 155 passengers. Okay, lets look at The distribution of males and females in the three passenger classes.Recall that the average price of female tickets was about 45 pounds which was about 75% more expensive than the average male passenger ticket of 26 pounds.One possible reason could be that there were more female passengers in the higher classes(1st class, 2nd class) than male passengers.Or that the percentage of females to the population of females (proportion) is higher than the proportion of males in the higher classes of passengers. Let's see to that. **Let's define a Data frame for the number of men and women per class**
###Code
sex_per_class = titanic_df.groupby(['Pclass','Sex']).size().to_frame()
sex_per_class.reset_index(inplace=True)
sex_per_class
###Output
_____no_output_____
###Markdown
**Next let's define a simple method that calculates the total proportion of males and females per class**
###Code
total_f = len(titanic_df[titanic_df.Sex=='female'])
total_m = len(titanic_df[titanic_df.Sex=='male'])
def pct_(series):
"""Takes a series of numeric values and converts each value
to a percent based on its index.
Returns a list of converted values to pct,
For total males and females of the Titanic."""
x = list(series)
for i in range(len(x)):
if i % 2 == 0:
x[i] = round((x[i] / total_f)*100)
else:
x[i] = round((x[i] / total_m)*100)
return x
###Output
_____no_output_____
###Markdown
**Next let's append that proportion as a column to the sex_per_class data frame and rename the 0 column to 'Count'.**
###Code
sex_per_class['Pct_of_total(M/F)_per_class'] = pct_(sex_per_class[0])
sex_per_class.rename(columns={0:'Count'}, inplace=True)
###Output
_____no_output_____
###Markdown
**Finally we can view it**
###Code
sex_per_class
###Output
_____no_output_____
###Markdown
We can clearly see the following: 1. The count of males in each passenger class is higher than the count of females. 2. But the proportion of females in the higher classes (1st, 2nd) is greater than the proportion of males. 3. The proportion of females to males in 1st class is 30% against 21%, and 24% against 19% in 2nd class... while on the flip side the males have 60% of their population in 3rd class against 46% for the females. 4. This accounts for why female tickets on average cost more than male tickets: because the percentage of females in the higher classes is larger, the average ticket fare for females is 45 pounds, against 26 pounds for males. Visualizing The proportions of male and female passengers per class, using Waffle Charts
###Code
sns.set(font_scale=1.5)
sns.set_style('ticks')
first_class = {'Females': 30, 'Males': 21}
second_class = {'Females': 24, 'Males': 19}
third_class = {'Females': 46, 'Males': 60}
plt.figure(
FigureClass=Waffle,
rows=5,
values=first_class,
colors=("#983D3D", "#232066"),
title={'label': 'Proportion of male and female passengers in 1st Class', 'loc': 'left'},
labels=["{0} ({1}%)".format(k, v) for k, v in first_class.items()],
legend={'loc': 'lower left', 'bbox_to_anchor': (0, -0.45), 'ncol': len(first_class), 'framealpha': 0},
plot_direction='NW',
)
plt.figure(
FigureClass=Waffle,
rows=5,
values=second_class,
colors=("#983D3D", "#232066"),
title={'label': 'Proportion of male and female passengers in 2nd Class', 'loc': 'left', 'color':'darkgreen'},
labels=["{0} ({1}%)".format(k, v) for k, v in second_class.items()],
legend={'loc': 'lower left', 'bbox_to_anchor': (0, -0.4), 'ncol': len(second_class), 'framealpha': 0},
plot_direction='NW',
)
plt.figure(
FigureClass=Waffle,
rows=7,
values=third_class,
colors=("#983D3D", "#232066"),
title={'label': 'Proportion of male and female passengers in 3rd Class', 'loc': 'left', 'color':'navy'},
labels=["{0} ({1}%)".format(k, v) for k, v in third_class.items()],
legend={'loc': 'lower left', 'bbox_to_anchor': (0, -0.45), 'ncol': len(third_class), 'framealpha': 0},
plot_direction='NW',
)
fig.gca().set_facecolor('#EEEEEE')
fig.set_facecolor('#EEEEEE')
plt.show()
###Output
/usr/local/lib/python3.6/dist-packages/matplotlib/figure.py:2369: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
warnings.warn("This figure includes Axes that are not compatible "
###Markdown
**next, let's view who bought the most xpensive tickets, survival data per class, survival data per sex, word cloud in tribute to those who died in 1st , 2nd or 3rd classes. use of lambda expressions, map of the location where titanic crashed. ** [titanic_crash_site](http://www.shipwreckworld.com/maps/rms-titanic)
###Code
# we select the 3rd class passengers in a group using pandas.
third_class_group = titanic_df['Pclass'] == 3
# then we assign that selection to the slice of titanic_df
third_class_df = titanic_df[third_class_group]
# we view the first 5 entries of the 3rd class passengers data frame.
third_class_df.head()
third_class_df.shape
# Do a word cloud in tribute to those in third class with their names.
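# --- Added sketch for the steps promised in the markdown above ---
# (most expensive tickets, survival per class and per sex, and a word cloud
# for the third class passengers who died; the original notebook stops short
# of these, so the lines below are only one possible way to do them.)

# Passengers who paid the maximum fare (the markdown names John Jacob Astor IV
# as the wealthiest passenger on board).
print(titanic_df[titanic_df.Fare == titanic_df.Fare.max()][['Name', 'Pclass', 'Fare', 'Survived']])

# Survival rate per passenger class and per sex.
print(titanic_df.groupby('Pclass').Survived.mean())
print(titanic_df.groupby('Sex').Survived.mean())

# Word cloud built from the names of third class passengers who did not survive.
victims_names = ' '.join(third_class_df[third_class_df.Survived == 0].Name)
wc = WordCloud(background_color='white', stopwords=STOPWORDS).generate(victims_names)
plt.figure(figsize=(14, 7))
plt.imshow(wc, interpolation='bilinear')
plt.axis('off')
plt.title('In tribute to the third class passengers who died on The Titanic')
plt.show()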
###Output
_____no_output_____ |
regionsSP/DRS_Covid19_v3.ipynb | ###Markdown
COVID19 - District Region Install necessary packages for parallel computation:```pip install ipyparallelipcluster nbextension enablepip install parallel-execute```To install for all users on JupyterHub, as root:```jupyter nbextension install --sys-prefix --py ipyparalleljupyter nbextension enable --sys-prefix --py ipyparalleljupyter serverextension enable --sys-prefix --py ipyparallel```start cluster at jupyter notebook interface
###Code
import urllib.request
import pandas as pd
import numpy as np
# Download data
import get_data
LoadData=False
if LoadData:
get_data.get_data()
dfSP = pd.read_csv("data/dados_municipios_SP.csv")
dfSP
# Model
# lista DRSs
DRS = list(dfSP["DRS"].unique())
DRS.remove("Indefinido")
DRS
###Output
_____no_output_____
###Markdown
SEAIR-D Model Equations$$\begin{array}{l}\frac{d s}{d t}=-[\beta i(t) + \beta_2 a(t)-\mu] \cdot s(t)\\ \frac{d e}{d t}=[\beta i(t) + \beta_2 a(t)] \cdot s(t) -(\sigma+\mu) \cdot e(t)\\ \frac{d a}{d t}=\sigma e(t) \cdot (1-p)-(\gamma+\mu) \cdot a(t) \\ \frac{d i}{d t}=\sigma e(t) \cdot p - (\gamma + \sigma_2 + \sigma_3 + \mu) \cdot i(t)\\ \frac{d r}{d t}=(b + \sigma_2) \cdot i(t) + \gamma \cdot a(t) - \mu \cdot r(t)\\ \frac{d k}{d t}=(a + \sigma_3 - \mu) \cdot d(t)\end{array}$$The last equation does not need to be solved explicitly because$$\frac{d k}{d t}=-\left(\frac{d s}{d t}+\frac{d e}{d t}+\frac{d a}{d t}+\frac{d i}{d t}+\frac{d r}{d t}\right),$$i.e. the sum of all six rates is equal to zero. The importance of this relation is that it conserves the total population across the compartments. Parameters $\beta$: effective contact rate [1/min]. $\gamma$: recovery (+mortality) rate, $\gamma=(a+b)$ [1/min]. $a$: mortality rate of the infected [1/min]. $b$: recovery rate [1/min]. $\sigma$: the rate at which individuals move from the exposed to the infectious classes; its reciprocal ($1/\sigma$) is the average latent (exposed) period. $\sigma_2$: the rate at which individuals move from the infectious to the healed class; its reciprocal ($1/\sigma_2$) is the average time from symptom onset to recovery. $\sigma_3$: the rate at which individuals move from the infectious to the dead class; its reciprocal ($1/\sigma_3$) is the average time from symptom onset to death. $p$: the fraction of the exposed which becomes the symptomatic infectious sub-population. $(1-p)$: the fraction of the exposed which becomes the asymptomatic infectious sub-population.
###Code
#objective function Odeint solver
from scipy.integrate import odeint
#objective function Odeint solver
def lossOdeint(point, data, death, s_0, e_0, a_0, i_0, r_0, d_0, startNCases, ratioRecovered, weigthCases, weigthRecov):
size = len(data)
beta, beta2, sigma, sigma2, sigma3, gamma, b, mu = point
def SEAIRD(y,t):
S = y[0]
E = y[1]
A = y[2]
I = y[3]
R = y[4]
D = y[5]
p=0.2
# beta2=beta
y0=-(beta2*A+beta*I)*S+mu*S #S
y1=(beta2*A+beta*I)*S-sigma*E-mu*E #E
y2=sigma*E*(1-p)-gamma*A-mu*A #A
y3=sigma*E*p-gamma*I-sigma2*I-sigma3*I-mu*I#I
y4=b*I+gamma*A+sigma2*I-mu*R #R
y5=(-(y0+y1+y2+y3+y4)) #D
return [y0,y1,y2,y3,y4,y5]
y0=[s_0,e_0,a_0,i_0,r_0,d_0]
tspan=np.arange(0, size, 1)
res=odeint(SEAIRD,y0,tspan,hmax=0.01)
l1=0
l2=0
l3=0
tot=0
for i in range(0,len(data.values)):
if data.values[i]>startNCases:
l1 = l1+(res[i,3] - data.values[i])**2
l2 = l2+(res[i,5] - death.values[i])**2
newRecovered=min(1e6,data.values[i]*ratioRecovered)
l3 = l3+(res[i,4] - newRecovered)**2
tot+=1
l1=np.sqrt(l1/max(1,tot))
l2=np.sqrt(l2/max(1,tot))
l3=np.sqrt(l3/max(1,tot))
#weight for cases
u = weigthCases #Brazil US 0.1
w = weigthRecov
#weight for deaths
v = max(0,1. - u - w)
return u*l1 + v*l2 + w*l3
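# --- Added illustration: a rough sketch of how lossOdeint could be minimized
# directly with SciPy. The notebook itself delegates the fitting to the
# external Learner class (not shown here), so the starting point and bounds
# below are arbitrary placeholders, not the values the author used.
def fit_district_example(data, death, s_0, e_0, a_0, i_0, r_0, d_0,
                         startNCases, ratioRecovered, weigthCases, weigthRecov):
    from scipy.optimize import minimize
    x0 = [1e-3] * 8                 # initial guess for beta, beta2, sigma, sigma2, sigma3, gamma, b, mu
    bounds = [(1e-12, 0.2)] * 8     # keep every rate positive and small
    res = minimize(lossOdeint, x0,
                   args=(data, death, s_0, e_0, a_0, i_0, r_0, d_0,
                         startNCases, ratioRecovered, weigthCases, weigthRecov),
                   method='L-BFGS-B', bounds=bounds)
    return res.x                    # fitted beta, beta2, sigma, sigma2, sigma3, gamma, b, mu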
# Initial parameters
dfparam = pd.read_csv("data/param.csv")
dfparam
# Initial parameter optimization
# Load solver
GlobalOptimization=False
import ray
if GlobalOptimization:
import ray
import LearnerGlobalOpt as Learner # basinhopping global optimization (several times minimize)
else:
import Learner #minimize
allDistricts=False
results=[]
if allDistricts:
for districtRegion in DRS:
query = dfparam.query('DRS == "{}"'.format(districtRegion)).reset_index()
parameters = np.array(query.iloc[:, 2:])[0]
learner = Learner.Learner.remote(districtRegion, lossOdeint, *parameters)
#learner.train()
#add function evaluation to the queue
results.append(learner.train.remote())
else:
districtRegion= 'DRS 08 - Franca' #'DRS 14 - Sรฃo Joรฃo da Boa Vista' #'DRS 04 - Baixada Santista' \
#'DRS 11 - Presidente Prudente' #'DRS 13 - Ribeirรฃo Preto' \
#'DRS 05 - Barretos' #'DRS 12 - Registro' #'DRS 15 - Sรฃo Josรฉ do Rio Preto' \
#'DRS 10 - Piracicaba'#'DRS 17 - Taubatรฉ'#'DRS 02 - Araรงatuba'# \
#'DRS 03 - Araraquara' #DRS 07 - Campinas'#'DRS 16 - Sorocaba'#'DRS 06 - Bauru' \
#'DRS 09 - Marรญlia' #"DRS 01 - Grande Sรฃo Paulo"
query = dfparam.query('DRS == "{}"'.format(districtRegion)).reset_index()
parameters = np.array(query.iloc[:, 2:])[0]
learner = Learner.Learner.remote(districtRegion, lossOdeint, *parameters)
#learner.train()
#add function evaluation to the queue
results.append(learner.train.remote())
# #execute all the queue with max_runner_cap at a time
results = ray.get(results)
# Save data as csv
import glob
import os
path = './results/data'
files = glob.glob(os.path.join(path, "*.csv"))
df = (pd.read_csv(f).assign(DRS = f.split(" - ")[-1].split(".")[0]) for f in files)
df_all_drs = pd.concat(df, ignore_index=True)
df_all_drs.index.name = 'index'
df_all_drs.to_csv('./data/SEAIRD_sigmaOpt_AllDRS'+'.csv', sep=",")
###Output
_____no_output_____
###Markdown
Plots
###Code
import matplotlib.pyplot as plt
import covid_plots
def loadDataFrame(filename):
df= pd.read_pickle(filename)
df.columns = [c.lower().replace(' ', '_') for c in df.columns]
df.columns = [c.lower().replace('(', '') for c in df.columns]
df.columns = [c.lower().replace(')', '') for c in df.columns]
return df
#DRS 01 - Grande Sรฃo Paulo
#DRS 02 - Araรงatuba
#DRS 03 - Araraquara
#DRS 04 - Baixada Santista
#DRS 05 - Barretos
#DRS 06 - Bauru
#DRS 07 - Campinas
#DRS 08 - Franca
#DRS 09 - Marรญlia
#DRS 10 - Piracicaba
#DRS 11 - Presidente Prudente
#DRS 12 - Registro
#DRS 13 - Ribeirรฃo Preto
#DRS 14 - Sรฃo Joรฃo da Boa Vista
#DRS 15 - Sรฃo Josรฉ do Rio Preto
#DRS 16 - Sorocaba
#DRS 17 - Taubatรฉ
#select districts for plotting
districts4Plot=['DRS 01 - Grande Sรฃo Paulo',
'DRS 04 - Baixada Santista',
'DRS 07 - Campinas',
'DRS 05 - Barretos',
districtRegion]
#main district region for analysis
#districtRegion = "DRS 01 - Grande Sรฃo Paulo"
#Choose here your options
#opt=0 all plots
#opt=1 corona log plot
#opt=2 logistic model prediction
#opt=3 bar plot with growth rate
#opt=4 log plot + bar plot
#opt=5 SEAIR-D Model
opt = 0
#version to identify the png file result
version = "1"
#parameters for plotting
query = dfparam.query('DRS == "{}"'.format(districtRegion)).reset_index()
startdate = query['start-date'][0]
predict_range = query['prediction-range'][0]
#do not allow the scrolling of the plots
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines){
return false;
}
#number of cases to start plotting model in log graph - real data = 100
startCase=1
covid_plots.covid_plots(districtRegion, districts4Plot, startdate,predict_range, startCase, 5, version, show=True)
###Output
_____no_output_____ |