Time-Series Analysis
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.arima_process import arma_generate_sample

# Generate data
np.random.seed(12345)
arparams = np.array([.75, -.25])
maparams = np.array([.65, .35])

# Parameters as lag polynomials (note the sign convention on the AR side)
arparams = np.r_[1, -arparams]
maparams = np.r_[1, maparams]
nobs = 250
y = arma_generate_sample(arparams, maparams, nobs)

dates = sm.tsa.datetools.dates_from_range('1980m1', length=nobs)
y = pd.Series(y, index=dates)
arma_mod = sm.tsa.ARMA(y, order=(2, 2))
arma_res = arma_mod.fit(trend='nc', disp=-1)
print(arma_res.summary())
                              ARMA Model Results                              
==============================================================================
Dep. Variable:                      y   No. Observations:                  250
Model:                     ARMA(2, 2)   Log Likelihood                -245.887
Method:                       css-mle   S.D. of innovations              0.645
Date:                Mon, 23 Mar 2020   AIC                            501.773
Time:                        20:09:11   BIC                            519.381
Sample:                    01-31-1980   HQIC                           508.860
                         - 10-31-2000                                         
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
ar.L1.y        0.8411      0.403      2.089      0.037       0.052       1.630
ar.L2.y       -0.2693      0.247     -1.092      0.275      -0.753       0.214
ma.L1.y        0.5352      0.412      1.299      0.194      -0.273       1.343
ma.L2.y        0.0157      0.306      0.051      0.959      -0.585       0.616
                                    Roots                                    
=============================================================================
                  Real          Imaginary           Modulus         Frequency
-----------------------------------------------------------------------------
AR.1            1.5618           -1.1289j            1.9271           -0.0996
AR.2            1.5618           +1.1289j            1.9271            0.0996
MA.1           -1.9835           +0.0000j            1.9835            0.5000
MA.2          -32.1793           +0.0000j           32.1793            0.5000
-----------------------------------------------------------------------------
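Out-of-sample forecasts come straight from the fitted results object. A minimal sketch reusing the `arma_res` fit above (the legacy `ARMA` results API shown here returns point forecasts, standard errors, and confidence intervals as a tuple):

```python
# Forecast 12 months beyond the sample (legacy statsmodels ARMA results API).
forecast, stderr, conf_int = arma_res.forecast(steps=12)
print(forecast[:3])   # point forecasts
print(conf_int[:3])   # 95% confidence intervals by default
```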
MIT
Data Science Academy/Cap08/Notebooks/DSA-Python-Cap08-07-StatsModels.ipynb
srgbastos/Artificial-Intelligence
Bring Your Own Algorithm to SageMaker

Architecture of this notebook

1. Training
   a. [Bring Your Own Container](byoc)
   b. [Train locally](local_train)
   c. [Trigger a remote training job](remote_train)
   d. [Test locally](local_test)
2. Deploy Endpoint
   [Deploy model to SageMaker Endpoint](deploy_endpoint)
3. Build Lambda Function
   a. [Construct the lambda function](build_lambda_function)
   b. [Test the lambda](lambda_test)
4. Configure API Gateway
   a. [Construct and configure the API gateway](api-gateway)
   b. [Configure passing binary media to the Lambda function](binary-content)
   c. [Test the API gateway](test-api)

BYOC (Bring Your Own Container) for an Example Audio Classification Algorithm

* Prepare the necessary variables: use `boto3` to get the region and account_id for later use in ECR URI construction.
import boto3 session = boto3.session.Session() region = session.region_name client = boto3.client("sts") account_id = client.get_caller_identity()["Account"] algorithm_name = "vgg16-audio"
_____no_output_____
MIT-0
01-byoc/audio.ipynb
asfhiolNick/incremental-training-mlops
Three elements are needed to build a bring-your-own container:

* `build_and_push.sh` is the script that communicates with ECR
* `Dockerfile` defines the training and serving environment
* `code/train` and `code/serve` define the entry points of our container
!./build_and_push.sh !cat Dockerfile !cat build_and_push.sh
_____no_output_____
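The repository's `build_and_push.sh` is only displayed above, not reproduced here. One thing such a script needs is the target ECR repository; as a hedged sketch (an assumption based on common SageMaker BYOC examples, not the script's verbatim contents), the same thing can be done from Python with `boto3`:

```python
import boto3

ecr = boto3.client("ecr")
try:
    # Create the repository the image will be pushed to; build_and_push.sh
    # typically does the equivalent with the aws CLI before `docker push`.
    ecr.create_repository(repositoryName=algorithm_name)
except ecr.exceptions.RepositoryAlreadyExistsException:
    pass  # already created on a previous run
```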
MIT-0
01-byoc/audio.ipynb
asfhiolNick/incremental-training-mlops
* Construct the image URI from account_id, region, and algorithm_name
image_uri=f"{account_id}.dkr.ecr.{region}.amazonaws.com/{algorithm_name}" image_uri
_____no_output_____
MIT-0
01-byoc/audio.ipynb
asfhiolNick/incremental-training-mlops
* Prepare the necessary variables/objects for training
import sagemaker session = sagemaker.session.Session() bucket = session.default_bucket() from sagemaker import get_execution_role role = get_execution_role() print(role) s3_path = f"s3://{bucket}/data/competition" s3_path
_____no_output_____
MIT-0
01-byoc/audio.ipynb
asfhiolNick/incremental-training-mlops
Dataset Description - The dataset used in this workshop can be obtained from the [Dog Bark Sound AI competition](https://tbrain.trendmicro.com.tw/Competitions/Details/15) held by the world-leading pet camera brand [Tomofun](https://en.wikipedia.org/wiki/Tomofun). The URL below will become invalid after the workshop.
# s3://tomofun-audio-classification-yianc # data/data.zip !wget https://www.dropbox.com/s/gvcswtrmdnhyiwo/Final_Training_Dataset.zip?dl=1 !unzip -o Final_Training_Dataset.zip?dl=1 !mv Final_Training_Dataset/train.zip ./ !unzip -o train.zip !aws s3 cp --recursive ./train/ $s3_path
_____no_output_____
MIT-0
01-byoc/audio.ipynb
asfhiolNick/incremental-training-mlops
Train the model in a docker container with a terminal interface

* Start the container in interactive mode

```
IMAGE_ID=$(sudo docker images --filter=reference=vgg16-audio --format "{{.ID}}")
nvidia-docker run -it -v $PWD:/opt/ml --entrypoint '' $IMAGE_ID bash
```

* Train the model based on README.md

```
python train.py --csv_path=/opt/ml/input/data/competition/meta_train.csv --data_dir=/opt/ml/input/data/competition/train --epochs=50 --val_split 0.1
```
from datetime import datetime now = datetime.now() timestamp = datetime.timestamp(now) job_name = "audio-{}".format(str(int(timestamp))) job_name
_____no_output_____
MIT-0
01-byoc/audio.ipynb
asfhiolNick/incremental-training-mlops
Start SageMaker Training Job

* SageMaker training jobs can run either locally or remotely
mode = 'remote' if mode == 'local': csess = sagemaker.local.LocalSession() else: csess = session print(csess) estimator = sagemaker.estimator.Estimator( role=role, image_uri=image_uri, instance_count=1, # instance_type='local_gpu', instance_type='ml.p3.8xlarge', sagemaker_session=csess, volume_size=100, debugger_hook_config=False ) estimator.fit(inputs={"competition":s3_path}, job_name=job_name) estimator.model_data model_s3_path = estimator.model_data !aws s3 cp $model_s3_path . !tar -xvf model.tar.gz !mkdir -p model !mv final_model.pkl model/
_____no_output_____
MIT-0
01-byoc/audio.ipynb
asfhiolNick/incremental-training-mlops
Test Model Locally

* Start the container in interactive mode

```
IMAGE_ID=$(sudo docker images --filter=reference=vgg16-audio --format "{{.ID}}")
nvidia-docker run -it -v $PWD:/opt/ml --entrypoint '' $IMAGE_ID bash
```

* Test the model based on README.md

```
python test.py --test_csv /opt/ml/input/data/competition/meta_train.csv --data_dir /opt/ml/input/data/competition/train --model_name VGGish --model_path /opt/ml/model --saved_root results/test --saved_name test_result
```

Deploy SageMaker Endpoint
predictor = estimator.deploy(instance_type='ml.p2.xlarge', initial_instance_count=1, serializer=sagemaker.serializers.IdentitySerializer()) # predictor = estimator.deploy(instance_type='local_gpu', initial_instance_count=1, serializer=sagemaker.serializers.IdentitySerializer()) endpoint_name = predictor.endpoint_name
_____no_output_____
MIT-0
01-byoc/audio.ipynb
asfhiolNick/incremental-training-mlops
You can also deploy by using the model file directly. The source code is below; we can use a locally trained model to deploy a SageMaker endpoint.

Get the example model file from S3:

```
source_model_data_url = 'https://tinyurl.com/yh7tw3hj'
!wget -O model.tar.gz $source_model_data_url
MODEL_PATH = f's3://{bucket}/model'
model_data_s3_uri = f'{MODEL_PATH}/model.tar.gz'
!aws s3 cp model.tar.gz $model_data_s3_uri
```

Build an endpoint from the model file (pick one `instance_type`; use `'local_gpu'` for local mode):

```
import time

mode = 'remote'
if mode == 'local':
    csess = sagemaker.local.LocalSession()
else:
    csess = session

model = sagemaker.model.Model(image_uri,
                              model_data=model_data_s3_uri,
                              role=role,
                              predictor_cls=sagemaker.predictor.Predictor,
                              sagemaker_session=csess)

now = datetime.now()
timestamp = datetime.timestamp(now)
new_endpoint_name = "audio-{}".format(str(int(timestamp)))

object_detector = model.deploy(initial_instance_count=1,
                               instance_type='ml.p2.xlarge',  # or 'local_gpu' for local mode
                               endpoint_name=new_endpoint_name,
                               serializer=sagemaker.serializers.IdentitySerializer())
```

You can also update an endpoint based on the following example code:

```
new_detector = sagemaker.predictor.Predictor(endpoint_name=endpoint_name)
new_detector.update_endpoint(model_name=model.name,
                             initial_instance_count=1,
                             instance_type='ml.m4.xlarge')
```
import json

file_name = "./input/data/competition/train/train_00002.wav"
with open(file_name, 'rb') as audio_file:
    f = audio_file.read()
b = bytearray(f)
results = predictor.predict(b)
detections = json.loads(results)
print(detections)
_____no_output_____
MIT-0
01-byoc/audio.ipynb
asfhiolNick/incremental-training-mlops
Create Lambda Function
import time iam = boto3.client("iam") role_name = "AmazonSageMaker-LambdaExecutionRole" assume_role_policy_document = { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": ["sagemaker.amazonaws.com", "lambda.amazonaws.com"] }, "Action": "sts:AssumeRole" } ] } create_role_response = iam.create_role( RoleName = role_name, AssumeRolePolicyDocument = json.dumps(assume_role_policy_document) ) # Now add S3 support iam.attach_role_policy( PolicyArn='arn:aws:iam::aws:policy/AmazonS3FullAccess', RoleName=role_name ) iam.attach_role_policy( PolicyArn='arn:aws:iam::aws:policy/AmazonSQSFullAccess', RoleName=role_name ) iam.attach_role_policy( PolicyArn='arn:aws:iam::aws:policy/AmazonSageMakerFullAccess', RoleName=role_name ) time.sleep(60) # wait for a minute to allow IAM role policy attachment to propagate lambda_role_arn = create_role_response["Role"]["Arn"] print(lambda_role_arn) %%bash -s "$bucket" cd invoke_endpoint zip -r invoke_endpoint.zip . aws s3 cp invoke_endpoint.zip s3://$1/lambda/ import os cwd = os.getcwd() !aws lambda create-function --function-name invoke_endpoint --zip-file fileb://$cwd/invoke_endpoint/invoke_endpoint.zip --handler lambda_function.lambda_handler --runtime python3.7 --role $lambda_role_arn endpoint_name = predictor.endpoint_name bucket_key = "audio-demo" variables = f"ENDPOINT_NAME={endpoint_name}" env = "Variables={"+variables+"}" !aws lambda update-function-configuration --function-name invoke_endpoint --environment "$env"
_____no_output_____
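The `invoke_endpoint/` folder zipped above contains the handler, which the notebook never displays. As a hedged sketch (not the repository's actual file), a `lambda_function.lambda_handler` compatible with the configuration above would read `ENDPOINT_NAME` from the environment and forward the request body to the endpoint through the SageMaker runtime:

```python
import base64
import os

import boto3

runtime = boto3.client("sagemaker-runtime")


def lambda_handler(event, context):
    # API Gateway delivers binary media as a base64-encoded body (see the
    # binary-content configuration later in this notebook).
    body = base64.b64decode(event["body"]) if event.get("isBase64Encoded") else event["body"]
    response = runtime.invoke_endpoint(
        EndpointName=os.environ["ENDPOINT_NAME"],
        ContentType="application/octet-stream",
        Body=body,
    )
    return {"statusCode": 200, "body": response["Body"].read().decode("utf-8")}
```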
MIT-0
01-byoc/audio.ipynb
asfhiolNick/incremental-training-mlops
Test Material

```
{ "content": "/9j/4AAQSkZJRgABAQAAAQABAAD/..." }
```

(The `content` value is a long base64-encoded binary payload, abbreviated here.)

Configure API Gateway

* Finally, we need an API to make the service accessible.
* This API accepts an image POSTed to it and passes it to `invoke_image_object_detection`:

```
curl -X POST -H 'content-type: application/octet-stream' --data-binary @$f $OD_API | jq .
```

* We can create it in the console or with the aws CLI.
!aws lambda add-permission \ --function-name invoke_endpoint \ --action lambda:InvokeFunction \ --statement-id apigateway \ --principal apigateway.amazonaws.com !sed "s/<account_id>/$account_id/g" latestswagger2-template.json > latestswagger2-tmp.json !sed "s/<region>/$region/g" latestswagger2-tmp.json > latestswagger2.json api_info = !aws apigateway import-rest-api \ --fail-on-warnings \ --body 'file:////home/ec2-user/SageMaker/incremental-training-mlops/01-byoc/latestswagger2.json' api_info api_obj = json.loads(''.join(api_info)) api_id = api_obj['id'] api_id !aws apigateway create-deployment --rest-api-id $api_id --stage-name dev
_____no_output_____
MIT-0
01-byoc/audio.ipynb
asfhiolNick/incremental-training-mlops
Manually Set Up API Gateway in the Console

Create a RESTful API

Create resources and methods

* Click the drop-down menu and name your resource
* Focus on the resource just created, click the drop-down menu and select "Create Method", then select the backend lambda function

Configuration for passing binary content to the backend

* Add the binary media type in the ```Settings``` of this API
* Configure which attribute to extract and fill into the event in the Lambda integration

Test API Gateway
api_endpoint = "https://{}.execute-api.{}.amazonaws.com/dev/classify".format(api_id, region) !curl -X POST -H 'content-type: application/octet-stream' --data-binary @./input/data/competition/train/train_00002.wav $api_endpoint %store endpoint_name %store lambda_role_arn %store model_s3_path
_____no_output_____
MIT-0
01-byoc/audio.ipynb
asfhiolNick/incremental-training-mlops
Hybrid Recommendations with the MovieLens Dataset

__Note:__ It is recommended that you complete the companion [__als_bqml.ipynb__](../solutions/als_bqml.ipynb) notebook before continuing with this __als_bqml_hybrid.ipynb__ notebook. If you already have the MovieLens dataset and trained model, you can skip the "Import the dataset and trained model" section.

Learning Objectives
1. Know how to extract user and product factors from a BigQuery Matrix Factorization Model
2. Know how to format inputs for a BigQuery Hybrid Recommendation Model
import os import tensorflow as tf PROJECT = "your-project-id-here" # REPLACE WITH YOUR PROJECT ID # Do not change these os.environ["PROJECT"] = PROJECT os.environ["TFVERSION"] = '2.5'
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb
Glairly/introduction_to_tensorflow
Import the dataset and trained model

In the previous notebook, you imported 20 million movie recommendations and trained an ALS model with BigQuery ML. To save you the steps of having to do so again (if this is a new environment), you can run the commands below to copy over the clean data and trained model. First, create the BigQuery dataset and copy over the data.
!bq mk movielens
BigQuery error in mk operation: Dataset 'qwiklabs-gcp-00-20dab82189fb:movielens' already exists.
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb
Glairly/introduction_to_tensorflow
Next, copy over the trained recommendation model. Note that if your project is in the EU, you will need to change the location from US to EU below. Also note that, as of the time of writing, you cannot copy models across regions with `bq cp`.
%%bash bq --location=US cp \ cloud-training-demos:movielens.recommender_16 \ movielens.recommender_16 bq --location=US cp \ cloud-training-demos:movielens.recommender_hybrid \ movielens.recommender_hybrid
Table 'cloud-training-demos:movielens.recommender_16' successfully copied to 'qwiklabs-gcp-00-20dab82189fb:movielens.recommender_16' Table 'cloud-training-demos:movielens.recommender_hybrid' successfully copied to 'qwiklabs-gcp-00-20dab82189fb:movielens.recommender_hybrid'
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb
Glairly/introduction_to_tensorflow
Next, ensure the model still works by invoking predictions for movie recommendations:
%%bigquery --project $PROJECT SELECT * FROM ML.PREDICT(MODEL `movielens.recommender_16`, ( SELECT movieId, title, 903 AS userId FROM movielens.movies, UNNEST(genres) g WHERE g = 'Comedy' )) ORDER BY predicted_rating DESC LIMIT 5
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb
Glairly/introduction_to_tensorflow
Incorporating user and movie information

The matrix factorization approach does not use any information about users or movies beyond what is available from the ratings matrix. However, we will often have user information (such as the city they live in, their annual income, their annual expenditure, etc.), and we will almost always have more information about the products in our catalog. How do we incorporate this information into our recommendation model?

The answer lies in recognizing that the user factors and product factors that result from the matrix factorization approach end up being a concise representation of the information about users and products available from the ratings matrix. We can concatenate this information with other information we have available and train a regression model to predict the rating.

Obtaining user and product factors

We can get the user factors or product factors from ML.WEIGHTS. For example, to get the product factors for movieId=96481 and the user factors for userId=54192, we would run:
%%bigquery --project $PROJECT SELECT processed_input, feature, TO_JSON_STRING(factor_weights) AS factor_weights, intercept FROM ML.WEIGHTS(MODEL `movielens.recommender_16`) WHERE (processed_input = 'movieId' AND feature = '96481') OR (processed_input = 'userId' AND feature = '54192')
_____no_output_____
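As explained below, multiplying these factor weights and adding the intercepts reproduces the model's predicted rating. A hedged Python sketch of that arithmetic with made-up stand-in values (the real numbers come from the ML.WEIGHTS query above):

```python
import numpy as np

# Stand-in factor weights; in practice, parse the factor_weights JSON
# returned by ML.WEIGHTS for the user and the movie.
user_factors = np.array([0.1, -0.3, 0.5])
product_factors = np.array([0.2, 0.4, -0.1])
user_intercept, product_intercept = 0.8, 1.1

# Matrix factorization prediction: dot product of factors plus the intercepts.
predicted_rating = user_factors @ product_factors + user_intercept + product_intercept
print(round(predicted_rating, 2))
```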
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb
Glairly/introduction_to_tensorflow
Multiplying these weights and adding the intercept is how we get the predicted rating for this combination of movieId and userId in the matrix factorization approach. These weights also serve as a low-dimensional representation of movie and user behavior. We can create a regression model to predict the rating given the user factors, product factors, and any other information we know about our users and products.

Creating input features

The MovieLens dataset does not have any user information, and has very little information about the movies themselves. To illustrate the concept, therefore, let's create some synthetic information about users:
%%bigquery --project $PROJECT CREATE OR REPLACE TABLE movielens.users AS SELECT userId, RAND() * COUNT(rating) AS loyalty, CONCAT(SUBSTR(CAST(userId AS STRING), 0, 2)) AS postcode FROM movielens.ratings GROUP BY userId
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb
Glairly/introduction_to_tensorflow
Input features about users can be obtained by joining the user table with the ML weights and selecting all the user information and the user factors from the weights array.
%%bigquery --project $PROJECT WITH userFeatures AS ( SELECT u.*, (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors FROM movielens.users u JOIN ML.WEIGHTS(MODEL movielens.recommender_16) w ON processed_input = 'userId' AND feature = CAST(u.userId AS STRING) ) SELECT * FROM userFeatures LIMIT 5
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb
Glairly/introduction_to_tensorflow
Similarly, we can get product features for the movies data, except that we have to decide how to handle the genre, since a movie could have more than one genre. If we decide to create a separate training row for each genre, then we can construct the product features using:
%%bigquery --project $PROJECT WITH productFeatures AS ( SELECT p.* EXCEPT(genres), g, (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS product_factors FROM movielens.movies p, UNNEST(genres) g JOIN ML.WEIGHTS(MODEL movielens.recommender_16) w ON processed_input = 'movieId' AND feature = CAST(p.movieId AS STRING) ) SELECT * FROM productFeatures LIMIT 5
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb
Glairly/introduction_to_tensorflow
Combining these two WITH clauses and pulling in the rating corresponding to the movieId-userId combination (if it exists in the ratings table), we can create the training dataset.

**TODO 1**: Combine the above two queries to get the user factors and product factors for each rating.

**NOTE**: The cell below will take approximately 4-5 minutes to complete.
%%bigquery --project $PROJECT CREATE OR REPLACE TABLE movielens.hybrid_dataset AS WITH userFeatures AS ( SELECT u.*, (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS user_factors FROM movielens.users u JOIN ML.WEIGHTS(MODEL movielens.recommender_16) w ON processed_input = 'userId' AND feature = CAST(u.userId AS STRING) ), productFeatures AS ( SELECT p.* EXCEPT(genres), g, (SELECT ARRAY_AGG(weight) FROM UNNEST(factor_weights)) AS product_factors FROM movielens.movies p, UNNEST(genres) g JOIN ML.WEIGHTS(MODEL movielens.recommender_16) w ON processed_input = 'movieId' AND feature = CAST(p.movieId AS STRING) ) SELECT p.* EXCEPT(movieId), u.* EXCEPT(userId), rating FROM productFeatures p, userFeatures u JOIN movielens.ratings r ON r.movieId = p.movieId AND r.userId = u.userId
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb
Glairly/introduction_to_tensorflow
One of the rows of this table looks like this:
%%bigquery --project $PROJECT SELECT * FROM movielens.hybrid_dataset LIMIT 1
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb
Glairly/introduction_to_tensorflow
Essentially, we have a couple of attributes about the movie, the product factors array corresponding to the movie, a couple of attributes about the user, and the user factors array corresponding to the user. These form the inputs to our "hybrid" recommendations model that builds off the matrix factorization model and adds in metadata about users and movies.

Training the hybrid recommendation model

At the time of writing, BigQuery ML cannot handle arrays as inputs to a regression model. Let's, therefore, define a function to convert arrays to a struct where the array elements are its fields:
%%bigquery --project $PROJECT CREATE OR REPLACE FUNCTION movielens.arr_to_input_16_users(u ARRAY<FLOAT64>) RETURNS STRUCT< u1 FLOAT64, u2 FLOAT64, u3 FLOAT64, u4 FLOAT64, u5 FLOAT64, u6 FLOAT64, u7 FLOAT64, u8 FLOAT64, u9 FLOAT64, u10 FLOAT64, u11 FLOAT64, u12 FLOAT64, u13 FLOAT64, u14 FLOAT64, u15 FLOAT64, u16 FLOAT64 > AS (STRUCT( u[OFFSET(0)], u[OFFSET(1)], u[OFFSET(2)], u[OFFSET(3)], u[OFFSET(4)], u[OFFSET(5)], u[OFFSET(6)], u[OFFSET(7)], u[OFFSET(8)], u[OFFSET(9)], u[OFFSET(10)], u[OFFSET(11)], u[OFFSET(12)], u[OFFSET(13)], u[OFFSET(14)], u[OFFSET(15)] ));
_____no_output_____
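Writing out sixteen struct fields by hand is tedious and error-prone. A throwaway Python sketch (a hypothetical helper, not part of the course materials) can generate the same DDL for any factor size:

```python
def arr_to_input_sql(name: str, prefix: str, n: int = 16) -> str:
    """Generate the CREATE FUNCTION DDL for an array-to-struct converter."""
    fields = ",\n".join(f"  {prefix}{i + 1} FLOAT64" for i in range(n))
    offsets = ",\n".join(f"  {prefix}[OFFSET({i})]" for i in range(n))
    return (
        f"CREATE OR REPLACE FUNCTION movielens.arr_to_input_{n}_{name}"
        f"({prefix} ARRAY<FLOAT64>)\nRETURNS STRUCT<\n{fields}\n>\n"
        f"AS (STRUCT(\n{offsets}\n));"
    )

print(arr_to_input_sql("users", "u"))      # reproduces the function above
print(arr_to_input_sql("products", "p"))   # the TODO 2 function further below
```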
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb
Glairly/introduction_to_tensorflow
which gives:
%%bigquery --project $PROJECT SELECT movielens.arr_to_input_16_users(u).* FROM (SELECT [0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15.] AS u)
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb
Glairly/introduction_to_tensorflow
We can create a similar function named movielens.arr_to_input_16_products to convert the product factor array into named columns.

**TODO 2**: Create a function that returns named columns from a size 16 product factor array.
%%bigquery --project $PROJECT CREATE OR REPLACE FUNCTION movielens.arr_to_input_16_products(p ARRAY<FLOAT64>) RETURNS STRUCT< p1 FLOAT64, p2 FLOAT64, p3 FLOAT64, p4 FLOAT64, p5 FLOAT64, p6 FLOAT64, p7 FLOAT64, p8 FLOAT64, p9 FLOAT64, p10 FLOAT64, p11 FLOAT64, p12 FLOAT64, p13 FLOAT64, p14 FLOAT64, p15 FLOAT64, p16 FLOAT64 > AS (STRUCT( p[OFFSET(0)], p[OFFSET(1)], p[OFFSET(2)], p[OFFSET(3)], p[OFFSET(4)], p[OFFSET(5)], p[OFFSET(6)], p[OFFSET(7)], p[OFFSET(8)], p[OFFSET(9)], p[OFFSET(10)], p[OFFSET(11)], p[OFFSET(12)], p[OFFSET(13)], p[OFFSET(14)], p[OFFSET(15)] ));
_____no_output_____
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb
Glairly/introduction_to_tensorflow
Then, we can tie together metadata about users and products with the user factors and product factors obtained from the matrix factorization approach to create a regression model that predicts the rating.

**NOTE**: The cell below will take approximately 25-30 minutes to complete.
%%bigquery --project $PROJECT CREATE OR REPLACE MODEL movielens.recommender_hybrid OPTIONS(model_type='linear_reg', input_label_cols=['rating']) AS SELECT * EXCEPT(user_factors, product_factors), movielens.arr_to_input_16_users(user_factors).*, movielens.arr_to_input_16_products(product_factors).* FROM movielens.hybrid_dataset
Executing query with job ID: 3ccc5208-b63e-479e-980f-2e472e0d65ba Query executing: 1327.21s
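Once training completes, a quick sanity check is to inspect the regression metrics. A sketch using the same `%%bigquery` magic (for a `linear_reg` model, `ML.EVALUATE` reports mean absolute error, mean squared error, and related metrics over the training data when no evaluation table is supplied):

```
%%bigquery --project $PROJECT
SELECT * FROM ML.EVALUATE(MODEL `movielens.recommender_hybrid`)
```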
Apache-2.0
courses/machine_learning/deepdive2/recommendation_systems/solutions/als_bqml_hybrid.ipynb
Glairly/introduction_to_tensorflow
Outliers
# day_1 (should be 0-3) fig = plt.figure(figsize=(20, 5)) plt.subplot(1, 3, 1) ax = df_num_data['day_1'].hist(bins=60) ax.set_ylabel("Ionograms") ax.set_xlabel("day_1") # day_1 outliers above 10 plt.subplot(1, 3, 2) ax = df_num_data[df_num_data['day_1']>=10]['day_1'].hist(bins=50) ax.set_xlabel("day_1") # day_1 below 10 plt.subplot(1, 3, 3) ax = df_num_data[df_num_data['day_1']<10]['day_1'].hist(bins=10) ax.set_xlabel("day_1") print(df_num_data[df_num_data['day_1']>=10].shape[0]) print(df_num_data[df_num_data['day_1']>3].shape[0]) print(df_num_data[df_num_data['day_1']<=3].shape[0]) print("% error:", 100 * df_num_data[df_num_data['day_1']>3].shape[0] / df_num_data.shape[0]) # hour_1 (should be 0-2) fig = plt.figure(figsize=(20, 5)) plt.subplot(1, 3, 1) ax = df_num_data['hour_1'].hist(bins=60) ax.set_ylabel("Ionograms") ax.set_xlabel("hour_1") # # hour_1 outliers above 10 plt.subplot(1, 3, 2) df_num_data[df_num_data['hour_1']>=10]['hour_1'].hist(bins=50) ax.set_xlabel("hour_1") # # hour_1 outliers below 10 plt.subplot(1, 3, 3) df_num_data[df_num_data['hour_1']<10]['hour_1'].hist(bins=10) ax.set_xlabel("hour_1") print(df_num_data[df_num_data['hour_1']>=10].shape[0]) print(df_num_data[df_num_data['hour_1']>2].shape[0]) print(df_num_data[df_num_data['hour_1']<=2].shape[0]) print("% error:", 100 * df_num_data[df_num_data['hour_1']>2].shape[0] / df_num_data.shape[0]) # minute_1 (should be 0-6) fig = plt.figure(figsize=(20, 5)) plt.subplot(1, 3, 1) ax = df_num_data['minute_1'].hist(bins=60) ax.set_ylabel("Ionograms") ax.set_xlabel("minute_1") plt.subplot(1, 3, 2) df_num_data[df_num_data['minute_1']>6]['minute_1'].hist(bins=55) ax.set_xlabel("minute_1") plt.subplot(1, 3, 3) df_num_data[df_num_data['minute_1']<=10]['minute_1'].hist(bins=10) ax.set_xlabel("minute_1") print(df_num_data[df_num_data['minute_1']>=10].shape[0]) print(df_num_data[df_num_data['minute_1']>6].shape[0]) print(df_num_data[df_num_data['minute_1']<=6].shape[0]) print("% error:", 100 * df_num_data[df_num_data['minute_1']>6].shape[0] / df_num_data.shape[0]) # year (should be 0-10) fig = plt.figure(figsize=(20, 5)) plt.subplot(1, 3, 1) ax = df_num_data['year'].hist(bins=range(0,90)) ax.set_xlabel("year") ax.set_ylabel("Ionograms") plt.subplot(1, 3, 2) ax = df_num_data[df_num_data['year'] < 20]['year'].hist(bins=range(0,20)) ax.set_xlabel("year") plt.subplot(1, 3, 3) ax = df_num_data[df_num_data['year'] <= 11]['year'].hist(bins=range(0,12)) ax.set_xlabel("year") print(df_num_data[df_num_data['year']>10].shape[0]) print(df_num_data[df_num_data['year']<=10].shape[0]) print("% error:", 100 * df_num_data[df_num_data['year']>10].shape[0] / df_num_data.shape[0]) # station number 1 (should be 0-7) fig = plt.figure(figsize=(20, 5)) plt.subplot(1, 3, 1) ax = df_num_data['station_number_1'].hist(bins=60) ax.set_xlabel("station number") ax.set_ylabel("Ionograms") plt.subplot(1, 3, 2) ax = df_num_data[df_num_data['station_number_1'] > 7]['station_number_1'].hist(bins=15) ax.set_xlabel("station number") print(df_num_data[df_num_data['station_number_1']>7].shape[0]) print(df_num_data[df_num_data['station_number_1']<=7].shape[0]) print("% error:", 100 * df_num_data[df_num_data['station_number_1']>7].shape[0] / df_num_data.shape[0]) # satellite number (should be 1) fig = plt.figure(figsize=(20, 5)) plt.subplot(1, 3, 1) ax = df_num_data['satellite_number'].hist(bins=60) ax.set_xlabel("satellite_number") ax.set_ylabel("Ionograms") plt.subplot(1, 3, 2) ax = df_num_data[df_num_data['satellite_number'] > 1]['satellite_number'].hist(bins=50) 
ax.set_xlabel("satellite_number") plt.subplot(1, 3, 3) ax = df_num_data[df_num_data['satellite_number'] < 10]['satellite_number'].hist(bins=10) ax.set_xlabel("satellite_number") print(df_num_data[df_num_data['satellite_number']>1].shape[0]) print(df_num_data[df_num_data['satellite_number']==1].shape[0]) print("% error:", 100 * df_num_data[df_num_data['satellite_number']!=1].shape[0] / df_num_data.shape[0])
14487 421789 % error: 9.830987481187577
MIT
data_cleaning/notebooks/data_cleaning.ipynb
CamRoy008/AlouetteApp
Output data
df_num_data.to_csv("data/all_num_data.csv") df_dot_data.to_csv("data/all_dot_data.csv") df_loss.to_csv("data/all_loss.csv") df_outlier.to_csv("data/all_outlier.csv") df_num_data = pd.read_csv("data/all_num_data.csv")
_____no_output_____
MIT
data_cleaning/notebooks/data_cleaning.ipynb
CamRoy008/AlouetteApp
Combine columns
df_num_data['day'] = df_num_data.apply(lambda x: int(str(x['day_1']) + str(x['day_2']) + str(x['day_3'])), axis=1) df_num_data['hour'] = df_num_data.apply(lambda x: int(str(x['hour_1']) + str(x['hour_2'])), axis=1) df_num_data['minute'] = df_num_data.apply(lambda x: int(str(x['minute_1']) + str(x['minute_2'])), axis=1) df_num_data['second'] = df_num_data.apply(lambda x: int(str(x['second_1']) + str(x['second_2'])), axis=1) df_num_data['station_number'] = df_num_data.apply(lambda x: int(str(x['station_number_1']) + str(x['station_number_2'])), axis=1) df_num_data.head() rows = df_num_data.shape[0] print("Rows in unfiltered df:", rows) print() filtered_df = df_num_data[df_num_data.year <= 12] print("Errors in 'year':", rows - filtered_df.shape[0]) rows = filtered_df.shape[0] filtered_df = filtered_df[filtered_df.day <= 365] print("Errors in 'day':", rows - filtered_df.shape[0]) rows = filtered_df.shape[0] filtered_df = filtered_df[filtered_df.hour <= 24] print("Errors in 'hour':", rows - filtered_df.shape[0]) rows = filtered_df.shape[0] filtered_df = filtered_df[filtered_df.minute <= 60] print("Errors in 'minute':", rows - filtered_df.shape[0]) rows = filtered_df.shape[0] filtered_df = filtered_df[filtered_df.second <= 60] print("Errors in 'second':", rows - filtered_df.shape[0]) rows = filtered_df.shape[0] filtered_df = filtered_df[filtered_df.station_number <= 99] print("Errors in 'station_number':", rows - filtered_df.shape[0]) rows = filtered_df.shape[0] filtered_df = filtered_df[filtered_df.satellite_number == 1] print("Errors in 'satellite_number':", rows - filtered_df.shape[0]) rows = filtered_df.shape[0] print() print("Rows in filtered df:", rows) print("Total error rate:", 100 - 100 * rows / df_num_data.shape[0])
Rows in unfiltered df: 467776 Errors in 'year': 258 Errors in 'day': 32383 Errors in 'hour': 8860 Errors in 'minute': 4337 Errors in 'second': 8107 Errors in 'station_number': 194 Errors in 'satellite_number': 6332 Rows in filtered df: 407305 Total error rate: 12.92734129155835
MIT
data_cleaning/notebooks/data_cleaning.ipynb
CamRoy008/AlouetteApp
Convert to datetime object
filtered_df2 = filtered_df.copy() filtered_df['timestamp'] = filtered_df.apply(lambda x: datetime.datetime(year=1962, month=1, day=1) + \ relativedelta(years=x['year'], days=x['day']-1, hours=x['hour'], minutes=x['minute'], seconds=x['second']), axis=1) filtered_df2['timestamp'] = filtered_df2.apply(lambda x: datetime.datetime(year=1960, month=1, day=1) + \ relativedelta(years=x['year'], days=x['day']-1, hours=x['hour'], minutes=x['minute'], seconds=x['second']), axis=1) # fig = plt.figure(figsize=(20, 5)) plt.subplot(1, 3, 1) ax = filtered_df['timestamp'].hist(bins=100) ax.set_xlabel("satellite_number") ax.set_ylabel("Ionograms") plt.subplot(1, 3, 2) ax = filtered_df2['timestamp'].hist(bins=100) ax.set_xlabel("satellite_number") ax.set_ylabel("Ionograms") print("Rows in df:", filtered_df.shape[0]) alouette_launch_date = datetime.datetime(year=1962, month=9, day=29) alouette_deactivation_date = datetime.datetime(year=1972, month=12, day=31) # don't know exact date filtered_df = filtered_df[filtered_df.timestamp >= alouette_launch_date] print("Errors in timestamp (date too early):", rows - filtered_df.shape[0]) rows = filtered_df.shape[0] filtered_df = filtered_df[filtered_df.timestamp <= alouette_deactivation_date] print("Errors in timestamp (date too late):", rows - filtered_df.shape[0]) print("Total error rate:", 100 - 100 * rows / filtered_df.shape[0]) filtered_df2 = filtered_df2[filtered_df2.timestamp >= alouette_launch_date] print("Errors in timestamp (date too early):", rows - filtered_df2.shape[0]) rows = filtered_df2.shape[0] filtered_df2 = filtered_df2[filtered_df2.timestamp <= alouette_deactivation_date] print("Errors in timestamp (date too late):", rows - filtered_df2.shape[0]) print("Total error rate:", 100 - 100 * rows / filtered_df2.shape[0]) # fig = plt.figure(figsize=(20, 5)) plt.subplot(1, 3, 1) ax = filtered_df['timestamp'].hist(bins=100) ax.set_xlabel("satellite_number") ax.set_ylabel("Ionograms") plt.subplot(1, 3, 2) ax = filtered_df2['timestamp'].hist(bins=100) ax.set_xlabel("satellite_number") ax.set_ylabel("Ionograms") # Match station_numbers with their respective station names and locations df_stations = pd.read_csv("data/station_codes.csv") df_stations.columns = ['station_name', 'station_number', 'before_07_01_1965','3_letter_code', 'lat', 'lon'] df_stations.astype({'station_number': 'int32'}).dtypes stations_dict = df_stations.to_dict('list') df_stations.astype({'station_name': 'str'}).dtypes stations_dict['station_number'].index(6) stations_dict['station_name'][0] #filtered_df.astype({'station_number': 'int32'}).dtypes filtered_df.columns type(get_station_name(1, datetime.datetime.strptime('1965-05-25 16:48:01','%Y-%m-%d %H:%M:%S'))) df = pd.DataFrame([[1, 2], [3, 4], [5, 6]], index=['a', 'b', 'c'], columns=['A', 'B']) df['C'] = "Hello" type(df['C'][0]) df_stations["station_name"] = df_stations["station_name"].astype(str, errors='raise') def get_station_name(station_number, timestamp): # print(station_number, timestamp) if timestamp >= datetime.datetime(year=1965, month=7, day=1) and station_number in stations_dict['station_number']: name = stations_dict['station_name'][stations_dict['station_number'].index(station_number)] code = stations_dict['3_letter_code'][stations_dict['station_number'].index(station_number)] lat = stations_dict['lat'][stations_dict['station_number'].index(station_number)] lon = stations_dict['lon'][stations_dict['station_number'].index(station_number)] elif timestamp < datetime.datetime(year=1965, month=7, day=1) and station_number in
stations_dict['before_07_01_1965']: name = stations_dict['station_name'][stations_dict['before_07_01_1965'].index(station_number)] code = stations_dict['3_letter_code'][stations_dict['before_07_01_1965'].index(station_number)] lat = stations_dict['lat'][stations_dict['before_07_01_1965'].index(station_number)] lon = stations_dict['lon'][stations_dict['before_07_01_1965'].index(station_number)] elif timestamp < datetime.datetime(year=1963, month=4, day=25) and station_number == 10: name = stations_dict['station_name'][stations_dict['station_name'].index('Winkfield, England')] code = stations_dict['3_letter_code'][stations_dict['station_name'].index('Winkfield, England')] lat = stations_dict['lat'][stations_dict['station_name'].index('Winkfield, England')] lon = stations_dict['lon'][stations_dict['station_name'].index('Winkfield, England')] # assumption, need to look into these: elif timestamp >= datetime.datetime(year=1965, month=7, day=1) and station_number == 9: name = stations_dict['station_name'][stations_dict['station_name'].index('South Atlantic, Falkland Islands')] code = stations_dict['3_letter_code'][stations_dict['station_name'].index('South Atlantic, Falkland Islands')] lat = stations_dict['lat'][stations_dict['station_name'].index('South Atlantic, Falkland Islands')] lon = stations_dict['lon'][stations_dict['station_name'].index('South Atlantic, Falkland Islands')] elif timestamp >= datetime.datetime(year=1965, month=7, day=1) and station_number == 7: name = stations_dict['station_name'][stations_dict['station_name'].index('Quito, Ecuador')] code = stations_dict['3_letter_code'][stations_dict['station_name'].index('Quito, Ecuador')] lat = stations_dict['lat'][stations_dict['station_name'].index('Quito, Ecuador')] lon = stations_dict['lon'][stations_dict['station_name'].index('Quito, Ecuador')] elif timestamp >= datetime.datetime(year=1965, month=7, day=1) and station_number == 4: name = stations_dict['station_name'][stations_dict['station_name'].index("St. John's, Newfoundland")] code = stations_dict['3_letter_code'][stations_dict['station_name'].index("St. John's, Newfoundland")] lat = stations_dict['lat'][stations_dict['station_name'].index("St. John's, Newfoundland")] lon = stations_dict['lon'][stations_dict['station_name'].index("St. 
John's, Newfoundland")] else: name = None code = None lat = None lon = None #if len([name, code, lat, lon]) != 4: return name, code, lat, lon #station_values = filtered_df.apply(lambda x: get_station_name(x['station_number'], x['timestamp']), axis=1) filtered_df['station_name'] = None filtered_df['3_letter_code'] = None filtered_df['lat'] = None filtered_df['lon'] = None for i in range(len(filtered_df.index)): station_values = get_station_name(filtered_df.iloc[i]['station_number'], filtered_df.iloc[i]['timestamp']) filtered_df.iloc[i, filtered_df.columns.get_loc('station_name')] = station_values[0] filtered_df.iloc[i, filtered_df.columns.get_loc('3_letter_code')] = station_values[1] filtered_df.iloc[i, filtered_df.columns.get_loc('lat')] = station_values[2] filtered_df.iloc[i, filtered_df.columns.get_loc('lon')] = station_values[3] #filtered_df['station_name'], filtered_df['3_letter_code'], filtered_df['lat'], filtered_df['lon'] = fig = plt.figure(figsize=(20, 5)) ax = filtered_df[filtered_df['station_number'] == 4].apply(lambda x: x['timestamp'].date(), axis=1).value_counts().plot() ax.set_xlabel("station_number") ax.set_ylabel("Ionograms") add_value_labels(ax) print(filtered_df.isnull().sum()) print(len(filtered_df.index)) # function from https://stackoverflow.com/questions/28931224/adding-value-labels-on-a-matplotlib-bar-chart def add_value_labels(ax, spacing=5): """Add labels to the end of each bar in a bar chart. Arguments: ax (matplotlib.axes.Axes): The matplotlib object containing the axes of the plot to annotate. spacing (int): The distance between the labels and the bars. """ # For each bar: Place a label for rect in ax.patches: # Get X and Y placement of label from rect. y_value = rect.get_height() x_value = rect.get_x() + rect.get_width() / 2 # Number of points between bar and label. Change to your liking. space = spacing # Vertical alignment for positive values va = 'bottom' # If value of bar is negative: Place label below bar if y_value < 0: # Invert space to place label below space *= -1 # Vertically align label at top va = 'top' # Use Y value as label and format number with one decimal place label = y_value # Create annotation ax.annotate( label, # Use `label` as label (x_value, y_value),# Place label at end of the bar xytext=(0, space), # Vertically shift label by `space` textcoords="offset points", # Interpret `xytext` as offset in points ha='center', # Horizontally center label va=va) # Vertically align label differently for # positive and negative values. 
fig = plt.figure(figsize=(20, 5)) ax = filtered_df['station_name'].value_counts().plot.bar() ax.set_xlabel("station_name") ax.set_ylabel("Ionograms") add_value_labels(ax) filtered_df['station_name'].value_counts() # St John's amd Santiago don't show up anywhere fig = plt.figure(figsize=(20, 5)) ax = filtered_df[filtered_df['station_name'].isnull()]['station_number'].value_counts().plot.bar() ax.set_xlabel("station_number") ax.set_ylabel("Ionograms") add_value_labels(ax) #filtered_df[filtered_df['station_number'] == 7].apply(lambda x: x['timestamp'].date(), axis=1) datetime.month(filtered_df.iloc[26338]['timestamp'].date()) print(type(list(filtered_df['station_name'])[0])) print(type(list(df_stations['station_name'])[0])) initial_df_size = 467776 total_error_rate = 100 * (1 - len(filtered_df.dropna().index) / initial_df_size) print("Final df size:", len(filtered_df.dropna().index)) print("Total error rate: " + str(total_error_rate) + '%') # fmin fig = plt.figure(figsize=(20, 5)) plt.subplot(1, 2, 1) ax = filtered_df['fmin'].hist(bins=100) ax.set_xlabel("fmin") ax.set_ylabel("Ionograms") plt.subplot(1, 2, 2) ax = filtered_df[filtered_df['fmin'] < 2]['fmin'].hist(bins=120) ax.set_xlabel("fmin") # max depth fig = plt.figure(figsize=(20, 5)) plt.subplot(1, 3, 1) ax = filtered_df['max_depth'].hist(bins=100, orientation='horizontal') ax.set_ylabel("max_depth") ax.set_xlabel("Ionograms") plt.subplot(1, 3, 2) ax = filtered_df[filtered_df['max_depth'] < 1600]['max_depth'].hist(bins=80, orientation='horizontal') ax.set_ylabel("max_depth") ax.set_xlabel("Ionograms") plt.subplot(1, 3, 3) ax = filtered_df[filtered_df['max_depth'] > 3000]['max_depth'].hist(bins=80, orientation='horizontal') ax.set_ylabel("max_depth") ax.set_xlabel("Ionograms")
_____no_output_____
MIT
data_cleaning/notebooks/data_cleaning.ipynb
CamRoy008/AlouetteApp
Fix file naming
#def fix_file_name(file_name): # dir_0 = [] # dir_1 = [] # dir_2 = [] # dir_3 = [] # file_array = filtered_df.iloc[i]['file_name'].replace('\\', '/').split('/') # file_array[-3:] df_final = filtered_df.copy() df_final['file_name'] = filtered_df.apply(lambda x: '/'.join(x['file_name'].replace('\\', '/').split('/')[-3:])[:-4], axis=1) #ftp://ftp.asc-csa.gc.ca/users/OpenData_DonneesOuvertes/pub/AlouetteData/Alouette%20Data/R014207815/3488-15A/1.png
_____no_output_____
MIT
data_cleaning/notebooks/data_cleaning.ipynb
CamRoy008/AlouetteApp
Drop unnecessary columns
df_final.columns df_final = df_final.drop(columns=['Unnamed: 0', 'year', 'day_1', 'day_2', 'day_3', 'hour_1','hour_2', 'minute_1', 'minute_2',\ 'second_1', 'second_2','station_number_1', 'station_number_2', 'day', 'hour', 'minute','second'])
_____no_output_____
MIT
data_cleaning/notebooks/data_cleaning.ipynb
CamRoy008/AlouetteApp
Export final dataframe
len(df_final.index) df_final.to_csv("data/final_alouette_data.csv")
_____no_output_____
MIT
data_cleaning/notebooks/data_cleaning.ipynb
CamRoy008/AlouetteApp
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

Installation and configuration

This notebook configures the notebooks in this tutorial to connect to an Azure Machine Learning (AML) Workspace. You can use an existing workspace or create a new one.
import azureml.core from azureml.core import Workspace from azureml.core.authentication import ServicePrincipalAuthentication, AzureCliAuthentication, \ InteractiveLoginAuthentication from azureml.exceptions import AuthenticationException from dotenv import set_key, get_key, find_dotenv from pathlib import Path
_____no_output_____
MIT
00_AMLConfiguration.ipynb
Bhaskers-Blu-Org2/az-ml-batch-score
Prerequisites If you have already completed the prerequisites and selected the correct Kernel for this notebook, the AML Python SDK is already installed. Let's check the AML SDK version.
print("AML SDK Version:", azureml.core.VERSION)
_____no_output_____
MIT
00_AMLConfiguration.ipynb
Bhaskers-Blu-Org2/az-ml-batch-score
Set up your Azure Machine Learning workspace

To create or access an Azure ML Workspace, you will need the following information:

* Your subscription id
* A resource group name
* A name for your workspace
* A region for your workspace

**Note**: As with other Azure services, there are limits on certain resources like cluster size associated with the Azure Machine Learning service. Please read [this article](https://docs.microsoft.com/en-us/azure/machine-learning/service/how-to-manage-quotas) on the default limits and how to request more quota.

If you have a workspace created already, you need to get your subscription and workspace information. You can find the values for those by visiting your workspace in the [Azure portal](http://portal.azure.com). If you don't have a workspace, the create workspace command in the next section will create a resource group and a workspace using the names you provide.

Replace the values in the following cell with your information. If you would like to use service principal authentication as described [here](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/manage-azureml-service/authentication-in-azureml/authentication-in-azure-ml.ipynb), make sure you provide the optional values as well.
# Azure resources subscription_id = "" resource_group = "" workspace_name = "" workspace_region = "" tenant_id = "YOUR_TENANT_ID" # Optional for service principal authentication username = "YOUR_SERVICE_PRINCIPAL_APPLICATION_ID" # Optional for service principal authentication password = "YOUR_SERVICE_PRINCIPAL_PASSWORD" # Optional for service principal authentication
_____no_output_____
MIT
00_AMLConfiguration.ipynb
Bhaskers-Blu-Org2/az-ml-batch-score
Create and initialize a dotenv file for storing parameters used in multiple notebooks.
env_path = find_dotenv() if env_path == "": Path(".env").touch() env_path = find_dotenv() set_key(env_path, "subscription_id", subscription_id) # Replace YOUR_AZURE_SUBSCRIPTION set_key(env_path, "resource_group", resource_group) set_key(env_path, "workspace_name", workspace_name) set_key(env_path, "workspace_region", workspace_region) set_key(env_path, "tenant_id", tenant_id) set_key(env_path, "username", username) set_key(env_path, "password", password)
_____no_output_____
MIT
00_AMLConfiguration.ipynb
Bhaskers-Blu-Org2/az-ml-batch-score
Create the workspace

This cell will create an AML workspace for you in a subscription, provided you have the correct permissions.

This will fail when:

1. You do not have permission to create a workspace in the resource group
2. You do not have permission to create a resource group if it doesn't exist
3. You are not a subscription owner or contributor and no Azure ML workspaces have ever been created in this subscription

If workspace creation fails, please work with your IT admin to provide you with the appropriate permissions or to provision the required resources. If this cell succeeds, you're done configuring AML!
def get_auth(env_path): if get_key(env_path, 'password') != "YOUR_SERVICE_PRINCIPAL_PASSWORD": aml_sp_password = get_key(env_path, 'password') aml_sp_tennant_id = get_key(env_path, 'tenant_id') aml_sp_username = get_key(env_path, 'username') auth = ServicePrincipalAuthentication( tenant_id=aml_sp_tennant_id, service_principal_id=aml_sp_username, service_principal_password=aml_sp_password ) else: try: auth = AzureCliAuthentication() auth.get_authentication_header() except AuthenticationException: auth = InteractiveLoginAuthentication() return auth ws = Workspace.create( name=workspace_name, subscription_id=subscription_id, resource_group=resource_group, location=workspace_region, create_resource_group=True, auth=get_auth(env_path), exist_ok=True, )
_____no_output_____
MIT
00_AMLConfiguration.ipynb
Bhaskers-Blu-Org2/az-ml-batch-score
Let's check the details of the workspace.
ws.get_details()
_____no_output_____
MIT
00_AMLConfiguration.ipynb
Bhaskers-Blu-Org2/az-ml-batch-score
Let's write the workspace configuration for the rest of the notebooks to connect to the workspace.
ws.write_config()
_____no_output_____
MIT
00_AMLConfiguration.ipynb
Bhaskers-Blu-Org2/az-ml-batch-score
The ARMI Material Library

While *nuclides* are the microscopic building blocks of nature, their collection into *materials* is what we interact with at the engineering scale. The ARMI Framework provides a `Material` class, which has a composition (how many of each nuclide are in the material) and a variety of thermomechanical properties (many of which are temperature dependent), such as:

* Mass density
* Heat capacity
* Linear or volumetric thermal expansion
* Thermal conductivity
* Solidus/liquidus temperature

and so on. Many of these properties are widely available in the literature for fresh materials. As materials are irradiated, the properties tend to change in complex ways. Material objects can be extended to account for such changes.

The ARMI Framework comes with a small set of example material definitions. These are generally quite incomplete (often missing temperature dependence) and are of academic quality at best. To do engineering design calculations, users of ARMI are expected to make or otherwise prepare materials. As the ecosystem grows, we hope the material library will mature.

In any case, here we will explore the use of `Material`s. Let's get an instance of the Uranium Oxide material.
from armi.materials import uraniumOxide uo2 = uraniumOxide.UO2() density500 = uo2.density(Tc=500) print(f"The density of UO2 @ T = 500C is {density500:.2f} g/cc")
_____no_output_____
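The introduction noted that Material objects can be extended to account for irradiation effects. A hedged sketch of what a user-defined extension might look like, subclassing the UO2 material used above (the swelling model here is made up purely for illustration, not a recommended correlation):

```python
from armi.materials import uraniumOxide


class SwollenUO2(uraniumOxide.UO2):
    """Hypothetical UO2 variant with a crude, user-supplied swelling knockdown."""

    def __init__(self, swellingFraction=0.0):
        super().__init__()
        self.swellingFraction = swellingFraction

    def density(self, Tk=None, Tc=None):
        # Reduce the fresh-fuel density to account for volumetric swelling.
        return super().density(Tk=Tk, Tc=Tc) / (1.0 + self.swellingFraction)


swollen = SwollenUO2(swellingFraction=0.02)
print(f"Swollen: {swollen.density(Tc=500):.2f} g/cc vs. fresh: {uraniumOxide.UO2().density(Tc=500):.2f} g/cc")
```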
Apache-2.0
doc/tutorials/materials_demo.ipynb
DennisYelizarov/armi
Taking a look at the composition
print(uo2.p.massFrac)
_____no_output_____
Apache-2.0
doc/tutorials/materials_demo.ipynb
DennisYelizarov/armi
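The composition above, together with a mass density, is enough to compute number densities. Here is a hedged back-of-the-envelope sketch of that conversion in plain Python (not ARMI's `Component` machinery); the density and molar masses below are assumed round numbers for illustration:

AVOGADRO = 6.02214076e23  # atoms/mol

def number_densities(mass_density_g_cc, mass_fracs, molar_masses):
    """Return atoms/cm^3 for each nuclide: N_i = rho * w_i * N_A / M_i."""
    return {
        nuc: mass_density_g_cc * w * AVOGADRO / molar_masses[nuc]
        for nuc, w in mass_fracs.items()
    }

# assumed, approximate UO2 composition and molar masses (g/mol)
print(number_densities(10.5, {"U238": 0.881, "O": 0.119}, {"U238": 238.05, "O": 16.0}))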
The mass fractions of a material, plus its mass density, fully define the composition. Conversions between number density/fraction and mass density/fraction are handled on the next level up (on `Component`s), which we will explore soon.

ARMI automatically thermally expands materials based on their coefficients of linear expansion. For instance, a piece of Uranium Oxide that's 10 cm at room temperature would be longer at 500 C according to the formula:

\begin{equation}
\frac{\Delta L}{L_0} = \alpha \Delta T
\end{equation}

In the reactor model, this all happens behind the scenes. But here at the material library level, we can see it in detail.
L0 = 10.0
dLL = uo2.linearExpansionFactor(500, 25)
L = L0 * (1 + dLL)
print(f"Hot length is {L:.4f} cm")
_____no_output_____
Apache-2.0
doc/tutorials/materials_demo.ipynb
DennisYelizarov/armi
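For isotropic expansion, the same linear factor also dilutes the mass density, since the volume grows as (1 + dL/L0)^3. A short sketch with assumed values, illustrating the physics rather than ARMI's internal density bookkeeping:

# assumed cold density and linear expansion fraction, for illustration only
rho_cold = 10.5   # g/cc
dLL = 0.005       # dL/L0

# isotropic expansion: volume scales by (1 + dLL)**3, so density divides by it
rho_hot = rho_cold / (1.0 + dLL) ** 3
print(f"Hot density is {rho_hot:.4f} g/cc")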
Let's plot the heat capacity as a function of temperature in K.
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

Tk = np.linspace(300, 2000)
heatCapacity = [uo2.heatCapacity(Tk=ti) for ti in Tk]
plt.plot(Tk, heatCapacity)
plt.title("$UO_2$ heat capacity vs. temperature")
plt.xlabel("Temperature (K)")
plt.ylabel("Heat capacity (J/kg-K)")
plt.grid(ls='--', alpha=0.3)
_____no_output_____
Apache-2.0
doc/tutorials/materials_demo.ipynb
DennisYelizarov/armi
Intro to Table Detection with Fast RCNN

By taking an ImageNet-pretrained model such as VGG16, we can add a few more convolutional layers to construct an RPN, or region proposal network. This module extracts regions of interest, or RoIs, that tell the model where to look for an object. When the RoIs are applied, we do max pooling only within the regions of interest, so as to find an embedding that uniquely identifies that area of the input, as well as build a description of what object might lie in that region. From this description, the model can then categorize that region into one of the k categories it was trained to recognize. A minimal NumPy sketch of RoI max pooling follows the training and inference code below.
# Train Fast RCNN
import logging
import pprint
import mxnet as mx
import numpy as np

from rcnn.config import config, default, generate_config
from rcnn.symbol import *
from rcnn.core import callback, metric
from rcnn.core.loader import AnchorLoader
from rcnn.core.module import MutableModule
from rcnn.utils.load_data import load_gt_roidb, merge_roidb, filter_roidb
from rcnn.utils.load_model import load_param


def train_net(args, ctx, pretrained, epoch, prefix, begin_epoch, end_epoch,
              lr=0.001, lr_step='5'):
    # set up logger
    logging.basicConfig()
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    # setup config
    config.TRAIN.BATCH_IMAGES = 1
    config.TRAIN.BATCH_ROIS = 128
    config.TRAIN.END2END = True
    config.TRAIN.BBOX_NORMALIZATION_PRECOMPUTED = True

    # load symbol
    sym = eval('get_' + args.network + '_train')(num_classes=config.NUM_CLASSES,
                                                 num_anchors=config.NUM_ANCHORS)
    feat_sym = sym.get_internals()['rpn_cls_score_output']

    # setup multi-gpu
    batch_size = len(ctx)
    input_batch_size = config.TRAIN.BATCH_IMAGES * batch_size

    # print config
    pprint.pprint(config)

    # load dataset and prepare imdb for training
    image_sets = [iset for iset in args.image_set.split('+')]
    roidbs = [load_gt_roidb(args.dataset, image_set, args.root_path, args.dataset_path,
                            flip=not args.no_flip)
              for image_set in image_sets]
    roidb = merge_roidb(roidbs)
    roidb = filter_roidb(roidb)

    # load training data
    train_data = AnchorLoader(feat_sym, roidb, batch_size=input_batch_size,
                              shuffle=not args.no_shuffle, ctx=ctx,
                              work_load_list=args.work_load_list,
                              feat_stride=config.RPN_FEAT_STRIDE,
                              anchor_scales=config.ANCHOR_SCALES,
                              anchor_ratios=config.ANCHOR_RATIOS,
                              aspect_grouping=config.TRAIN.ASPECT_GROUPING)

    # infer max shape
    max_data_shape = [('data', (input_batch_size, 3,
                                max([v[0] for v in config.SCALES]),
                                max([v[1] for v in config.SCALES])))]
    max_data_shape, max_label_shape = train_data.infer_shape(max_data_shape)
    max_data_shape.append(('gt_boxes', (input_batch_size, 100, 5)))
    print('providing maximum shape', max_data_shape, max_label_shape)

    # infer shape
    data_shape_dict = dict(train_data.provide_data + train_data.provide_label)
    arg_shape, out_shape, aux_shape = sym.infer_shape(**data_shape_dict)
    arg_shape_dict = dict(zip(sym.list_arguments(), arg_shape))
    out_shape_dict = dict(zip(sym.list_outputs(), out_shape))
    aux_shape_dict = dict(zip(sym.list_auxiliary_states(), aux_shape))
    print('output shape')
    pprint.pprint(out_shape_dict)

    # load and initialize params
    if args.resume:
        arg_params, aux_params = load_param(prefix, begin_epoch, convert=True)
    else:
        arg_params, aux_params = load_param(pretrained, epoch, convert=True)
        arg_params['rpn_conv_3x3_weight'] = mx.random.normal(0, 0.01, shape=arg_shape_dict['rpn_conv_3x3_weight'])
        arg_params['rpn_conv_3x3_bias'] = mx.nd.zeros(shape=arg_shape_dict['rpn_conv_3x3_bias'])
        arg_params['rpn_cls_score_weight'] = mx.random.normal(0, 0.01, shape=arg_shape_dict['rpn_cls_score_weight'])
        arg_params['rpn_cls_score_bias'] = mx.nd.zeros(shape=arg_shape_dict['rpn_cls_score_bias'])
        arg_params['rpn_bbox_pred_weight'] = mx.random.normal(0, 0.01, shape=arg_shape_dict['rpn_bbox_pred_weight'])
        arg_params['rpn_bbox_pred_bias'] = mx.nd.zeros(shape=arg_shape_dict['rpn_bbox_pred_bias'])
        arg_params['cls_score_weight'] = mx.random.normal(0, 0.01, shape=arg_shape_dict['cls_score_weight'])
        arg_params['cls_score_bias'] = mx.nd.zeros(shape=arg_shape_dict['cls_score_bias'])
        arg_params['bbox_pred_weight'] = mx.random.normal(0, 0.001, shape=arg_shape_dict['bbox_pred_weight'])
        arg_params['bbox_pred_bias'] = mx.nd.zeros(shape=arg_shape_dict['bbox_pred_bias'])

    # check parameter shapes
    for k in sym.list_arguments():
        if k in data_shape_dict:
            continue
        assert k in arg_params, k + ' not initialized'
        assert arg_params[k].shape == arg_shape_dict[k], \
            'shape inconsistent for ' + k + ' inferred ' + str(arg_shape_dict[k]) + ' provided ' + str(arg_params[k].shape)
    for k in sym.list_auxiliary_states():
        assert k in aux_params, k + ' not initialized'
        assert aux_params[k].shape == aux_shape_dict[k], \
            'shape inconsistent for ' + k + ' inferred ' + str(aux_shape_dict[k]) + ' provided ' + str(aux_params[k].shape)

    # create solver
    fixed_param_prefix = config.FIXED_PARAMS
    data_names = [k[0] for k in train_data.provide_data]
    label_names = [k[0] for k in train_data.provide_label]
    mod = MutableModule(sym, data_names=data_names, label_names=label_names,
                        logger=logger, context=ctx, work_load_list=args.work_load_list,
                        max_data_shapes=max_data_shape, max_label_shapes=max_label_shape,
                        fixed_param_prefix=fixed_param_prefix)

    # decide training params
    # metric
    rpn_eval_metric = metric.RPNAccMetric()
    rpn_cls_metric = metric.RPNLogLossMetric()
    rpn_bbox_metric = metric.RPNL1LossMetric()
    eval_metric = metric.RCNNAccMetric()
    cls_metric = metric.RCNNLogLossMetric()
    bbox_metric = metric.RCNNL1LossMetric()
    eval_metrics = mx.metric.CompositeEvalMetric()
    for child_metric in [rpn_eval_metric, rpn_cls_metric, rpn_bbox_metric,
                         eval_metric, cls_metric, bbox_metric]:
        eval_metrics.add(child_metric)

    # callback
    batch_end_callback = callback.Speedometer(train_data.batch_size, frequent=args.frequent)
    means = np.tile(np.array(config.TRAIN.BBOX_MEANS), config.NUM_CLASSES)
    stds = np.tile(np.array(config.TRAIN.BBOX_STDS), config.NUM_CLASSES)
    epoch_end_callback = callback.do_checkpoint(prefix, means, stds)

    # decide learning rate
    base_lr = lr
    lr_factor = 0.1
    lr_epoch = [int(epoch) for epoch in lr_step.split(',')]
    lr_epoch_diff = [epoch - begin_epoch for epoch in lr_epoch if epoch > begin_epoch]
    lr = base_lr * (lr_factor ** (len(lr_epoch) - len(lr_epoch_diff)))
    lr_iters = [int(epoch * len(roidb) / batch_size) for epoch in lr_epoch_diff]
    print('lr', lr, 'lr_epoch_diff', lr_epoch_diff, 'lr_iters', lr_iters)
    lr_scheduler = mx.lr_scheduler.MultiFactorScheduler(lr_iters, lr_factor)

    # optimizer
    optimizer_params = {'momentum': 0.9,
                        'wd': 0.0005,
                        'learning_rate': lr,
                        'lr_scheduler': lr_scheduler,
                        'rescale_grad': (1.0 / batch_size),
                        'clip_gradient': 5}

    # train
    mod.fit(train_data, eval_metric=eval_metrics, epoch_end_callback=epoch_end_callback,
            batch_end_callback=batch_end_callback, kvstore=args.kvstore,
            optimizer='sgd', optimizer_params=optimizer_params,
            arg_params=arg_params, aux_params=aux_params,
            begin_epoch=begin_epoch, num_epoch=end_epoch)


## Training Args
class DictToObject:
    '''helper class to encapsulate all the args from dict to obj'''
    def __init__(self, **entries):
        self.__dict__.update(entries)


args = {'lr': 0.001, 'image_set': '2007_trainval', 'network': 'resnet',
        'resume': False, 'pretrained': 'model/resnet-101', 'root_path': 'new_data',
        'dataset': 'TableDetectionVOC', 'lr_step': '7', 'prefix': 'model/rese2e',
        'end_epoch': 10, 'dataset_path': 'table_data/VOCdevkit', 'gpus': '0',
        'no_flip': False, 'no_shuffle': False, 'begin_epoch': 0,
        'work_load_list': None, 'pretrained_epoch': 0, 'kvstore': 'device',
        'frequent': 20}
args = DictToObject(**args)

if len(args.gpus) > 1:
    ctx = [mx.gpu(int(i)) for i in args.gpus.split(',')]
else:
    ctx = [mx.gpu(int(args.gpus))]

train_net(args, ctx, args.pretrained, args.pretrained_epoch, args.prefix,
          args.begin_epoch, args.end_epoch, lr=args.lr, lr_step=args.lr_step)


# Fast R-CNN inference on the VOC-style table-detection dataset
import os
import cv2
import mxnet as mx
import numpy as np

from rcnn.config import config
from rcnn.symbol import get_vgg_test, get_vgg_rpn_test
from rcnn.io.image import resize, transform
from rcnn.core.tester import Predictor, im_detect, im_proposal, vis_all_detection, draw_all_detection
from rcnn.utils.load_model import load_param
from rcnn.processing.nms import py_nms_wrapper, cpu_nms_wrapper, gpu_nms_wrapper
import urllib2
import tempfile

# 5 classes: background plus 4 table parts
CLASSES = ('__background__', 'table', 'header', 'row', 'column')

config.TEST.HAS_RPN = True
SHORT_SIDE = config.SCALES[0][0]
LONG_SIDE = config.SCALES[0][1]
PIXEL_MEANS = config.PIXEL_MEANS
DATA_NAMES = ['data', 'im_info']
LABEL_NAMES = None
DATA_SHAPES = [('data', (1, 3, LONG_SIDE, SHORT_SIDE)), ('im_info', (1, 3))]
LABEL_SHAPES = None

# visualization thresholds
CONF_THRESH = 0.7
NMS_THRESH = 0.3
nms = py_nms_wrapper(NMS_THRESH)


def get_net(symbol, prefix, epoch, ctx):
    arg_params, aux_params = load_param(prefix, epoch, convert=True, ctx=ctx, process=True)

    # infer shape
    data_shape_dict = dict(DATA_SHAPES)
    arg_names, aux_names = symbol.list_arguments(), symbol.list_auxiliary_states()
    arg_shape, _, aux_shape = symbol.infer_shape(**data_shape_dict)
    arg_shape_dict = dict(zip(arg_names, arg_shape))
    aux_shape_dict = dict(zip(aux_names, aux_shape))

    # check shapes
    for k in symbol.list_arguments():
        if k in data_shape_dict or 'label' in k:
            continue
        assert k in arg_params, k + ' not initialized'
        assert arg_params[k].shape == arg_shape_dict[k], \
            'shape inconsistent for ' + k + ' inferred ' + str(arg_shape_dict[k]) + ' provided ' + str(arg_params[k].shape)
    for k in symbol.list_auxiliary_states():
        assert k in aux_params, k + ' not initialized'
        assert aux_params[k].shape == aux_shape_dict[k], \
            'shape inconsistent for ' + k + ' inferred ' + str(aux_shape_dict[k]) + ' provided ' + str(aux_params[k].shape)

    predictor = Predictor(symbol, DATA_NAMES, LABEL_NAMES, context=ctx,
                          provide_data=DATA_SHAPES, provide_label=LABEL_SHAPES,
                          arg_params=arg_params, aux_params=aux_params)
    return predictor


def generate_batch(im):
    """
    preprocess image, return batch
    :param im: cv2.imread returns [height, width, channel] in BGR
    :return: data_batch: MXNet input batch
             data_names: names in data_batch
             im_scale: float number
    """
    im_array, im_scale = resize(im, SHORT_SIDE, LONG_SIDE)
    im_array = transform(im_array, PIXEL_MEANS)
    im_info = np.array([[im_array.shape[2], im_array.shape[3], im_scale]], dtype=np.float32)
    data = [mx.nd.array(im_array), mx.nd.array(im_info)]
    data_shapes = [('data', im_array.shape), ('im_info', im_info.shape)]
    data_batch = mx.io.DataBatch(data=data, label=None,
                                 provide_data=data_shapes, provide_label=None)
    return data_batch, DATA_NAMES, im_scale


def demo_net(predictor, im, vis=False):
    """
    generate data_batch -> im_detect -> post process
    :param predictor: Predictor
    :param im: image as a numpy array (BGR)
    :param vis: will save as a new image if not visualized
    :return: None
    """
    data_batch, data_names, im_scale = generate_batch(im)
    scores, boxes, data_dict = im_detect(predictor, data_batch, data_names, im_scale)

    all_boxes = [[] for _ in CLASSES]
    for cls in CLASSES:
        cls_ind = CLASSES.index(cls)
        cls_boxes = boxes[:, 4 * cls_ind:4 * (cls_ind + 1)]
        cls_scores = scores[:, cls_ind, np.newaxis]
        keep = np.where(cls_scores >= CONF_THRESH)[0]
        dets = np.hstack((cls_boxes, cls_scores)).astype(np.float32)[keep, :]
        keep = nms(dets)
        all_boxes[cls_ind] = dets[keep, :]

    boxes_this_image = [[]] + [all_boxes[j] for j in range(1, len(CLASSES))]

    # print results
    print('class ---- [[x1, x2, y1, y2, confidence]]')
    for ind, boxes in enumerate(boxes_this_image):
        if len(boxes) > 0:
            print('---------', CLASSES[ind], '---------')
            print(boxes)

    if vis:
        vis_all_detection(data_dict['data'].asnumpy(), boxes_this_image, CLASSES, im_scale)
    else:
        # result_file = image_name.replace('.', '_result.')
        result_file = "output.jpg"
        print('results saved to %s' % result_file)
        im = draw_all_detection(data_dict['data'].asnumpy(), boxes_this_image, CLASSES, im_scale)
        cv2.imwrite(result_file, im)


def get_image_from_url(url, img_file):
    req = urllib2.urlopen(url)
    img_file.write(req.read())
    img_file.flush()
    return img_file.name
_____no_output_____
Apache-2.0
example/rcnn/Moodys-Table-Detection.ipynb
SCDM/mxnet
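To make the RoI pooling idea concrete, here is a minimal NumPy sketch of max pooling over a single region of a feature map. The RoI format (x1, y1, x2, y2 in feature-map coordinates) and the toy shapes are assumptions for illustration, not the `rcnn` package's API:

import numpy as np

def roi_max_pool(feat, roi, out_size=2):
    """Max-pool one RoI of a 2-d feature map into an out_size x out_size grid."""
    x1, y1, x2, y2 = roi
    region = feat[y1:y2, x1:x2]
    # split the region's rows and columns into roughly equal bins
    h_bins = np.array_split(np.arange(region.shape[0]), out_size)
    w_bins = np.array_split(np.arange(region.shape[1]), out_size)
    out = np.empty((out_size, out_size), dtype=feat.dtype)
    for i, hb in enumerate(h_bins):
        for j, wb in enumerate(w_bins):
            out[i, j] = region[np.ix_(hb, wb)].max()
    return out

feat = np.arange(36, dtype=float).reshape(6, 6)  # toy 6x6 feature map
print(roi_max_pool(feat, (1, 1, 5, 5)))          # fixed-size 2x2 descriptor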
Inference: let's run some predictions
vis = False
gpu = 0
epoch = 3
prefix = 'e2e'
ctx = mx.gpu(gpu)

symbol = get_vgg_test(num_classes=config.NUM_CLASSES, num_anchors=config.NUM_ANCHORS)
predictor = get_net(symbol, prefix, epoch, ctx)

img_file = tempfile.NamedTemporaryFile()
#url = 'http://images.all-free-download.com/images/graphiclarge/aeroplane_boeing_737_air_new_zealand_218019.jpg'
#url = 'http://host.robots.ox.ac.uk/pascal/VOC/voc2012/segexamples/images/21.jpg'
#url = 'https://www.siemens.com/press/pool/de/pressebilder/2011/mobility/soimo201107/072dpi/soimo201107-04_072dpi.jpg'
url = '/home/ubuntu/workspace/mxnet/example/rcnn/new_data/VOCdevkit/VOC2007/JPEGImages/500046727_20161125_page_002.jpg'

if 'JPEGImages' in url:
    image = url
else:
    image = get_image_from_url(url, img_file)

assert os.path.exists(image), image + ' not found'
im = cv2.imread(image)
demo_net(predictor, im, vis)
_____no_output_____
Apache-2.0
example/rcnn/Moodys-Table-Detection.ipynb
SCDM/mxnet
Table Object Detection
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

im = np.array(Image.open('/home/ubuntu/workspace/mxnet/example/rcnn/new_data/marked_table.png'))
plt.imshow(im)
plt.show()
_____no_output_____
Apache-2.0
example/rcnn/Moodys-Table-Detection.ipynb
SCDM/mxnet
Models ensemble to achieve better test metrics

Model ensembling is a popular strategy in machine learning and deep learning to achieve more accurate and more stable outputs. A typical practice is:

* Split all the training dataset into K folds.
* Train K models, each with a different set of K-1 folds.
* Execute inference on the test data with all K models.
* Compute a (possibly weighted) average of the outputs, or vote for the most common value as the final result.

MONAI provides the `EnsembleEvaluator` engine and the `MeanEnsemble` and `VoteEnsemble` post transforms. This tutorial shows how to leverage these ensemble modules in MONAI to set up an ensemble program. A tiny NumPy sketch of the two ensembling rules follows the environment setup below.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/models_ensemble.ipynb)

Setup environment
!python -c "import monai" || pip install -q "monai-weekly[ignite, nibabel, tqdm]"
_____no_output_____
Apache-2.0
modules/models_ensemble.ipynb
dzenanz/tutorials
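As promised above, here is a tiny NumPy sketch of the two ensembling rules, using made-up per-model outputs rather than MONAI's transforms:

import numpy as np

# each row is one model's predicted probabilities for two voxels (made up)
preds = np.array([
    [0.9, 0.2],   # model 0
    [0.8, 0.6],   # model 1
    [0.7, 0.4],   # model 2
])
weights = np.array([0.95, 0.94, 0.90])[:, None]

# mean ensemble: weighted average of the raw probabilities
mean_ensemble = (preds * weights).sum(axis=0) / weights.sum()

# vote ensemble: binarize each model's output, then take the majority
vote_ensemble = ((preds > 0.5).sum(axis=0) > preds.shape[0] / 2).astype(int)

print(mean_ensemble)  # e.g. [0.80... 0.39...]
print(vote_ensemble)  # e.g. [1 0]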
Setup imports
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#     http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import glob
import logging
import os
import tempfile
import shutil
import sys

import nibabel as nib
import numpy as np
import torch

from monai.config import print_config
from monai.data import CacheDataset, DataLoader, create_test_image_3d
from monai.engines import (
    EnsembleEvaluator,
    SupervisedEvaluator,
    SupervisedTrainer,
)
from monai.handlers import MeanDice, StatsHandler, ValidationHandler, from_engine
from monai.inferers import SimpleInferer, SlidingWindowInferer
from monai.losses import DiceLoss
from monai.networks.nets import UNet
from monai.transforms import (
    Activationsd,
    AsChannelFirstd,
    AsDiscreted,
    Compose,
    LoadImaged,
    MeanEnsembled,
    RandCropByPosNegLabeld,
    RandRotate90d,
    ScaleIntensityd,
    EnsureTyped,
    VoteEnsembled,
)
from monai.utils import set_determinism

print_config()
MONAI version: 0.6.0rc1+23.gc6793fd0 Numpy version: 1.20.3 Pytorch version: 1.9.0a0+c3d40fd MONAI flags: HAS_EXT = True, USE_COMPILED = False MONAI rev id: c6793fd0f316a448778d0047664aaf8c1895fe1c Optional dependencies: Pytorch Ignite version: 0.4.5 Nibabel version: 3.2.1 scikit-image version: 0.15.0 Pillow version: 7.0.0 Tensorboard version: 2.5.0 gdown version: 3.13.0 TorchVision version: 0.10.0a0 ITK version: 5.1.2 tqdm version: 4.53.0 lmdb version: 1.2.1 psutil version: 5.8.0 pandas version: 1.1.4 einops version: 0.3.0 For details about installing the optional dependencies, please visit: https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
Apache-2.0
modules/models_ensemble.ipynb
dzenanz/tutorials
Setup data directory

You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
/workspace/data/medical
Apache-2.0
modules/models_ensemble.ipynb
dzenanz/tutorials
Set determinism, logging, device
set_determinism(seed=0)
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
device = torch.device("cuda:0")
_____no_output_____
Apache-2.0
modules/models_ensemble.ipynb
dzenanz/tutorials
Generate random (image, label) pairs

Generate 60 pairs for the task: 50 for training and 10 for test. Then split the 50 training pairs into 5 folds to train 5 separate models.
data_dir = os.path.join(root_dir, "runs")
if not os.path.exists(data_dir):
    os.makedirs(data_dir)

for i in range(60):
    im, seg = create_test_image_3d(128, 128, 128, num_seg_classes=1, channel_dim=-1)
    n = nib.Nifti1Image(im, np.eye(4))
    nib.save(n, os.path.join(data_dir, f"img{i}.nii.gz"))
    n = nib.Nifti1Image(seg, np.eye(4))
    nib.save(n, os.path.join(data_dir, f"seg{i}.nii.gz"))

images = sorted(glob.glob(os.path.join(data_dir, "img*.nii.gz")))
segs = sorted(glob.glob(os.path.join(data_dir, "seg*.nii.gz")))

train_files = []
val_files = []
for i in range(5):
    train_files.append(
        [
            {"image": img, "label": seg}
            for img, seg in zip(
                images[: (10 * i)] + images[(10 * (i + 1)): 50],
                segs[: (10 * i)] + segs[(10 * (i + 1)): 50],
            )
        ]
    )
    val_files.append(
        [
            {"image": img, "label": seg}
            for img, seg in zip(images[(10 * i): (10 * (i + 1))], segs[(10 * i): (10 * (i + 1))])
        ]
    )

test_files = [{"image": img, "label": seg} for img, seg in zip(images[50:60], segs[50:60])]
_____no_output_____
Apache-2.0
modules/models_ensemble.ipynb
dzenanz/tutorials
Setup transforms for training and validation
train_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"]),
        AsChannelFirstd(keys=["image", "label"], channel_dim=-1),
        ScaleIntensityd(keys=["image", "label"]),
        RandCropByPosNegLabeld(
            keys=["image", "label"],
            label_key="label",
            spatial_size=[96, 96, 96],
            pos=1,
            neg=1,
            num_samples=4,
        ),
        RandRotate90d(keys=["image", "label"], prob=0.5, spatial_axes=[0, 2]),
        EnsureTyped(keys=["image", "label"]),
    ]
)

val_transforms = Compose(
    [
        LoadImaged(keys=["image", "label"]),
        AsChannelFirstd(keys=["image", "label"], channel_dim=-1),
        ScaleIntensityd(keys=["image", "label"]),
        EnsureTyped(keys=["image", "label"]),
    ]
)
_____no_output_____
Apache-2.0
modules/models_ensemble.ipynb
dzenanz/tutorials
Define CacheDatasets and DataLoaders for train, validation and test
num_models = 5

train_dss = [CacheDataset(data=train_files[i], transform=train_transforms) for i in range(num_models)]
train_loaders = [
    DataLoader(train_dss[i], batch_size=2, shuffle=True, num_workers=4)
    for i in range(num_models)
]

val_dss = [CacheDataset(data=val_files[i], transform=val_transforms) for i in range(num_models)]
val_loaders = [DataLoader(val_dss[i], batch_size=1, num_workers=4) for i in range(num_models)]

test_ds = CacheDataset(data=test_files, transform=val_transforms)
test_loader = DataLoader(test_ds, batch_size=1, num_workers=4)
100%|██████████| 40/40 [00:01<00:00, 26.37it/s] 100%|██████████| 40/40 [00:01<00:00, 33.42it/s] 100%|██████████| 40/40 [00:01<00:00, 36.70it/s] 100%|██████████| 40/40 [00:00<00:00, 40.63it/s] 100%|██████████| 40/40 [00:00<00:00, 43.25it/s] 100%|██████████| 10/10 [00:00<00:00, 40.24it/s] 100%|██████████| 10/10 [00:00<00:00, 37.47it/s] 100%|██████████| 10/10 [00:00<00:00, 39.96it/s] 100%|██████████| 10/10 [00:00<00:00, 38.21it/s] 100%|██████████| 10/10 [00:00<00:00, 39.50it/s] 100%|██████████| 10/10 [00:00<00:00, 42.86it/s]
Apache-2.0
modules/models_ensemble.ipynb
dzenanz/tutorials
Define a training process based on workflows

More usage examples of MONAI workflows are available at: [workflow examples](https://github.com/Project-MONAI/tutorials/tree/master/modules/engines).
def train(index):
    net = UNet(
        dimensions=3,
        in_channels=1,
        out_channels=1,
        channels=(16, 32, 64, 128, 256),
        strides=(2, 2, 2, 2),
        num_res_units=2,
    ).to(device)
    loss = DiceLoss(sigmoid=True)
    opt = torch.optim.Adam(net.parameters(), 1e-3)

    val_post_transforms = Compose(
        [
            EnsureTyped(keys="pred"),
            Activationsd(keys="pred", sigmoid=True),
            AsDiscreted(keys="pred", threshold_values=True),
        ]
    )

    evaluator = SupervisedEvaluator(
        device=device,
        val_data_loader=val_loaders[index],
        network=net,
        inferer=SlidingWindowInferer(roi_size=(96, 96, 96), sw_batch_size=4, overlap=0.5),
        postprocessing=val_post_transforms,
        key_val_metric={
            "val_mean_dice": MeanDice(
                include_background=True,
                output_transform=from_engine(["pred", "label"]),
            )
        },
    )

    train_handlers = [
        ValidationHandler(validator=evaluator, interval=4, epoch_level=True),
        StatsHandler(tag_name="train_loss", output_transform=from_engine(["loss"], first=True)),
    ]

    trainer = SupervisedTrainer(
        device=device,
        max_epochs=4,
        train_data_loader=train_loaders[index],
        network=net,
        optimizer=opt,
        loss_function=loss,
        inferer=SimpleInferer(),
        amp=False,
        train_handlers=train_handlers,
    )
    trainer.run()
    return net
_____no_output_____
Apache-2.0
modules/models_ensemble.ipynb
dzenanz/tutorials
Execute 5 training processes and get 5 models
models = [train(i) for i in range(num_models)]
INFO:ignite.engine.engine.SupervisedTrainer:Engine run resuming from iteration 0, epoch 0 until 4 epochs INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 1/20 -- train_loss: 0.6230 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 2/20 -- train_loss: 0.5654 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 3/20 -- train_loss: 0.5949 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 4/20 -- train_loss: 0.5036 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 5/20 -- train_loss: 0.4908 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 6/20 -- train_loss: 0.4712 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 7/20 -- train_loss: 0.4696 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 8/20 -- train_loss: 0.5312 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 9/20 -- train_loss: 0.4865 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 10/20 -- train_loss: 0.4700 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 11/20 -- train_loss: 0.4217 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 12/20 -- train_loss: 0.4699 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 13/20 -- train_loss: 0.5223 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 14/20 -- train_loss: 0.4458 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 15/20 -- train_loss: 0.3606 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 16/20 -- train_loss: 0.4486 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 17/20 -- train_loss: 0.4257 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 18/20 -- train_loss: 0.4503 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 19/20 -- train_loss: 0.4755 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 1/4, Iter: 20/20 -- train_loss: 0.3600 INFO:ignite.engine.engine.SupervisedTrainer:Epoch[1] Complete. 
Time taken: 00:00:05 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 1/20 -- train_loss: 0.3595 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 2/20 -- train_loss: 0.4048 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 3/20 -- train_loss: 0.4752 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 4/20 -- train_loss: 0.4201 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 5/20 -- train_loss: 0.3508 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 6/20 -- train_loss: 0.3597 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 7/20 -- train_loss: 0.3493 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 8/20 -- train_loss: 0.4521 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 9/20 -- train_loss: 0.3626 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 10/20 -- train_loss: 0.5069 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 11/20 -- train_loss: 0.4473 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 12/20 -- train_loss: 0.4254 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 13/20 -- train_loss: 0.4346 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 14/20 -- train_loss: 0.3218 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 15/20 -- train_loss: 0.4270 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 16/20 -- train_loss: 0.4167 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 17/20 -- train_loss: 0.3766 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 18/20 -- train_loss: 0.4059 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 19/20 -- train_loss: 0.3510 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 2/4, Iter: 20/20 -- train_loss: 0.5764 INFO:ignite.engine.engine.SupervisedTrainer:Epoch[2] Complete. 
Time taken: 00:00:06 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 1/20 -- train_loss: 0.3963 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 2/20 -- train_loss: 0.3510 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 3/20 -- train_loss: 0.4277 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 4/20 -- train_loss: 0.4574 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 5/20 -- train_loss: 0.3738 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 6/20 -- train_loss: 0.4260 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 7/20 -- train_loss: 0.5325 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 8/20 -- train_loss: 0.3237 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 9/20 -- train_loss: 0.4540 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 10/20 -- train_loss: 0.3067 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 11/20 -- train_loss: 0.3417 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 12/20 -- train_loss: 0.3756 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 13/20 -- train_loss: 0.3444 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 14/20 -- train_loss: 0.3136 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 15/20 -- train_loss: 0.3385 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 16/20 -- train_loss: 0.3211 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 17/20 -- train_loss: 0.3638 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 18/20 -- train_loss: 0.3703 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 19/20 -- train_loss: 0.3725 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 3/4, Iter: 20/20 -- train_loss: 0.3613 INFO:ignite.engine.engine.SupervisedTrainer:Epoch[3] Complete. 
Time taken: 00:00:06 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 1/20 -- train_loss: 0.3960 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 2/20 -- train_loss: 0.3212 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 3/20 -- train_loss: 0.3127 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 4/20 -- train_loss: 0.3453 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 5/20 -- train_loss: 0.3885 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 6/20 -- train_loss: 0.3419 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 7/20 -- train_loss: 0.3377 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 8/20 -- train_loss: 0.3056 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 9/20 -- train_loss: 0.5426 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 10/20 -- train_loss: 0.3914 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 11/20 -- train_loss: 0.4319 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 12/20 -- train_loss: 0.3419 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 13/20 -- train_loss: 0.2996 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 14/20 -- train_loss: 0.3304 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 15/20 -- train_loss: 0.3622 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 16/20 -- train_loss: 0.3328 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 17/20 -- train_loss: 0.3306 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 18/20 -- train_loss: 0.3221 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 19/20 -- train_loss: 0.3609 INFO:ignite.engine.engine.SupervisedTrainer:Epoch: 4/4, Iter: 20/20 -- train_loss: 0.3655 INFO:ignite.engine.engine.SupervisedEvaluator:Engine run resuming from iteration 0, epoch 3 until 4 epochs INFO:ignite.engine.engine.SupervisedEvaluator:Got new best metric of val_mean_dice: 0.9533967077732086 INFO:ignite.engine.engine.SupervisedEvaluator:Epoch[4] Complete. Time taken: 00:00:01 INFO:ignite.engine.engine.SupervisedEvaluator:Engine run complete. Time taken: 00:00:02 INFO:ignite.engine.engine.SupervisedTrainer:Epoch[4] Complete. Time taken: 00:00:08 INFO:ignite.engine.engine.SupervisedTrainer:Engine run complete. Time taken: 00:00:26 INFO:ignite.engine.engine.SupervisedTrainer:Engine run resuming from iteration 0, epoch 0 until 4 epochs
Apache-2.0
modules/models_ensemble.ipynb
dzenanz/tutorials
Define evaluation process based on `EnsembleEvaluator`
def ensemble_evaluate(post_transforms, models):
    evaluator = EnsembleEvaluator(
        device=device,
        val_data_loader=test_loader,
        pred_keys=["pred0", "pred1", "pred2", "pred3", "pred4"],
        networks=models,
        inferer=SlidingWindowInferer(roi_size=(96, 96, 96), sw_batch_size=4, overlap=0.5),
        postprocessing=post_transforms,
        key_val_metric={
            "test_mean_dice": MeanDice(
                include_background=True,
                output_transform=from_engine(["pred", "label"]),
            )
        },
    )
    evaluator.run()
_____no_output_____
Apache-2.0
modules/models_ensemble.ipynb
dzenanz/tutorials
Evaluate the ensemble result with `MeanEnsemble`

`EnsembleEvaluator` accepts a list of models for inference and outputs a list of predictions for further operations.

Here the input data is a list or tuple of PyTorch Tensors with shape [B, C, H, W, D], where the list represents the output data from the 5 models. `MeanEnsemble` also supports adding `weights` to the input data:

* The `weights` will be added to the input data from the highest dimension.
* If the `weights` have only 1 dimension, they will be added to the `E` dimension of the input data.
* If the `weights` have 3 dimensions, they will be added to the `E`, `B` and `C` dimensions.

For example, to ensemble 3 segmentation model outputs where every output has 4 channels (classes), the input data shape can be [3, B, 4, H, W, D], and you can add different `weights` for different classes. So the `weights` shape can be [3, 1, 4], like: `weights = [[[1, 2, 3, 4]], [[4, 3, 2, 1]], [[1, 1, 1, 1]]]`. A standalone broadcasting sketch follows the code below.
mean_post_transforms = Compose(
    [
        EnsureTyped(keys=["pred0", "pred1", "pred2", "pred3", "pred4"]),
        MeanEnsembled(
            keys=["pred0", "pred1", "pred2", "pred3", "pred4"],
            output_key="pred",
            # in this particular example, we use validation metrics as weights
            weights=[0.95, 0.94, 0.95, 0.94, 0.90],
        ),
        Activationsd(keys="pred", sigmoid=True),
        AsDiscreted(keys="pred", threshold_values=True),
    ]
)
ensemble_evaluate(mean_post_transforms, models)
INFO:ignite.engine.engine.EnsembleEvaluator:Engine run resuming from iteration 0, epoch 0 until 1 epochs INFO:ignite.engine.engine.EnsembleEvaluator:Got new best metric of test_mean_dice: 0.9435271978378296 INFO:ignite.engine.engine.EnsembleEvaluator:Epoch[1] Complete. Time taken: 00:00:02 INFO:ignite.engine.engine.EnsembleEvaluator:Engine run complete. Time taken: 00:00:03
Apache-2.0
modules/models_ensemble.ipynb
dzenanz/tutorials
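As a standalone check of the weight-broadcasting rule described above, here is a small PyTorch sketch outside of MONAI; the shapes and values are made up for illustration:

import torch

# stacked predictions: E=3 models, B=2 batch, C=4 classes, 8x8x8 volume
preds = torch.rand(3, 2, 4, 8, 8, 8)

# per-class weights for each model, shape [3, 1, 4] as in the example above
weights = torch.tensor([[[1., 2., 3., 4.]],
                        [[4., 3., 2., 1.]],
                        [[1., 1., 1., 1.]]])

# expand to [3, 1, 4, 1, 1, 1] so it broadcasts over B, H, W, D
w = weights[..., None, None, None]
ensembled = (preds * w).sum(dim=0) / w.sum(dim=0)
print(ensembled.shape)  # torch.Size([2, 4, 8, 8, 8])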
Evaluate the ensemble result with `VoteEnsemble`

Here the input data is a list or tuple of PyTorch Tensors with shape [B, C, H, W, D], where the list represents the output data from the 5 models.

Note that:
* `VoteEnsemble` expects the input data to be discrete values.
* Input data can be multi-channel data in one-hot format or single-channel data.
* It will vote to select the most common value between items.
* The output data has the same shape as every item of the input data.
vote_post_transforms = Compose(
    [
        EnsureTyped(keys=["pred0", "pred1", "pred2", "pred3", "pred4"]),
        Activationsd(keys=["pred0", "pred1", "pred2", "pred3", "pred4"], sigmoid=True),
        # transform data into discrete values before voting
        AsDiscreted(keys=["pred0", "pred1", "pred2", "pred3", "pred4"], threshold_values=True),
        VoteEnsembled(keys=["pred0", "pred1", "pred2", "pred3", "pred4"], output_key="pred"),
    ]
)
ensemble_evaluate(vote_post_transforms, models)
INFO:ignite.engine.engine.EnsembleEvaluator:Engine run resuming from iteration 0, epoch 0 until 1 epochs INFO:ignite.engine.engine.EnsembleEvaluator:Got new best metric of test_mean_dice: 0.9436934590339661 INFO:ignite.engine.engine.EnsembleEvaluator:Epoch[1] Complete. Time taken: 00:00:02 INFO:ignite.engine.engine.EnsembleEvaluator:Engine run complete. Time taken: 00:00:03
Apache-2.0
modules/models_ensemble.ipynb
dzenanz/tutorials
Cleanup data directory

Remove the directory if a temporary one was used.
if directory is None:
    shutil.rmtree(root_dir)
_____no_output_____
Apache-2.0
modules/models_ensemble.ipynb
dzenanz/tutorials
Load Dataset

Clear all records and load all images into /dataset.

svg_w=960, svg_h=540
from app.models import Label, Image, Batch, Comment, STATUS_CHOICES
from django.contrib.auth.models import User
from django.conf import settings
import os, fnmatch, uuid, shutil
from uuid import uuid4


def getbatchlist(filelist):
    def chunks(li, n):
        """Yield successive n-sized chunks from li."""
        for i in range(0, len(li), n):
            yield li[i:i + n]
    return list(chunks(filelist, 5))

print getbatchlist(range(10))

# FOR DEBUG ONLY !!!!
# Clear all batch records and move images from /dataset back to /raw
print "DELETE ALL RECORDS!!"
q = Batch.objects.all().delete()
static_path = settings.STATICFILES_DIRS[0]
raw_path = os.path.join(static_path, 'raw')
dataset_path = os.path.join(static_path, 'dataset')
raw_files = fnmatch.filter(os.listdir(dataset_path), '*.jpg')
for i in raw_files:
    _dst = os.path.join(raw_path, i)
    _src = os.path.join(dataset_path, i)
    print "moving to: %s" % (_dst)
    shutil.move(src=_src, dst=_dst)

# moving from /raw/i to /dataset/j, five images per Batch
static_path = settings.STATICFILES_DIRS[0]
raw_path = os.path.join(static_path, 'raw')
dataset_path = os.path.join(static_path, 'dataset')
raw_files = fnmatch.filter(os.listdir(raw_path), '*.jpg')
for chunk in getbatchlist(raw_files):
    b = Batch()
    b.save()
    for i in chunk:
        j = unicode(uuid4()) + '.jpg'
        print "batch: %s, src: %s, dst: %s" % (b, i, j)
        Image(batch=b, src_path=j, raw_path=i).save()
        _dst = os.path.join(dataset_path, j)
        _src = os.path.join(raw_path, i)
        shutil.move(src=_src, dst=_dst)
batch: BID000001,src: 15185e10-3d59-4129-b5c4-314fdb228a59.jpg, dst: 6c35c307-30ca-48c1-a92e-7b7dd9b60108.jpg batch: BID000001,src: 25136c78-05f6-422c-9b82-cbbd42deb261.jpg, dst: ba0d77ca-d6da-4213-b396-944580bfccea.jpg batch: BID000001,src: 34c84071-abd7-4f01-86e6-3f2dc6c96a0b.jpg, dst: 2ed9b9ed-bcec-45a2-a068-8806e9e83764.jpg batch: BID000001,src: 50b41b0a-1fa7-473b-8330-9a26310380b7.jpg, dst: f92dd495-e108-401d-9cfc-d498cc768004.jpg batch: BID000001,src: 54e9550f-5c1f-46d8-b4d5-45899bf0554f.jpg, dst: f1093aab-7a1b-4ef4-bd5c-86174cc86f8c.jpg batch: BID000002,src: 693a478e-439a-45de-8b20-20f4d0e0f240.jpg, dst: d97782e1-404a-47c3-8f12-decb80c9abf4.jpg batch: BID000002,src: 7696acee-37df-4d8c-b85b-30220ac00020.jpg, dst: 0d3e864c-fad3-4d30-9580-b61c7ec9fb47.jpg batch: BID000002,src: 88634d71-c69f-4582-b54c-926719da1020.jpg, dst: f5bed359-f260-41d5-8fc5-2181e9441f7d.jpg batch: BID000002,src: 92506a5f-1f28-482a-98ad-e377c7ddfed3.jpg, dst: 648e13bf-3aa9-46d6-85ff-6393cf870e00.jpg batch: BID000002,src: bbd100a5-82e3-4bcd-8213-24d6ad73ffc6.jpg, dst: 8466c83f-6873-4340-ab13-229322640162.jpg
MIT
beta/debug_load_imgaes.ipynb
SothanaV/visionmarker
Orthogonal Matching Pursuit

Using orthogonal matching pursuit to recover a sparse signal from a noisy measurement encoded with a dictionary. A simplified sketch of the greedy iteration follows the example.
print(__doc__)

import matplotlib.pyplot as plt
import numpy as np

from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.linear_model import OrthogonalMatchingPursuitCV
from sklearn.datasets import make_sparse_coded_signal

n_components, n_features = 512, 100
n_nonzero_coefs = 17

# generate the data
# y = Xw
# |x|_0 = n_nonzero_coefs
y, X, w = make_sparse_coded_signal(n_samples=1,
                                   n_components=n_components,
                                   n_features=n_features,
                                   n_nonzero_coefs=n_nonzero_coefs,
                                   random_state=0)
idx, = w.nonzero()

# distort the clean signal
y_noisy = y + 0.05 * np.random.randn(len(y))

# plot the sparse signal
plt.figure(figsize=(7, 7))
plt.subplot(4, 1, 1)
plt.xlim(0, 512)
plt.title("Sparse signal")
plt.stem(idx, w[idx])

# plot the noise-free reconstruction
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero_coefs)
omp.fit(X, y)
coef = omp.coef_
idx_r, = coef.nonzero()
plt.subplot(4, 1, 2)
plt.xlim(0, 512)
plt.title("Recovered signal from noise-free measurements")
plt.stem(idx_r, coef[idx_r])

# plot the noisy reconstruction
omp.fit(X, y_noisy)
coef = omp.coef_
idx_r, = coef.nonzero()
plt.subplot(4, 1, 3)
plt.xlim(0, 512)
plt.title("Recovered signal from noisy measurements")
plt.stem(idx_r, coef[idx_r])

# plot the noisy reconstruction with number of non-zeros set by CV
omp_cv = OrthogonalMatchingPursuitCV(cv=5)
omp_cv.fit(X, y_noisy)
coef = omp_cv.coef_
idx_r, = coef.nonzero()
plt.subplot(4, 1, 4)
plt.xlim(0, 512)
plt.title("Recovered signal from noisy measurements with CV")
plt.stem(idx_r, coef[idx_r])

plt.subplots_adjust(0.06, 0.04, 0.94, 0.90, 0.20, 0.38)
plt.suptitle('Sparse signal recovery with Orthogonal Matching Pursuit', fontsize=16)
plt.show()
_____no_output_____
Apache-2.0
01 Machine Learning/scikit_examples_jupyter/linear_model/plot_omp.ipynb
alphaolomi/colab
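For intuition about what the estimator above does internally, here is a simplified sketch of the greedy OMP iteration. It is a teaching aid, not scikit-learn's actual implementation (the real solver uses Cholesky updates and extra safeguards):

import numpy as np

def omp_sketch(X, y, n_nonzero):
    """Greedy OMP: pick the atom most correlated with the residual,
    then re-fit by least squares on all selected atoms."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(X.T @ residual))))
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef  # orthogonal to selected atoms
    w_hat = np.zeros(X.shape[1])
    w_hat[support] = coef
    return w_hat

# usage with the variables from the example above:
# w_hat = omp_sketch(X, y_noisy, n_nonzero_coefs)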
1. AutoGraph writes graph code for you

[AutoGraph](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/autograph/README.md) helps you write complicated graph code using just plain Python -- behind the scenes, AutoGraph automatically transforms your code into the equivalent TF graph code. We support a large chunk of the Python language, which is growing. [Please see this document for what we currently support, and what we're working on](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/autograph/LIMITATIONS.md).

Here's a quick example of how it works:
# Autograph can convert functions like this...
def g(x):
    if x > 0:
        x = x * x
    else:
        x = 0.0
    return x

# ...into graph-building functions like this:
def tf_g(x):
    with tf.name_scope('g'):

        def if_true():
            with tf.name_scope('if_true'):
                x_1, = x,
                x_1 = x_1 * x_1
                return x_1,

        def if_false():
            with tf.name_scope('if_false'):
                x_1, = x,
                x_1 = 0.0
                return x_1,

        x = autograph_utils.run_cond(tf.greater(x, 0), if_true, if_false)
        return x

# You can run your plain-Python code in graph mode,
# and get the same results out, but with all the benefits of graphs:
print('Original value: %2.2f' % g(9.0))

# Generate a graph-version of g and call it:
tf_g = autograph.to_graph(g)

with tf.Graph().as_default():
    # The result works like a regular op: takes tensors in, returns tensors.
    # You can inspect the graph using tf.get_default_graph().as_graph_def()
    g_ops = tf_g(tf.constant(9.0))
    with tf.Session() as sess:
        print('Autograph value: %2.2f\n' % sess.run(g_ops))

# You can view, debug and tweak the generated code:
print(autograph.to_code(g))
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
Automatically converting complex control flow

AutoGraph can convert a large chunk of the Python language into equivalent graph-construction code, and we're adding new supported language features all the time. In this section, we'll give you a taste of some of the functionality in AutoGraph.

AutoGraph will automatically convert most Python control flow statements into their correct graph equivalent. We support common statements like `while`, `for`, `if`, `break`, `return` and more. You can even nest them as much as you like. Imagine trying to write the graph version of this code by hand:
# Continue in a loop
def f(l):
    s = 0
    for c in l:
        if c % 2 > 0:
            continue
        s += c
    return s

print('Original value: %d' % f([10, 12, 15, 20]))

tf_f = autograph.to_graph(f)
with tf.Graph().as_default():
    with tf.Session():
        print('Graph value: %d\n\n' % tf_f(tf.constant([10, 12, 15, 20])).eval())

print(autograph.to_code(f))
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
Try replacing the `continue` in the above code with `break` -- AutoGraph supports that as well! Let's try some other useful Python constructs, like `print` and `assert`. We automatically convert Python `assert` statements into the equivalent `tf.Assert` code.
def f(x):
    assert x != 0, 'Do not pass zero!'
    return x * x

tf_f = autograph.to_graph(f)
with tf.Graph().as_default():
    with tf.Session():
        try:
            print(tf_f(tf.constant(0)).eval())
        except tf.errors.InvalidArgumentError as e:
            print('Got error message:\n%s' % e.message)
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
You can also use plain Python `print` functions in graph mode:
def f(n):
    if n >= 0:
        while n < 5:
            n += 1
            print(n)
    return n

tf_f = autograph.to_graph(f)
with tf.Graph().as_default():
    with tf.Session():
        tf_f(tf.constant(0)).eval()
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
Appending to lists in loops also works (we create tensor list ops behind the scenes):
def f(n):
    z = []
    # We ask you to tell us the element dtype of the list
    autograph.set_element_type(z, tf.int32)
    for i in range(n):
        z.append(i)
    # when you're done with the list, stack it
    # (this is just like np.stack)
    return autograph.stack(z)

tf_f = autograph.to_graph(f)
with tf.Graph().as_default():
    with tf.Session():
        print(tf_f(tf.constant(3)).eval())

print('\n\n' + autograph.to_code(f))


def fizzbuzz(num):
    if num % 3 == 0 and num % 5 == 0:
        print('FizzBuzz')
    elif num % 3 == 0:
        print('Fizz')
    elif num % 5 == 0:
        print('Buzz')
    else:
        print(num)
    return num

tf_g = autograph.to_graph(fizzbuzz)

with tf.Graph().as_default():
    # The result works like a regular op: takes tensors in, returns tensors.
    # You can inspect the graph using tf.get_default_graph().as_graph_def()
    g_ops = tf_g(tf.constant(15))
    with tf.Session() as sess:
        sess.run(g_ops)

# You can view, debug and tweak the generated code:
print('\n')
print(autograph.to_code(fizzbuzz))
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
De-graphify Exercises

Easy print statements
# See what happens when you turn AutoGraph off.
# Do you see the type or the value of x when you print it?

# @autograph.convert()
def square_log(x):
    x = x * x
    print('Squared value of x =', x)
    return x

with tf.Graph().as_default():
    with tf.Session() as sess:
        print(sess.run(square_log(tf.constant(4))))
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
Convert the TensorFlow code into Python code for AutoGraph
def square_if_positive(x):
    x = tf.cond(tf.greater(x, 0), lambda: x * x, lambda: x)
    return x

with tf.Session() as sess:
    print(sess.run(square_if_positive(tf.constant(4))))


@autograph.convert()
def square_if_positive(x):
    pass  # TODO: fill it in!

with tf.Session() as sess:
    print(sess.run(square_if_positive(tf.constant(4))))
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
Uncollapse to see answer
# Simple cond
@autograph.convert()
def square_if_positive(x):
    if x > 0:
        x = x * x
    return x

with tf.Graph().as_default():
    with tf.Session() as sess:
        print(sess.run(square_if_positive(tf.constant(4))))
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
Nested If statement
def nearest_odd_square(x):

    def if_positive():
        x1 = x * x
        x1 = tf.cond(tf.equal(x1 % 2, 0), lambda: x1 + 1, lambda: x1)
        return x1,

    x = tf.cond(tf.greater(x, 0), if_positive, lambda: x)
    return x

with tf.Graph().as_default():
    with tf.Session() as sess:
        print(sess.run(nearest_odd_square(tf.constant(4))))


@autograph.convert()
def nearest_odd_square(x):
    pass  # TODO: fill it in!

with tf.Session() as sess:
    print(sess.run(nearest_odd_square(tf.constant(4))))
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
Uncollapse to reveal answer
@autograph.convert()
def nearest_odd_square(x):
    if x > 0:
        x = x * x
        if x % 2 == 0:
            x = x + 1
    return x

with tf.Graph().as_default():
    with tf.Session() as sess:
        print(sess.run(nearest_odd_square(tf.constant(4))))
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
Convert a while loop
# Convert a while loop
def square_until_stop(x, y):
    x = tf.while_loop(lambda x: tf.less(x, y), lambda x: x * x, [x])
    return x

with tf.Graph().as_default():
    with tf.Session() as sess:
        print(sess.run(square_until_stop(tf.constant(4), tf.constant(100))))


@autograph.convert()
def square_until_stop(x, y):
    pass  # TODO: fill it in!

with tf.Graph().as_default():
    with tf.Session() as sess:
        print(sess.run(square_until_stop(tf.constant(4), tf.constant(100))))
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
Uncollapse for the answer
@autograph.convert()
def square_until_stop(x, y):
    while x < y:
        x = x * x
    return x

with tf.Graph().as_default():
    with tf.Session() as sess:
        print(sess.run(square_until_stop(tf.constant(4), tf.constant(100))))
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
Nested loop and conditional
@autograph.convert()
def argwhere_cumsum(x, threshold):
    current_sum = 0.0
    idx = 0
    for i in range(len(x)):
        idx = i
        if current_sum >= threshold:
            break
        current_sum += x[i]
    return idx

n = 10
with tf.Graph().as_default():
    with tf.Session() as sess:
        idx = argwhere_cumsum(tf.ones(n), tf.constant(float(n / 2)))
        print(sess.run(idx))


@autograph.convert()
def argwhere_cumsum(x, threshold):
    pass  # TODO: fill it in!

n = 10
with tf.Graph().as_default():
    with tf.Session() as sess:
        idx = argwhere_cumsum(tf.ones(n), tf.constant(float(n / 2)))
        print(sess.run(idx))
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
Uncollapse to see answer
@autograph.convert()
def argwhere_cumsum(x, threshold):
    current_sum = 0.0
    idx = 0
    for i in range(len(x)):
        idx = i
        if current_sum >= threshold:
            break
        current_sum += x[i]
    return idx

n = 10
with tf.Graph().as_default():
    with tf.Session() as sess:
        idx = argwhere_cumsum(tf.ones(n), tf.constant(float(n / 2)))
        print(sess.run(idx))
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
3. Training MNIST in-graph

Writing control flow in AutoGraph is easy, so running a training loop in a TensorFlow graph should be easy as well!

Here, we show an example of training a simple Keras model on MNIST, where the entire training process -- loading batches, calculating gradients, updating parameters, calculating validation accuracy, and repeating until convergence -- is done in-graph.

Download data
import gzip
import os
import shutil

from six.moves import urllib


def download(directory, filename):
    filepath = os.path.join(directory, filename)
    if tf.gfile.Exists(filepath):
        return filepath
    if not tf.gfile.Exists(directory):
        tf.gfile.MakeDirs(directory)
    url = 'https://storage.googleapis.com/cvdf-datasets/mnist/' + filename + '.gz'
    zipped_filepath = filepath + '.gz'
    print('Downloading %s to %s' % (url, zipped_filepath))
    urllib.request.urlretrieve(url, zipped_filepath)
    with gzip.open(zipped_filepath, 'rb') as f_in, open(filepath, 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)
    os.remove(zipped_filepath)
    return filepath


def dataset(directory, images_file, labels_file):
    images_file = download(directory, images_file)
    labels_file = download(directory, labels_file)

    def decode_image(image):
        # Normalize from [0, 255] to [0.0, 1.0]
        image = tf.decode_raw(image, tf.uint8)
        image = tf.cast(image, tf.float32)
        image = tf.reshape(image, [784])
        return image / 255.0

    def decode_label(label):
        label = tf.decode_raw(label, tf.uint8)
        label = tf.reshape(label, [])
        return tf.to_int32(label)

    images = tf.data.FixedLengthRecordDataset(
        images_file, 28 * 28, header_bytes=16).map(decode_image)
    labels = tf.data.FixedLengthRecordDataset(
        labels_file, 1, header_bytes=8).map(decode_label)
    return tf.data.Dataset.zip((images, labels))


def mnist_train(directory):
    return dataset(directory, 'train-images-idx3-ubyte', 'train-labels-idx1-ubyte')


def mnist_test(directory):
    return dataset(directory, 't10k-images-idx3-ubyte', 't10k-labels-idx1-ubyte')
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
Define the model
def mlp_model(input_shape):
    model = tf.keras.Sequential((
        tf.keras.layers.Dense(100, activation='relu', input_shape=input_shape),
        tf.keras.layers.Dense(100, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax')))
    model.build()
    return model


def predict(m, x, y):
    y_p = m(x)
    losses = tf.keras.losses.categorical_crossentropy(y, y_p)
    l = tf.reduce_mean(losses)
    accuracies = tf.keras.metrics.categorical_accuracy(y, y_p)
    accuracy = tf.reduce_mean(accuracies)
    return l, accuracy


def fit(m, x, y, opt):
    l, accuracy = predict(m, x, y)
    opt.minimize(l)
    return l, accuracy


def setup_mnist_data(is_training, hp, batch_size):
    if is_training:
        ds = mnist_train('/tmp/autograph_mnist_data')
        ds = ds.shuffle(batch_size * 10)
    else:
        ds = mnist_test('/tmp/autograph_mnist_data')
    ds = ds.repeat()
    ds = ds.batch(batch_size)
    return ds


def get_next_batch(ds):
    itr = ds.make_one_shot_iterator()
    image, label = itr.get_next()
    x = tf.to_float(tf.reshape(image, (-1, 28 * 28)))
    y = tf.one_hot(tf.squeeze(label), 10)
    return x, y
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
Define the training loop
def train(train_ds, test_ds, hp):
    m = mlp_model((28 * 28,))
    opt = tf.train.MomentumOptimizer(hp.learning_rate, 0.9)

    # We'd like to save our losses to a list. In order for AutoGraph
    # to convert these lists into their graph equivalent,
    # we need to specify the element type of the lists.
    train_losses = []
    test_losses = []
    train_accuracies = []
    test_accuracies = []
    autograph.set_element_type(train_losses, tf.float32)
    autograph.set_element_type(test_losses, tf.float32)
    autograph.set_element_type(train_accuracies, tf.float32)
    autograph.set_element_type(test_accuracies, tf.float32)

    # This entire training loop will be run in-graph.
    i = tf.constant(0)
    while i < hp.max_steps:
        train_x, train_y = get_next_batch(train_ds)
        test_x, test_y = get_next_batch(test_ds)
        step_train_loss, step_train_accuracy = fit(m, train_x, train_y, opt)
        step_test_loss, step_test_accuracy = predict(m, test_x, test_y)
        if i % (hp.max_steps // 10) == 0:
            print('Step', i, 'train loss:', step_train_loss,
                  'test loss:', step_test_loss,
                  'train accuracy:', step_train_accuracy,
                  'test accuracy:', step_test_accuracy)
        train_losses.append(step_train_loss)
        test_losses.append(step_test_loss)
        train_accuracies.append(step_train_accuracy)
        test_accuracies.append(step_test_accuracy)
        i += 1

    # We've recorded our loss values and accuracies
    # to a list in a graph with AutoGraph's help.
    # In order to return the values as a Tensor,
    # we need to stack them before returning them.
    return (
        autograph.stack(train_losses),
        autograph.stack(test_losses),
        autograph.stack(train_accuracies),
        autograph.stack(test_accuracies),
    )


with tf.Graph().as_default():
    hp = tf.contrib.training.HParams(
        learning_rate=0.05,
        max_steps=500,
    )
    train_ds = setup_mnist_data(True, hp, 50)
    test_ds = setup_mnist_data(False, hp, 1000)
    tf_train = autograph.to_graph(train)
    loss_tensors = tf_train(train_ds, test_ds, hp)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        (
            train_losses,
            test_losses,
            train_accuracies,
            test_accuracies,
        ) = sess.run(loss_tensors)

        plt.title('MNIST train/test losses')
        plt.plot(train_losses, label='train loss')
        plt.plot(test_losses, label='test loss')
        plt.legend()
        plt.xlabel('Training step')
        plt.ylabel('Loss')
        plt.show()

        plt.title('MNIST train/test accuracies')
        plt.plot(train_accuracies, label='train accuracy')
        plt.plot(test_accuracies, label='test accuracy')
        plt.legend(loc='lower right')
        plt.xlabel('Training step')
        plt.ylabel('Accuracy')
        plt.show()
_____no_output_____
Apache-2.0
tensorflow/contrib/autograph/examples/notebooks/workshop.ipynb
nicolasoyharcabal/tensorflow
!pip install yfinance

import datetime

import yfinance
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind

plt.rcParams['figure.figsize'] = [10, 7]
plt.rc('font', size=14)
np.random.seed(0)

y = np.arange(0, 100, 1) + np.random.normal(0, 10, 100)
sma = pd.Series(y).rolling(20).mean()

plt.plot(y, label="Time series")
plt.plot(sma, label="20-period SMA")
plt.legend()
plt.show()

n_forward = 40
name = 'GLD'
start_date = "2010-01-01"
end_date = "2020-06-15"

ticker = yfinance.Ticker("FB")
data = ticker.history(interval="1d", start='2010-01-01', end=end_date)

plt.plot(data['Close'], label='Facebook')
plt.plot(data['Close'].rolling(20).mean(), label="20-periods SMA")
plt.plot(data['Close'].rolling(50).mean(), label="50-periods SMA")
plt.plot(data['Close'].rolling(200).mean(), label="200-periods SMA")
plt.legend()
plt.xlim((datetime.date(2019, 1, 1), datetime.date(2020, 6, 15)))
plt.ylim((100, 250))
plt.show()

ticker = yfinance.Ticker(name)
data = ticker.history(interval="1d", start=start_date, end=end_date)
data['Forward Close'] = data['Close'].shift(-n_forward)
data['Forward Return'] = (data['Forward Close'] - data['Close']) / data['Close']

result = []
train_size = 0.6
for sma_length in range(20, 500):
    data['SMA'] = data['Close'].rolling(sma_length).mean()
    data['input'] = [int(x) for x in data['Close'] > data['SMA']]
    df = data.dropna()
    training = df.head(int(train_size * df.shape[0]))
    test = df.tail(int((1 - train_size) * df.shape[0]))
    tr_returns = training[training['input'] == 1]['Forward Return']
    test_returns = test[test['input'] == 1]['Forward Return']
    mean_forward_return_training = tr_returns.mean()
    mean_forward_return_test = test_returns.mean()
    pvalue = ttest_ind(tr_returns, test_returns, equal_var=False)[1]
    result.append({
        'sma_length': sma_length,
        'training_forward_return': mean_forward_return_training,
        'test_forward_return': mean_forward_return_test,
        'p-value': pvalue,
    })

result.sort(key=lambda x: -x['training_forward_return'])
result[0]

best_sma = result[0]['sma_length']
data['SMA'] = data['Close'].rolling(best_sma).mean()
plt.plot(data['Close'], label=name)
plt.plot(data['SMA'], label="{} periods SMA".format(best_sma))
plt.legend()
plt.show()
_____no_output_____
MIT
Find_the_best_moving_average.ipynb
BiffTannon/Test
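For intuition, here is the forward-return labelling step above in isolation, on synthetic prices (the random-walk series and the 10-day horizon are made up for this sketch):
np.random.seed(1)
prices = pd.Series(100 + np.cumsum(np.random.normal(0, 1, 200)))
horizon = 10  # hypothetical stand-in for n_forward above
forward_return = (prices.shift(-horizon) - prices) / prices
print(forward_return.dropna().head())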
Numpy

We have seen Python's basic data structures in our last section. They are great, but they lack specialized features for data analysis. For example, adding rows or columns and operating on 2-D matrices are not readily available. So, we will use *numpy* for such functions.
import numpy as np
_____no_output_____
Apache-2.0
CourseContent/03-Intro.to.Python.and.Basic.Statistics/Week1/Practice.Exercise/2.Lab - Numpy.ipynb
averma111/AIML-PGP
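To make the point above concrete, a small sketch of operations that are awkward with plain lists but one-liners with numpy (the values are arbitrary):
a = np.array([[1, 2], [3, 4]])
print(a + 10)                  # elementwise scalar addition
print(a.sum(axis=0))           # column sums
print(np.vstack([a, [5, 6]]))  # append a row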
Numpy operates on *nd* arrays. These are similar to lists but contain homogeneous elements, which makes them better suited to storing 2-D data.
l1 = [1,2,3,4]
nd1 = np.array(l1)
print(nd1)

l2 = [5,6,7,8]
nd2 = np.array([l1,l2])
print(nd2)
[1 2 3 4]
[[1 2 3 4]
 [5 6 7 8]]
Apache-2.0
CourseContent/03-Intro.to.Python.and.Basic.Statistics/Week1/Practice.Exercise/2.Lab - Numpy.ipynb
averma111/AIML-PGP
Some functions on np.array()
print(nd2.shape)
print(nd2.size)
print(nd2.dtype)
(2, 4)
8
int32
Apache-2.0
CourseContent/03-Intro.to.Python.and.Basic.Statistics/Week1/Practice.Exercise/2.Lab - Numpy.ipynb
averma111/AIML-PGP
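Two more handy attributes, sketched on the same `nd2` (outputs not captured here):
print(nd2.ndim)           # number of dimensions: 2
print(nd2.reshape(4, 2))  # same 8 elements arranged as 4x2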
Question 1
Create an identity 2d-array or matrix (with ones across the diagonal).
np.identity(2)

np.eye(2)
_____no_output_____
Apache-2.0
CourseContent/03-Intro.to.Python.and.Basic.Statistics/Week1/Practice.Exercise/2.Lab - Numpy.ipynb
averma111/AIML-PGP
Question 2
Create a 2d-array or matrix of order 3x3 with values = 9,8,7,6,5,4,3,2,1 arranged in the same order.
d = np.matrix([[9,8,7],[6,5,4],[3,2,1]])
d

np.arange(9,0,-1).reshape(3,3)
_____no_output_____
Apache-2.0
CourseContent/03-Intro.to.Python.and.Basic.Statistics/Week1/Practice.Exercise/2.Lab - Numpy.ipynb
averma111/AIML-PGP
Question 3
Reverse both the rows and columns of the given matrix.
d[::-1, ::-1]  # reverses both the row order and the column order (note: d.T would transpose instead)
_____no_output_____
Apache-2.0
CourseContent/03-Intro.to.Python.and.Basic.Statistics/Week1/Practice.Exercise/2.Lab - Numpy.ipynb
averma111/AIML-PGP
Question 4
Add 1 to every element in the given matrix.
d + 1
_____no_output_____
Apache-2.0
CourseContent/03-Intro.to.Python.and.Basic.Statistics/Week1/Practice.Exercise/2.Lab - Numpy.ipynb
averma111/AIML-PGP
Similarly, you can do operations like scalar subtraction, division, and multiplication (operating on each element in the matrix); a short sketch follows Question 5 below.

Question 5
Find the mean of all elements in the given matrix nd6.

nd6 = [[  1   4   9 121 144 169]
 [ 16  25  36 196 225 256]
 [ 49  64  81 289 324 361]]
nd6 = np.matrix([[1, 4, 9, 121, 144, 169],
                 [16, 25, 36, 196, 225, 256],
                 [49, 64, 81, 289, 324, 361]])
nd6.mean()
_____no_output_____
Apache-2.0
CourseContent/03-Intro.to.Python.and.Basic.Statistics/Week1/Practice.Exercise/2.Lab - Numpy.ipynb
averma111/AIML-PGP
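The scalar operations mentioned just before Question 5, sketched on the matrix `d` defined earlier:
print(d - 1)  # scalar subtraction
print(d / 2)  # scalar division
print(d * 3)  # scalar multiplication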
Question 7
Find the dot product of two given matrices.
mat1 = np.arange(9).reshape(3,3)
mat2 = np.arange(10,19,1).reshape(3,3)

mat1.dot(mat2)

mat1 @ mat2

np.dot(mat1, mat2)
_____no_output_____
Apache-2.0
CourseContent/03-Intro.to.Python.and.Basic.Statistics/Week1/Practice.Exercise/2.Lab - Numpy.ipynb
averma111/AIML-PGP
Festival Playlists
import os

import numpy as np
import pandas as pd
import requests
import json
import spotipy

from IPython.display import display
_____no_output_____
MIT
notebooks/using_APIs_to_automate_the_process.ipynb
adrialuzllompart/festival-playlists
1. Use the Songkick API to get all the bands playing the festival
2. Use the Setlist.FM API to get the setlists
3. Use the Spotify API to create the playlists and add all the songs

Set API credentials
setlistfm_api_key = os.getenv('SETLISTFM_API_KEY')
spotify_client_id = os.getenv('SPOTIFY_CLIENT_ID')
spotify_client_secret = os.getenv('SPOTIFY_CLIENT_SECRET')
_____no_output_____
MIT
notebooks/using_APIs_to_automate_the_process.ipynb
adrialuzllompart/festival-playlists
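Note that `os.getenv` returns `None` for unset variables, so a fail-fast check (a sketch, not in the original notebook) can surface a missing credential before any API call:
for var in ('SETLISTFM_API_KEY', 'SPOTIFY_CLIENT_ID', 'SPOTIFY_CLIENT_SECRET'):
    assert os.getenv(var) is not None, f'{var} is not set'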
Setlist FM

Plan of action
1. Given a lineup (list of band names), get their Musicbrainz identifiers (`mbid`) via `https://api.setlist.fm/rest/1.0/search/artists`
2. Retrieve the setlists for each artist using their `mbid` via `https://api.setlist.fm/rest/1.0/artist/{artist_mbid}/setlists`
lineup = pd.read_csv(
    '/Users/adrialuz/Desktop/weekender.txt', header=None, names=['band'],
    encoding="ISO-8859-1"
)['band'].values

len(lineup)

lineup

artists_url = 'https://api.setlist.fm/rest/1.0/search/artists'

lineup_mbids = []
not_found = []

for name in lineup:
    req = requests.get(artists_url,
                       headers={'x-api-key': setlistfm_api_key,
                                'Accept': 'application/json'},
                       params={'artistName': name, 'p': 1, 'sort': 'relevance'})

    # retry up to 5 times if the request fails
    i = 0
    while (not req.ok) and (i <= 5):
        req = requests.get(artists_url,
                           headers={'x-api-key': setlistfm_api_key,
                                    'Accept': 'application/json'},
                           params={'artistName': name, 'p': 1, 'sort': 'relevance'})
        i += 1

    if req.ok:
        artist_response = req.json()['artist']
        num_artists = len(artist_response)
        if num_artists > 1:
            # several matches: keep the one whose name matches exactly
            for i in range(num_artists):
                if artist_response[i]['name'].lower() == name.lower():
                    mbid = artist_response[i]['mbid']
                    lineup_mbids.append({'name': name, 'mbid': mbid})
                    break
        elif num_artists == 1:
            mbid = artist_response[0]['mbid']
            lineup_mbids.append({'name': name, 'mbid': mbid})
        elif num_artists == 0:
            print(f'No results I think for {name}')
        else:
            print(f'WTF {name}?')
    else:
        print(f'Couldn\'t find {name}')
        not_found.append(name)

lineup_mbids

not_found

artist_setlist = []

for a in lineup_mbids:
    songs_played = []
    mbid = a['mbid']
    setlists_url = f'https://api.setlist.fm/rest/1.0/artist/{mbid}/setlists'
    req = requests.get(setlists_url,
                       headers={'x-api-key': setlistfm_api_key,
                                'Accept': 'application/json'},
                       params={'p': 1})

    i = 0
    while (not req.ok) and (i <= 5):
        req = requests.get(setlists_url,
                           headers={'x-api-key': setlistfm_api_key,
                                    'Accept': 'application/json'},
                           params={'p': 1})
        i += 1

    if req.ok:
        setlist_response = req.json()['setlist']
        num_setlists = len(setlist_response)
        for i in range(num_setlists):
            setlist = setlist_response[i]['sets']['set']
            num_sections = len(setlist)
            total_songs = []
            for p in range(num_sections):
                total_songs += setlist[p]['song']
            num_songs = len(total_songs)
            for i in range(num_songs):
                song = total_songs[i]
                song_title = song['name']
                # if the song is a cover, add the original artist to the song title
                if 'cover' in song:
                    song_title += ' {}'.format(song['cover']['name'])
                songs_played.append(song_title)
        most_played_songs = list(pd.Series(songs_played).value_counts().head(15).index)
        artist_setlist.append({
            'artist': a['name'],
            'setlist': most_played_songs
        })
    else:
        not_found.append(a['name'])

not_found

artist_setlist

setlist_lengths = []
short_or_empty_setlist = []

for i in range(len(artist_setlist)):
    n_songs = len(artist_setlist[i]['setlist'])
    setlist_lengths.append({
        'artist': artist_setlist[i]['artist'],
        'n_songs': n_songs
    })
    if n_songs < 5:
        short_or_empty_setlist.append(artist_setlist[i]['artist'])

len(short_or_empty_setlist)

short_or_empty_setlist
_____no_output_____
MIT
notebooks/using_APIs_to_automate_the_process.ipynb
adrialuzllompart/festival-playlists
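The same retry loop appears twice above; a small helper could factor it out. A sketch under the same conventions, where `get_with_retries` is a hypothetical name rather than something the notebook defines:
def get_with_retries(url, headers, params, max_retries=5):
    # re-issue the GET until it succeeds or the retry budget runs out
    req = requests.get(url, headers=headers, params=params)
    i = 0
    while (not req.ok) and (i < max_retries):
        req = requests.get(url, headers=headers, params=params)
        i += 1
    return req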
Spotify
username = 'adrialuz'
scope = 'playlist-modify-public'

token = spotipy.util.prompt_for_user_token(username, scope,
                                           redirect_uri='http://localhost:9090')
sp = spotipy.Spotify(auth=token)
sp.trace = False

sp.search('artist:Dua Lipa', limit=1, type='artist', market='GB')['artists']['items'][0]['id']

sp.search(
    f'artist:Khaled', limit=2, type='artist', market='GB'
)['artists']['items']

spotify_ids = []

for a in short_or_empty_setlist:
    search_result = sp.search(
        f'artist:{a}', limit=5, type='artist', market='GB'
    )['artists']['items']
    if search_result:
        for i in range(len(search_result)):
            name = search_result[i]['name']
            if name.lower() == a.lower():
                artist_id = search_result[i]['id']
                spotify_ids.append(artist_id)
                break
            else:
                pass
    else:
        print(f'Couldn\'t find {a} on Spotify.')

spotify_ids

sp.artist('59xdAObFYuaKO2phzzz07H')['name']

popular_songs = []

for artist_id in spotify_ids:
    search_results = sp.artist_top_tracks(artist_id, country='GB')['tracks']
    top_songs = []
    if search_results:
        for i in range(len(search_results)):
            song_name = search_results[i]['name']
            top_songs.append(song_name)
        popular_songs.append({
            'artist': sp.artist(artist_id)['name'],
            'setlist': top_songs
        })
    else:
        print(artist_id, sp.artist(artist_id)['name'])

popular_songs
_____no_output_____
MIT
notebooks/using_APIs_to_automate_the_process.ipynb
adrialuzllompart/festival-playlists
Get the URI codes for each track
uris = []
missing_songs = []

for a in (artist_setlist + popular_songs):
    artist = a['artist']
    setlist = a['setlist']
    for s in setlist:
        # strip punctuation that trips up the Spotify search
        s = s.replace(',', '').replace('\'', '').replace('"', '').replace('.', '').replace(
            '?', '').replace(')', '').replace('(', '').replace('/', '').replace(
            '\\', '').replace('&', '').replace('-', '')
        items = sp.search(q=f'artist:{artist} track:{s}', limit=1)['tracks']['items']
        if items:
            uri = items[0]['id']
            uris.append(uri)
        else:
            # fall back to a track-only search
            items = sp.search(q=f'track:{s}', limit=1)['tracks']['items']
            if items:
                if items != [None]:
                    uri = items[0]['id']
                    uris.append(uri)
            else:
                missing_songs.append({
                    'artist': artist,
                    'song': s
                })

len(uris)

len(missing_songs)

missing_songs

# Spotify caps playlist additions at 100 tracks per request, so add in batches
divisor = int(np.floor(len(uris) / np.ceil(len(uris) / 100)))
times = int(np.floor(len(uris) / divisor))
for i in range(times):
    subset = uris[divisor*i:divisor*(i+1)]
    sp.user_playlist_add_tracks(username, playlist_id='2nUkznVEo8EgQXw0UucbpS', tracks=subset)
_____no_output_____
MIT
notebooks/using_APIs_to_automate_the_process.ipynb
adrialuzllompart/festival-playlists
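The `divisor`/`times` arithmetic above can drop a remainder batch when `len(uris)` is not an exact multiple (for example, 250 tracks give 3 batches of 83, covering only 249). A simpler, loss-free batching sketch; the `playlist_id` placeholder here is hypothetical:
BATCH = 100  # Spotify's per-request limit for adding tracks
for start in range(0, len(uris), BATCH):
    sp.user_playlist_add_tracks(username,
                                playlist_id='<your-playlist-id>',
                                tracks=uris[start:start + BATCH])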
Error Handling

The code in this notebook helps with handling errors. Normally, an error in notebook code causes the execution of the code to stop, while an infinite loop in notebook code causes the notebook to run without end. This notebook provides two classes to help address these concerns.

**Prerequisites**

* This notebook needs some understanding of advanced concepts in Python, notably
    * classes
    * the Python `with` statement
    * tracing
    * measuring time
    * exceptions

Synopsis

To [use the code provided in this chapter](Importing.ipynb), write

```python
>>> from debuggingbook.ExpectError import 
```

and then make use of the following features.

The `ExpectError` class allows you to catch and report exceptions, yet resume execution. This is useful in notebooks, as they would normally interrupt execution as soon as an exception is raised. Its typical usage is in conjunction with a `with` clause:

```python
>>> with ExpectError():
>>>     x = 1 / 0
Traceback (most recent call last):
  File "", line 2, in 
    x = 1 / 0
ZeroDivisionError: division by zero (expected)
```

The `ExpectTimeout` class allows you to interrupt execution after the specified time. This is useful for interrupting code that might otherwise run forever.

```python
>>> with ExpectTimeout(5):
>>>     long_running_test()
Start
0 seconds have passed
1 seconds have passed
2 seconds have passed
3 seconds have passed
Traceback (most recent call last):
  File "", line 2, in 
    long_running_test()
  File "", line 5, in long_running_test
    print(i, "seconds have passed")
  File "", line 5, in long_running_test
    print(i, "seconds have passed")
  File "", line 25, in check_time
    raise TimeoutError
TimeoutError (expected)
```

The exception and the associated traceback are printed as error messages. If you do not want that, use these keyword options:

* `print_traceback` (default True) can be set to `False` to avoid the traceback being printed
* `mute` (default False) can be set to `True` to completely avoid any output.

Catching Errors

The class `ExpectError` allows you to express that some code produces an exception. A typical usage looks as follows:

```Python
from ExpectError import ExpectError

with ExpectError():
    function_that_is_supposed_to_fail()
```

If an exception occurs, it is printed on standard error; yet, execution continues.
import bookutils

import traceback
import sys

from types import FrameType, TracebackType

# ignore
from typing import Union, Optional, Callable, Any

class ExpectError:
    """Execute a code block expecting (and catching) an error."""

    def __init__(self, exc_type: Optional[type] = None,
                 print_traceback: bool = True, mute: bool = False):
        """
        Constructor. Expect an exception of type `exc_type` (`None`: any exception).
        If `print_traceback` is set (default), print a traceback to stderr.
        If `mute` is set (default: False), do not print anything.
        """
        self.print_traceback = print_traceback
        self.mute = mute
        self.expected_exc_type = exc_type

    def __enter__(self) -> Any:
        """Begin of `with` block"""
        return self

    def __exit__(self, exc_type: type,
                 exc_value: BaseException, tb: TracebackType) -> Optional[bool]:
        """End of `with` block"""
        if exc_type is None:
            # No exception
            return

        if (self.expected_exc_type is not None
                and exc_type != self.expected_exc_type):
            raise  # Unexpected exception

        # An exception occurred
        if self.print_traceback:
            lines = ''.join(
                traceback.format_exception(
                    exc_type,
                    exc_value,
                    tb)).strip()
        else:
            lines = traceback.format_exception_only(
                exc_type, exc_value)[-1].strip()

        if not self.mute:
            print(lines, "(expected)", file=sys.stderr)
        return True  # Ignore it
_____no_output_____
MIT
docs/notebooks/ExpectError.ipynb
bjrnmath/debuggingbook
Here's an example:
def fail_test() -> None:
    # Trigger an exception
    x = 1 / 0

with ExpectError():
    fail_test()

with ExpectError(print_traceback=False):
    fail_test()
ZeroDivisionError: division by zero (expected)
MIT
docs/notebooks/ExpectError.ipynb
bjrnmath/debuggingbook
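For completeness, the `mute` option from the synopsis can be exercised the same way; this short sketch should produce no output at all:
with ExpectError(mute=True):
    fail_test()  # the exception is swallowed silently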
We can specify the type of the expected exception. This way, if something else happens, we will get notified.
with ExpectError(ZeroDivisionError):
    fail_test()

with ExpectError():
    with ExpectError(ZeroDivisionError):
        some_nonexisting_function()  # type: ignore
Traceback (most recent call last):
  File "<ipython-input-1-e6c7dad1986d>", line 3, in <module>
    some_nonexisting_function()  # type: ignore
  File "<ipython-input-1-e6c7dad1986d>", line 3, in <module>
    some_nonexisting_function()  # type: ignore
NameError: name 'some_nonexisting_function' is not defined (expected)
MIT
docs/notebooks/ExpectError.ipynb
bjrnmath/debuggingbook
Catching Timeouts

The class `ExpectTimeout(seconds)` allows you to express that some code may run for a long or infinite time; execution is thus interrupted after `seconds` seconds. A typical usage looks as follows:

```Python
from ExpectError import ExpectTimeout

with ExpectTimeout(2) as t:
    function_that_is_supposed_to_hang()
```

If an exception occurs, it is printed on standard error (as with `ExpectError`); yet, execution continues.

Should there be a need to cancel the timeout within the `with` block, `t.cancel()` will do the trick.

The implementation uses `sys.settrace()`, as this seems to be the most portable way to implement timeouts. It is not very efficient, though. Also, it only works on individual lines of Python code and will not interrupt a long-running system function.
import sys
import time

class ExpectTimeout:
    """Execute a code block expecting (and catching) a timeout."""

    def __init__(self, seconds: Union[int, float],
                 print_traceback: bool = True, mute: bool = False):
        """
        Constructor. Interrupt execution after `seconds` seconds.
        If `print_traceback` is set (default), print a traceback to stderr.
        If `mute` is set (default: False), do not print anything.
        """
        self.seconds_before_timeout = seconds
        self.original_trace_function: Optional[Callable] = None
        self.end_time: Optional[float] = None
        self.print_traceback = print_traceback
        self.mute = mute

    def check_time(self, frame: FrameType, event: str, arg: Any) -> Callable:
        """Tracing function"""
        if self.original_trace_function is not None:
            self.original_trace_function(frame, event, arg)

        current_time = time.time()
        if self.end_time and current_time >= self.end_time:
            raise TimeoutError

        return self.check_time

    def __enter__(self) -> Any:
        """Begin of `with` block"""
        start_time = time.time()
        self.end_time = start_time + self.seconds_before_timeout

        self.original_trace_function = sys.gettrace()
        sys.settrace(self.check_time)
        return self

    def __exit__(self, exc_type: type,
                 exc_value: BaseException, tb: TracebackType) -> Optional[bool]:
        """End of `with` block"""
        self.cancel()

        if exc_type is None:
            return

        # An exception occurred
        if self.print_traceback:
            lines = ''.join(
                traceback.format_exception(
                    exc_type,
                    exc_value,
                    tb)).strip()
        else:
            lines = traceback.format_exception_only(
                exc_type, exc_value)[-1].strip()

        if not self.mute:
            print(lines, "(expected)", file=sys.stderr)
        return True  # Ignore it

    def cancel(self) -> None:
        sys.settrace(self.original_trace_function)
_____no_output_____
MIT
docs/notebooks/ExpectError.ipynb
bjrnmath/debuggingbook
Here's an example:
def long_running_test() -> None:
    print("Start")
    for i in range(10):
        time.sleep(1)
        print(i, "seconds have passed")
    print("End")

with ExpectTimeout(5, print_traceback=False):
    long_running_test()
Start
0 seconds have passed
1 seconds have passed
2 seconds have passed
3 seconds have passed
MIT
docs/notebooks/ExpectError.ipynb
bjrnmath/debuggingbook
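Finally, a sketch of the `t.cancel()` escape hatch mentioned earlier (not part of the original notebook): cancelling restores the previous trace function, so the block runs to completion even past the deadline.
with ExpectTimeout(1) as t:
    t.cancel()       # disable the timeout again
    time.sleep(2)    # would otherwise be interrupted
    print("Completed without interruption")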