Install the [Category Encoders](https://github.com/scikit-learn-contrib/categorical-encoding) library

If you're running on Google Colab:

```
!pip install category_encoders
```

If you're running locally with Anaconda:

```
!conda install -c conda-forge category_encoders
```
_____no_output_____
MIT
module2-baselines-validation/LS_DS_232_Baselines_Validation.ipynb
damerei/DS-Unit-2-Sprint-3-Classification-Validation
Baseline with cross-validation + independent test set

A complete example, as an alternative to Train/Validate/Test.

scikit-learn documentation
- [`sklearn.model_selection.cross_val_score`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html)
- [The `scoring` parameter: defining model evaluation rules](https://scikit-learn.org/stable/modules/model_evaluation.html#the-scoring-parameter-defining-model-evaluation-rules)
# Imports
%matplotlib inline
import warnings

import category_encoders as ce
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.exceptions import DataConversionWarning
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

warnings.filterwarnings(action='ignore', category=DataConversionWarning)

# Load data
bank = pd.read_csv('bank-additional/bank-additional-full.csv', sep=';')

# Assign to X, y
X = bank.drop(columns='y')
y = bank['y'] == 'yes'

# Drop leaky feature
X = X.drop(columns='duration')

# Split Train, Test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Make pipeline
pipeline = make_pipeline(
    ce.OneHotEncoder(use_cat_names=True),
    StandardScaler(),
    LogisticRegression(solver='lbfgs', max_iter=1000)
)

# Cross-validate with training data
scores = cross_val_score(pipeline, X_train, y_train,
                         scoring='roc_auc', cv=10, n_jobs=-1, verbose=10)
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 2 concurrent workers. [Parallel(n_jobs=-1)]: Done 1 tasks | elapsed: 3.9s [Parallel(n_jobs=-1)]: Done 4 tasks | elapsed: 6.7s [Parallel(n_jobs=-1)]: Done 10 out of 10 | elapsed: 12.8s finished
MIT
module2-baselines-validation/LS_DS_232_Baselines_Validation.ipynb
damerei/DS-Unit-2-Sprint-3-Classification-Validation
This is the baseline score that more sophisticated models must beat.
print('Cross-Validation ROC AUC scores:', scores)
print('Average:', scores.mean())
Cross-Validation ROC AUC scores: [0.82042478 0.79227573 0.79162088 0.762977 0.78662274 0.78877613 0.76414311 0.79607284 0.80670867 0.77968487] Average: 0.7889306746390174
MIT
module2-baselines-validation/LS_DS_232_Baselines_Validation.ipynb
damerei/DS-Unit-2-Sprint-3-Classification-Validation
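As a quick, hedged illustration (not part of the original notebook), a more sophisticated model can be dropped into the same pipeline and cross-validated the same way to see whether it beats the baseline. The choice of `RandomForestClassifier` and its settings here are assumptions:

```python
# Hypothetical comparison model; any estimator could be swapped in here.
from sklearn.ensemble import RandomForestClassifier

rf_pipeline = make_pipeline(
    ce.OrdinalEncoder(),  # tree models don't need one-hot encoding or scaling
    RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)
)

rf_scores = cross_val_score(rf_pipeline, X_train, y_train,
                            scoring='roc_auc', cv=10, n_jobs=-1)
print('Random Forest ROC AUC:', rf_scores.mean(), 'vs baseline:', scores.mean())
```

If the gain over the baseline is small, the extra complexity may not be worth it, which is exactly the question raised next.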
Is more effort justified? It depends. The blog post ["Always start with a stupid model"](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa) explains:

> Here is a very common story: a team wants to implement a model to predict something like the probability of a user clicking an ad. They start with a logistic regression and quickly (after some minor tuning) reach 90% accuracy.
>
> From there, the question is: Should the team focus on getting the accuracy up to 95%, or should they solve other problems 90% of the way?
>
> ***If a baseline does well, then you’ve saved yourself the headache of setting up a more complex model. If it does poorly, the kind of mistakes it makes are very instructive*** ...

So what else can we learn from this baseline? ["Always start with a stupid model"](https://blog.insightdatascience.com/always-start-with-a-stupid-model-no-exceptions-3a22314b9aaa) suggests looking at

> **What type of signal your model picks up on.** Most baselines will allow you to extract ***feature importances***, revealing which aspects of the input are most predictive. Analyzing feature importance is a great way to realize how your model is making decisions, and what it might be missing.

We can do that:
# (Re)fit on training data
pipeline.fit(X_train, y_train)

# Visualize coefficients
plt.figure(figsize=(10, 30))
plt.title('Coefficients')
coefficients = pipeline.named_steps['logisticregression'].coef_[0]
feature_names = pipeline.named_steps['onehotencoder'].transform(X_train).columns
pd.Series(coefficients, feature_names).sort_values().plot.barh(color='gray');
_____no_output_____
MIT
module2-baselines-validation/LS_DS_232_Baselines_Validation.ipynb
damerei/DS-Unit-2-Sprint-3-Classification-Validation
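The section title also promises an independent test set, but the cells above never score on it. A minimal sketch of that final step, assuming the fitted `pipeline` and the held-out `X_test`/`y_test` from earlier:

```python
from sklearn.metrics import roc_auc_score

# Evaluate the already-fitted pipeline on the held-out test set (one final, honest check)
y_pred_proba = pipeline.predict_proba(X_test)[:, 1]
print('Test ROC AUC:', roc_auc_score(y_test, y_pred_proba))
```

If the test score is close to the cross-validation average, the validation procedure was trustworthy.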
Kubeflow pipelines

**Learning Objectives:**
1. Learn how to deploy a Kubeflow cluster on GCP
1. Learn how to create an experiment in Kubeflow
1. Learn how to package your code into a Kubeflow pipeline
1. Learn how to run a Kubeflow pipeline in a repeatable and traceable way

Introduction

In this notebook, we will first set up a Kubeflow cluster on GCP. Then, we will create a Kubeflow experiment and a Kubeflow pipeline from our taxifare machine learning code. Finally, we will run the pipeline on the Kubeflow cluster, providing us with a reproducible and traceable way to execute machine learning code.
!pip3 install --user kfp --upgrade
Requirement already satisfied: kfp in /home/jupyter/.local/lib/python3.7/site-packages (1.7.2) Requirement already satisfied: google-auth<2,>=1.6.1 in /opt/conda/lib/python3.7/site-packages (from kfp) (1.34.0) Requirement already satisfied: jsonschema<4,>=3.0.1 in /opt/conda/lib/python3.7/site-packages (from kfp) (3.2.0) Requirement already satisfied: PyYAML<6,>=5.3 in /opt/conda/lib/python3.7/site-packages (from kfp) (5.4.1) Requirement already satisfied: kfp-server-api<2.0.0,>=1.1.2 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (1.7.0) Requirement already satisfied: pydantic<2,>=1.8.2 in /opt/conda/lib/python3.7/site-packages (from kfp) (1.8.2) Requirement already satisfied: kfp-pipeline-spec<0.2.0,>=0.1.9 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (0.1.9) Requirement already satisfied: cloudpickle<2,>=1.3.0 in /opt/conda/lib/python3.7/site-packages (from kfp) (1.6.0) Requirement already satisfied: requests-toolbelt<1,>=0.8.0 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (0.9.1) Requirement already satisfied: kubernetes<13,>=8.0.0 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (12.0.1) Requirement already satisfied: protobuf<4,>=3.13.0 in /opt/conda/lib/python3.7/site-packages (from kfp) (3.16.0) Requirement already satisfied: Deprecated<2,>=1.2.7 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (1.2.12) Requirement already satisfied: tabulate<1,>=0.8.6 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (0.8.9) Requirement already satisfied: google-cloud-storage<2,>=1.20.0 in /opt/conda/lib/python3.7/site-packages (from kfp) (1.41.1) Requirement already satisfied: google-api-python-client<2,>=1.7.8 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (1.12.8) Requirement already satisfied: click<8,>=7.1.1 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (7.1.2) Requirement already satisfied: fire<1,>=0.3.1 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (0.4.0) Requirement already satisfied: absl-py<=0.11,>=0.9 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (0.11.0) Requirement already satisfied: docstring-parser<1,>=0.7.3 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (0.10) Requirement already satisfied: strip-hints<1,>=0.1.8 in /home/jupyter/.local/lib/python3.7/site-packages (from kfp) (0.1.10) Requirement already satisfied: six in /opt/conda/lib/python3.7/site-packages (from absl-py<=0.11,>=0.9->kfp) (1.16.0) Requirement already satisfied: wrapt<2,>=1.10 in /opt/conda/lib/python3.7/site-packages (from Deprecated<2,>=1.2.7->kfp) (1.12.1) Requirement already satisfied: termcolor in /opt/conda/lib/python3.7/site-packages (from fire<1,>=0.3.1->kfp) (1.1.0) Requirement already satisfied: httplib2<1dev,>=0.15.0 in /opt/conda/lib/python3.7/site-packages (from google-api-python-client<2,>=1.7.8->kfp) (0.19.1) Requirement already satisfied: google-auth-httplib2>=0.0.3 in /opt/conda/lib/python3.7/site-packages (from google-api-python-client<2,>=1.7.8->kfp) (0.1.0) Requirement already satisfied: google-api-core<2dev,>=1.21.0 in /opt/conda/lib/python3.7/site-packages (from google-api-python-client<2,>=1.7.8->kfp) (1.31.1) Requirement already satisfied: uritemplate<4dev,>=3.0.0 in /opt/conda/lib/python3.7/site-packages (from google-api-python-client<2,>=1.7.8->kfp) (3.0.1) Requirement already satisfied: requests<3.0.0dev,>=2.18.0 in /opt/conda/lib/python3.7/site-packages (from 
google-api-core<2dev,>=1.21.0->google-api-python-client<2,>=1.7.8->kfp) (2.25.1) Requirement already satisfied: googleapis-common-protos<2.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client<2,>=1.7.8->kfp) (1.53.0) Requirement already satisfied: setuptools>=40.3.0 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client<2,>=1.7.8->kfp) (49.6.0.post20210108) Requirement already satisfied: packaging>=14.3 in /opt/conda/lib/python3.7/site-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client<2,>=1.7.8->kfp) (21.0) Requirement already satisfied: pytz in /opt/conda/lib/python3.7/site-packages (from google-api-core<2dev,>=1.21.0->google-api-python-client<2,>=1.7.8->kfp) (2021.1) Requirement already satisfied: rsa<5,>=3.1.4 in /opt/conda/lib/python3.7/site-packages (from google-auth<2,>=1.6.1->kfp) (4.7.2) Requirement already satisfied: pyasn1-modules>=0.2.1 in /opt/conda/lib/python3.7/site-packages (from google-auth<2,>=1.6.1->kfp) (0.2.7) Requirement already satisfied: cachetools<5.0,>=2.0.0 in /opt/conda/lib/python3.7/site-packages (from google-auth<2,>=1.6.1->kfp) (4.2.2) Requirement already satisfied: google-resumable-media<3.0dev,>=1.3.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage<2,>=1.20.0->kfp) (1.3.2) Requirement already satisfied: google-cloud-core<3.0dev,>=1.6.0 in /opt/conda/lib/python3.7/site-packages (from google-cloud-storage<2,>=1.20.0->kfp) (1.7.2) Requirement already satisfied: google-crc32c<2.0dev,>=1.0 in /opt/conda/lib/python3.7/site-packages (from google-resumable-media<3.0dev,>=1.3.0->google-cloud-storage<2,>=1.20.0->kfp) (1.1.2) Requirement already satisfied: cffi>=1.0.0 in /opt/conda/lib/python3.7/site-packages (from google-crc32c<2.0dev,>=1.0->google-resumable-media<3.0dev,>=1.3.0->google-cloud-storage<2,>=1.20.0->kfp) (1.14.6) Requirement already satisfied: pycparser in /opt/conda/lib/python3.7/site-packages (from cffi>=1.0.0->google-crc32c<2.0dev,>=1.0->google-resumable-media<3.0dev,>=1.3.0->google-cloud-storage<2,>=1.20.0->kfp) (2.20) Requirement already satisfied: pyparsing<3,>=2.4.2 in /opt/conda/lib/python3.7/site-packages (from httplib2<1dev,>=0.15.0->google-api-python-client<2,>=1.7.8->kfp) (2.4.7) Requirement already satisfied: pyrsistent>=0.14.0 in /opt/conda/lib/python3.7/site-packages (from jsonschema<4,>=3.0.1->kfp) (0.17.3) Requirement already satisfied: importlib-metadata in /opt/conda/lib/python3.7/site-packages (from jsonschema<4,>=3.0.1->kfp) (4.6.3) Requirement already satisfied: attrs>=17.4.0 in /opt/conda/lib/python3.7/site-packages (from jsonschema<4,>=3.0.1->kfp) (21.2.0) Requirement already satisfied: python-dateutil in /opt/conda/lib/python3.7/site-packages (from kfp-server-api<2.0.0,>=1.1.2->kfp) (2.8.2) Requirement already satisfied: urllib3>=1.15 in /opt/conda/lib/python3.7/site-packages (from kfp-server-api<2.0.0,>=1.1.2->kfp) (1.26.6) Requirement already satisfied: certifi in /opt/conda/lib/python3.7/site-packages (from kfp-server-api<2.0.0,>=1.1.2->kfp) (2021.5.30) Requirement already satisfied: requests-oauthlib in /opt/conda/lib/python3.7/site-packages (from kubernetes<13,>=8.0.0->kfp) (1.3.0) Requirement already satisfied: websocket-client!=0.40.0,!=0.41.*,!=0.42.*,>=0.32.0 in /opt/conda/lib/python3.7/site-packages (from kubernetes<13,>=8.0.0->kfp) (0.57.0) Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /opt/conda/lib/python3.7/site-packages (from 
pyasn1-modules>=0.2.1->google-auth<2,>=1.6.1->kfp) (0.4.8) Requirement already satisfied: typing-extensions>=3.7.4.3 in /opt/conda/lib/python3.7/site-packages (from pydantic<2,>=1.8.2->kfp) (3.10.0.0) Requirement already satisfied: idna<3,>=2.5 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.21.0->google-api-python-client<2,>=1.7.8->kfp) (2.10) Requirement already satisfied: chardet<5,>=3.0.2 in /opt/conda/lib/python3.7/site-packages (from requests<3.0.0dev,>=2.18.0->google-api-core<2dev,>=1.21.0->google-api-python-client<2,>=1.7.8->kfp) (4.0.0) Requirement already satisfied: wheel in /opt/conda/lib/python3.7/site-packages (from strip-hints<1,>=0.1.8->kfp) (0.36.2) Requirement already satisfied: zipp>=0.5 in /opt/conda/lib/python3.7/site-packages (from importlib-metadata->jsonschema<4,>=3.0.1->kfp) (3.5.0) Requirement already satisfied: oauthlib>=3.0.0 in /opt/conda/lib/python3.7/site-packages (from requests-oauthlib->kubernetes<13,>=8.0.0->kfp) (3.1.1)
Apache-2.0
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
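The next cell explains that a kernel restart is needed before the freshly installed packages are importable. A hedged programmatic way to trigger that restart, assuming a standard IPython kernel (the same effect as Kernel > Restart in the notebook UI):

```python
# Restart the kernel so the newly installed packages are picked up.
import IPython

IPython.Application.instance().kernel.do_shutdown(True)  # True = restart
```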
Restart the kernel

After you install the additional packages, you need to restart the notebook kernel so it can find the packages.

Import libraries and define constants
from os import path

import kfp
import kfp.compiler as compiler
import kfp.components as comp
import kfp.dsl as dsl
import kfp.gcp as gcp
import kfp.notebook
_____no_output_____
Apache-2.0
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
Setup a Kubeflow cluster on GCP

**TODO 1**

To deploy a [Kubeflow](https://www.kubeflow.org/) cluster in your GCP project, use the [AI Platform pipelines](https://console.cloud.google.com/ai-platform/pipelines):

1. Go to [AI Platform Pipelines](https://console.cloud.google.com/ai-platform/pipelines) in the GCP Console.
1. Create a new instance.
1. Hit "Configure".
1. Check the box "Allow access to the following Cloud APIs".
1. Hit "Create Cluster".
1. Hit "Deploy".

When the cluster is ready, go back to the AI Platform pipelines page and click on the "SETTINGS" entry for your cluster. This will bring up a pop-up with code snippets on how to access the cluster programmatically. Copy the "host" entry and set the "HOST" variable below with that.
HOST = "" # TODO: fill in the HOST information for the cluster
_____no_output_____
Apache-2.0
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
Authenticate your KFP cluster with a Kubernetes secret

If you run pipelines that require calling any GCP services, you need to set the application default credential to a pipeline step by mounting the proper GCP service account token as a Kubernetes secret.

First, point your kubectl current context to your cluster. Go back to your [Kubeflow cluster dashboard](https://console.cloud.google.com/ai-platform/pipelines/clusters) or navigate to `Navigation menu > AI Platform > Pipelines` and look to see the cluster name, zone and namespace for the pipeline you deployed above. It's likely called `cluster-1` if this is the first AI Pipelines instance you've created.
# Change below if necessary
PROJECT = !gcloud config get-value project  # noqa: E999
PROJECT = PROJECT[0]
BUCKET = PROJECT  # change if needed
CLUSTER = "cluster-1"  # change if needed
ZONE = "us-central1-a"  # change if needed
NAMESPACE = "default"  # change if needed

%env PROJECT=$PROJECT
%env CLUSTER=$CLUSTER
%env ZONE=$ZONE
%env NAMESPACE=$NAMESPACE

# Configure kubectl to connect with the cluster
!gcloud container clusters get-credentials "$CLUSTER" --zone "$ZONE" --project "$PROJECT"
Fetching cluster endpoint and auth data. kubeconfig entry generated for cluster-1.
Apache-2.0
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
We'll create a service account called `kfpdemo` with the necessary IAM permissions for our cluster secret. We'll give this service account permissions for any GCP services it might need. This `taxifare` pipeline needs access to Cloud Storage, so we'll give it the `storage.admin` and `ml.admin` roles. Open a Cloud Shell and copy/paste this code in the terminal there.

```bash
PROJECT=$(gcloud config get-value project)

# Create service account
gcloud iam service-accounts create kfpdemo \
  --display-name kfpdemo --project $PROJECT

# Grant permissions to the service account by binding roles
gcloud projects add-iam-policy-binding $PROJECT \
    --member=serviceAccount:kfpdemo@$PROJECT.iam.gserviceaccount.com \
    --role=roles/storage.admin

gcloud projects add-iam-policy-binding $PROJECT \
    --member=serviceAccount:kfpdemo@$PROJECT.iam.gserviceaccount.com \
    --role=roles/ml.admin
```

Then, we'll create and download a key for this service account and store the service account credential as a Kubernetes secret called `user-gcp-sa` in the cluster.
%%bash
gcloud iam service-accounts keys create application_default_credentials.json \
    --iam-account kfpdemo@$PROJECT.iam.gserviceaccount.com

# Check that the key was downloaded correctly.
ls application_default_credentials.json

# Create a k8s secret. If already exists, override.
kubectl create secret generic user-gcp-sa \
    --from-file=user-gcp-sa.json=application_default_credentials.json \
    -n $NAMESPACE --dry-run=client -o yaml | kubectl apply -f -
secret/user-gcp-sa configured
Apache-2.0
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
Create an experiment **TODO 2** We will start by creating a Kubeflow client to pilot the Kubeflow cluster:
client = kfp.Client(host=HOST)
_____no_output_____
Apache-2.0
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
Let's look at the experiments that are running on this cluster. Since you just launched it, you should see only a single "Default" experiment:
client.list_experiments()
_____no_output_____
Apache-2.0
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
Now let's create a 'taxifare' experiment where we can look at all the various runs of our taxifare pipeline:
exp = client.create_experiment(name="taxifare")
_____no_output_____
Apache-2.0
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
Let's make sure the experiment has been created correctly:
client.list_experiments()
_____no_output_____
Apache-2.0
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
Packaging your code into Kubeflow components

We have packaged our taxifare ML pipeline into three components:

* `./components/bq2gcs` creates the training and evaluation data from BigQuery and exports it to GCS
* `./components/trainjob` launches the training container on AI Platform and exports the model
* `./components/deploymodel` deploys the trained model to AI Platform as a REST API

Each of these components has been wrapped into a Docker container, in the same way we did with the taxifare training code in the previous lab. If you inspect the code in these folders, you'll notice that the `main.py` or `main.sh` files contain the code we previously executed in the notebooks (loading the data to GCS from BQ, or launching a training job to AI Platform, etc.). The last line in the `Dockerfile` tells you that these files are executed when the container is run. So we just packaged our ML code into light container images for reproducibility.

We have made it simple for you to build the container images and push them to the Google Cloud image registry gcr.io in your project:
# Builds the taxifare trainer container in case you skipped the optional part
# of lab 1
!taxifare/scripts/build.sh

# Pushes the taxifare trainer container to gcr.io
!taxifare/scripts/push.sh

# Builds the KF component containers and pushes them to gcr.io
!cd pipelines && make components
make[1]: Entering directory '/home/jupyter/asl-ml-immersion/notebooks/building_production_ml_systems/solutions/pipelines/components/bq2gcs' rm: cannot remove './venv': No such file or directory OK Sending build context to Docker daemon 21.5kB Step 1/6 : FROM google/cloud-sdk:latest ---> 915a516535e8 Step 2/6 : RUN apt-get update && apt-get install --yes python3-pip ---> Using cache ---> 0f653294e07c Step 3/6 : COPY . /code ---> Using cache ---> 7d8f8d185c30 Step 4/6 : WORKDIR /code ---> Using cache ---> 20d39822bdb4 Step 5/6 : RUN pip3 install google-cloud-bigquery ---> Using cache ---> a1baf2091090 Step 6/6 : ENTRYPOINT ["python3", "./main.py"] ---> Using cache ---> 05e9191c9619 Successfully built 05e9191c9619 Successfully tagged gcr.io/dsparing-sandbox/taxifare-bq2gcs:latest Using default tag: latest The push refers to repository [gcr.io/dsparing-sandbox/taxifare-bq2gcs] 038e5a12: Preparing 7ee6f1b4: Preparing ced31aad: Preparing e268b455: Preparing 15a7c280: Preparing 9ae3a881: Preparing 24ad8c63: Preparing d95c5384: Preparing d7a1159c: Preparing d1217615: Preparing c1bc2645: Layer already exists latest: digest: sha256:2b9f25d58c03019983d785b6fd7910e61d886d9d84e96781aade0ae05a50e020 size: 2633 make[1]: Leaving directory '/home/jupyter/asl-ml-immersion/notebooks/building_production_ml_systems/solutions/pipelines/components/bq2gcs' make[1]: Entering directory '/home/jupyter/asl-ml-immersion/notebooks/building_production_ml_systems/solutions/pipelines/components/trainjob' rm: cannot remove './venv': No such file or directory OK Sending build context to Docker daemon 14.85kB Step 1/5 : FROM google/cloud-sdk:latest ---> 915a516535e8 Step 2/5 : COPY . /code ---> Using cache ---> f598ddbd44a2 Step 3/5 : WORKDIR /code ---> Using cache ---> 79f9d21c1bcf Step 4/5 : RUN pip3 install cloudml-hypertune ---> Using cache ---> 9edc5e05ae65 Step 5/5 : ENTRYPOINT ["./main.sh"] ---> Using cache ---> ae939b43e795 Successfully built ae939b43e795 Successfully tagged gcr.io/dsparing-sandbox/taxifare-trainjob:latest Using default tag: latest The push refers to repository [gcr.io/dsparing-sandbox/taxifare-trainjob] 9e8dbe19: Preparing f0a19019: Preparing e268b455: Preparing 15a7c280: Preparing 9ae3a881: Preparing 24ad8c63: Preparing d95c5384: Preparing d7a1159c: Preparing d1217615: Preparing d1217615: Layer already exists latest: digest: sha256:4d28b7a097e81f911782a0101b395b2c3d56f83b7a55bddd8a3a83d2ab3e18b6 size: 2420 make[1]: Leaving directory '/home/jupyter/asl-ml-immersion/notebooks/building_production_ml_systems/solutions/pipelines/components/trainjob' make[1]: Entering directory '/home/jupyter/asl-ml-immersion/notebooks/building_production_ml_systems/solutions/pipelines/components/deploymodel' rm: cannot remove './venv': No such file or directory OK Sending build context to Docker daemon 14.85kB Step 1/4 : FROM google/cloud-sdk:latest ---> 915a516535e8 Step 2/4 : COPY . 
/code ---> Using cache ---> 8ce1e25d9ba8 Step 3/4 : WORKDIR /code ---> Using cache ---> 9b585070839b Step 4/4 : ENTRYPOINT ["./main.sh"] ---> Using cache ---> 0559c7f14028 Successfully built 0559c7f14028 Successfully tagged gcr.io/dsparing-sandbox/taxifare-deploymodel:latest Using default tag: latest The push refers to repository [gcr.io/dsparing-sandbox/taxifare-deploymodel] 539f9c38: Preparing e268b455: Preparing 15a7c280: Preparing 9ae3a881: Preparing 24ad8c63: Preparing d95c5384: Preparing d7a1159c: Preparing d1217615: Preparing c1bc2645: Layer already exists latest: digest: sha256:6028a2f793dc35ee033376f1c43294cfdb0e3128443b6dbea7836393d6f53fd2 size: 2211 make[1]: Leaving directory '/home/jupyter/asl-ml-immersion/notebooks/building_production_ml_systems/solutions/pipelines/components/deploymodel'
Apache-2.0
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
Now that the container images are pushed to the [registry in your project](https://console.cloud.google.com/gcr), we need to create yaml files describing to Kubeflow how to use these containers. It essentially boils down to

* describing what arguments Kubeflow needs to pass to the containers when it runs them
* telling Kubeflow where to fetch the corresponding Docker images

In the cells below, we have three of these "Kubeflow component description files", one for each of our components.

**TODO 3**

**IMPORTANT: Modify the image URI in the cell below to reflect that you pushed the images into the gcr.io registry associated with your project.**
%%writefile bq2gcs.yaml
name: bq2gcs
description: |
    This component creates the training and
    validation datasets as BigQuery tables and exports
    them into a Google Cloud Storage bucket at
    gs://qwiklabs-gcp-00-568a75dfa3e1/taxifare/data.
inputs:
    - {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
    container:
        image: gcr.io/qwiklabs-gcp-00-568a75dfa3e1/taxifare-bq2gcs
        args: ["--bucket", {inputValue: Input Bucket}]

%%writefile trainjob.yaml
name: trainjob
description: |
    This component trains a model to predict the taxi fare in NY.
    It takes as argument a GCS bucket and expects its training and
    eval data to be at gs://<BUCKET>/taxifare/data/ and will export
    the trained model at gs://<BUCKET>/taxifare/model/.
inputs:
    - {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
    container:
        image: gcr.io/qwiklabs-gcp-00-568a75dfa3e1/taxifare-trainjob
        args: [{inputValue: Input Bucket}]

%%writefile deploymodel.yaml
name: deploymodel
description: |
    This component deploys a trained taxifare model on GCP as taxifare:dnn.
    It takes as argument a GCS bucket and expects the model to deploy
    to be found at gs://<BUCKET>/taxifare/model/export/savedmodel/
inputs:
    - {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
    container:
        image: gcr.io/qwiklabs-gcp-00-568a75dfa3e1/taxifare-deploymodel
        args: [{inputValue: Input Bucket}]
Overwriting deploymodel.yaml
Apache-2.0
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
Create a Kubeflow pipeline

The code below creates a Kubeflow pipeline by decorating a regular function with the `@dsl.pipeline` decorator. The arguments of this decorated function become the input parameters of the Kubeflow pipeline.

Inside the function, we describe the pipeline by

* loading the yaml component files we created above into a Kubeflow `op`
* specifying the order in which the Kubeflow ops should be run
# TODO 3
PIPELINE_TAR = "taxifare.tar.gz"
BQ2GCS_YAML = "./bq2gcs.yaml"
TRAINJOB_YAML = "./trainjob.yaml"
DEPLOYMODEL_YAML = "./deploymodel.yaml"


@dsl.pipeline(
    name="Taxifare",
    description="Train a ml model to predict the taxi fare in NY",
)
def pipeline(gcs_bucket_name="<bucket where data and model will be exported>"):

    bq2gcs_op = comp.load_component_from_file(BQ2GCS_YAML)
    bq2gcs = bq2gcs_op(
        input_bucket=gcs_bucket_name,
    )

    # The training and deployment steps below are currently disabled by this
    # string literal; remove the triple quotes to run them after bq2gcs.
    """
    trainjob_op = comp.load_component_from_file(TRAINJOB_YAML)
    trainjob = trainjob_op(
        input_bucket=gcs_bucket_name,
    )

    deploymodel_op = comp.load_component_from_file(DEPLOYMODEL_YAML)
    deploymodel = deploymodel_op(
        input_bucket=gcs_bucket_name,
    )

    trainjob.after(bq2gcs)
    deploymodel.after(trainjob)
    """
_____no_output_____
Apache-2.0
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
The pipeline function above is then used by the Kubeflow compiler to create a Kubeflow pipeline artifact that can be either uploaded to the Kubeflow cluster from the UI, or uploaded programmatically, as we will do below:
compiler.Compiler().compile(pipeline, PIPELINE_TAR)

ls $PIPELINE_TAR
taxifare.tar.gz
Apache-2.0
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
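To see what the compiler produced, you can peek inside the compiled archive with the standard library. A small sketch using only `tarfile` (nothing Kubeflow-specific is assumed beyond the `PIPELINE_TAR` path defined above):

```python
import tarfile

# List the contents of the compiled pipeline archive and print the start of the yaml inside it
with tarfile.open(PIPELINE_TAR) as tar:
    members = tar.getmembers()
    for member in members:
        print(member.name)
    print(tar.extractfile(members[0]).read().decode()[:500])
```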
If you untar and unzip this pipeline artifact, you'll see that the compiler has transformed the Python description of the pipeline into a yaml description!

Now let's feed Kubeflow with our pipeline and run it using our client:
# TODO 4
run = client.run_pipeline(
    experiment_id=exp.id,
    job_name="taxifare",
    pipeline_package_path="taxifare.tar.gz",
    params={
        "gcs_bucket_name": BUCKET,
    },
)
_____no_output_____
Apache-2.0
notebooks/building_production_ml_systems/solutions/3_kubeflow_pipelines.ipynb
Jonathanpro/asl-ml-immersion
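Once the run is submitted, you can block until it finishes and inspect its final state. A hedged sketch, assuming the returned `run` object exposes an `id` and that `wait_for_run_completion` is available on this version of the kfp client:

```python
# Wait (up to an hour here) for the submitted run to finish, then print its final status
result = client.wait_for_run_completion(run.id, timeout=3600)
print(result.run.status)  # e.g. 'Succeeded' or 'Failed'
```

You can also follow the same run interactively from the Kubeflow Pipelines UI linked from the AI Platform Pipelines page.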
**Setup**
%reload_ext autoreload
%autoreload 2
%matplotlib inline

from fastai.vision import *
_____no_output_____
Apache-2.0
planet_experimental2.ipynb
amittal27/course-v3
**Configure data**
path = Path.cwd()/'planet'
path.mkdir(parents=True, exist_ok=True)
path
_____no_output_____
Apache-2.0
planet_experimental2.ipynb
amittal27/course-v3
**Multiclassification**
# Read the csv file using the pandas library (a popular way of dealing with tabular data in Python); print the first five rows
df = pd.read_csv(path/'train_classes.csv')
df.head()

# Data augmentation
tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0)  # do not want warp on the satellite images

# Ensure we have the same validation set each time
np.random.seed(42)

# Get images, get labels
src = (ImageList.from_csv(path, 'train_classes.csv', folder='train-jpg', suffix='.jpg')
       .split_by_rand_pct(0.2)  # set aside 20% of the training set (the current set of images) for the validation set
       .label_from_df(label_delim=' '))

# Apply transforms, construct dataset
data = (src.transform(tfms, size=128)  # standardize images to be a little smaller, 128 x 128
        .databunch().normalize(imagenet_stats))  # use databunch to bind training and validation datasets

data.show_batch(rows=3, figsize=(12, 9))

# Base architecture
arch = models.resnet50
_____no_output_____
Apache-2.0
planet_experimental2.ipynb
amittal27/course-v3
**Metrics**

Metrics to print out during training (NOTE: they do not impact how our model trains); they just show us how we're doing.
# Instead of picking just one of the classes in data.classes as our prediction label, we want to pick out n of those classes:
# anything higher than a desired threshold will be assumed to be a label for the input image.
# Plain accuracy uses argmax to find the single category with the maximum predicted probability, compares it to the
# actual label, and takes the average; that can't be used when we have multiple labels per image.
# In this case, our threshold value is 0.2 (experimentally found to be pretty good).
# partial takes a function and some keyword arguments and creates a new function that's exactly the same but with those arguments pre-filled.
acc_02 = partial(accuracy_thresh, thresh=0.2)
data.c  # number of outputs we want our model to create = len(data.classes); one probability for each of these classes
f_score = partial(fbeta, thresh=0.2)  # a metric used by Kaggle to weigh false positives and false negatives

learn = cnn_learner(data, arch, metrics=[acc_02, f_score])

# Find a good learning rate
learn.lr_find()
# Plot results
learn.recorder.plot()

# Pick learning rate
lr = 1e-2
# fit_one_cycle five times with that learning rate
learn.fit_one_cycle(5, slice(lr))
# Save
learn.save('stage-1-rn50')

learn.unfreeze()
learn.lr_find()
learn.recorder.plot()

# Fitting with the original dataset, though we could create a new databunch with just the misclassified instances
learn.fit_one_cycle(5, slice(1e-5, lr/5))
learn.save('stage-2-rn50')
_____no_output_____
Apache-2.0
planet_experimental2.ipynb
amittal27/course-v3
**Now, create a whole "new" dataset where the images are 256 x 256 to hopefully increase our fbeta metric score.** There is little concern of overfitting then.
# Create a new databunch with 256 x 256 images (higher-resolution images)
data = (src.transform(tfms, size=256)  # same transforms as before
        .databunch().normalize(imagenet_stats))

# Start with our pre-trained model
learn.data = data  # replace learner data with new databunch
data.train_ds[0][0].shape

# Freeze to just train the last few layers
learn.freeze()
learn.lr_find()
learn.recorder.plot()

# New learning rate
lr = 1e-2/2

# Just training the last few layers
learn.fit_one_cycle(5, slice(lr))
learn.recorder.plot_losses()
learn.save('stage-2-256-rn50')
learn.export()
_____no_output_____
Apache-2.0
planet_experimental2.ipynb
amittal27/course-v3
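After `learn.export()`, the saved model can be reloaded for inference. A minimal sketch using fastai v1's `load_learner`; the example image path here is an assumption:

```python
# Reload the exported learner and predict the tags for a single image
learn_inf = load_learner(path)                    # reads export.pkl from the planet folder
img = open_image(path/'train-jpg'/'train_0.jpg')  # hypothetical example image
pred_class, pred_idx, probs = learn_inf.predict(img)
print(pred_class)  # MultiCategory of predicted tags
```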
Convolutional Neural Networks: Step by StepWelcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. By the end of this notebook, you'll be able to: * Explain the convolution operation* Apply two different types of pooling operation* Identify the components used in a convolutional neural network (padding, stride, filter, ...) and their purpose* Build a convolutional neural network **Notation**:- Superscript $[l]$ denotes an object of the $l^{th}$ layer. - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.- Superscript $(i)$ denotes an object from the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example input. - Subscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer. - $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. - $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. You should be familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started! Table of Contents- [1 - Packages](1)- [2 - Outline of the Assignment](2)- [3 - Convolutional Neural Networks](3) - [3.1 - Zero-Padding](3-1) - [Exercise 1 - zero_pad](ex-1) - [3.2 - Single Step of Convolution](3-2) - [Exercise 2 - conv_single_step](ex-2) - [3.3 - Convolutional Neural Networks - Forward Pass](3-3) - [Exercise 3 - conv_forward](ex-3)- [4 - Pooling Layer](4) - [4.1 - Forward Pooling](4-1) - [Exercise 4 - pool_forward](ex-4)- [5 - Backpropagation in Convolutional Neural Networks (OPTIONAL / UNGRADED)](5) - [5.1 - Convolutional Layer Backward Pass](5-1) - [5.1.1 - Computing dA](5-1-1) - [5.1.2 - Computing dW](5-1-2) - [5.1.3 - Computing db](5-1-3) - [Exercise 5 - conv_backward](ex-5) - [5.2 Pooling Layer - Backward Pass](5-2) - [5.2.1 Max Pooling - Backward Pass](5-2-1) - [Exercise 6 - create_mask_from_window](ex-6) - [5.2.2 - Average Pooling - Backward Pass](5-2-2) - [Exercise 7 - distribute_value](ex-7) - [5.2.3 Putting it Together: Pooling Backward](5-2-3) - [Exercise 8 - pool_backward](ex-8) 1 - PackagesLet's first import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.- np.random.seed(1) is used to keep all the random function calls consistent. This helps to grade your work.
import numpy as np
import h5py
import matplotlib.pyplot as plt
from public_tests import *

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0)  # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

%load_ext autoreload
%autoreload 2

np.random.seed(1)
_____no_output_____
MIT
Convolution_model_Step_by_Step_v1.ipynb
rahul23aug/DeepLearning
2 - Outline of the AssignmentYou will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions to walk you through the steps:- Convolution functions, including: - Zero Padding - Convolve window - Convolution forward - Convolution backward (optional)- Pooling functions, including: - Pooling forward - Create mask - Distribute value - Pooling backward (optional) This notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:**Note**: For every forward function, there is a corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. 3 - Convolutional Neural NetworksAlthough programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below. In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. 3.1 - Zero-PaddingZero-padding adds zeros around the border of an image: Figure 1 : Zero-Padding Image (3 channels, RGB) with a padding of 2. The main benefits of padding are:- It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer. - It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image. Exercise 1 - zero_padImplement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do:```pythona = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), mode='constant', constant_values = (0,0))```
# GRADED FUNCTION: zero_pad def zero_pad(X, pad): """ Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, as illustrated in Figure 1. Argument: X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images pad -- integer, amount of padding around each image on vertical and horizontal dimensions Returns: X_pad -- padded image of shape (m, n_H + 2 * pad, n_W + 2 * pad, n_C) """ #(≈ 1 line) X_pad = np.pad(X, ((0,0), (pad,pad), (pad,pad), (0,0)), mode='constant', constant_values = (0,0)) # YOUR CODE STARTS HERE # YOUR CODE ENDS HERE return X_pad np.random.seed(1) x = np.random.randn(4, 3, 3, 2) x_pad = zero_pad(x, 3) print ("x.shape =\n", x.shape) print ("x_pad.shape =\n", x_pad.shape) print ("x[1,1] =\n", x[1, 1]) print ("x_pad[1,1] =\n", x_pad[1, 1]) assert type(x_pad) == np.ndarray, "Output must be a np array" assert x_pad.shape == (4, 9, 9, 2), f"Wrong shape: {x_pad.shape} != (4, 9, 9, 2)" print(x_pad[0, 0:2,:, 0]) assert np.allclose(x_pad[0, 0:2,:, 0], [[0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]], 1e-15), "Rows are not padded with zeros" assert np.allclose(x_pad[0, :, 7:9, 1].transpose(), [[0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0]], 1e-15), "Columns are not padded with zeros" assert np.allclose(x_pad[:, 3:6, 3:6, :], x, 1e-15), "Internal values are different" fig, axarr = plt.subplots(1, 2) axarr[0].set_title('x') axarr[0].imshow(x[0, :, :, 0]) axarr[1].set_title('x_pad') axarr[1].imshow(x_pad[0, :, :, 0]) zero_pad_test(zero_pad)
x.shape = (4, 3, 3, 2) x_pad.shape = (4, 9, 9, 2) x[1,1] = [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943] [-0.26788808 0.53035547]] x_pad[1,1] = [[0. 0.] [0. 0.] [0. 0.] [0. 0.] [0. 0.] [0. 0.] [0. 0.] [0. 0.] [0. 0.]] [[0. 0. 0. 0. 0. 0. 0. 0. 0.] [0. 0. 0. 0. 0. 0. 0. 0. 0.]]  All tests passed.
MIT
Convolution_model_Step_by_Step_v1.ipynb
rahul23aug/DeepLearning
3.2 - Single Step of Convolution In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which: - Takes an input volume - Applies a filter at every position of the input- Outputs another volume (usually of different size) Figure 2 : Convolution operation with a filter of 3x3 and a stride of 1 (stride = amount you move the window each time you slide) In a computer vision application, each value in the matrix on the left corresponds to a single pixel value. You convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. Exercise 2 - conv_single_stepImplement `conv_single_step()`. [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html). **Note**: The variable b will be passed in as a numpy array. If you add a scalar (a float or integer) to a numpy array, the result is a numpy array. In the special case of a numpy array containing a single value, you can cast it as a float to convert it to a scalar.
# GRADED FUNCTION: conv_single_step def conv_single_step(a_slice_prev, W, b): """ Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation of the previous layer. Arguments: a_slice_prev -- slice of input data of shape (f, f, n_C_prev) W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev) b -- Bias parameters contained in a window - matrix of shape (1, 1, 1) Returns: Z -- a scalar value, the result of convolving the sliding window (W, b) on a slice x of the input data """ #(≈ 3 lines of code) # Element-wise product between a_slice_prev and W. Do not add the bias yet. s = a_slice_prev * W # Sum over all entries of the volume s. Z = s.reshape(1,-1).sum() # Add bias b to Z. Cast b to a float() so that Z results in a scalar value. Z = Z + float(b) # YOUR CODE STARTS HERE # YOUR CODE ENDS HERE return Z np.random.seed(1) a_slice_prev = np.random.randn(4, 4, 3) W = np.random.randn(4, 4, 3) b = np.random.randn(1, 1, 1) Z = conv_single_step(a_slice_prev, W, b) print("Z =", Z) conv_single_step_test(conv_single_step) assert (type(Z) == np.float64 or type(Z) == np.float32), "You must cast the output to float" assert np.isclose(Z, -6.999089450680221), "Wrong value"
Z = -6.999089450680221  All tests passed.
MIT
Convolution_model_Step_by_Step_v1.ipynb
rahul23aug/DeepLearning
3.3 - Convolutional Neural Networks - Forward PassIn the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: Exercise 3 - conv_forwardImplement the function below to convolve the filters `W` on an input activation `A_prev`. This function takes the following inputs:* `A_prev`, the activations output by the previous layer (for a batch of m inputs); * Weights are denoted by `W`. The filter window size is `f` by `f`.* The bias vector is `b`, where each filter has its own (single) bias. You also have access to the hyperparameters dictionary, which contains the stride and the padding. **Hint**: 1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do:```pythona_slice_prev = a_prev[0:2,0:2,:]```Notice how this gives a 3D slice that has height 2, width 2, and depth 3. Depth is the number of channels. This will be useful when you will define `a_slice_prev` below, using the `start/end` indexes you will define.2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. This figure may be helpful for you to find out how each of the corners can be defined using h, w, f and s in the code below. Figure 3 : Definition of a slice using vertical and horizontal start/end (with a 2x2 filter) This figure shows only a single channel. **Reminder**: The formulas relating the output shape of the convolution to the input shape are: $$n_H = \Bigl\lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \Bigr\rfloor +1$$$$n_W = \Bigl\lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \Bigr\rfloor +1$$$$n_C = \text{number of filters used in the convolution}$$ For this exercise, don't worry about vectorization! Just implement everything with for-loops. Additional Hints (if you're stuck):* Use array slicing (e.g.`varname[0:1,:,3:5]`) for the following variables: `a_prev_pad` ,`W`, `b` - Copy the starter code of the function and run it outside of the defined function, in separate cells. - Check that the subset of each array is the size and dimension that you're expecting. * To decide how to get the `vert_start`, `vert_end`, `horiz_start`, `horiz_end`, remember that these are indices of the previous layer. - Draw an example of a previous padded layer (8 x 8, for instance), and the current (output layer) (2 x 2, for instance). - The output layer's indices are denoted by `h` and `w`. * Make sure that `a_slice_prev` has a height, width and depth.* Remember that `a_prev_pad` is a subset of `A_prev_pad`. - Think about which one should be used within the for loops.
# GRADED FUNCTION: conv_forward def conv_forward(A_prev, W, b, hparameters): ''' Implements the forward propagation for a convolution function Arguments: A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) W -- Weights, numpy array of shape (f, f, n_C_prev, n_C) b -- Biases, numpy array of shape (1, 1, 1, n_C) hparameters -- python dictionary containing "stride" and "pad" Returns: Z -- conv output, numpy array of shape (m, n_H, n_W, n_C) cache -- cache of values needed for the conv_backward() function ''' # Retrieve dimensions from A_prev's shape (≈1 line) (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape # Retrieve dimensions from W's shape (≈1 line) (f, f, n_C_prev, n_C) = W.shape # Retrieve information from "hparameters" (≈2 lines) stride = hparameters['stride'] pad = hparameters['pad'] # Compute the dimensions of the CONV output volume using the formula given above. # Hint: use int() to apply the 'floor' operation. (≈2 lines) n_H = int((n_H_prev+(2*pad) -f )/stride) + 1 n_W = int((n_W_prev+(2*pad) -f )/stride) + 1 # Initialize the output volume Z with zeros. (≈1 line) Z = np.zeros((m, n_H, n_W, n_C)) # Create A_prev_pad by padding A_prev A_prev_pad = zero_pad(A_prev, pad) for i in range(m): # loop over the batch of training examples a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation for h in range(n_H): # loop over vertical axis of the output volume # Find the vertical start and end of the current "slice" (≈2 lines) vert_start = h * stride vert_end = h * stride+ f for w in range(n_W): # loop over horizontal axis of the output volume # Find the horizontal start and end of the current "slice" (≈2 lines) horiz_start = w * stride horiz_end = w * stride + f for c in range(n_C): # loop over channels (= #filters) of the output volume # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line) a_slice_prev = a_prev_pad[vert_start:vert_end,horiz_start:horiz_end,:] # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈3 line) weights = W[:,:,:,c] biases = b[:,:,:,c] Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:,:,:,c], b[:,:,:,c]) # YOUR CODE STARTS HERE # YOUR CODE ENDS HERE # Save information in "cache" for the backprop cache = (A_prev, W, b, hparameters) return Z, cache np.random.seed(1) A_prev = np.random.randn(2, 5, 7, 4) W = np.random.randn(3, 3, 4, 8) b = np.random.randn(1, 1, 1, 8) hparameters = {"pad" : 1, "stride": 2} Z, cache_conv = conv_forward(A_prev, W, b, hparameters) print("Z's mean =\n", np.mean(Z)) print("Z[0,2,1] =\n", Z[0, 2, 1]) print("cache_conv[0][1][2][3] =\n", cache_conv[0][1][2][3]) conv_forward_test(conv_forward)
Z's mean = 0.5511276474566768 Z[0,2,1] = [-2.17796037 8.07171329 -0.5772704 3.36286738 4.48113645 -2.89198428 10.99288867 3.03171932] cache_conv[0][1][2][3] = [-1.1191154 1.9560789 -0.3264995 -1.34267579] (2, 13, 15, 8)  All tests passed.
MIT
Convolution_model_Step_by_Step_v1.ipynb
rahul23aug/DeepLearning
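As a quick sanity check of the output-shape formula against the test call above (n_H_prev = 5, n_W_prev = 7, f = 3, pad = 1, stride = 2), the arithmetic works out as follows; this is just a worked example, not part of the graded assignment:

```python
# n_H = floor((n_H_prev - f + 2*pad) / stride) + 1, and similarly for n_W
n_H_prev, n_W_prev, f, pad, stride = 5, 7, 3, 1, 2
n_H = (n_H_prev - f + 2 * pad) // stride + 1  # (5 - 3 + 2) // 2 + 1 = 3
n_W = (n_W_prev - f + 2 * pad) // stride + 1  # (7 - 3 + 2) // 2 + 1 = 4
print(n_H, n_W, Z.shape)  # 3 4 (2, 3, 4, 8) -- matches the Z computed above
```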
Finally, a CONV layer should also contain an activation, in which case you would add the following line of code:```python Convolve the window to get back one output neuronZ[i, h, w, c] = ... Apply activationA[i, h, w, c] = activation(Z[i, h, w, c])```You don't need to do it here, however. 4 - Pooling Layer The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to its position in the input. The two types of pooling layers are: - Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.- Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the $f \times f$ window you would compute a *max* or *average* over. 4.1 - Forward PoolingNow, you are going to implement MAX-POOL and AVG-POOL, in the same function. Exercise 4 - pool_forwardImplement the forward pass of the pooling layer. Follow the hints in the comments below.**Reminder**:As there's no padding, the formulas binding the output shape of the pooling to the input shape is:$$n_H = \Bigl\lfloor \frac{n_{H_{prev}} - f}{stride} \Bigr\rfloor +1$$$$n_W = \Bigl\lfloor \frac{n_{W_{prev}} - f}{stride} \Bigr\rfloor +1$$$$n_C = n_{C_{prev}}$$
# GRADED FUNCTION: pool_forward def pool_forward(A_prev, hparameters, mode = "max"): """ Implements the forward pass of the pooling layer Arguments: A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) hparameters -- python dictionary containing "f" and "stride" mode -- the pooling mode you would like to use, defined as a string ("max" or "average") Returns: A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C) cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters """ # Retrieve dimensions from the input shape (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape # Retrieve hyperparameters from "hparameters" f = hparameters["f"] stride = hparameters["stride"] # Define the dimensions of the output n_H = int(1 + (n_H_prev - f) / stride) n_W = int(1 + (n_W_prev - f) / stride) n_C = n_C_prev # Initialize output matrix A A = np.zeros((m, n_H, n_W, n_C)) for i in range(m): # loop over the training examples for h in range(n_H): # loop on the vertical axis of the output volume # Find the vertical start and end of the current "slice" (≈2 lines) vert_start = h * stride vert_end = h * stride+ f for w in range(n_W): # loop on the horizontal axis of the output volume #Find the vertical start and end of the current "slice" (≈2 lines) horiz_start = w * stride horiz_end = w * stride + f for c in range (n_C): # loop over the channels of the output volume # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line) a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end,c] # Compute the pooling operation on the slice. # Use an if statement to differentiate the modes. # Use np.max and np.mean. if mode == "max": A[i, h, w, c] = np.max(a_prev_slice) elif mode == "average": A[i, h, w, c] = np.mean(a_prev_slice) # YOUR CODE STARTS HERE # YOUR CODE ENDS HERE # Store the input and hparameters in "cache" for pool_backward() cache = (A_prev, hparameters) # Making sure your output shape is correct assert(A.shape == (m, n_H, n_W, n_C)) return A, cache # Case 1: stride of 1 np.random.seed(1) A_prev = np.random.randn(2, 5, 5, 3) hparameters = {"stride" : 1, "f": 3} A, cache = pool_forward(A_prev, hparameters, mode = "max") print("mode = max") print("A.shape = " + str(A.shape)) print("A[1, 1] =\n", A[1, 1]) print() A, cache = pool_forward(A_prev, hparameters, mode = "average") print("mode = average") print("A.shape = " + str(A.shape)) print("A[1, 1] =\n", A[1, 1]) pool_forward_test(pool_forward)
mode = max A.shape = (2, 3, 3, 3) A[1, 1] = [[1.96710175 0.84616065 1.27375593] [1.96710175 0.84616065 1.23616403] [1.62765075 1.12141771 1.2245077 ]] mode = average A.shape = (2, 3, 3, 3) A[1, 1] = [[ 0.44497696 -0.00261695 -0.31040307] [ 0.50811474 -0.23493734 -0.23961183] [ 0.11872677 0.17255229 -0.22112197]]  All tests passed.
MIT
Convolution_model_Step_by_Step_v1.ipynb
rahul23aug/DeepLearning
**Expected output**```mode = maxA.shape = (2, 3, 3, 3)A[1, 1] = [[1.96710175 0.84616065 1.27375593] [1.96710175 0.84616065 1.23616403] [1.62765075 1.12141771 1.2245077 ]]mode = averageA.shape = (2, 3, 3, 3)A[1, 1] = [[ 0.44497696 -0.00261695 -0.31040307] [ 0.50811474 -0.23493734 -0.23961183] [ 0.11872677 0.17255229 -0.22112197]]```
# Case 2: stride of 2 np.random.seed(1) A_prev = np.random.randn(2, 5, 5, 3) hparameters = {"stride" : 2, "f": 3} A, cache = pool_forward(A_prev, hparameters) print("mode = max") print("A.shape = " + str(A.shape)) print("A[0] =\n", A[0]) print() A, cache = pool_forward(A_prev, hparameters, mode = "average") print("mode = average") print("A.shape = " + str(A.shape)) print("A[1] =\n", A[1])
mode = max A.shape = (2, 2, 2, 3) A[0] = [[[1.74481176 0.90159072 1.65980218] [1.74481176 1.6924546 1.65980218]] [[1.13162939 1.51981682 2.18557541] [1.13162939 1.6924546 2.18557541]]] mode = average A.shape = (2, 2, 2, 3) A[1] = [[[-0.17313416 0.32377198 -0.34317572] [ 0.02030094 0.14141479 -0.01231585]] [[ 0.42944926 0.08446996 -0.27290905] [ 0.15077452 0.28911175 0.00123239]]]
MIT
Convolution_model_Step_by_Step_v1.ipynb
rahul23aug/DeepLearning
**Expected Output:** ```mode = maxA.shape = (2, 2, 2, 3)A[0] = [[[1.74481176 0.90159072 1.65980218] [1.74481176 1.6924546 1.65980218]] [[1.13162939 1.51981682 2.18557541] [1.13162939 1.6924546 2.18557541]]]mode = averageA.shape = (2, 2, 2, 3)A[1] = [[[-0.17313416 0.32377198 -0.34317572] [ 0.02030094 0.14141479 -0.01231585]] [[ 0.42944926 0.08446996 -0.27290905] [ 0.15077452 0.28911175 0.00123239]]]``` **What you should remember**:* A convolution extracts features from an input image by taking the dot product between the input data and a 3D array of weights (the filter). * The 2D output of the convolution is called the feature map* A convolution layer is where the filter slides over the image and computes the dot product * This transforms the input volume into an output volume of different size * Zero padding helps keep more information at the image borders, and is helpful for building deeper networks, because you can build a CONV layer without shrinking the height and width of the volumes* Pooling layers gradually reduce the height and width of the input by sliding a 2D window over each specified region, then summarizing the features in that region **Congratulations**! You have now implemented the forward passes of all the layers of a convolutional network. Great work!The remainder of this notebook is optional, and will not be graded. If you carry on, just remember to hit the Submit button to submit your work for grading first. 5 - Backpropagation in Convolutional Neural Networks (OPTIONAL / UNGRADED)In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and were not derived in lecture, but are briefly presented below. 5.1 - Convolutional Layer Backward Pass Let's start by implementing the backward pass for a CONV layer. 5.1.1 - Computing dA:This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:$$dA \mathrel{+}= \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that at each time, you multiply the the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, you are just adding the gradients of all the a_slices. 
In code, inside the appropriate for-loops, this formula translates into:```pythonda_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]``` 5.1.2 - Computing dW:This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:$$dW_c \mathrel{+}= \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. In code, inside the appropriate for-loops, this formula translates into:```pythondW[:,:,:,c] += a_slice * dZ[i, h, w, c]``` 5.1.3 - Computing db:This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:$$db = \sum_h \sum_w dZ_{hw} \tag{3}$$As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. In code, inside the appropriate for-loops, this formula translates into:```pythondb[:,:,:,c] += dZ[i, h, w, c]``` Exercise 5 - conv_backwardImplement the `conv_backward` function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.
def conv_backward(dZ, cache): """ Implement the backward propagation for a convolution function Arguments: dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C) cache -- cache of values needed for the conv_backward(), output of conv_forward() Returns: dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev), numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) dW -- gradient of the cost with respect to the weights of the conv layer (W) numpy array of shape (f, f, n_C_prev, n_C) db -- gradient of the cost with respect to the biases of the conv layer (b) numpy array of shape (1, 1, 1, n_C) """ # Retrieve information from "cache" (A_prev, W, b, hparameters) = cache # Retrieve dimensions from A_prev's shape (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape # Retrieve dimensions from W's shape (f, f, n_C_prev, n_C) = W.shape # Retrieve information from "hparameters" stride = hparameters['stride'] pad = hparameters['pad'] # Retrieve dimensions from dZ's shape (m, n_H, n_W, n_C) = dZ.shape # Initialize dA_prev, dW, db with the correct shapes dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev)) dW = np.zeros((f, f, n_C_prev, n_C)) db = np.zeros((1, 1, 1, n_C)) # Pad A_prev and dA_prev A_prev_pad = zero_pad(A_prev, pad) dA_prev_pad = zero_pad(dA_prev, pad) for i in range(m): # loop over the training examples # select ith training example from A_prev_pad and dA_prev_pad a_prev_pad = A_prev_pad[i] da_prev_pad = dA_prev_pad[i] for h in range(n_H): # loop over vertical axis of the output volume for w in range(n_W): # loop over horizontal axis of the output volume for c in range(n_C): # loop over the channels of the output volume # Find the corners of the current "slice" vert_start = h * stride vert_end = h * stride + f horiz_start = w * stride horiz_end = w * stride + f # Use the corners to define the slice from a_prev_pad a_slice = a_prev_pad[vert_start:vert_end,horiz_start:horiz_end,:] # Update gradients for the window and the filter's parameters using the code formulas given above da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c] dW[:,:,:,c] += a_slice * dZ[i, h, w, c] db[:,:,:,c] += dZ[i, h, w, c] # Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :]) dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :] # YOUR CODE STARTS HERE # YOUR CODE ENDS HERE # Making sure your output shape is correct assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev)) return dA_prev, dW, db # We'll run conv_forward to initialize the 'Z' and 'cache_conv", # which we'll use to test the conv_backward function np.random.seed(1) A_prev = np.random.randn(10, 4, 4, 3) W = np.random.randn(2, 2, 3, 8) b = np.random.randn(1, 1, 1, 8) hparameters = {"pad" : 2, "stride": 2} Z, cache_conv = conv_forward(A_prev, W, b, hparameters) # Test conv_backward dA, dW, db = conv_backward(Z, cache_conv) print("dA_mean =", np.mean(dA)) print("dW_mean =", np.mean(dW)) print("db_mean =", np.mean(db)) assert type(dA) == np.ndarray, "Output must be a np.ndarray" assert type(dW) == np.ndarray, "Output must be a np.ndarray" assert type(db) == np.ndarray, "Output must be a np.ndarray" assert dA.shape == (10, 4, 4, 3), f"Wrong shape for dA {dA.shape} != (10, 4, 4, 3)" assert dW.shape == (2, 2, 3, 8), f"Wrong shape for dW {dW.shape} != (2, 2, 3, 8)" assert db.shape == (1, 1, 1, 8), f"Wrong shape for db {db.shape} != (1, 1, 1, 8)" assert np.isclose(np.mean(dA), 
1.4524377), "Wrong values for dA" assert np.isclose(np.mean(dW), 1.7269914), "Wrong values for dW" assert np.isclose(np.mean(db), 7.8392325), "Wrong values for db" print("\033[92m All tests passed.")
dA_mean = 1.4524377775388075 dW_mean = 1.7269914583139097 db_mean = 7.839232564616838  All tests passed.
MIT
Convolution_model_Step_by_Step_v1.ipynb
rahul23aug/DeepLearning
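As an extra sanity check (not part of the graded notebook), formula (3) above implies that each `db[..., c]` is just `dZ[:, :, :, c]` summed over the examples, height, and width. A small sketch, assuming `conv_backward`, `Z`, and `cache_conv` from the test cell above are in scope:

```python
import numpy as np

dA, dW, db = conv_backward(Z, cache_conv)   # reuse Z as a stand-in for dZ
manual_db = Z.sum(axis=(0, 1, 2))           # sum over m, n_H, n_W for each channel
assert np.allclose(db.reshape(-1), manual_db), "db should equal the channel-wise sum of dZ"
print("db matches the channel-wise sum of dZ")
```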
**Expected Output**: dA_mean 1.45243777754 dW_mean 1.72699145831 db_mean 7.83923256462 5.2 Pooling Layer - Backward PassNext, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. 5.2.1 Max Pooling - Backward Pass Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following: $$ X = \begin{bmatrix}1 & 3 \\4 & 2\end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix}0 & 0 \\1 & 0\end{bmatrix}\tag{4}$$As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X; the other entries are False (0). You'll see later that the backward pass for average pooling is similar to this, but uses a different mask. Exercise 6 - create_mask_from_windowImplement `create_mask_from_window()`. This function will be helpful for pooling backward. Hints:- `np.max()` may be helpful. It computes the maximum of an array.- If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that:```A[i,j] = True if X[i,j] = xA[i,j] = False if X[i,j] != x```- Here, you don't need to consider cases where there are several maxima in a matrix.
def create_mask_from_window(x): """ Creates a mask from an input matrix x, to identify the max entry of x. Arguments: x -- Array of shape (f, f) Returns: mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x. """ # (≈1 line) mask = np.max(x) == x # YOUR CODE STARTS HERE # YOUR CODE ENDS HERE return mask np.random.seed(1) x = np.random.randn(2, 3) mask = create_mask_from_window(x) print('x = ', x) print("mask = ", mask) x = np.array([[-1, 2, 3], [2, -3, 2], [1, 5, -2]]) y = np.array([[False, False, False], [False, False, False], [False, True, False]]) mask = create_mask_from_window(x) assert type(mask) == np.ndarray, "Output must be a np.ndarray" assert mask.shape == x.shape, "Input and output shapes must match" assert np.allclose(mask, y), "Wrong output. The True value must be at position (2, 1)" print("\033[92m All tests passed.")
x = [[ 1.62434536 -0.61175641 -0.52817175] [-1.07296862 0.86540763 -2.3015387 ]] mask = [[ True False False] [False False False]]  All tests passed.
MIT
Convolution_model_Step_by_Step_v1.ipynb
rahul23aug/DeepLearning
**Expected Output:** **x =**[[ 1.62434536 -0.61175641 -0.52817175] [-1.07296862 0.86540763 -2.3015387 ]] mask =[[ True False False] [False False False]] Why keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost. 5.2.2 - Average Pooling - Backward Pass In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.For example, if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: $$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix}1/4 & 1/4 \\1/4 & 1/4\end{bmatrix}\tag{5}$$This implies that each position in the $dZ$ matrix contributes equally to the output because in the forward pass, we took an average. Exercise 7 - distribute_valueImplement the function below to equally distribute a value dz through a matrix of dimension shape. [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html)
def distribute_value(dz, shape): """ Distributes the input value in the matrix of dimension shape Arguments: dz -- input scalar shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz Returns: a -- Array of size (n_H, n_W) for which we distributed the value of dz """ # Retrieve dimensions from shape (≈1 line) (n_H, n_W) = shape # Compute the value to distribute on the matrix (≈1 line) average = dz / (n_H * n_W) # Create a matrix where every entry is the "average" value (≈1 line) a = np.ones(shape) * average # YOUR CODE STARTS HERE # YOUR CODE ENDS HERE return a a = distribute_value(2, (2, 2)) print('distributed value =', a) assert type(a) == np.ndarray, "Output must be a np.ndarray" assert a.shape == (2, 2), f"Wrong shape {a.shape} != (2, 2)" assert np.sum(a) == 2, "Values must sum to 2" a = distribute_value(100, (10, 10)) assert type(a) == np.ndarray, "Output must be a np.ndarray" assert a.shape == (10, 10), f"Wrong shape {a.shape} != (10, 10)" assert np.sum(a) == 100, "Values must sum to 100" print("\033[92m All tests passed.")
distributed value = [[0.5 0.5] [0.5 0.5]]  All tests passed.
MIT
Convolution_model_Step_by_Step_v1.ipynb
rahul23aug/DeepLearning
**Expected Output**: distributed_value =[[ 0.5 0.5] [ 0.5 0.5]] 5.2.3 Putting it Together: Pooling Backward You now have everything you need to compute backward propagation on a pooling layer. Exercise 8 - pool_backwardImplement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to 'average' you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. Otherwise, the mode is equal to '`max`', and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dA.
def pool_backward(dA, cache, mode = "max"): """ Implements the backward pass of the pooling layer Arguments: dA -- gradient of cost with respect to the output of the pooling layer, same shape as A cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters mode -- the pooling mode you would like to use, defined as a string ("max" or "average") Returns: dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev """ # Retrieve information from cache (≈1 line) (A_prev, hparameters) = cache # Retrieve hyperparameters from "hparameters" (≈2 lines) stride = hparameters['stride'] f = hparameters['f'] # Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines) m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape m, n_H, n_W, n_C = dA.shape # Initialize dA_prev with zeros (≈1 line) dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev)) for i in range(m): # loop over the training examples # select training example from A_prev (≈1 line) a_prev = A_prev[i] for h in range(n_H): # loop on the vertical axis for w in range(n_W): # loop on the horizontal axis for c in range(n_C): # loop over the channels (depth) # Find the corners of the current "slice" (≈4 lines) vert_start = h * stride vert_end = h * stride + f horiz_start = w * stride horiz_end = w * stride + f # Compute the backward propagation in both modes. if mode == "max": # Use the corners and "c" to define the current slice from a_prev (≈1 line) a_prev_slice = a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c] # Create the mask from a_prev_slice (≈1 line) mask = create_mask_from_window(a_prev_slice) # Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line) dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += mask * dA[i, h, w, c] elif mode == "average": # Get the value da from dA (≈1 line) da = dA[i, h, w, c] # Define the shape of the filter as fxf (≈1 line) shape = (f, f) # Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line) dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape) # YOUR CODE STARTS HERE # YOUR CODE ENDS HERE # Making sure your output shape is correct assert(dA_prev.shape == A_prev.shape) return dA_prev np.random.seed(1) A_prev = np.random.randn(5, 5, 3, 2) hparameters = {"stride" : 1, "f": 2} A, cache = pool_forward(A_prev, hparameters) print(A.shape) print(cache[0].shape) dA = np.random.randn(5, 4, 2, 2) dA_prev1 = pool_backward(dA, cache, mode = "max") print("mode = max") print('mean of dA = ', np.mean(dA)) print('dA_prev1[1,1] = ', dA_prev1[1, 1]) print() dA_prev2 = pool_backward(dA, cache, mode = "average") print("mode = average") print('mean of dA = ', np.mean(dA)) print('dA_prev2[1,1] = ', dA_prev2[1, 1]) assert type(dA_prev1) == np.ndarray, "Wrong type" assert dA_prev1.shape == (5, 5, 3, 2), f"Wrong shape {dA_prev1.shape} != (5, 5, 3, 2)" assert np.allclose(dA_prev1[1, 1], [[0, 0], [ 5.05844394, -1.68282702], [ 0, 0]]), "Wrong values for mode max" assert np.allclose(dA_prev2[1, 1], [[0.08485462, 0.2787552], [1.26461098, -0.25749373], [1.17975636, -0.53624893]]), "Wrong values for mode average" print("\033[92m All tests passed.")
(5, 4, 2, 2) (5, 5, 3, 2) mode = max mean of dA = 0.14571390272918056 dA_prev1[1,1] = [[ 0. 0. ] [ 5.05844394 -1.68282702] [ 0. 0. ]] mode = average mean of dA = 0.14571390272918056 dA_prev2[1,1] = [[ 0.08485462 0.2787552 ] [ 1.26461098 -0.25749373] [ 1.17975636 -0.53624893]]  All tests passed.
MIT
Convolution_model_Step_by_Step_v1.ipynb
rahul23aug/DeepLearning
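One more optional check (assuming `dA`, `dA_prev1`, and `dA_prev2` from the test cell above): both pooling backward modes only redistribute gradient, since max routes each `dA` entry to a single position and average splits it evenly across the window, so the total gradient mass should be preserved (barring exact ties in a max window, which are essentially impossible with random floats):

```python
import numpy as np

print(np.isclose(dA.sum(), dA_prev1.sum()))   # max mode
print(np.isclose(dA.sum(), dA_prev2.sum()))   # average mode
```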
Beginning Programming with Python===========================Session 1: Getting Started--------------------------* The Zen of Python* Variables* Comments* Strings* Numbers* Booleans* String Methods* Using Variables in Strings* Numerical Operations* Working with Numerical Data* Using the Math library* Setting up Anaconda Python * Anaconda * Good Youtube Video **Assignment**1. Install Anaconda Python https://www.youtube.com/watch?v=Z1Yd7upQsXY 2. Create GitHub account 3. Create Codewars account 4. Create Reddit account
2 + 5 import this
_____no_output_____
MIT
april-2019/1-getting-started.ipynb
dogweather/beginning-programming-with-python
Variables---------
phoebe = 8 print(phoebe) print(id(phoebe)) maru = 7 id(maru) 4fun = 3 fun4 = 3 m = 7 phoebe_age_in_years = 8 dog_name = "Phoebe" print(dog_name) type(dog_name) type(phoebe) dog_name + phoebe dog_name + " is a dog"
_____no_output_____
MIT
april-2019/1-getting-started.ipynb
dogweather/beginning-programming-with-python
Comments--------
# I'm attempting to calculate dog years for humans dog = 1222 # Here's something # ... And etc. # and more... """Calculate the age very simply. I'm using the standard algorithm... """ age = 10
_____no_output_____
MIT
april-2019/1-getting-started.ipynb
dogweather/beginning-programming-with-python
Strings-------
4 + 4 "4" + "4" len("robb") dir("robb") "robb".endswith('b') "robb".endswith('x') "Phoebe".lower() "Here's a meanlingless sentence.".split() "phoebe" print("hey") name = "Ramona" f"{name} said thanks!" name + " said thanks!" "%s said thanks!", name "{} said thanks! {}".format(name, "hey")
_____no_output_____
MIT
april-2019/1-getting-started.ipynb
dogweather/beginning-programming-with-python
Numbers-------
type(5) type(5.5) account_balance = 100.1111111 5.5 + 5 5.5 + "5" 5.5 + float("5") float("0") type(0.0) float("x") int("0") int("x") "x" x x = 1 x id(x) "x"
_____no_output_____
MIT
april-2019/1-getting-started.ipynb
dogweather/beginning-programming-with-python
Booleans--------
1, -1, 0 type(1) False True 1 == 2 1 == 1 not (1 == 1) not (True and False) (not True) or (not False) if 1 == 2: print("The world's gone mad") if 1 == 1: print("No problem with that") phoebe_is_old = phoebe > 12 phoebe_is_old if phoebe_is_old: print("old") phoebe o = phoebe > 12 O = maru > 12 Math.PI import Math import math math.PI dir(math) GOOGLE_ID = 12354617234567234 GOOGLE_ID GOOGLE_ID = 'fu' GOOGLE_ID
_____no_output_____
MIT
april-2019/1-getting-started.ipynb
dogweather/beginning-programming-with-python
Numerical Data and the Math Library-----------------------------------
round(1.23456) round(1.5) round(1.4) help(round) round(1.23456, 2) 1.23 * 1.07 round(1.23 * 1.07, 2) round('x' + 'y', 2) abs(3) abs(-3) 2 % 3 3 % 2 3 % 2 == 0 10 % 2 == 0 10 % 3 == 0 15 % 3 == 0 not(10 % 2 == 0) 10 % 7 != 0
_____no_output_____
MIT
april-2019/1-getting-started.ipynb
dogweather/beginning-programming-with-python
Using the Math Library----------------------
import math x math dir(math) math pi math.pi math.factorial(4) math.factorial(10) help(math.ceil) math.ceil(1.1) math.ceil type(math.ceil)
_____no_output_____
MIT
april-2019/1-getting-started.ipynb
dogweather/beginning-programming-with-python
`concurrent.futures`This lesson has a strange name. `concurrent.futures` is the name of a (relatively) modern package in the Python standard library. It's a package with a beautiful and Pythonic API that abstracts us from the low-level mechanisms of concurrency.**`concurrent.futures` should be your default choice for concurrent programming as much as possible**In this tutorial, we started from the lower-level `threading` and `multiprocessing` modules because we wanted to explain the concepts behind concurrency, but `concurrent.futures` offers a much safer and more intuitive API. Let's start with it. Executors and futures ExecutorsExecutors are the entry points of `cf`. They are similar to `multiprocessing.Pool`s. Once an executor has been instantiated, we can `submit` jobs, or even `map` tasks, similar to `multiprocessing.Pool.map`. `concurrent.futures.Executor` is an abstract class. `cf` includes two concrete classes: `ThreadPoolExecutor` and `ProcessPoolExecutor`. This means that we can keep the same interface, but use completely different mechanisms, just by changing the executor type we're using:
def check_price(exchange, symbol, date): base_url = "http://localhost:5000" resp = requests.get(f"{base_url}/price/{exchange}/{symbol}/{date}") return resp.json() with ThreadPoolExecutor(max_workers=10) as ex: future = ex.submit(check_price, 'bitstamp', 'btc', '2020-04-01') print(f"Price: ${future.result()['close']}") with ProcessPoolExecutor(max_workers=10, mp_context=mp.get_context('fork')) as ex: future = ex.submit(check_price, 'bitstamp', 'btc', '2020-04-01') print(f"Price: ${future.result()['close']}")
Price: $6421.14
MIT
7. concurrent.futures.ipynb
zzhengnan/pycon-concurrency-tutorial-2020
This is the beauty of `cf`: we're using the same logic with two completely different executors; the API is the same. FuturesAs you can see from the examples above, the `submit` method immediately returns a `Future` object. These objects are an abstraction of a task that is being processed. They have multiple useful methods that we can use (as seen in the following example). The most important one, `result(timeout=None)`, blocks for at most `timeout` seconds until a result is produced:
with ThreadPoolExecutor(max_workers=10) as ex: future = ex.submit(check_price, 'bitstamp', 'btc', '2020-04-01') print(future.done()) print(f"Price: ${future.result()['close']}") print(future.done())
False Price: $6421.14 True
MIT
7. concurrent.futures.ipynb
zzhengnan/pycon-concurrency-tutorial-2020
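Beyond `done()` and `result()`, futures expose a few more behaviors worth knowing. A small sketch (assuming `check_price` from the cells above) showing `result(timeout=...)` and how exceptions surface through the future:

```python
import concurrent.futures as cf
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=10) as ex:
    future = ex.submit(check_price, 'bitstamp', 'btc', '2020-04-01')
    try:
        price = future.result(timeout=5)      # block for at most 5 seconds
        print(f"Price: ${price['close']}")
    except cf.TimeoutError:
        print("Still running after 5 seconds")
    except Exception as exc:
        print(f"The task raised: {exc}")      # exceptions propagate through result()
```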
The `map` methodExecutors have a `map` method that is similar to `mp.Pool.map`; it's convenient because there are no futures to work with, but it's limited when each task needs more than one argument:
EXCHANGES = ['bitfinex', 'bitstamp', 'kraken'] def check_price_tuple(arg): exchange, symbol, date = arg base_url = "http://localhost:5000" resp = requests.get(f"{base_url}/price/{exchange}/{symbol}/{date}") return resp.json() with ThreadPoolExecutor(max_workers=10) as ex: results = ex.map(check_price_tuple, [ (exchange, 'btc', '2020-04-01') for exchange in EXCHANGES ]) print([price['close'] for price in results]) ('bitstamp', 'btc', '2020-04-01')
_____no_output_____
MIT
7. concurrent.futures.ipynb
zzhengnan/pycon-concurrency-tutorial-2020
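As an aside, `Executor.map` also accepts several iterables (one per positional argument), just like the built-in `map`, which can avoid the tuple-wrapping helper. A sketch, assuming the same `check_price` and `EXCHANGES` from above:

```python
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=10) as ex:
    # One iterable per argument of check_price(exchange, symbol, date)
    results = ex.map(check_price,
                     EXCHANGES,
                     ['btc'] * len(EXCHANGES),
                     ['2020-04-01'] * len(EXCHANGES))
    print([price['close'] for price in results])
```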
As you can see, we had to define a new special function that works by receiving a tuple instead of the individual elements. `submit` & `as_completed` patternTo overcome the limitation of `Executor.map`, we can use a common pattern of creating multiple futures with `Executor.submit` and waiting for them to complete with the module-level function `concurrent.futures.as_completed`:
with ThreadPoolExecutor(max_workers=10) as ex: futures = { ex.submit(check_price, exchange, 'btc', '2020-04-01'): exchange for exchange in EXCHANGES } for future in cf.as_completed(futures): exchange = futures[future] print(f"{exchange.title()}: ${future.result()['close']}")
Kraken: $6401.9 Bitfinex: $6409.8 Bitstamp: $6421.14
MIT
7. concurrent.futures.ipynb
zzhengnan/pycon-concurrency-tutorial-2020
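A sibling of `as_completed` is the module-level function `concurrent.futures.wait`, which blocks until a condition is met and returns the futures split into done and not-done sets. A sketch, again assuming `check_price` and `EXCHANGES` from the cells above:

```python
import concurrent.futures as cf
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=10) as ex:
    futures = [ex.submit(check_price, exchange, 'btc', '2020-04-01')
               for exchange in EXCHANGES]
    done, not_done = cf.wait(futures, timeout=10, return_when=cf.ALL_COMPLETED)
    for future in done:
        print(f"${future.result()['close']}")
    print(f"{len(not_done)} task(s) did not finish within the timeout")
```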
Producer/Consumer with `concurrent.futures`I'll show you an example of the producer/consumer pattern using the `cf` module. There are multiple ways to create this pattern; I'll stick to the basics.
BASE_URL = "http://localhost:5000" resp = requests.get(f"{BASE_URL}/exchanges") EXCHANGES = resp.json() EXCHANGES[:3] START_DATE = datetime(2020, 3, 1) DATES = [(START_DATE + timedelta(days=i)).strftime('%Y-%m-%d') for i in range(31)] DATES[:3] resp = requests.get(f"{BASE_URL}/symbols") SYMBOLS = resp.json() SYMBOLS
_____no_output_____
MIT
7. concurrent.futures.ipynb
zzhengnan/pycon-concurrency-tutorial-2020
Queues:
work_to_do = Queue() work_done = SimpleQueue() for exchange in EXCHANGES: for date in DATES: for symbol in SYMBOLS: task = { 'exchange': exchange, 'symbol': symbol, 'date': date, } work_to_do.put(task) work_to_do.qsize() def worker(task_queue, results_queue): while True: try: task = task_queue.get(block=False) except queue.Empty: print('Queue is empty! My work here is done. Exiting.') return exchange, symbol, date = task['exchange'], task['symbol'], task['date'] price = check_price(exchange, symbol, date) results_queue.put((price, exchange, symbol, date)) task_queue.task_done() with ThreadPoolExecutor(max_workers=32) as ex: futures = [ ex.submit(worker, work_to_do, work_done) for _ in range(32) ] work_to_do.join() all([f.done() for f in futures]) work_done.qsize() results = {} while True: try: price, exchange, symbol, date = work_done.get(block=None) results.setdefault(exchange, {}) results[exchange].setdefault(date, {}) results[exchange][date][symbol] = price['close'] if price else None except queue.Empty: break results['bitfinex']['2020-03-10']['btc'] results['bitstamp']['2020-03-10']['btc'] results['coinbase-pro']['2020-03-10']['btc']
_____no_output_____
MIT
7. concurrent.futures.ipynb
zzhengnan/pycon-concurrency-tutorial-2020
Basic Ensemble Learning: Hard/Soft Voting/Bagging - OOB/bootstrap/bootstrap_features
##----------------- ## Copyright in private ## Modify History : ## 2018 - 9 - 24 ## Purpose: ## 1. Build an ensemble learning classifier - combine many sub-models that vote, improving accuracy; in theory, the more sub-models, the higher the overall accuracy! ## 2. Hard (majority rule) and Soft voting classifier ## ## 3. To increase the diversity of the sub-models, each sub-model should only see a subset of the samples. Sampling can be done with replacement (bagging) or without replacement (pasting) ## With sampling with replacement, roughly 30% of the data is never drawn; see the bootstrap parameter ## 4. n_estimators=500 different sub-models form the ensemble; when the sub-models differ from each other, this is a random forest ## ## Parameters: ## ## from sklearn import datasets import numpy as np import matplotlib.pyplot as plt # help(make_moons) ## data sets X,y = datasets.make_moons(n_samples = 1200, noise = 0.25, random_state = 100) X.shape y.shape # plot the data plt.scatter(X[y == 0,0],X [y == 0,1]) plt.scatter(X[y == 1,0],X [y == 1,1]) plt.show() # try to split data into test and train data sets from sklearn.model_selection import train_test_split X_train,X_test,y_train,y_test =train_test_split(X,y)
_____no_output_____
MIT
6_Ensemble Learing_HardandSoft_VotingClasaifier_OOB_Boostrap_Features.ipynb
Yazooliu/Ai_Lab_
1. Ensemble Learning Classifier
from sklearn.linear_model import LogisticRegression # Logistic Regression log_clf = LogisticRegression() log_clf.fit(X_train,y_train) #log_clf.score(X_test,y_test) # 0.83 # SVM from sklearn.svm import SVC svc_clf = SVC() svc_clf.fit(X_train,y_train) #svc_clf.score(X_test,y_test) #0.98 # Decision Tree from sklearn.tree import DecisionTreeClassifier dt_clf = DecisionTreeClassifier(random_state = 100) # max_depth = 2,criterion = 'gini' dt_clf.fit(X_train,y_train) #dt_clf.score(X_test,y_test) # 1.0 ## test on data sets log_clf.score(X_test,y_test) # 0.88 svc_clf.score(X_test,y_test) #0.94 dt_clf.score(X_test,y_test) # 0.92
_____no_output_____
MIT
6_Ensemble Learing_HardandSoft_VotingClasaifier_OOB_Boostrap_Features.ipynb
Yazooliu/Ai_Lab_
1.1 Ensemble Learning 1.1.1 The effect of ensemble learning - strength in numbers: the ensemble's accuracy is clearly higher than that of any single classifier
predict_1 = log_clf.predict(X_test) predict_2 = svc_clf.predict(X_test) predict_3 = dt_clf.predict(X_test) # predict_y = np.array((predict_1 + predict_2 + predict_3) >=2 ,dtype = 'int') predict_y[:10] from sklearn.metrics import accuracy_score accuracy_score(y_test,predict_y)
_____no_output_____
MIT
6_Ensemble Learing_HardandSoft_VotingClasaifier_OOB_Boostrap_Features.ipynb
Yazooliu/Ai_Lab_
2. Voting Classifier - Hard Voting - the majority-rule principle
# Hard voting - majority rule from sklearn.ensemble import VotingClassifier voting_clf_hard = VotingClassifier(estimators = [ ('log_clf',LogisticRegression()), ('svm_clf',SVC()), ('dt_clf',DecisionTreeClassifier(random_state = 200))], voting = 'hard') # test score by hard voting classifier voting_clf_hard.fit(X_train,y_train) voting_clf_hard.score(X_test,y_test)
c:\users\h155809\appdata\local\programs\python\python36\lib\site-packages\sklearn\preprocessing\label.py:151: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. Use `array.size > 0` to check that an array is not empty. if diff:
MIT
6_Ensemble Learing_HardandSoft_VotingClasaifier_OOB_Boostrap_Features.ipynb
Yazooliu/Ai_Lab_
3. Voting Classifier - Soft Voting - uses the predicted class probabilities as weights when classifying
from sklearn.ensemble import VotingClassifier voting_clf_soft = VotingClassifier(estimators = [ ('log_clf', LogisticRegression()), ('svc_clf',SVC(probability=True)), ('dt_clf', DecisionTreeClassifier(random_state = 200))],voting = 'soft') voting_clf_soft.fit(X_train,y_train) voting_clf_soft.score(X_test,y_test)
c:\users\h155809\appdata\local\programs\python\python36\lib\site-packages\sklearn\preprocessing\label.py:151: DeprecationWarning: The truth value of an empty array is ambiguous. Returning False, but in future this will result in an error. Use `array.size > 0` to check that an array is not empty. if diff:
MIT
6_Ensemble Learing_HardandSoft_VotingClasaifier_OOB_Boostrap_Features.ipynb
Yazooliu/Ai_Lab_
4. Sampling with replacement - Bagging
# Increasing the diversity of the sub-models improves the accuracy of the overall model, and lets each model see a different subset of the samples. # When sampling, we distinguish sampling with replacement - bagging - and sampling without replacement - pasting from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import BaggingClassifier # Using a DecisionTree as the base estimator # n_estimators : how many classifiers to ensemble # max_samples: how many samples each classifier sees # bootstrap = True: sample with replacement bagging_clf = BaggingClassifier( DecisionTreeClassifier(), n_estimators = 100,max_samples = 100, bootstrap = True ) bagging_clf.fit(X_train,y_train) bagging_clf.score(X_test,y_test)
_____no_output_____
MIT
6_Ensemble Learing_HardandSoft_VotingClasaifier_OOB_Boostrap_Features.ipynb
Yazooliu/Ai_Lab_
4.1 Bagging (sampling with replacement) - Out of Bag (OOB): roughly 30% of the data is never drawn
# Sample with replacement; oob_score = True keeps track of the samples that were never drawn and uses them to estimate the model's accuracy bagging_clf = BaggingClassifier( DecisionTreeClassifier(), n_estimators = 100,max_samples = 100, bootstrap = True,oob_score = True ) bagging_clf.fit(X_train,y_train) bagging_clf.score(X_test,y_test) # test model on out of bag data bagging_clf.oob_score_
_____no_output_____
MIT
6_Ensemble Learing_HardandSoft_VotingClasaifier_OOB_Boostrap_Features.ipynb
Yazooliu/Ai_Lab_
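A short side calculation on where the out-of-bag fraction comes from (an illustrative sketch, not part of the original notebook): when m samples are drawn with replacement from a set of m, the probability that a given sample is never drawn is (1 - 1/m)^m, which approaches 1/e ≈ 0.37 for large m, in the same ballpark as the roughly-30% figure quoted above.

```python
import numpy as np

m = 1000                             # number of draws with replacement
p_never_drawn = (1 - 1 / m) ** m     # chance that a given sample is never drawn
print(p_never_drawn)                 # ~0.368
print(np.exp(-1))                    # the large-m limit, 1/e
```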
4.2 Bagging (sampling with replacement) - Out of Bag (OOB) - sampling over the feature space - bootstrap_features
## Sample with replacement, additionally sampling over the features, with max_features setting the number of features drawn ## from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import BaggingClassifier # max_features - number of features drawn each time # bootstrap_features = True - sample over the features bagging_clf_features = BaggingClassifier( DecisionTreeClassifier(), n_estimators = 100,max_samples = 100, bootstrap = True,oob_score = True,max_features = 2, bootstrap_features = True ) bagging_clf_features.fit(X_train,y_train) bagging_clf_features.score(X_test,y_test) # test model on out of bag data bagging_clf_features.oob_score_
_____no_output_____
MIT
6_Ensemble Learing_HardandSoft_VotingClasaifier_OOB_Boostrap_Features.ipynb
Yazooliu/Ai_Lab_
Here, `True` indicates that the environment has stopped; we can infer that the drone hit something and the episode ended.
env.step(0)
_____no_output_____
Apache-2.0
RL1.ipynb
bsivavenu/Google_Colab_Notebooks
Multi-armed bandit
env = MultiArmedBandit()
_____no_output_____
Apache-2.0
RL1.ipynb
bsivavenu/Google_Colab_Notebooks
Stay classification: MBC evaluation**31.08.2020**
import numpy as np import pandas as pd import os, sys sys.path.append('/home/sandm/Notebooks/stay_classification/src/') %matplotlib inline import matplotlib.pyplot as plt from synthetic_data.trajectory_class import get_pickle_trajectory from synthetic_data.trajectory import get_stay_segs, get_adjusted_stays from stay_classification.metric_box_classifier.metric_box_classifier import stay_classifier_testing
_____no_output_____
MIT
notebooks/classifiers_playground/metric_box_classifier/classifier_1D__metric_box_classifier__3stays_illustrate_metrics.ipynb
m-salewski/stay_classification
Batch data evaluation
dsec = 1/3600.0 t_total = np.arange(0,24,dsec) time_thresh = 1/6 dist_thresh=0.25 nr_stays = 3 data_dir = os.path.abspath('../../')+f"/classifiers_playground/metric_box_classifier/testdata_training_set__canonical_{nr_stays}stays/" os.path.isdir(data_dir) import glob #data_dir = os.path.abspath('../../')+f"/testdata/testdata_training_set__general/" os.path.isdir(data_dir) pkls = glob.glob(data_dir + "*.pkl") from stay_classification.metrics_etc import eval_synth_data from stay_classification.metrics import get_segments_scores, get_segments_errs from stay_classification.metrics_cluster_tools import get_pred_labels, get_labels_from_clusters from synthetic_data.trajectory import get_stay_indices, get_adjusted_stays get_err = lambda trues, preds: np.sum(abs(trues-preds))/trues.size
_____no_output_____
MIT
notebooks/classifiers_playground/metric_box_classifier/classifier_1D__metric_box_classifier__3stays_illustrate_metrics.ipynb
m-salewski/stay_classification
Load, Classify, Measure
# For the correct nr of stays lens3 = [] precs3, a_precs3, w_precs3 = [], [], [] recs3, a_recs3, w_recs3 = [], [], [] errs3, a_errs3, w_errs3 = [], [], [] # For the incorrect nr of stays lens = [] precs, a_precs, w_precs = [], [], [] recs, a_recs, w_recs = [], [], [] errs, a_errs, w_errs = [], [], [] bad_list = [] precrec_limit = 0.80 ii = 0 length_criterion_break = False iqr_trim = False verbose = False total = 1000 for ii in range(0, total): # Load the data trajectory_tag = f"trajectory{ii}_{nr_stays}stays" path_to_file = pkls[ii]#data_dir + trajectory_tag t_arr, r_arr, x_arr, segments = get_pickle_trajectory(path_to_file) t_segs, x_segs = get_stay_segs(get_adjusted_stays(segments, t_arr)) # Get the true event indices (needed for the total error) true_indices = get_stay_indices(get_adjusted_stays(segments, t_arr), t_arr) true_labels = np.zeros(t_arr.shape) for pair in true_indices: true_labels[pair[0]:pair[1]+1] = 1 # Get the stay clusters #clusters = quick_box_method(t_arr, x_arr, dist_thresh, time_thresh, 1, False) all_clusters = stay_classifier_testing(t_arr, x_arr, dist_thresh, time_thresh, verbose) clusters = all_clusters[-1].copy() # Make some measurements final_len=len(clusters) # total scores prec, rec, conmat = eval_synth_data(t_arr, segments, clusters) # seg. scores _, a_prec, w_prec, _, a_rec, w_rec = get_segments_scores(t_arr, segments, clusters) # Total error pred_labels = get_pred_labels(clusters, t_arr.shape) err = get_err(true_labels,pred_labels) # Segment errors _, a_err, w_err = get_segments_errs(t_arr, segments, clusters) # Get the expected number of stays (in general) stays_tag = int((x_segs.size)/3) len_all_clusts = len(clusters) if final_len != stays_tag: lens.append(final_len) precs.append(prec) a_precs.append(a_prec) w_precs.append(w_prec) recs.append(rec) a_recs.append(a_rec) w_recs.append(w_rec) errs.append(err) a_errs.append(a_err) w_errs.append(w_err) else: lens3.append(final_len) precs3.append(prec) a_precs3.append(a_prec) w_precs3.append(w_prec) recs3.append(rec) a_recs3.append(a_rec) w_recs3.append(w_rec) errs3.append(err) a_errs3.append(a_err) w_errs3.append(w_err) # progress output if ii % int(0.1*total) == 0: print(f"{ii:4d} of {total:5d}") correct_frac = (len(lens3)/total) incorrect_frac = (len(lens)/total) print(f"\n * correct number of stays, {correct_frac:6.3f} ", f"\n * prec.: {sum(w_precs3)/len(w_precs3):6.3}", f"\n * rec.: {sum(w_recs3)/len(w_recs3):6.3}", f"\n * incorrect number of stays, {incorrect_frac:6.3f}", f"\n * prec.: {sum(w_precs)/len(w_precs):6.3}", f"\n * rec.: {sum(w_recs)/len(w_recs):6.3}")
* correct number of stays, 0.796 * prec.: 0.956 * rec.: 0.994 * incorrect number of stays, 0.204 * prec.: 0.856 * rec.: 0.95
MIT
notebooks/classifiers_playground/metric_box_classifier/classifier_1D__metric_box_classifier__3stays_illustrate_metrics.ipynb
m-salewski/stay_classification
Visualizations
from stay_classification.metrics_plotting import plot_scores_stats, plot_errs_stats, plot_scores_stats_cominbed os.mkdir('./visualizations/metrics_new/')
_____no_output_____
MIT
notebooks/classifiers_playground/metric_box_classifier/classifier_1D__metric_box_classifier__3stays_illustrate_metrics.ipynb
m-salewski/stay_classification
Prec/rec score distributions Total scores
title = f"Tot. scores: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}" fig, axs = plot_scores_stats(precs3, recs3, precs, recs, title) fig.savefig("./visualizations/metrics_new/" + f"metrics__{nr_stays}stays.png") title = f"Total scores: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}" fig, axs = plot_scores_stats_cominbed(precs3, recs3, precs, recs, title) fig.savefig("./visualizations/metrics_new/" + f"metrics__{nr_stays}stays_seg_scores_combined_tot.png")
_____no_output_____
MIT
notebooks/classifiers_playground/metric_box_classifier/classifier_1D__metric_box_classifier__3stays_illustrate_metrics.ipynb
m-salewski/stay_classification
Segment-averaged score distributions
title = f"Avg. scores: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}" fig, axs = plot_scores_stats(a_precs3, a_recs3, a_precs, a_recs, title) fig.savefig("./visualizations/metrics_new/" + f"metrics__{nr_stays}stays_seg_scores_avg.png") title = f"Avg. Scores: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}" fig, axs = plot_scores_stats_cominbed(a_precs3, a_recs3, a_precs, a_recs, title) fig.savefig("./visualizations/metrics_new/" + f"metrics__{nr_stays}stays_seg_scores_combined_avg.png")
_____no_output_____
MIT
notebooks/classifiers_playground/metric_box_classifier/classifier_1D__metric_box_classifier__3stays_illustrate_metrics.ipynb
m-salewski/stay_classification
Segment, weighted-averaged score distributions
title = f"W-avg. scores: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}" fig, axs = plot_scores_stats(w_precs3, w_recs3, w_precs, w_recs, title) fig.savefig("./visualizations/metrics_new/" + f"metrics__{nr_stays}stays_seg_scores_wavg.png") title = f"W-avg. Scores: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}" fig, axs = plot_scores_stats_cominbed(w_precs3, w_recs3, w_precs, w_recs, title) fig.savefig("./visualizations/metrics_new/" + f"metrics__{nr_stays}stays_seg_scores_combined_wavg.png")
_____no_output_____
MIT
notebooks/classifiers_playground/metric_box_classifier/classifier_1D__metric_box_classifier__3stays_illustrate_metrics.ipynb
m-salewski/stay_classification
Error stats
title = f"Avg. Error: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}" fig, ax = plot_errs_stats(w_errs3, w_errs, title) fig.savefig("./visualizations/metrics_new/" + f"metrics__{nr_stays}stays_seg_errs_avg.png") title = f"W-avg. Error: correct stays, {correct_frac:6.3f}; incorrect stays, {incorrect_frac:6.3f}" fig, ax = plot_errs_stats(w_errs3, w_errs, title) fig.savefig("./visualizations/metrics_new/" + f"metrics__{nr_stays}stays_seg_errs_wavg.png")
_____no_output_____
MIT
notebooks/classifiers_playground/metric_box_classifier/classifier_1D__metric_box_classifier__3stays_illustrate_metrics.ipynb
m-salewski/stay_classification
Capturing text entered by the userWe propose to write a program that asks for and stores the user's name and age, and stores a value indicating whether or not the user is of legal age. We will then display a greeting to this user, their age, and whether they are an adult.Here is what our program looks like
nom = input("Veuillez saisir votre nom: ") age = int(input("Veuillez saisir votre age: ")) estMajeur = age >= 18 print(f'Bonjour {nom}') print(f'vous avez {age} ans') print(f'Etes-vous majeur? {estMajeur}')
Veuillez saisir votre nom: Hippo Veuillez saisir votre age: 15
MIT
Jour 2/chap02-04-saisie-utilisateur.ipynb
bellash13/SmartAcademyPython
Review of loops, strings, tuples and lists Cobra MosmasSeeing the prices and advertisements of the Cobra Mosmas store, a customer asks you to create a computer program that lets them enter the individual price of three products and the price of the three-product combo promotion announced by the store, and determines whether it is better to buy the products separately or in the combo. (Let's take 3 minutes to define the problem clearly)
# sketch your solution here
_____no_output_____
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
Solution
def comprar(p1,p2,p3,pc): if pc <= p1+p2+p3: return 'Combo' else: return 'Por separado' a=float(input('Precio primer producto?')) b=float(input('Precio segundo producto?')) c=float(input('Precio tercer producto?')) d=float(input('Precio combo?')) print("Comprar",comprar(a,b,c,d))
Precio primer producto?1 Precio segundo producto?2 Precio tercer producto?3 Precio combo?4 Comprar Combo
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
The fenceA farmer from the region asks you to create a computer program that determines which of two options (wood or wire) is the better (lower-cost) choice for fencing a rectangular plot of n * m square meters, given the cost of one linear meter of wire, the cost of one meter of wood, and the number of strands of wire or rows of wood. The farmer plans to use only one of the two options, not to combine them. (Let's take 3 minutes to define the problem clearly)
def en_madera(n,m,w,p): return (2*n+2*m)*w*p def en_alambre(n,m,h,a): return (2*n+2*m)*h*a def usar(n,m,h,a,w,p): if en_madera(n,m,w,p) <= en_alambre(n,m,h,a): return 'Madera' else: return 'Alambre' n=float(input('Largo terreno?')) m=float(input('Ancho terreno?')) a=float(input('Costo metro alambre?')) h=int(input('Hilos de alambre?')) p=float(input('Costo metro madera?')) w=int(input('Hileras de madera?')) print("Usar",usar(n,m,a,h,p,w))
Largo terreno?2 Ancho terreno?4 Costo metro alambre?4 Hilos de alambre?3 Costo metro madera?2 Hileras de madera?2 Usar Madera
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
School supply listSome parents, desperate to figure out how much money they need to borrow to pay for their child's school supplies, have asked you to create a computer program that, given a list with the price of each school supply and the quantity of each item on the list, determines the total price of the list. (Let's think about the solution for 5 minutes)
def costo(precio, cantidad): costo = 0 for i in range(0,len(precio)): costo = costo + precio[i] * cantidad[i] return costo precio = [] cantidad = [] while input('Ingresar otro útil?').upper()=='S': precio.append(float(input('Precio útil?'))) cantidad.append(float(input('Cantidad?'))) print("La lista cuesta", costo(precio, cantidad))
Ingresar otro útil?S Precio útil?3999 Cantidad?1 Ingresar otro útil?N La lista cuesta 3999.0
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
DNAIn the latest issue of the scientific journal "ADN al día", relatedness tests between individuals based on genetic code are defined as follows: if the two strings differ in fewer than p letters, there is a parent-child relationship; if they differ in fewer than f > p letters, they belong to the same family. Otherwise there is no relationship. The laboratory Tein Cul Pan asks you to develop a program that, given two DNA strings of the same length, determines whether there is a parent-child relationship, a same-family relationship, or none, following the rules defined by "ADN al día". (Let's think about the solution for 5 minutes)
def diferencia(a,b): cuenta = 0 for i in range(0,len(a)): if a[i] != b[i]: cuenta = cuenta + 1 return cuenta def relacion(a,b,p,f): d = diferencia(a,b) if d <= p: return 'Padre-Hijo' elif d <= f: return 'Familia' else: return 'Ninguna' ind1=input('Cadena ADN individuo 1?') ind2=input('Cadena ADN individuo 2?') p=int(input('Diferencia máxima para ser Padre-Hijo?')) f=int(input('Diferencia máxima para ser Familia?')) print("Relación", relacion(ind1, ind2, p, f))
_____no_output_____
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
Extract the names of Colombian universitiesGiven a list of Colombian universities, obtain the name from the website. We assume that a university name lies between the characters www. and edu.co. For example, from www.unal.edu.co we obtain unal.*Input:*A number n indicating how many website names to process*Output:*List of possible university names.*Example:* Input Output 5 www.unal.edu.co www.udistrital.edu.co www.univalle.edu.co www.javeriana.edu.co www.konradlorenz.edu.co unal udistrital univalle javeriana konradlorenz
# write your solution here
_____no_output_____
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
Solution:
def process(uni): return uni.split(".")[1] def main(): n = int(input()) for i in range(n): uni = input() print(process(uni)) main()
www.k.edu.co
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
Read student information and compute the average grade A number of commands must be processed to handle the grades of a university. We keep a list of students - Command 1: Add a student and their grade `1&nombre_estudiante&nota`- Command 2: Compute the average of the students at a given moment `2`- Command 3: Sort the added students by name `3`- Command 4: Look up a student's grade `4&nombre_estudiante`- Command 5: Display the list of students `5`- Command 6: Exit `6`
# enter your solution here
_____no_output_____
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
To solve the problem we can identify several parts that can be modeled as functions:- Define the list of students.- add a student given their information- compute the average grade of the students at a given moment- sort the added students by name- look up a student's grade - display the list- process the commands- show the menu Solution
# definir la lista de estudiantes un estudiante puede ser modelado como una tupla (por ahora) def agregar_estudiante(estudiantes, est): estudiantes.append(est) def promedio(estudiantes): prom = 0 #print(estudiantes) for estudiante in estudiantes: prom += float(estudiante[1]) print("El promedio de los estudiantes es: " + str(prom/len(estudiantes))) def ordenar(estudiantes): estudiantes.sort() def consultar(estudiantes, nombre): encontrado = False for estudiante in estudiantes: if estudiante[0] == nombre: encontrado = True print(estudiante[1]) if not encontrado: print("Estudiante no encontrado") def visualizar(estudiantes): print("Lista de estudiantes".center(30, "#")) if len(estudiantes) == 0: print("No hay estudiantes registrados.") for e in estudiantes: print("Nombre: " + e[0] + ", nota:" + str(e[1])) def procesar_comandos(): bandera = True estudiantes = [] comando = [0] while bandera or comando[0] != "6": bandera = False mostrar_menu() comando = input().split("&") print(comando[0]) if comando[0] == "1": agregar_estudiante(estudiantes, (comando[1], float(comando[2]))) elif comando[0] == "2": promedio(estudiantes) elif comando[0] == "3": ordenar(estudiantes) elif comando[0] == "4": consultar(estudiantes, comando[1]) elif comando[0] == "5": visualizar(estudiantes) def mostrar_menu(): print("Seleccione una opción:") print("Comando 1: Agregar estudiante y nota `1&nombre_estudiante&nota`") print("Comando 2: Calcular promedio de los estudiantes en un momento dado.") print("Comando 3: Ordenar estudiantes agregados por nombre") print("Comando 4: Consultar la nota de un estudiante `4&nombre_estudiante`") print("Comando 5: Visualizar") print("Comando 6: Salir") procesar_comandos() """ 1&Antonia&5.0 1&Juan&2.4 1&Pedro&4.3 """
_____no_output_____
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
Understanding Groot's feelingsGroot's language is very complicated for expressing feelings. Feelings have n layers.If n = 1 the feeling is "I hate it", if n = 2 it is "I hate that I love it", and if n = 3 it is "I hate that I love that I hate it", and so on.*Input:*The number n of layers, where $n \geq 1$*Output:*Print the phrase Groot is trying to say.*Example 1:* InputOutput 1I hate it Example 2: InputOutput 2I hate that I love it Example 3: InputOutput 3I hate that I love that I hate it
# sketch your solution here; if you can't come up with one, open the next cell
_____no_output_____
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
A possible approach for GrootOne option is to build a tuple or list with two elements:
emocion = ["I hate", "I love"]
_____no_output_____
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
Inside the loop we can alternate between the positions of the list like this:
salida = [] n = int(input()) for i in range(n): salida.append(emocion[i%2]) print(salida)
_____no_output_____
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
We can use join with "that"...
res = " that ".join(salida)
_____no_output_____
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
Append "it":
res += " it " print(res)
_____no_output_____
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
Can this solution be generated with a list comprehension?
# try it here
_____no_output_____
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
Solution
emocion = ["I hate", "I love"] salida = " that ".join([emocion[i % 2] for i in range(n)])+" it" print(salida)
_____no_output_____
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
Fraction simplifierUsing recursive functions, write a program that simplifies a fraction written in the form a/b.Example: InputOutput 6/43/2
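The notebook leaves this exercise without a solution; below is one possible recursive sketch (the function names are my own, not from the original) that uses Euclid's algorithm for the greatest common divisor:

```python
def mcd(a, b):
    # greatest common divisor via Euclid's algorithm, written recursively
    if b == 0:
        return a
    return mcd(b, a % b)

def simplificar(fraccion):
    # "6/4" -> "3/2"
    a, b = (int(x) for x in fraccion.split("/"))
    d = mcd(a, b)
    return f"{a // d}/{b // d}"

print(simplificar("6/4"))  # expected: 3/2
```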
_____no_output_____
MIT
Cycle_1/Week_4/Session_16/12_Ejercicios_de_Repaso.ipynb
htrismicristo/MisionTIC_2022
King-County-House-Price-Prediction
# Importing libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import sklearn from sklearn.metrics import mean_absolute_error, mean_squared_error,r2_score from sklearn.preprocessing import MinMaxScaler, StandardScaler # Ignore warnings import warnings warnings.filterwarnings('ignore') # we can set numbers for how many rows and columns will be displayed pd.set_option('display.min_rows', 10) pd.set_option('display.max_columns', 30)
_____no_output_____
Apache-2.0
King-County-House-Price-Prediction.ipynb
norochalise/House-Price-Prediction-to-Deoployment
1. Loading Dataset and Explore
df = pd.read_csv('/content/dataset/kc_house_data.csv') df.head() df.shape df.info() # Delete Unwanted columns df.drop(['id', 'date'], axis=1, inplace=True) # Display missing values information df.isna().sum().sort_values(ascending=False) df.describe().T
_____no_output_____
Apache-2.0
King-County-House-Price-Prediction.ipynb
norochalise/House-Price-Prediction-to-Deoployment
2. EDA
# classification cat_col and num_col for plotting cat_col = [] num_col = [] col_name = df.columns for idx, value in enumerate(col_name): con = df[value].nunique() if(con<20): cat_col.append(value) else: num_col.append(value) def bar_plot(data, categorical_features): """Bar plot for categorical fetures of all columns""" print("Bar Plot for Categorical features") for col in categorical_features: counts = data[col].value_counts().sort_index() fig = plt.figure(figsize=(9, 6)) ax = fig.gca() counts.plot.bar(ax = ax, color='steelblue') ax.set_title(col + ' counts') ax.set_xlabel(col) ax.set_ylabel("Frequency") return plt.show() def histogram_plot(data, numeric_columns): """Histogram for numerical fetures of all columns""" print("Histogram for numeric_columns") for col in numeric_columns: fig = plt.figure(figsize=(9, 6)) ax = fig.gca() feature = data[col] feature.hist(bins=50, ax = ax) ax.axvline(feature.mean(), color='magenta', linestyle='dashed', linewidth=2) ax.axvline(feature.median(), color='cyan', linestyle='dashed', linewidth=2) ax.set_title(col) return plt.show() def scater_plot(data, numeric_columns, target_col): """Scatter for numerical fetures of columns for target columns""" print("Scater plot") for col in numeric_columns: fig = plt.figure(figsize=(9, 6)) ax = fig.gca() feature = df[col] label = data[f'{target_col}'] correlation = feature.corr(label) plt.scatter(x=feature, y=label) plt.xlabel(col) plt.ylabel('Price') ax.set_title('Price vs ' + col + '- correlation: ' + str(correlation)) return plt.show() # Bar plot for categorical features bar_plot(df, cat_col) # Histogram for numerical columns histogram_plot(df, num_col) # Scatter plot for price vs numerical columns scater_plot(df, num_col, 'price') # Correlation plot corr = df.corr().round(2) plt.figure(figsize=(12,8)) sns.heatmap(corr, annot=True, cmap="YlGnBu");
_____no_output_____
Apache-2.0
King-County-House-Price-Prediction.ipynb
norochalise/House-Price-Prediction-to-Deoployment
3. Split dataset
# create function for Split data def split_data(data, target_col): # Remove rows with missing target, seprate target from predictors data_copy = data.copy() data_copy.dropna(axis=0, subset=[target_col], inplace=True) y = data_copy[target_col] data_copy.drop([target_col], axis=1, inplace=True) # Break off validation set from training data from sklearn.model_selection import train_test_split X_train, X_valid, y_train, y_valid = train_test_split(data_copy, y, train_size=0.8, test_size=0.2, random_state=4) return X_train, X_valid, y_train, y_valid # Split data from main dataset to train, test and target X_train, X_valid, y_train, y_valid = split_data(df, 'price')
_____no_output_____
Apache-2.0
King-County-House-Price-Prediction.ipynb
norochalise/House-Price-Prediction-to-Deoployment
4. Model train Create column classification function for pipeline
def col_classification(data, num=20): # Select categorical columns cat_cols = [cname for cname in data.columns if data[cname].nunique() < num and data[cname].dtype =='object'] # Select numerical columns num_cols = [cname for cname in data.columns if data[cname].dtype in ['int64', 'float64']] return cat_cols, num_cols # Categorical cols and numerical columns classfication categorical_cols, numerical_cols = col_classification(X_train, 15)
_____no_output_____
Apache-2.0
King-County-House-Price-Prediction.ipynb
norochalise/House-Price-Prediction-to-Deoployment
Create model evaluation function
def evaluation_model(X_test, y_test, title="Target price prediction"): """Evaluation Model for regression problem, We need to use model name is clf""" # Evaluate the model using the test data preds = reg.predict(X_test) mse = mean_squared_error(y_test, preds) print("MSE:", mse) rmse = np.sqrt(mse) print('Mae: ', mean_absolute_error(y_valid, preds)) print("RMSE:", rmse) r2 = r2_score(y_valid, preds) print("R2:", r2) # Plot predicted vs actual plt.scatter(y_test, preds) plt.xlabel('Actual Labels') plt.ylabel('Predicted Labels') plt.title(title) # overlay the regression line z = np.polyfit(y_test, preds, 1) p = np.poly1d(z) plt.plot(y_valid,p(y_valid), color='magenta') return plt.show()
_____no_output_____
Apache-2.0
King-County-House-Price-Prediction.ipynb
norochalise/House-Price-Prediction-to-Deoployment
Try some regression models and evaluate their performance on our dataset
from sklearn.linear_model import LinearRegression from sklearn.linear_model import Ridge from sklearn.linear_model import Lasso from sklearn.linear_model import ElasticNet from sklearn.ensemble import GradientBoostingRegressor from sklearn.tree import DecisionTreeRegressor from sklearn.ensemble import RandomForestRegressor models={'Linear Regression': LinearRegression(), 'Decision Tree Regressior' : DecisionTreeRegressor(random_state=0), 'Random Forrest Regressor' : RandomForestRegressor(n_estimators=10, random_state=0), 'Ridge': Ridge(), 'Lasso': Lasso(), 'ElasticN': ElasticNet(), 'Gradient Boosting Regressor': GradientBoostingRegressor() } print('####################################################################### \n') for name, model in models.items(): name_model = model reg = name_model.fit(X_train, y_train) print(f'{name}:') evaluation_model(X_valid, y_valid, 'Housing Price Prediction') print('####################################################################### \n')
####################################################################### Linear Regression: MSE: 39200213360.80082 Mae: 126368.12273608406 RMSE: 197990.43754888978 R2: 0.6969193739897344
Apache-2.0
King-County-House-Price-Prediction.ipynb
norochalise/House-Price-Prediction-to-Deoployment
After this evaluation, the Gradient Boosting Regressor performs best on our dataset, so we will use it. **Model training with Gradient Boosting Regressor using a sklearn pipeline**
from sklearn.compose import ColumnTransformer from sklearn.pipeline import Pipeline from sklearn.impute import SimpleImputer from sklearn.preprocessing import OneHotEncoder from sklearn.metrics import mean_absolute_error from sklearn.preprocessing import StandardScaler from sklearn.ensemble import GradientBoostingRegressor # Preprocessing for numerical data numerical_transformer = SimpleImputer(strategy='median') # Preprocessing for categorical data categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='most_frequent')), ('onehot', OneHotEncoder(handle_unknown='ignore')) ]) # Bundle preprocessing for numerical and categorical data preprocessor = ColumnTransformer( transformers=[ ('num', numerical_transformer, numerical_cols), ('cat', categorical_transformer, categorical_cols) ]) model = GradientBoostingRegressor(random_state=4, n_estimators=1600) # Bundle Preprocessing and modeling code in pipeline reg = Pipeline(steps=[ ('preprocessor', preprocessor), ('scaler', StandardScaler()), ('model', model), ]) # Preprocessing of training data, fit model reg.fit(X_train, y_train) evaluation_model(X_valid, y_valid, 'Housing Price Prediction')
MSE: 13811738810.108906
Mae:  66634.5069623637
RMSE: 117523.35431780743
R2: 0.8932130698797662
Apache-2.0
King-County-House-Price-Prediction.ipynb
norochalise/House-Price-Prediction-to-Deoployment
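The `n_estimators=1600` value above is a fixed choice; a natural next step would be to search over it and the learning rate with cross-validation. Below is a minimal sketch using `GridSearchCV` on the pipeline. The grid values are illustrative only, not taken from this notebook, and the `model__...` parameter names assume the pipeline step names defined above.

```
import numpy as np
from sklearn.model_selection import GridSearchCV

# Illustrative grid only; the real search space depends on your time budget
param_grid = {
    'model__n_estimators': [400, 800, 1600],
    'model__learning_rate': [0.05, 0.1],
}

search = GridSearchCV(
    reg,                      # the preprocessing + GradientBoostingRegressor pipeline above
    param_grid=param_grid,
    scoring='neg_mean_squared_error',
    cv=5,
    n_jobs=-1,
)
search.fit(X_train, y_train)

print('Best params:', search.best_params_)
print('Best CV RMSE:', np.sqrt(-search.best_score_))
```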
5. Save Model, Load, and Predict
import pickle

!mkdir 'model_save'

pickle.dump(reg, open("model_save/model.pkl", "wb"))

# Load the model from the file
loaded_model = pickle.load(open('model_save/model.pkl', 'rb'))

X_new = pd.read_csv('dataset/user_input.csv')
X_new

result = loaded_model.predict(X_new)
print('Prediction: {:.0f} price'.format(np.round(result[0])))
Prediction: 562858 price
Apache-2.0
King-County-House-Price-Prediction.ipynb
norochalise/House-Price-Prediction-to-Deoployment
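A note on the design choice: for scikit-learn pipelines, `joblib` is a common alternative to `pickle` and is usually more efficient for objects that hold large NumPy arrays. Here is a hedged sketch of the same save/load step with joblib; the file path is illustrative and assumes the `model_save` directory created above.

```
import joblib

# Save the fitted pipeline (illustrative path)
joblib.dump(reg, 'model_save/model.joblib')

# Later, e.g. in the deployed app, reload it and predict on new input
loaded_model = joblib.load('model_save/model.joblib')
result = loaded_model.predict(X_new)
print('Prediction: {:.0f} price'.format(result[0]))
```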
Table of Contents

1 Testing lolviz
  1.1 Testing naively
  1.2 Testing from within a Jupyter notebook
    1.2.1 List
    1.2.2 List of lists
    1.2.3 List of lists of lists???
    1.2.4 Tree
    1.2.5 Objects
    1.2.6 Calls
    1.2.7 String
  1.3 Conclusion

Testing [lolviz](https://github.com/parrt/lolviz)

I liked how the [lolviz](https://github.com/parrt/lolviz) module looked. Let's try it!
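In case lolviz isn't installed yet, it's available on PyPI (this assumes a pip-based environment; the Graphviz `dot` binary may also need to be installed on the system, e.g. via your OS package manager):

```
!pip install lolviz
```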
%load_ext watermark
%watermark -v -m -p lolviz

from lolviz import *
_____no_output_____
MIT
Testing_the_lolviz_Python_module.ipynb
doc22940/notebooks-2
Testing naively
data = ['hi', 'mom', {3, 4}, {"parrt": "user"}]

g = listviz(data)
print(g.source)  # if you want to see the graphviz source
g.view()         # render and show graphviz.files.Source object
digraph G { nodesep=.05; node [penwidth="0.5", width=.1,height=.1]; node139621679300936 [shape="box", space="0.0", margin="0.01", fontcolor="#444443", fontname="Helvetica", label=<<table BORDER="0" CELLBORDER="0" CELLSPACING="0"> <tr> <td cellspacing="0" cellpadding="0" bgcolor="#fefecd" border="1" sides="br" valign="top"><font color="#444443" point-size="9">0</font></td> <td cellspacing="0" cellpadding="0" bgcolor="#fefecd" border="1" sides="br" valign="top"><font color="#444443" point-size="9">1</font></td> <td cellspacing="0" cellpadding="0" bgcolor="#fefecd" border="1" sides="br" valign="top"><font color="#444443" point-size="9">2</font></td> <td cellspacing="0" cellpadding="0" bgcolor="#fefecd" border="1" sides="b" valign="top"><font color="#444443" point-size="9">3</font></td> </tr> <tr> <td port="0" bgcolor="#fefecd" border="1" sides="r" align="center"><font point-size="11">'hi'</font></td> <td port="1" bgcolor="#fefecd" border="1" sides="r" align="center"><font point-size="11">'mom'</font></td> <td port="2" bgcolor="#fefecd" border="1" sides="r" align="center"><font point-size="11">{3, 4}</font></td> <td port="3" bgcolor="#fefecd" border="0" align="center"><font point-size="11">{'parrt': 'user'}</font></td> </tr></table> >]; }
MIT
Testing_the_lolviz_Python_module.ipynb
doc22940/notebooks-2
It opened a window showing me this image:

Testing from within a Jupyter notebook

I test here all [the features of lolviz](https://github.com/parrt/lolviz#functionality):

List
squares = [i**2 for i in range(10)]
squares

listviz(squares)
_____no_output_____
MIT
Testing_the_lolviz_Python_module.ipynb
doc22940/notebooks-2
List of lists
n, m = 3, 4
example_matrix = [[0 if i != j else 1 for i in range(n)] for j in range(m)]
example_matrix

lolviz(example_matrix)
_____no_output_____
MIT
Testing_the_lolviz_Python_module.ipynb
doc22940/notebooks-2
List of lists of lists???
n, m, o = 2, 3, 4
example_3D_matrix = [[[1 if i < j < k else 0 for i in range(n)] for j in range(m)] for k in range(o)]
example_3D_matrix

lolviz(example_3D_matrix)
_____no_output_____
MIT
Testing_the_lolviz_Python_module.ipynb
doc22940/notebooks-2
It works, even if it is not as pretty.

Tree

Only for binary trees, apparently. Let's try with a dictionary that looks like a binary tree:
anakin = {
    "name": "Anakin Skywalker",
    "son": {
        "name": "Luke Skywalker",
    },
    "daughter": {
        "name": "Leia Skywalker",
    },
}

from pprint import pprint
pprint(anakin)

treeviz(anakin, leftfield='son', rightfield='daughter')
_____no_output_____
MIT
Testing_the_lolviz_Python_module.ipynb
doc22940/notebooks-2
It doesn't work out of the box for dictionaries, sadly. Let's check another example:
class Tree:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

root = Tree('parrt',
            Tree('mary',
                 Tree('jim',
                      Tree('srinivasan'),
                      Tree('april'))),
            Tree('xue', None, Tree('mike')))

treeviz(root)
_____no_output_____
MIT
Testing_the_lolviz_Python_module.ipynb
doc22940/notebooks-2
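As a side note, one way to make the dictionary version visualizable would be to wrap it in attribute-bearing objects first, since `treeviz` looks children up by field name. Here's a small sketch with `types.SimpleNamespace`; whether it renders as nicely as the class-based tree is an assumption I haven't verified.

```
from types import SimpleNamespace

def dict_to_ns(d):
    """Recursively turn nested dicts into attribute-bearing objects."""
    if not isinstance(d, dict):
        return d
    return SimpleNamespace(**{k: dict_to_ns(v) for k, v in d.items()})

anakin_ns = dict_to_ns(anakin)
# Assumption: treeviz looks up children via the given attribute names
treeviz(anakin_ns, leftfield='son', rightfield='daughter')
```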
Objects
objviz(anakin)

objviz(anakin.values())

objviz(anakin.items())
_____no_output_____
MIT
Testing_the_lolviz_Python_module.ipynb
doc22940/notebooks-2