markdown | code | output | license | path | repo_name
---|---|---|---|---|---
Census aggregation scratchpad. By [Ben Welsh](https://palewi.re/who-is-ben-welsh/) | import math | _____no_output_____ | MIT | notebooks/scratchpad.ipynb | nkrishnaswami/census-data-aggregator |
Approximation  | males_under_5, males_under_5_moe = 10154024, 3778
females_under_5, females_under_5_moe = 9712936, 3911
total_under_5 = males_under_5 + females_under_5
total_under_5
total_under_5_moe = math.sqrt(males_under_5_moe**2 + females_under_5_moe**2)
total_under_5_moe | _____no_output_____ | MIT | notebooks/scratchpad.ipynb | nkrishnaswami/census-data-aggregator |
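Written out, the approximation used in the cell above combines the margins of error as a root sum of squares:

$$ \mathrm{MOE}_{\text{sum}} \approx \sqrt{\mathrm{MOE}_1^2 + \mathrm{MOE}_2^2 + \cdots + \mathrm{MOE}_k^2} $$

For the under-5 total above, that is $\sqrt{3778^2 + 3911^2} \approx 5{,}438$.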
| def approximate_margin_of_error(*pairs):
    """
    Returns the approximate margin of error after combining all of the provided Census Bureau estimates, taking into account each value's margin of error.

    Expects a series of arguments, each a paired list with the estimated value first and the margin of error second.
    """
    # According to the Census Bureau, when approximating a sum use only the largest zero estimate margin of error, once
    # https://www.documentcloud.org/documents/6162551-20180418-MOE.html#document/p52
    zeros = [p for p in pairs if p[0] == 0]
    if len(zeros) > 1:
        max_zero_margin = max([p[1] for p in zeros])
        not_zero_margins = [p[1] for p in pairs if p[0] != 0]
        margins = [max_zero_margin] + not_zero_margins
    else:
        margins = [p[1] for p in pairs]
    return math.sqrt(sum([m**2 for m in margins]))

approximate_margin_of_error(
    (males_under_5, males_under_5_moe),
    (females_under_5, females_under_5_moe)
)

approximate_margin_of_error(
    [0, 22],
    [0, 22],
    [0, 29],
    [41, 37]
) | _____no_output_____ | MIT | notebooks/scratchpad.ipynb | nkrishnaswami/census-data-aggregator |
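As a quick sanity check (added here for illustration, not part of the original scratchpad), the helper should reproduce the hand calculation from the first cell; this assumes the variables defined above are still in scope:

```python
import math

# The helper's answer should match the manual root-sum-of-squares computed earlier.
combined_moe = approximate_margin_of_error(
    (males_under_5, males_under_5_moe),
    (females_under_5, females_under_5_moe),
)
assert math.isclose(combined_moe, total_under_5_moe)
```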
Aggregating totals | def total(*pairs):
"""
Returns the combined value of all the provided Census Bureau estimates, along with an approximated margin of error.
Expects a series of arguments, each a paired list with the estimated value first and the margin of error second.
"""
return sum([p[0] for p in pairs]), approximate_margin_of_error(*pairs)
total(
(males_under_5, males_under_5_moe),
(females_under_5, females_under_5_moe)
)
total(
[0, 22],
[0, 22],
[0, 29],
[41, 37]
) | _____no_output_____ | MIT | notebooks/scratchpad.ipynb | nkrishnaswami/census-data-aggregator |
Aggregating medians  | def approximate_median(range_list, design_factor=1.5):
"""
Returns the estimated median from a set of ranged totals.
Useful for generated medians for measures like median household income and median agn when aggregating census geographies.
Expects a list of dictionaries with three keys:
min: The minimum value in the range
max: The maximum value in the range
n: The number of people, households or other universe figure in the range
"""
# Sort the list
range_list.sort(key=lambda x: x['min'])
# For each range calculate its min and max value along the universe's scale
cumulative_n = 0
for range_ in range_list:
range_['n_min'] = cumulative_n
cumulative_n += range_['n']
range_['n_max'] = cumulative_n
# What is the total number of observations in the universe?
n = sum([d['n'] for d in range_list])
# What is the estimated midpoint of the n?
n_midpoint = n / 2.0
# Now use those to determine which group contains the midpoint.
try:
n_midpoint_range = next(d for d in range_list if n_midpoint >= d['n_min'] and n_midpoint <= d['n_max'])
except StopIteration:
raise StopIteration("The n's midpoint does not fall within a data range.")
# How many households in the midrange are needed to reach the midpoint?
n_midrange_gap = n_midpoint - n_midpoint_range['n_min']
# What is the proportion of the group that would be needed to get the midpoint?
n_midrange_gap_percent = n_midrange_gap / n_midpoint_range['n']
# Apply this proportion to the width of the midrange
n_midrange_gap_adjusted = (n_midpoint_range['max'] - n_midpoint_range['min']) * n_midrange_gap_percent
# Estimate the median
estimated_median = n_midpoint_range['min'] + n_midrange_gap_adjusted
# Get the standard error for this dataset
standard_error = (design_factor * math.sqrt((99/n)*(50**2))) / 100
# Use the standard error to calculate the p values
p_lower = (.5 - standard_error)
p_upper = (.5 + standard_error)
# Estimate the p_lower and p_upper n values
p_lower_n = n * p_lower
p_upper_n = n * p_upper
# Find the ranges the p values fall within
try:
p_lower_range_i, p_lower_range = next(
(i, d) for i, d in enumerate(range_list)
if p_lower_n >= d['n_min'] and p_lower_n <= d['n_max']
)
except StopIteration:
raise StopIteration("The n's lower p value does not fall within a data range.")
try:
p_upper_range_i, p_upper_range = next(
(i, d) for i, d in enumerate(range_list)
if p_upper_n >= d['n_min'] and p_upper_n <= d['n_max']
)
except StopIteration:
raise StopIteration("The n's higher p value does not fall within a data range.")
# Use these values to estimate the lower bound of the confidence interval
p_lower_a1 = p_lower_range['min']
try:
p_lower_a2 = range_list[p_lower_range_i+1]['min']
except IndexError:
p_lower_a2 = p_lower_range['max']
p_lower_c1 = p_lower_range['n_min'] / n
try:
p_lower_c2 = range_list[p_lower_range_i+1]['n_min'] / n
except IndexError:
p_lower_c2 = p_lower_range['n_max'] / n
lower_bound = ((p_lower - p_lower_c1) / (p_lower_c2 - p_lower_c1)) * (p_lower_a2 - p_lower_a1) + p_lower_a1
# Same for the upper bound
p_upper_a1 = p_upper_range['min']
try:
p_upper_a2 = range_list[p_upper_range_i+1]['min']
except IndexError:
p_upper_a2 = p_upper_range['max']
p_upper_c1 = p_upper_range['n_min'] / n
try:
p_upper_c2 = range_list[p_upper_range_i+1]['n_min'] / n
except IndexError:
p_upper_c2 = p_upper_range['n_max'] / n
upper_bound = ((p_upper - p_upper_c1) / (p_upper_c2 - p_upper_c1)) * (p_upper_a2 - p_upper_a1) + p_upper_a1
# Calculate the standard error of the median
standard_error_median = 0.5 * (upper_bound - lower_bound)
# Calculate the margin of error at the 90% confidence level
margin_of_error = 1.645 * standard_error_median
# Return the result
return estimated_median, margin_of_error
income = [
dict(min=-2500, max=9999, n=186),
dict(min=10000, max=14999, n=78),
dict(min=15000, max=19999, n=98),
dict(min=20000, max=24999, n=287),
dict(min=25000, max=29999, n=142),
dict(min=30000, max=34999, n=90),
dict(min=35000, max=39999, n=107),
dict(min=40000, max=44999, n=104),
dict(min=45000, max=49999, n=178),
dict(min=50000, max=59999, n=106),
dict(min=60000, max=74999, n=177),
dict(min=75000, max=99999, n=262),
dict(min=100000, max=124999, n=77),
dict(min=125000, max=149999, n=100),
dict(min=150000, max=199999, n=58),
dict(min=200000, max=250001, n=18)
]
approximate_median(income) | _____no_output_____ | MIT | notebooks/scratchpad.ipynb | nkrishnaswami/census-data-aggregator |
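Restating the formulas the function above implements (nothing beyond what the code already does): the median is interpolated linearly inside the bracket that contains the universe's midpoint,

$$ \widehat{m} = L + \frac{n/2 - c_L}{n_g}\,(U - L), $$

where $L$ and $U$ are that bracket's bounds, $c_L$ is the cumulative count below it, and $n_g$ is its count. The standard error of a 50 percent proportion is approximated as

$$ SE_{50} = \frac{DF \cdot \sqrt{\frac{99}{n} \cdot 50^2}}{100}, $$

with design factor $DF = 1.5$ by default. The proportions $0.5 \pm SE_{50}$ are interpolated back onto the value scale the same way, and the margin of error is $1.645 \times \frac{1}{2}(\text{upper} - \text{lower})$ at the 90 percent confidence level.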
Install and setup: use conda instead (max Python version 3.7) rather than !pip install tensorflow; use conda install graphviz instead (it also requires conda install python-graphviz); hiddenlayer must be installed with pip - there is no conda package (!pip install hiddenlayer). | import graphviz
d = graphviz.Digraph()
d.edge('hello','world')
d
!conda env list | # conda environments:
#
base C:\ProgramData\Anaconda3
myenv * C:\Users\Rob.DESKTOP-HBG5EOT\.conda\envs\myenv
tf37 C:\Users\Rob.DESKTOP-HBG5EOT\.conda\envs\tf37
| MIT | pyTorch_PS/PT08-InstallAndSetupTensorflowHiddenLayer.ipynb | rsunderscore/learning |
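A consolidated setup cell along the lines of the notes above might look like the following. This is a sketch rather than the author's exact commands: the environment name `tf37` is borrowed from the `conda env list` output, and the Python 3.7 pin reflects the "max python version 3.7" note.

```python
# Sketch of the environment setup described above (run from an Anaconda prompt or notebook).
!conda create -y -n tf37 python=3.7                   # TensorFlow required Python <= 3.7 at the time
!conda install -y -n tf37 tensorflow                  # conda package preferred over pip here
!conda install -y -n tf37 graphviz python-graphviz    # both packages are needed
!pip install hiddenlayer                              # pip only - no conda package
```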
Vertex client library: AutoML tabular binary classification model for batch prediction Run in Colab View on GitHub OverviewThis tutorial demonstrates how to use the Vertex client library for Python to create tabular binary classification models and do batch prediction using Google Cloud's [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users). DatasetThe dataset used for this tutorial is the [Bank Marketing](gs://cloud-ml-tables-data/bank-marketing.csv). This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. ObjectiveIn this tutorial, you create an AutoML tabular binary classification model from a Python script, and then do a batch prediction using the Vertex client library. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Google Cloud Console.The steps performed include:- Create a Vertex `Dataset` resource.- Train the model.- View the model evaluation.- Make a batch prediction.There is one key difference between using batch prediction and using online prediction:* Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.* Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. CostsThis tutorial uses billable components of Google Cloud (GCP):* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. InstallationInstall the latest version of Vertex client library. | import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Install the latest GA version of *google-cloud-storage* library as well. | ! pip3 install -U google-cloud-storage $USER_FLAG | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Restart the kernel. Once you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages. | if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Before you begin GPU runtime*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.5. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. | PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations) | REGION = "us-central1" # @param {type: "string"} | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial. | from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Authenticate your Google Cloud account**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.**Click Create service account**.In the **Service account name** field, enter a name, and click **Create**.In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.Click Create. A JSON file that contains your key downloads to your local environment.Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell. | # If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS '' | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
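For readers who prefer scripting the service-account steps instead of clicking through the Cloud Console, a rough gcloud equivalent is sketched below. The account name `vertex-tutorial` and the `key.json` path are placeholders, not values from this tutorial; the two roles are the IAM names that correspond to the console roles mentioned above.

```python
# Hypothetical CLI alternative to the Console steps; names and paths are placeholders.
SA = f"vertex-tutorial@{PROJECT_ID}.iam.gserviceaccount.com"
! gcloud iam service-accounts create vertex-tutorial --project $PROJECT_ID
! gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$SA --role roles/aiplatform.admin
! gcloud projects add-iam-policy-binding $PROJECT_ID --member serviceAccount:$SA --role roles/storage.objectAdmin
! gcloud iam service-accounts keys create key.json --iam-account $SA
%env GOOGLE_APPLICATION_CREDENTIALS key.json
```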
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization. | BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket. | ! gsutil mb -l $REGION $BUCKET_NAME | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Finally, validate access to your Cloud Storage bucket by examining its contents: | ! gsutil ls -al $BUCKET_NAME | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex client libraryImport the Vertex client library into our Python environment. | import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Struct, Value | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Vertex constantsSetup up the following constants for Vertex:- `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources. | # API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
AutoML constantsSet constants unique to AutoML datasets and training:- Dataset Schemas: Tells the `Dataset` resource service which type of dataset it is.- Data Labeling (Annotations) Schemas: Tells the `Dataset` resource service how the data is labeled (annotated).- Dataset Training Schemas: Tells the `Pipeline` resource service the task (e.g., classification) to train the model for. | # Tabular Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml"
# Tabular Labeling type
LABEL_SCHEMA = (
"gs://google-cloud-aiplatform/schema/dataset/ioformat/table_io_format_1.0.0.yaml"
)
# Tabular Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml" | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Hardware AcceleratorsSet the hardware accelerators (e.g., GPU), if any, for prediction.Set the variable `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)For GPU, available accelerators include: - aip.AcceleratorType.NVIDIA_TESLA_K80 - aip.AcceleratorType.NVIDIA_TESLA_P100 - aip.AcceleratorType.NVIDIA_TESLA_P4 - aip.AcceleratorType.NVIDIA_TESLA_T4 - aip.AcceleratorType.NVIDIA_TESLA_V100Otherwise specify `(None, None)` to use a container image to run on a CPU. | if os.getenv("IS_TESTING_DEPOLY_GPU"):
    DEPLOY_GPU, DEPLOY_NGPU = (
        aip.AcceleratorType.NVIDIA_TESLA_K80,
        int(os.getenv("IS_TESTING_DEPOLY_GPU")),
    )
else:
    DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Container (Docker) imageFor AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected. Machine TypeNext, set the machine type to use for prediction.- Set the variable `DEPLOY_COMPUTE` to configure the compute resources for the VM you will use for prediction. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU - `n1-highcpu`: 0.9 GB of memory per vCPU - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs* | if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
    MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
    MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
TutorialNow you are ready to start creating your own AutoML tabular binary classification model. Set up clientsThe Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.- Dataset Service for `Dataset` resources.- Model Service for `Model` resources.- Pipeline Service for training.- Job Service for batch prediction and custom training. | # client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
    client = aip.DatasetServiceClient(client_options=client_options)
    return client


def create_model_client():
    client = aip.ModelServiceClient(client_options=client_options)
    return client


def create_pipeline_client():
    client = aip.PipelineServiceClient(client_options=client_options)
    return client


def create_job_client():
    client = aip.JobServiceClient(client_options=client_options)
    return client


clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["job"] = create_job_client()

for client in clients.items():
    print(client) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
DatasetNow that your clients are ready, your first step is to create a `Dataset` resource instance. This step differs from Vision, Video and Language. For those products, after the `Dataset` resource is created, one then separately imports the data, using the `import_data` method.For tabular, importing of the data is deferred until the training pipeline starts training the model. What do we do different? Well, first you won't be calling the `import_data` method. Instead, when you create the dataset instance you specify the Cloud Storage location of the CSV file or BigQuery location of the data table, which contains your tabular data as part of the `Dataset` resource's metadata. Cloud Storage`metadata = {"input_config": {"gcs_source": {"uri": [gcs_uri]}}}`The format for a Cloud Storage path is: gs://[bucket_name]/[folder(s)/[file] BigQuery`metadata = {"input_config": {"bigquery_source": {"uri": [gcs_uri]}}}`The format for a BigQuery path is: bq://[collection].[dataset].[table]Note that the `uri` field is a list, whereby you can input multiple CSV files or BigQuery tables when your data is split across files. Data preparationThe Vertex `Dataset` resource for tabular has a couple of requirements for your tabular data.- Must be in a CSV file or a BigQuery query. CSVFor tabular binary classification, the CSV file has a few requirements:- The first row must be the heading -- note how this is different from Vision, Video and Language where the requirement is no heading.- All but one column are features.- One column is the label, which you will specify when you subsequently create the training pipeline. Location of Cloud Storage training data.Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage. | IMPORT_FILE = "gs://cloud-ml-tables-data/bank-marketing.csv" | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Quick peek at your dataYou will use a version of the Bank Marketing dataset that is stored in a public Cloud Storage bucket, using a CSV index file.Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.You also need for training to know the heading name of the label column, which is save as `label_column`. For this dataset, it is the last column in the CSV file. | count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head
heading = ! gsutil cat $IMPORT_FILE | head -n1
label_column = str(heading).split(",")[-1].split("'")[0]
print("Label Column Name", label_column)
if label_column is None:
raise Exception("label column missing") | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
DatasetNow that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it. Create `Dataset` resource instanceUse the helper function `create_dataset` to create the instance of a `Dataset` resource. This function does the following:1. Uses the dataset client service.2. Creates an Vertex `Dataset` resource (`aip.Dataset`), with the following parameters: - `display_name`: The human-readable name you choose to give it. - `metadata_schema_uri`: The schema for the dataset type. - `metadata`: The Cloud Storage or BigQuery location of the tabular data.3. Calls the client dataset service method `create_dataset`, with the following parameters: - `parent`: The Vertex location root path for your `Database`, `Model` and `Endpoint` resources. - `dataset`: The Vertex dataset object instance you created.4. The method returns an `operation` object.An `operation` object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.You can use the `operation` object to get status on the operation (e.g., create `Dataset` resource) or to cancel the operation, by invoking an operation method:| Method | Description || ----------- | ----------- || result() | Waits for the operation to complete and returns a result object in JSON format. || running() | Returns True/False on whether the operation is still running. || done() | Returns True/False on whether the operation is completed. || canceled() | Returns True/False on whether the operation was canceled. || cancel() | Cancels the operation (this may take up to 30 seconds). | | TIMEOUT = 90
def create_dataset(name, schema, src_uri=None, labels=None, timeout=TIMEOUT):
    start_time = time.time()
    try:
        if src_uri.startswith("gs://"):
            metadata = {"input_config": {"gcs_source": {"uri": [src_uri]}}}
        elif src_uri.startswith("bq://"):
            metadata = {"input_config": {"bigquery_source": {"uri": [src_uri]}}}
        dataset = aip.Dataset(
            display_name=name,
            metadata_schema_uri=schema,
            labels=labels,
            metadata=json_format.ParseDict(metadata, Value()),
        )
        operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
        print("Long running operation:", operation.operation.name)
        result = operation.result(timeout=TIMEOUT)
        print("time:", time.time() - start_time)
        print("response")
        print(" name:", result.name)
        print(" display_name:", result.display_name)
        print(" metadata_schema_uri:", result.metadata_schema_uri)
        print(" metadata:", dict(result.metadata))
        print(" create_time:", result.create_time)
        print(" update_time:", result.update_time)
        print(" etag:", result.etag)
        print(" labels:", dict(result.labels))
        return result
    except Exception as e:
        print("exception:", e)
        return None
result = create_dataset("bank-" + TIMESTAMP, DATA_SCHEMA, src_uri=IMPORT_FILE) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Now save the unique dataset identifier for the `Dataset` resource instance you created. | # The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Train the modelNow train an AutoML tabular binary classification model using your Vertex `Dataset` resource. To train the model, do the following steps:1. Create an Vertex training pipeline for the `Dataset` resource.2. Execute the pipeline to start the training. Create a training pipelineYou may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:1. Being reusable for subsequent training jobs.2. Can be containerized and ran as a batch job.3. Can be distributed.4. All the steps are associated with the same pipeline job for tracking progress.Use this helper function `create_pipeline`, which takes the following parameters:- `pipeline_name`: A human readable name for the pipeline job.- `model_name`: A human readable name for the model.- `dataset`: The Vertex fully qualified dataset identifier.- `schema`: The dataset labeling (annotation) training schema.- `task`: A dictionary describing the requirements for the training job.The helper function calls the `Pipeline` client service'smethod `create_pipeline`, which takes the following parameters:- `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources.- `training_pipeline`: the full specification for the pipeline training job.Let's look now deeper into the *minimal* requirements for constructing a `training_pipeline` specification:- `display_name`: A human readable name for the pipeline job.- `training_task_definition`: The dataset labeling (annotation) training schema.- `training_task_inputs`: A dictionary describing the requirements for the training job.- `model_to_upload`: A human readable name for the model.- `input_data_config`: The dataset specification. - `dataset_id`: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier. - `fraction_split`: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML. | def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Construct the task requirementsNext, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the `task` field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the `json_format.ParseDict` method for the conversion.The minimal fields you need to specify are:- `prediction_type`: Whether we are doing "classification" or "regression".- `target_column`: The CSV heading column name for the column we want to predict (i.e., the label).- `train_budget_milli_node_hours`: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.- `disable_early_stopping`: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.- `transformations`: Specifies the feature engineering for each feature column.For `transformations`, the list must have an entry for each column. The outer key field indicates the type of feature engineering for the corresponding column. In this tutorial, you set it to `"auto"` to tell AutoML to automatically determine it.Finally, create the pipeline by calling the helper function `create_pipeline`, which returns an instance of a training pipeline object. | TRANSFORMATIONS = [
{"auto": {"column_name": "Age"}},
{"auto": {"column_name": "Job"}},
{"auto": {"column_name": "MaritalStatus"}},
{"auto": {"column_name": "Education"}},
{"auto": {"column_name": "Default"}},
{"auto": {"column_name": "Balance"}},
{"auto": {"column_name": "Housing"}},
{"auto": {"column_name": "Loan"}},
{"auto": {"column_name": "Contact"}},
{"auto": {"column_name": "Day"}},
{"auto": {"column_name": "Month"}},
{"auto": {"column_name": "Duration"}},
{"auto": {"column_name": "Campaign"}},
{"auto": {"column_name": "PDays"}},
{"auto": {"column_name": "POutcome"}},
]
PIPE_NAME = "bank_pipe-" + TIMESTAMP
MODEL_NAME = "bank_model-" + TIMESTAMP
task = Value(
struct_value=Struct(
fields={
"target_column": Value(string_value=label_column),
"prediction_type": Value(string_value="classification"),
"train_budget_milli_node_hours": Value(number_value=1000),
"disable_early_stopping": Value(bool_value=False),
"transformations": json_format.ParseDict(TRANSFORMATIONS, Value()),
}
)
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Now save the unique identifier of the training pipeline you created. | # The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Get information on a training pipelineNow get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the the job client service's `get_training_pipeline` method, with the following parameter:- `name`: The Vertex fully qualified pipeline identifier.When the model is done training, the pipeline state will be `PIPELINE_STATE_SUCCEEDED`. | def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
DeploymentTraining the above model may take upwards of 30 minutes time.Once your model is done training, you can calculate the actual time it took to train the model by subtracting `end_time` from `start_time`. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field `model_to_deploy.name`. | while True:
    response = get_training_pipeline(pipeline_id, True)
    if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        model_to_deploy_id = None
        if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
            raise Exception("Training Job Failed")
    else:
        model_to_deploy = response.model_to_upload
        model_to_deploy_id = model_to_deploy.name
        print("Training Time:", response.end_time - response.start_time)
        break
    time.sleep(60)
print("model to deploy:", model_to_deploy_id) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Model informationNow that your model is trained, you can get some information on your model. Evaluate the Model resourceNow find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model. List evaluations for all slicesUse this helper function `list_model_evaluations`, which takes the following parameter:- `name`: The Vertex fully qualified model identifier for the `Model` resource.This helper function uses the model client service's `list_model_evaluations` method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.For each evaluation (you probably only have one) we then print all the key names for each metric in the evaluation, and for a small set (`logLoss` and `auPrc`) you will print the result. | def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("logloss", metrics["logLoss"])
print("auPrc", metrics["auPrc"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Model deployment for batch predictionNow deploy the trained Vertex `Model` resource you created for batch prediction. This differs from deploying a `Model` resource for on-demand prediction.For online prediction, you:1. Create an `Endpoint` resource for deploying the `Model` resource to.2. Deploy the `Model` resource to the `Endpoint` resource.3. Make online prediction requests to the `Endpoint` resource.For batch-prediction, you:1. Create a batch prediction job.2. The job service will provision resources for the batch prediction request.3. The results of the batch prediction request are returned to the caller.4. The job service will unprovision the resoures for the batch prediction request. Make a batch prediction requestNow do a batch prediction to your deployed model. Make test itemsYou will use synthetic data as a test data items. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction. | HEADING = "Age,Job,MaritalStatus,Education,Default,Balance,Housing,Loan,Contact,Day,Month,Duration,Campaign,PDays,Previous,POutcome,Deposit"
INSTANCE_1 = (
"58,managment,married,teritary,no,2143,yes,no,unknown,5,may,261,1,-1,0, unknown"
)
INSTANCE_2 = (
"44,technician,single,secondary,no,39,yes,no,unknown,5,may,151,1,-1,0,unknown"
) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Make the batch input fileNow make a batch input file, which you will store in your local Cloud Storage bucket. Unlike image, video and text, the batch input file for tabular is only supported for CSV. For CSV file, you make:- The first line is the heading with the feature (fields) heading names.- Each remaining line is a separate prediction request with the corresponding feature values.For example: "feature_1", "feature_2". ... value_1, value_2, ... | import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/test.csv"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
    f.write(HEADING + "\n")
    f.write(str(INSTANCE_1) + "\n")
    f.write(str(INSTANCE_2) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Compute instance scalingYou have several choices on scaling the compute instances for handling your batch prediction requests:- Single Instance: The batch prediction requests are processed on a single compute instance. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.- Manual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specified. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them.- Auto Scaling: The batch prediction requests are split across a scaleable number of compute instances. - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES) number of compute instances to provision, depending on load conditions.The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request. | MIN_NODES = 1
MAX_NODES = 1 | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
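For reference, the manual and auto scaling options described above differ only in these two constants; the values below are illustrative, not from the original notebook.

```python
# Manual scaling: a fixed pool of two machines.
# MIN_NODES = 2
# MAX_NODES = 2

# Auto scaling: start with one machine, allow the service to scale out to five under load.
# MIN_NODES = 1
# MAX_NODES = 5
```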
Make batch prediction requestNow that your batch of two test items is ready, let's do the batch request. Use this helper function `create_batch_prediction_job`, with the following parameters:- `display_name`: The human readable name for the prediction job.- `model_name`: The Vertex fully qualified identifier for the `Model` resource.- `gcs_source_uri`: The Cloud Storage path to the input file -- which you created above.- `gcs_destination_output_uri_prefix`: The Cloud Storage path that the service will write the predictions to.- `parameters`: Additional filtering parameters for serving prediction results.The helper function calls the job client service's `create_batch_prediction_job` metho, with the following parameters:- `parent`: The Vertex location root path for Dataset, Model and Pipeline resources.- `batch_prediction_job`: The specification for the batch prediction job.Let's now dive into the specification for the `batch_prediction_job`:- `display_name`: The human readable name for the prediction batch job.- `model`: The Vertex fully qualified identifier for the `Model` resource.- `dedicated_resources`: The compute resources to provision for the batch prediction job. - `machine_spec`: The compute instance to provision. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated. - `starting_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`. - `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`.- `model_parameters`: Additional filtering parameters for serving prediction results. *Note*, image segmentation models do not support additional parameters.- `input_config`: The input source and format type for the instances to predict. - `instances_format`: The format of the batch prediction request file: `csv` only supported. - `gcs_source`: A list of one or more Cloud Storage paths to your batch prediction requests.- `output_config`: The output destination and format for the predictions. - `prediction_format`: The format of the batch prediction response file: `csv` only supported. - `gcs_destination`: The output destination for the predictions.This call is an asychronous operation. You will print from the response object a few select fields, including:- `name`: The Vertex fully qualified identifier assigned to the batch prediction job.- `display_name`: The human readable name for the prediction batch job.- `model`: The Vertex fully qualified identifier for the Model resource.- `generate_explanations`: Whether True/False explanations were provided with the predictions (explainability).- `state`: The state of the prediction job (pending, running, etc).Since this call will take a few moments to execute, you will likely get `JobState.JOB_STATE_PENDING` for `state`. | BATCH_MODEL = "bank_batch-" + TIMESTAMP
def create_batch_prediction_job(
    display_name,
    model_name,
    gcs_source_uri,
    gcs_destination_output_uri_prefix,
    parameters=None,
):
    if DEPLOY_GPU:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_type": DEPLOY_GPU,
            "accelerator_count": DEPLOY_NGPU,
        }
    else:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_count": 0,
        }
    batch_prediction_job = {
        "display_name": display_name,
        # Format: 'projects/{project}/locations/{location}/models/{model_id}'
        "model": model_name,
        "model_parameters": json_format.ParseDict(parameters, Value()),
        "input_config": {
            "instances_format": IN_FORMAT,
            "gcs_source": {"uris": [gcs_source_uri]},
        },
        "output_config": {
            "predictions_format": OUT_FORMAT,
            "gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
        },
        "dedicated_resources": {
            "machine_spec": machine_spec,
            "starting_replica_count": MIN_NODES,
            "max_replica_count": MAX_NODES,
        },
    }
    response = clients["job"].create_batch_prediction_job(
        parent=PARENT, batch_prediction_job=batch_prediction_job
    )
    print("response")
    print(" name:", response.name)
    print(" display_name:", response.display_name)
    print(" model:", response.model)
    try:
        print(" generate_explanation:", response.generate_explanation)
    except:
        pass
    print(" state:", response.state)
    print(" create_time:", response.create_time)
    print(" start_time:", response.start_time)
    print(" end_time:", response.end_time)
    print(" update_time:", response.update_time)
    print(" labels:", response.labels)
    return response


IN_FORMAT = "csv"
OUT_FORMAT = "csv"  # [csv]

response = create_batch_prediction_job(
    BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME, None
) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Now get the unique identifier for the batch prediction job you created. | # The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Get information on a batch prediction jobUse this helper function `get_batch_prediction_job`, with the following paramter:- `job_name`: The Vertex fully qualified identifier for the batch prediction job.The helper function calls the job client service's `get_batch_prediction_job` method, with the following paramter:- `name`: The Vertex fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- `batch_job_id`The helper function will return the Cloud Storage path to where the predictions are stored -- `gcs_destination`. | def get_batch_prediction_job(job_name, silent=False):
response = clients["job"].get_batch_prediction_job(name=job_name)
if silent:
return response.output_config.gcs_destination.output_uri_prefix, response.state
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try: # not all data types support explanations
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" error:", response.error)
gcs_destination = response.output_config.gcs_destination
print(" gcs_destination")
print(" output_uri_prefix:", gcs_destination.output_uri_prefix)
return gcs_destination.output_uri_prefix, response.state
predictions, state = get_batch_prediction_job(batch_job_id) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Get PredictionsWhen the batch prediction is done processing, the job state will be `JOB_STATE_SUCCEEDED`.Finally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a CSV format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name `prediction`, and under that folder will be a file called `predictions*.csv`.Now display (cat) the contents. You will see multiple rows, one for each prediction.For each prediction:- The first four fields are the values (features) you did the prediction on.- The remaining fields are the confidence values, between 0 and 1, for each prediction. | def get_latest_predictions(gcs_out_dir):
""" Get the latest prediction subfolder using the timestamp in the subfolder name"""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
predictions, state = get_batch_prediction_job(batch_job_id, True)
if state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", state)
if state == aip.JobState.JOB_STATE_FAILED:
raise Exception("Batch Job Failed")
else:
folder = get_latest_predictions(predictions)
! gsutil ls $folder/prediction*.csv
! gsutil cat $folder/prediction*.csv
break
time.sleep(60) | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Cleaning upTo clean up all GCP resources used in this project, you can [delete the GCPproject](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket | delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
    if delete_dataset and "dataset_id" in globals():
        clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
    print(e)

# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
    if delete_pipeline and "pipeline_id" in globals():
        clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
    print(e)

# Delete the model using the Vertex fully qualified identifier for the model
try:
    if delete_model and "model_to_deploy_id" in globals():
        clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
    print(e)

# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
    if delete_endpoint and "endpoint_id" in globals():
        clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
    print(e)

# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
    if delete_batchjob and "batch_job_id" in globals():
        clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
    print(e)

# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
    if delete_customjob and "job_id" in globals():
        clients["job"].delete_custom_job(name=job_id)
except Exception as e:
    print(e)

# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
    if delete_hptjob and "hpt_job_id" in globals():
        clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
    print(e)

if delete_bucket and "BUCKET_NAME" in globals():
    ! gsutil rm -r $BUCKET_NAME | _____no_output_____ | Apache-2.0 | notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb | nayaknishant/vertex-ai-samples |
Project: Linear RegressionReggie is a mad scientist who has been hired by the local fast food joint to build their newest ball pit in the play area. As such, he is working on researching the bounciness of different balls so as to optimize the pit. He is running an experiment to bounce different sizes of bouncy balls, and then fitting lines to the data points he records. He has heard of linear regression, but needs your help to implement a version of linear regression in Python._Linear Regression_ is when you have a group of points on a graph, and you find a line that approximately resembles that group of points. A good Linear Regression algorithm minimizes the _error_, or the distance from each point to the line. A line with the least error is the line that fits the data the best. We call this a line of _best fit_.We will use loops, lists, and arithmetic to create a function that will find a line of best fit when given a set of data. Part 1: Calculating Error The line we will end up with will have a formula that looks like:```y = m*x + b````m` is the slope of the line and `b` is the intercept, where the line crosses the y-axis.Create a function called `get_y()` that takes in `m`, `b`, and `x` and returns what the `y` value would be for that `x` on that line! | def get_y(m, b, x):
    y = m*x + b
    return y
get_y(1, 0, 7) == 7
get_y(5, 10, 3) == 25
| _____no_output_____ | BSD-2-Clause | Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb | jfreeman812/Project_ZF |
Reggie wants to try a bunch of different `m` values and `b` values and see which line produces the least error. To calculate error between a point and a line, he wants a function called `calculate_error()`, which will take in `m`, `b`, and an [x, y] point called `point` and return the distance between the line and the point.To find the distance:1. Get the x-value from the point and store it in a variable called `x_point`2. Get the x-value from the point and store it in a variable called `y_point`3. Use `get_y()` to get the y-value that `x_point` would be on the line4. Find the difference between the y from `get_y` and `y_point`5. Return the absolute value of the distance (you can use the built-in function `abs()` to do this)The distance represents the error between the line `y = m*x + b` and the `point` given. | def calculate_error(m, b, point):
    x_point, y_point = point
    y = m*x_point + b
    distance = abs(y - y_point)
    return distance
| _____no_output_____ | BSD-2-Clause | Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb | jfreeman812/Project_ZF |
Let's test this function! | #this is a line that looks like y = x, so (3, 3) should lie on it. thus, error should be 0:
print(calculate_error(1, 0, (3, 3)))
#the point (3, 4) should be 1 unit away from the line y = x:
print(calculate_error(1, 0, (3, 4)))
#the point (3, 3) should be 1 unit away from the line y = x - 1:
print(calculate_error(1, -1, (3, 3)))
#the point (3, 3) should be 5 units away from the line y = -x + 1:
print(calculate_error(-1, 1, (3, 3))) | 0
1
1
5
| BSD-2-Clause | Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb | jfreeman812/Project_ZF |
Great! Reggie's datasets will be sets of points. For example, he ran an experiment comparing the width of bouncy balls to how high they bounce: | datapoints = [(1, 2), (2, 0), (3, 4), (4, 4), (5, 3)] | _____no_output_____ | BSD-2-Clause | Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb | jfreeman812/Project_ZF |
The first datapoint, `(1, 2)`, means that his 1cm bouncy ball bounced 2 meters. The 4cm bouncy ball bounced 4 meters.As we try to fit a line to this data, we will need a function called `calculate_all_error`, which takes `m` and `b` that describe a line, and `points`, a set of data like the example above.`calculate_all_error` should iterate through each `point` in `points` and calculate the error from that point to the line (using `calculate_error`). It should keep a running total of the error, and then return that total after the loop. | def calculate_all_error(m, b, points):
total_error = 0
for point in points:
point_error = calculate_error(m, b, point)
total_error += point_error
return total_error | _____no_output_____ | BSD-2-Clause | Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb | jfreeman812/Project_ZF |
Let's test this function! | #every point in this dataset lies upon y=x, so the total error should be zero:
datapoints = [(1, 1), (3, 3), (5, 5), (-1, -1)]
print(calculate_all_error(1, 0, datapoints))
#every point in this dataset is 1 unit away from y = x + 1, so the total error should be 4:
datapoints = [(1, 1), (3, 3), (5, 5), (-1, -1)]
print(calculate_all_error(1, 1, datapoints))
#every point in this dataset is 1 unit away from y = x - 1, so the total error should be 4:
datapoints = [(1, 1), (3, 3), (5, 5), (-1, -1)]
print(calculate_all_error(1, -1, datapoints))
#the points in this dataset are 1, 5, 9, and 3 units away from y = -x + 1, respectively, so total error should be
# 1 + 5 + 9 + 3 = 18
datapoints = [(1, 1), (3, 3), (5, 5), (-1, -1)]
print(calculate_all_error(-1, 1, datapoints)) | 0
4
4
18
| BSD-2-Clause | Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb | jfreeman812/Project_ZF |
Great! It looks like we now have a function that can take in a line and Reggie's data and return how much error that line produces when we try to fit it to the data. Our next step is to find the `m` and `b` that minimize this error, and thus fit the data best! Part 2: Try a bunch of slopes and intercepts! The way Reggie wants to find a line of best fit is by trial and error. He wants to try a bunch of different slopes (`m` values) and a bunch of different intercepts (`b` values) and see which one produces the smallest error value for his dataset. Using a list comprehension, let's create a list of possible `m` values to try. Make the list `possible_ms` that goes from -10 to 10, in increments of 0.1. Hint (to view this hint, either double-click this cell or highlight the following white space): you can go through the values in range(-100, 100) and multiply each one by 0.1 | possible_ms = [m * 0.1 for m in range(-100, 100)] | _____no_output_____ | BSD-2-Clause | Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb | jfreeman812/Project_ZF
Now, let's make a list of `possible_bs` to check that would be the values from -20 to 20, in steps of 0.1: | possible_bs = [b * 0.1 for b in range(-200, 200)] | _____no_output_____ | BSD-2-Clause | Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb | jfreeman812/Project_ZF |
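A quick sanity check (added here, not part of the original notebook) of what these two comprehensions produce; the floating-point artifacts it shows are also why the result printed later looks like `0.30000000000000004`:
```python
# Hypothetical check of the candidate slopes and intercepts built above.
possible_ms = [m * 0.1 for m in range(-100, 100)]
possible_bs = [b * 0.1 for b in range(-200, 200)]

print(len(possible_ms), possible_ms[:3])   # 200 candidate slopes, starting at -10.0
print(len(possible_bs), possible_bs[:3])   # 400 candidate intercepts, starting at -20.0
# Multiplying integers by 0.1 yields values like 0.30000000000000004 rather than exactly 0.3.
```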
We are going to find the smallest error. First, we will make every possible `y = m*x + b` line by pairing all of the possible `m`s with all of the possible `b`s. Then, we will see which `y = m*x + b` line produces the smallest total error with the set of data stored in `datapoints`. First, create the variables that we will be optimizing: * `best_error` — this should start at infinity (`float("inf")`) so that any error we get at first will be smaller than our value of `best_error` * `best_m` — we can start this at `0` * `best_b` — we can start this at `0` We want to: * Iterate through each element `m` in `possible_ms` * For every `m` value, take every `b` value in `possible_bs` * If the value returned from `calculate_all_error` on this `m` value, this `b` value, and `datapoints` is less than our current `best_error`, * Set `best_m` and `best_b` to be these values, and set `best_error` to this error. By the end of these nested loops, `best_error` should hold the smallest error we have found, and `best_m` and `best_b` should be the values that produced it. Print out `best_m`, `best_b` and `best_error` after the loops. | datapoints = [(1, 2), (2, 0), (3, 4), (4, 4), (5, 3)]
best_error = float("inf")
best_m = 0
best_b = 0
for m in possible_ms:
for b in possible_bs:
error = calculate_all_error(m, b, datapoints)
if error < best_error:
best_m = m
best_b = b
best_error = error
print(best_m, best_b, best_error)
| 0.30000000000000004 1.7000000000000002 4.999999999999999
| BSD-2-Clause | Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb | jfreeman812/Project_ZF |
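An equivalent way to write this exhaustive search, shown only as an illustration and not as part of the original solution, is to iterate over all `(m, b)` pairs with `itertools.product` and take the pair with minimal total error:
```python
from itertools import product

# Assumes calculate_all_error, possible_ms, possible_bs and datapoints are defined as above.
best_m, best_b = min(
    product(possible_ms, possible_bs),
    key=lambda mb: calculate_all_error(mb[0], mb[1], datapoints),
)
best_error = calculate_all_error(best_m, best_b, datapoints)
print(best_m, best_b, best_error)  # same result as the nested loops
```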
Part 3: What does our model predict?Now we have seen that for this set of observations on the bouncy balls, the line that fits the data best has an `m` of 0.3 and a `b` of 1.7:```y = 0.3x + 1.7```This line produced a total error of 5.Using this `m` and this `b`, what does your line predict the bounce height of a ball with a width of 6 to be?In other words, what is the output of `get_y()` when we call it with:* m = 0.3* b = 1.7* x = 6 | get_y(0.3, 1.7, 6) | _____no_output_____ | BSD-2-Clause | Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb | jfreeman812/Project_ZF |
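Worked out by hand as a check on the call above: with `m = 0.3`, `b = 1.7` and `x = 6`, the model predicts `0.3 * 6 + 1.7 = 3.5`, i.e., a 6cm ball is predicted to bounce about 3.5 meters.
```python
m, b, x = 0.3, 1.7, 6
print(m * x + b)  # 3.5, the predicted bounce height for a 6cm ball
```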
Practical Examples of Interactive Visualizations in JupyterLab with Pixi.js and Jupyter Widgets PyData Berlin 2018 - 2018-07-08 Jeremy Tuloup [@jtpio](https://twitter.com/jtpio) [github.com/jtpio](https://github.com/jtpio) [jtp.io](https://jtp.io) The Python Visualization Landscape (2017) Source: - [Jake VanderPlas: The Python Visualization Landscape PyCon 2017](https://www.youtube.com/watch?v=FytuB8nFHPQ) - [Source for the Visualization](https://github.com/rougier/python-visualization-landscape), by Nicolas P. Rougier Motivation: "Not This" vs. "This" (the two side-by-side screenshots are omitted in this export). JupyterLab - Pixi.js - Jupyter Widgets? Prerequisites * Jupyter Notebook * Python JupyterLab Pixi.js * Powerful 2D rendering engine written in JavaScript * Abstraction on top of Canvas and WebGL [Live Example!](http://localhost:4000)
```javascript
let app = new PIXI.Application(800, 600, {backgroundColor : 0x1099bb});
document.body.appendChild(app.view);
let bunny = PIXI.Sprite.fromImage('bunny.png')
bunny.anchor.set(0.5);
bunny.x = app.screen.width / 2;
bunny.y = app.screen.height / 2;
app.stage.addChild(bunny);
app.ticker.add((delta) => { bunny.rotation += 0.1 * delta;});
```
Jupyter Widgets [Open the image](./img/WidgetModelView.png) - Source: [https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Basics.html#Why-does-displaying-the-same-widget-twice-work?](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Basics.html#Why-does-displaying-the-same-widget-twice-work?) | from ipywidgets import IntSlider
slider = IntSlider(min=0, max=10)
slider
slider
slider.value
slider.value = 2 | _____no_output_____ | BSD-3-Clause | examples/presentation.ipynb | jtpio/pixijs-jupyter |
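Beyond reading and setting `slider.value` from Python, the widget model can also notify you of changes made in the browser. This small example (not in the original slides) uses the standard `observe` API from ipywidgets to react when the slider is moved:
```python
from ipywidgets import IntSlider

slider = IntSlider(min=0, max=10)

def on_value_change(change):
    # `change` carries the old and new values of the observed trait
    print(f"value changed from {change['old']} to {change['new']}")

slider.observe(on_value_change, names='value')
slider
```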
Tutorial to create your own https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Custom.html Libraries bqplot ipyleaflet ipyvolume  Motivation: Very Custom Visualizations   Drawing Shapes on a Canvas | from ipyutils import SimpleShape | _____no_output_____ | BSD-3-Clause | examples/presentation.ipynb | jtpio/pixijs-jupyter |
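`SimpleShape` below comes from the talk's companion `ipyutils` package, so its source is not shown here. As a rough illustration of what the Python side of such a custom widget usually looks like, here is a sketch based on the ipywidgets custom-widget tutorial linked above; the class and module names are illustrative, not the actual `SimpleShape` implementation:
```python
import ipywidgets as widgets
from traitlets import Bool, Unicode

class SimpleShapeSketch(widgets.DOMWidget):
    # These names must match the model/view registered on the TypeScript (Pixi.js) side.
    _view_name = Unicode('SimpleShapeView').tag(sync=True)
    _model_name = Unicode('SimpleShapeModel').tag(sync=True)
    _view_module = Unicode('my-widget-module').tag(sync=True)
    _model_module = Unicode('my-widget-module').tag(sync=True)
    _view_module_version = Unicode('^0.1.0').tag(sync=True)
    _model_module_version = Unicode('^0.1.0').tag(sync=True)

    # Synced state: setting it from Python updates the rendered shape in the browser.
    rotate = Bool(False).tag(sync=True)
```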
Implementation - [simple_shape.py](../ipyutils/simple_shape.py): defines the **SimpleShape** Python class - [widget.ts](../src/simple_shapes/widget.ts): defines the **SimpleShapeModel** and **SimpleShapeView** Typescript classes | square = SimpleShape()
square
square.rotate = True | _____no_output_____ | BSD-3-Clause | examples/presentation.ipynb | jtpio/pixijs-jupyter |
Level Up 🚀 | from ipyutils import Shapes
shapes = Shapes(n_shapes=100)
shapes
shapes.shape
shapes.shape = 'square'
shapes.rotate = True
shapes.wobble = True | _____no_output_____ | BSD-3-Clause | examples/presentation.ipynb | jtpio/pixijs-jupyter |
 Visualizing Recursion with the Bermuda Triangle Puzzle  Motivation * Solve the puzzle programmatically * Verify a solution visually * Animate the process  BermudaTriangle Widget | from ipyutils import TriangleAnimation, BermudaTriangle
triangles = TriangleAnimation()
triangles | _____no_output_____ | BSD-3-Clause | examples/presentation.ipynb | jtpio/pixijs-jupyter |
What can we do with this widget? Visualize transitions from one board state ("From") to the next ("To"); the two illustration images of that table are omitted in this export. | # states
state_0 = [None] * 16
print(state_0)
state_1 = [[13, 1]] + [None] * 15
print(state_1)
state_2 = [[13, 1], [12, 0]] + [None] * 14
print(state_2) | [[13, 1], [12, 0], None, None, None, None, None, None, None, None, None, None, None, None, None, None]
| BSD-3-Clause | examples/presentation.ipynb | jtpio/pixijs-jupyter |
Example States and Animation | example_states = TriangleAnimation()
bermuda = example_states.bermuda
bermuda.states = [
[None] * 16,
[[7, 0]] + [None] * 15,
[[7, 1]] + [None] * 15,
[[7, 2]] + [None] * 15,
[[7, 2], [0, 0]] + [None] * 14,
[[7, 2], [0, 1]] + [None] * 14,
[[i, 0] for i in range(16)],
[[i, 1] for i in range(16)],
]
example_states | _____no_output_____ | BSD-3-Clause | examples/presentation.ipynb | jtpio/pixijs-jupyter |
 Solver | from copy import deepcopy
class Solver(BermudaTriangle):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.reset_state()
def reset_state(self):
self.board = [None] * self.N_TRIANGLES
self.logs = [deepcopy(self.board)]
self.it = 0
def solve(self):
'''
Method to implement
'''
raise NotImplementedError()
def log(self):
self.logs.append(deepcopy(self.board))
def found(self):
return all(self.is_valid(i) for i in range(self.N_TRIANGLES))
def save_state(self):
self.permutation = self.board
self.states = self.logs | _____no_output_____ | BSD-3-Clause | examples/presentation.ipynb | jtpio/pixijs-jupyter |
Valid Permutation - is_valid() | help(Solver.is_valid) | Help on function is_valid in module ipyutils.bermuda:
is_valid(self, i)
Parameters
----------
i: int
Position of the triangle to check, between 0 and 15 (inclusive)
Returns
-------
valid: bool
True if the triangle at position i doesn't have any conflict
False otherwise
| BSD-3-Clause | examples/presentation.ipynb | jtpio/pixijs-jupyter |
For example, `solver.is_valid(7)` returns `False`. First Try: Random Permutations | import random
class RandomSearch(Solver):
def solve(self):
random.seed(42)
self.reset_state()
for i in range(200):
self.board = random.sample(self.permutation, self.N_TRIANGLES)
self.log()
if self.found():
print('Found!')
return True
return False
%%time
solver = RandomSearch()
res = solver.solve()
solver.save_state()
rnd = TriangleAnimation()
rnd.bermuda.title = 'Random Search'
rnd.bermuda.states = solver.states
rnd | _____no_output_____ | BSD-3-Clause | examples/presentation.ipynb | jtpio/pixijs-jupyter |
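A back-of-the-envelope calculation, added here for context and not part of the original talk, shows why 200 random draws are very unlikely to stumble on a valid arrangement: there are 16 pieces to order and each piece can also be rotated three ways.
```python
import math

# Rough upper bound on the number of board configurations:
# orderings of the 16 pieces times 3 possible rotations for each piece.
print(math.factorial(16) * 3 ** 16)  # roughly 9e20 configurations
```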
 Better: Brute Force using Recursion | class RecursiveSolver(Solver):
def solve(self):
self.used = [False] * self.N_TRIANGLES
self.reset_state()
self._place(0)
return self.board
def _place(self, i):
self.it += 1
if i == self.N_TRIANGLES:
return True
for j in range(self.N_TRIANGLES - 1, -1, -1):
if self.used[j]:
# piece number j already used
continue
self.used[j] = True
for rot in range(3):
# place the piece on the board
self.board[i] = (j, rot)
self.log()
# stop the recursion if the current configuration
# is not valid or a solution has been found
if self.is_valid(i) and self._place(i + 1):
return True
# remove the piece from the board
self.board[i] = None
self.used[j] = False
self.log()
return False
%%time
solver = RecursiveSolver()
res = solver.solve()
if solver.found():
print('Solution found!')
print(f'{len(solver.logs)} steps')
solver.save_state()
else:
print('No solution found')
recursion = TriangleAnimation()
recursion.bermuda.title = 'Recursive Search'
recursion.bermuda.states = solver.states
recursion | _____no_output_____ | BSD-3-Clause | examples/presentation.ipynb | jtpio/pixijs-jupyter |
Linear regression **TOC:** In today's class, we will explore the following topics in Python: - 1) [Introduction](#intro) - 2) [Simple linear regression](#reglinear) - 3) [Multiple linear regression](#multireglinear) - 4) [Bias-variance tradeoff](#tradeoff) | # import the main data analysis libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns | _____no_output_____ | MIT | semana_6/intro_regressao.ipynb | rocabrera/curso_ml |
____________ 1) **Introduction** Imagine you want to sell your house. You know your house's attributes: how many rooms it has, how many cars fit in the garage, its built area, its location, etc. Now the question is: what would be the best asking price, that is, how much is it actually worth? You could ask a real-estate agent for an appraisal (relying on their experience), or... ...build a **Machine Learning** model which, based on the attributes and prices of many other houses, can make a **prediction** of the right price for your house! To solve this problem, we can use one of the simplest and most important machine learning algorithms: Linear Regression! ____ To introduce the ideas, we will use a [house prices dataset](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). This dataset contains **70 features** (+ 1 ID), which are the characteristics of each listed house, and **1 target**, which is the price the house was sold for. For the meaning of each feature and the values it can take, see the page above. **Let's read the data and start exploring it!** | df = pd.read_csv("data/house_prices/house_price.csv") | _____no_output_____ | MIT | semana_6/intro_regressao.ipynb | rocabrera/curso_ml
For now, we won't worry about the missing data, since we will use only one feature in our initial model. Feel free to explore the data further however you like afterwards! For now, let's take a look at the target column! It is clear that the distribution is right-skewed. We will try to change that in the next versions of the model to see whether we get performance gains! For now, we proceed as is. The built-area variable (`GrLivArea`) looks like a strong candidate to **explain** the house prices, since we clearly see a correlation between the two variables! But note that there are clearly two outliers... Let's now start building a very simple model that uses the GrLivArea variable to predict the price! _________ 2) **Simple linear regression** Despite a few outliers, it seems quite reasonable that the points plotted above can be described by a straight line, doesn't it? Or, put better: **the GrLivArea variable appears to be linearly related to the target SalePrice!** To model this relationship, let's meet the **Simple Linear Regression** model. As the name says, the Linear Regression model is **a straight line (a linear polynomial)** that best fits your data! The **Simple Linear Regression** model is a straight line relating Y (the house price) and X (the house attributes). If we use **only one attribute** (for example, the built area), we have a **Simple Linear Regression**, and our model is: $$ y = b_0 + b_1 X $$ In this case, the model has two coefficients to be determined: $b_0$ (the intercept) and $b_1$ (the slope). The estimator's algorithm is used precisely to find the coefficients $b_0$ and $b_1$ **that best fit the data!** To do this, one can use the **least squares** method or **gradient descent**. But we won't worry about the training details: we will use sklearn for that! Shall we start? Now that the model is trained, we can take a look at the coefficients that were found! How do we interpret this result? Our final model is given by: $$ y = 1562.01 + 118.61 \times \text{GrLivArea}$$ This means that: > Increasing the "GrLivArea" variable by one unit increases the price by USD 118.6! > The minimum price, regardless of built area, is 1562.01! We can visualize the trained model in this case: Making a prediction: Or even: It is rare to be able to visualize our final model as we did above, but for simple linear regression we are lucky! :) Let's now make some predictions! Now that we have the trained model and some predictions, how do we evaluate the model's performance? For that, we can look at the **residuals** of the predictions! The residuals are nothing more than **the model's errors**, i.e., **the difference between each predicted value and the true value**, on **the test data!** That is, $$R(y_i) = y_i - \hat{y}_i $$ The 100% ideal case would be $y_i = \hat{y}_i$, which would produce an exact straight line! The more "spread out" the points are around the line, the worse the model generally is, because it is making larger errors! One way to quantify this is through a metric known as **$R^2$**, the **coefficient of determination**. This coefficient indicates **how close the data are to the fitted line**. 
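Before continuing with $R^2$: the modeling cells themselves are missing from this export, so here is a minimal sketch of the simple regression described above, assuming scikit-learn and the `df` from before (the exact coefficients depend on the data split, so they will not necessarily match the 1562.01 and 118.61 quoted in the text):
```python
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X = df[["GrLivArea"]]
y = df["SalePrice"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

reg = LinearRegression().fit(X_train, y_train)
print(reg.intercept_, reg.coef_[0])   # b0 (intercept) and b1 (slope)

y_pred = reg.predict(X_test)
residuals = y_test - y_pred           # the residuals discussed above
```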
On the other hand, $R^2$ represents the percentage of variation in the response that is explained by the model. $$R^2 = 1 - \frac{\sum_{i=1}^n(y_i-\hat{y}_i)^2}{\sum_{i=1}^n(y_i-\bar{y})^2}$$ It is possible to compute $R^2$ on the training data, but that is not very meaningful because of overfitting, which we will discuss later. It is more meaningful to compute $R^2$ on the test data, as we will do next. This metric is therefore equivalent **to the plot we made above!** Another important point is that the residuals should be **normally distributed**. If that is not the case, it is very important to re-examine whether the chosen model is really adequate for your problem! Besides the residuals, there are three main **evaluation metrics** for the linear regression model: **Mean Absolute Error** (MAE) is the mean of the absolute values of all residuals (errors): $$\frac 1n\sum_{i=1}^n|y_i-\hat{y}_i|$$ **Mean Squared Error** (MSE) is the mean of the squared errors: $$\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2$$ **Root Mean Squared Error** (RMSE) is the square root of the mean of the squared errors: $$\sqrt{\frac 1n\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$ Comparing the metrics: - **MAE** is the simplest to understand, but it penalizes large errors less; - **MSE** is the most popular metric, because it penalizes larger errors more, which usually makes more sense in real applications. - **RMSE** is even more popular, because it is in the same units as the target. All of these metrics can be used as **cost functions** to be minimized by the estimator's algorithm. ___ 3) **Multiple linear regression** The model we built above uses a single feature as the predictor of the house price. But we have another 78 of these features! Isn't there more useful information in all those other variables? In general, yes! It is natural to expect that **more variables** bring **more information** to the model, and therefore make it more accurate! Incorporating these other variables into the model is very simple! We can start using other attributes (such as the number of rooms, the average income of the neighborhood, etc.), and in that case we have a **Multiple Linear Regression**, which is nothing more than the following equation: $$ y = b_0 + b_1 X_1 + b_2 X_2 + \cdots + b_n X_n $$ In this case, besides $b_0$ and $b_1$, we also have other coefficients, one for each of the $n$ features we choose! Multiple regression models are potentially more accurate, but there is also a downside: we lose the **possibility of visualization**. We no longer have a line, but rather a **hyperplane** relating all the features to the target! Shall we build this model? Note: the "Id" column is just an arbitrary identification number that should not be correlated with the target. So we will leave this column out of our model! 
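As above, the corresponding code cells are not part of this export. A sketch of the evaluation metrics and of a multiple regression on the numeric features (dropping `Id`), under the same assumptions as the previous snippet:
```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

print("R2  :", r2_score(y_test, y_pred))
print("MAE :", mean_absolute_error(y_test, y_pred))
mse = mean_squared_error(y_test, y_pred)
print("MSE :", mse, "RMSE:", np.sqrt(mse))

# Multiple linear regression on all numeric features except the arbitrary Id column.
# fillna(0) is a crude simplification just to keep the sketch short.
X_multi = df.select_dtypes("number").drop(columns=["Id", "SalePrice"]).fillna(0)
Xm_train, Xm_test, ym_train, ym_test = train_test_split(X_multi, y, test_size=0.2, random_state=42)
multi_reg = LinearRegression().fit(Xm_train, ym_train)
print("R2 (multiple):", r2_score(ym_test, multi_reg.predict(Xm_test)))
```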
Did the model's performance improve? Can we do even better? Options: - try only a subset of the features: **feature selection** - start using the categorical features: **feature engineering** --- 4) **Bias-variance tradeoff** We will now look at one of the most important concepts in machine learning. It often happens that a model scores 100% on the **training** data, but on the **test set** performance drops below 50%. This can happen because the model becomes a **specialist on the training set only**, failing to **generalize the patterns beyond the data it has seen**. Overfitting is closely tied to the concepts of **bias** and **variance**: > **Bias** is the difference between what the model predicts and the correct value to be predicted. High-bias models are too simple, so they **cannot capture the relationships that the training data exhibit** (underfitting). This makes both the training and test errors high. In other words: **the inability of a model to capture the true relationship between features and target.** > **Variance** refers to the variability of a model's predictions. High-variance models are too complex, because they **learn the relationships exhibited in the training data too well** (overfitting). This makes the training errors low but the test errors high. In other words: **the inability of a model to perform well on datasets other than the one used for training.** To demonstrate overfitting, we will use the [anscombe](https://en.wikipedia.org/wiki/Anscombe%27s_quartet) test dataset. | df_anscombe = sns.load_dataset('anscombe')
df_anscombe.groupby("dataset").agg({"mean", "std"}) | _____no_output_____ | MIT | semana_6/intro_regressao.ipynb | rocabrera/curso_ml |
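One way to make the Anscombe point concrete (an added illustration, not an original cell): the four datasets share nearly identical means and standard deviations, and even nearly identical fitted lines, while looking completely different when plotted.
```python
import numpy as np

# Fit y = b0 + b1*x separately on each of the four Anscombe datasets.
for name, group in df_anscombe.groupby("dataset"):
    b1, b0 = np.polyfit(group["x"], group["y"], deg=1)
    print(name, "slope ~", round(b1, 2), "intercept ~", round(b0, 2))
# All four come out close to y = 0.5x + 3, despite very different shapes.
```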
Table of Contents 1 Lambda calculus implemented in OCaml 1.1 Expressions 1.2 Goal? 1.3 Grammar 1.4 Identity 1.5 Conditionals 1.6 Numbers 1.7 Inequality test 1.8 Successors 1.9 Predecessors 1.10 Addition 1.11 Multiplication 1.12 Pairs 1.13 Predecessors, second attempt 1.14 Lists 1.15 The U function 1.16 Recursion via the Y function 1.17 Conclusion Lambda calculus implemented in OCaml This notebook is inspired by [this blog post by Professor Matt Might](http://matt.might.net/articles/python-church-y-combinator/), which implements a mini programming language in $\lambda$-calculus, in Python. I will do the same thing in OCaml. Expressions Recall that the expressions of the [Lambda calculus](https://fr.wikipedia.org/wiki/Lambda-calcul), or $\lambda$-calculus, are the following: $$ \begin{cases}x, y, z & \text{(variables)} \\u v & \text{(application of two terms}\, u, v\; \text{)} \\\lambda x. v & \text{(lambda-function taking the variable}\; x \;\text{and the term}\; v \;\text{)}\end{cases} $$ Goal? The goal is not to represent them like this with formal types in Caml, but rather to use Caml's own constructions, respectively `u(v)` and `fun x -> v` for application and anonymous functions, and to encode higher-level features in this reduced language. Grammar With a BNF grammar, if `<name>` denotes a valid expression name (we restrict ourselves to lowercase names made of the 26 letters `a,b,..,z`): <expr> ::= <name> | (<expr>) | fun <name> -> <expr> | <expr> (<expr>) ---- Identity | let identite = fun x -> x ;;
let vide = fun x -> x ;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
Conditionals The conditional is `si cond alors valeur_vraie sinon valeur_fausse` (read: "if cond then valeur_vraie else valeur_fausse"). | let si = fun cond valeur_vraie valeur_fausse -> cond valeur_vraie valeur_fausse ;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2
It is very simple, as long as we make sure that `cond` is either `vrai` or `faux`, as defined by their behavior: si vrai e1 e2 == e1 si faux e1 e2 == e2 | let vrai = fun valeur_vraie valeur_fausse -> valeur_vraie ;;
let faux = fun valeur_vraie valeur_fausse -> valeur_fausse ;; | File "[14]", line 1, characters 28-41:
Warning 27: unused variable valeur_fausse.
| MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
Negation is easy! | let non = fun v x y -> v y x;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2
In fact, we will force lazy evaluation, so that if one of the two expressions does not terminate, the evaluation still works. | let vrai_paresseux = fun valeur_vraie valeur_fausse -> valeur_vraie () ;;
let faux_paresseux = fun valeur_vraie valeur_fausse -> valeur_fausse () ;; | File "[16]", line 1, characters 38-51:
Warning 27: unused variable valeur_fausse.
| MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
Making a term lazy could not be simpler! | let paresseux = fun f -> fun () -> f ;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2
Numbers The Church encoding consists of writing $n$ as $\lambda f. \lambda z. f^n z$. | type 'a nombres = ('a -> 'a) -> 'a -> 'a;; (* unused *)
type entiers_church = (int -> int) -> int -> int;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
$0$ is trivially $\lambda f. \lambda z. z$: | let zero = fun (f : ('a -> 'a)) (z : 'a) -> z ;; | File "[34]", line 1, characters 16-17:
Warning 27: unused variable f.
| MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
$1$ is $\lambda f. \lambda z. f z$: | let un = fun (f : ('a -> 'a)) -> f ;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2
With the composition operator, writing the next integers is easy. | let compose = fun f g x -> f (g x);;
let deux = fun f -> compose f f;; (* == compose f (un f) *)
let trois = fun f -> compose f (deux f) ;;
let quatre = fun f -> compose f (trois f) ;;
(* etc *) | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
We can generalize this with a function that turns a Caml integer (`int`) into a Church numeral: | let rec entierChurch (n : int) =
fun f z -> if n = 0 then z else f ((entierChurch (n-1)) f z)
;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
For example: | (entierChurch 0) (fun x -> x + 1) 0;; (* 0 *)
(entierChurch 7) (fun x -> x + 1) 0;; (* 7 *)
(entierChurch 3) (fun x -> 2*x) 1;; (* 8 *) | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
And a function that does the opposite (note: this function is *not* a $\lambda$-term): | let entierNatif c : int =
c (fun x -> x + 1) 0
;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
A quick test: | entierNatif (si vrai zero un);; (* 0 *)
entierNatif (si faux zero un);; (* 1 *)
entierNatif (entierChurch 100);; (* 100 *) | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
Inequality test We actually need to be able to test whether $n \leq 0$ (or $n = 0$). | (* takes a lambda f lambda z. ... and returns vrai iff n = 0, faux otherwise *)
let estnul = fun n -> n (fun z -> faux) (vrai);;
(* takes a lambda f lambda z. ... and returns vrai iff n > 0, faux otherwise *)
let estnonnul = fun n -> n (fun z -> vrai) (faux);; | File "[44]", line 2, characters 32-33:
Warning 27: unused variable z.
| MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
We can propose this other implementation, which "works" the same (in the sense of $\beta$-reductions) but is more complicated: | let estnonnul2 = fun n -> non (estnul n);;
entierNatif (si (estnul zero) zero un);; (* 0 *)
entierNatif (si (estnul un) zero un);; (* 1 *)
entierNatif (si (estnul deux) zero un);; (* 1 *)
entierNatif (si (estnonnul zero) zero un);; (* 0 *)
entierNatif (si (estnonnul un) zero un);; (* 1 *)
entierNatif (si (estnonnul deux) zero un);; (* 1 *)
entierNatif (si (non (estnul zero)) zero un);; (* 0 *)
entierNatif (si (non (estnul un)) zero un);; (* 1 *)
entierNatif (si (non (estnul deux)) zero un);; (* 1 *) | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
Successors Given the Church encoding, $n+1$ amounts to applying the argument $f$ one more time: $f^{n+1}(z) = f (f^n(z))$. | let succ = fun n f z -> f ((n f) z) ;;
entierNatif (succ un);; (* 2 *)
deux;;
succ un;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
We notice that they have the same type, but OCaml indicates that it has less information about the second one: this `'_a` means that the type is *constrained*; it will be fixed at the first use of this function. It is somewhat mysterious, but the key point is this: `deux` was written by hand, so the system saw the whole term, knows it, and knows that `deux = fun f -> fun x -> f (f x)`, no surprise. On the other hand, `succ un` is the result of a *partial* evaluation and is equal to `fun f z -> f ((deux f) z)`. Except that the system does not compute everything and leaves the evaluation partial! (fortunately!) If we apply `succ un` to a function, the `'_a` gets constrained, and we will not be able to reuse it: | let succ_de_un = succ un;;
(succ_de_un) (fun x -> x + 1);;
(succ_de_un) (fun x -> x ^ "0");;
(succ un) (fun x -> x ^ "0");;
(* a freshly computed value, with no constraint *) | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2
Predecessors Given the Church encoding, $\lambda n. n-1$ does not exist... but we can cheat. | let pred = fun n ->
if (entierNatif n) > 0 then entierChurch ((entierNatif n) - 1)
else zero
;;
entierNatif (pred deux);; (* 1 *)
entierNatif (pred trois);; (* 2 *) | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
Addition To add $n$ and $m$, we apply a function $f$ $n$ times and then $m$ times: $f^{n+m}(z) = f^n(f^m(z))$. | let somme = fun n m f z -> n(f)( m(f)(z));;
let cinq = somme deux trois ;;
entierNatif cinq;;
let sept = somme cinq deux ;;
entierNatif sept;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
Multiplication To multiply $n$ and $m$, we apply the encoding of $n$ exactly $m$ times: $f^{nm}(z) = f^n(f^n(...(f^n(z))...))$. | let produit = fun n m f z -> m(n(f))(z);; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2
We can do even better with the composition operator: | let produit = fun n m -> compose m n;;
let six = produit deux trois ;;
entierNatif six;;
let huit = produit deux quatre ;;
entierNatif huit;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
Pairs We will write a pair constructor, `paire a b`, which behaves like `(a, b)`, and two destructors, `gauche` and `droite`, which satisfy: gauche (paire a b) == a droite (paire a b) == b | let paire = fun a b -> fun f -> f(a)(b);;
let gauche = fun p -> p(fun a b -> a);;
let droite = fun p -> p(fun a b -> b);;
entierNatif (gauche (paire zero un));;
entierNatif (droite (paire zero un));; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
Predecessors, second attempt There is a long and complicated way ([source](http://gregfjohnson.com/pred/)) to get there, using pairs. | let pred n suivant premier =
let pred_suivant = paire vrai premier in
let pred_premier = fun p ->
si (gauche p)
(paire faux premier)
(paire faux (suivant (droite p)))
in
let paire_finale = n pred_suivant pred_premier in
droite paire_finale
;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
Unfortunately, this is not well typed. | entierNatif (pred deux);; (* 1 *) | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2
Lists To build (singly linked) lists, we need a value for the empty list, `listevide`, a list constructor `cons`, an empty-list predicate `estvide`, accessors `tete` (head) and `queue` (tail), with the following constraints (with `vrai` and `faux` defined as above): estvide (listevide) == vrai estvide (cons tt qu) == faux tete (cons tt qu) == tt queue (cons tt qu) == qu We will store all of this with functions expecting two arguments (two functions; remember, everything is a function in $\lambda$-calculus), one called if the list is empty, the other if the list is not empty. | let listevide = fun survide surpasvide -> survide;;
let cons = fun hd tl -> fun survide surpasvide -> surpasvide hd tl;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
With this construction, `estvide` is quite simple: `survide` is `() -> vrai` and `surpasvide` is `tt qu -> faux`. | let estvide = fun liste -> liste (vrai) (fun tt qu -> faux);; | File "[60]", line 1, characters 45-47:
Warning 27: unused variable tt.
File "[60]", line 1, characters 48-50:
Warning 27: unused variable qu.
| MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
Two tests: | entierNatif (si (estvide (listevide)) un zero);; (* estvide listevide == vrai *)
entierNatif (si (estvide (cons un listevide)) un zero);; (* estvide (cons un listevide) == faux *) | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
And for the two extractors, it is very easy with this encoding. | let tete = fun liste -> liste (vide) (fun tt qu -> tt);;
let queue = fun liste -> liste (vide) (fun tt qu -> qu);;
entierNatif (tete (cons un listevide));;
entierNatif (tete (queue (cons deux (cons un listevide))));;
entierNatif (tete (queue (cons trois (cons deux (cons un listevide)))));; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
Let's look at the types Caml infers for lists of increasing size: | cons un (cons un listevide);; (* 8 variables pour une liste de taille 2 *)
cons un (cons un (cons un (cons un listevide)));; (* 14 variables pour une liste de taille 4 *)
cons un (cons un (cons un (cons un (cons un (cons un (cons un (cons un listevide)))))));; (* 26 variables pour une liste de taille 7 *) | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
For these reasons, we realize that the type Caml gives to a list of size $k$ grows linearly *in size* with $k$! So there is no hope (with this encoding) of having a generic type for lists represented in Caml. And so we are not surprised to see this attempt fail: | let rec longueur liste =
liste (zero) (fun t q -> succ (longueur q))
;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
Indeed, for `longueur` to be well typed, `liste` and `q` would have to have the same type, but the type of `liste` is strictly larger than that of `q`... We can try to write an `ieme` (i-th element) function. We want `ieme zero liste = tete` and `ieme n liste = ieme (pred n) (queue liste)`. Writing at a high level, we would like to be able to do: | let pop liste =
si (estvide liste) (listevide) (queue liste)
;;
let ieme n liste =
tete (n pop liste)
;; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
The U function This is the first hint that the $\lambda$-calculus can be used as a model of computation: the term $U : f \to f(f)$ does not terminate when applied to itself. But this is where using Caml shows its weakness: this term cannot be correctly typed! | let u = fun f -> f (f);; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2
Note that even in an untyped language (for example Python), we can define $U$, but running it will fail, either because of a stack overflow or because it does not terminate. Recursion via the Y function The $Y$ function finds the fixed point of another function. This is very useful for defining functions by recursion. For example, the factorial is the fixed point of the following function: "$\lambda f. \lambda n. 1$ if $n \leq 0$ else $n * f(n-1)$" (written in a higher-level language, not in $\lambda$-calculus). $Y$ satisfies these constraints: $Y(F) = f$ and $f = F(f)$. So $Y(F) = F(Y(F))$, and therefore $Y = \lambda F. F(Y(F))$. But this first attempt does not work. | let rec y = fun f -> f (y(f));;
let fact = y(fun f n -> si (estnul n) (un) (produit n (f (pred n))));; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2 |
We use $\eta$-expansion: if $e$ terminates, $e$ is equivalent (i.e., every computation gives the same term) to $\lambda x. e(x)$. | let rec y = fun f -> f (fun x -> y(f)(x));; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2
However, the type checker still cannot see that the following expression should be well defined: | let fact = y(fun f n -> si (estnul n) (un) (produit n (f (pred n))));; | _____no_output_____ | MIT | agreg/Lambda_Calcul_en_OCaml.ipynb | doc22940/notebooks-2
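Since the notebook follows Matt Might's Python article, a rough Python analogue of the same idea may help (added for comparison, not part of the original notebook): the naive `Y` recurses before `F` is ever applied, while the eta-expanded version, the same trick used in the OCaml cells above, delays the recursive call and works.
```python
# Naive fixed point: Y = lambda F: F(Y(F)) would recurse forever (RecursionError).
# Eta-expanded version: the inner call is wrapped in a lambda, so it is evaluated lazily.
Y = lambda F: F(lambda x: Y(F)(x))

fact = Y(lambda f: lambda n: 1 if n <= 0 else n * f(n - 1))
print(fact(5))  # 120
```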