Import the required modules
import boto3
import sagemaker
import time
import random
import uuid
import logging
import stepfunctions
import io
import os

from sagemaker.amazon.amazon_estimator import get_image_uri
from stepfunctions import steps
from stepfunctions.steps import TrainingStep, ModelStep, TransformStep
from stepfunctions.inputs import ExecutionInput
from stepfunctions.workflow import Workflow
from stepfunctions.template import TrainingPipeline
from stepfunctions.template.utils import replace_parameters_with_jsonpath

session = sagemaker.Session()
stepfunctions.set_stream_logger(level=logging.INFO)

region = boto3.Session().region_name
bucket = session.default_bucket()
prefix = 'sagemaker/DEMO-xgboost-regression'
bucket_path = 's3://{}/{}/'.format(bucket, prefix)

# SageMaker execution role used by the estimator defined later in this notebook
sagemaker_execution_role = sagemaker.get_execution_role()
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Prepare the dataset

This notebook uses the XGBoost algorithm to train and host a regression model. We use the [Abalone data](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), originally from the [UCI data repository](https://archive.ics.uci.edu/ml/datasets/abalone). More details about the original dataset can be found [here](https://archive.ics.uci.edu/ml/machine-learning-databases/abalone/abalone.names). In the libsvm-converted [version](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html), the nominal feature (Male/Female/Infant) has been converted into a real-valued feature. The age of an abalone is to be predicted from eight physical measurements.
try:  # python3
    from urllib.request import urlretrieve
except ImportError:  # python2
    from urllib import urlretrieve

# Load the dataset
FILE_DATA = 'abalone'
urlretrieve("https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression/abalone", FILE_DATA)

import numpy as np
from sklearn.datasets import load_svmlight_file, dump_svmlight_file

data = load_svmlight_file(FILE_DATA)

# Split the downloaded data into train/validation/test files
PERCENT_TRAIN = 70
PERCENT_VALIDATION = 15
n_samples = data[0].shape[0]
train_end = int(n_samples * PERCENT_TRAIN / 100)
validation_end = int(n_samples * (PERCENT_TRAIN + PERCENT_VALIDATION) / 100)
train_data_x, validation_data_x, test_data_x = np.split(data[0].toarray(), [train_end, validation_end])
train_data_y, validation_data_y, test_data_y = np.split(data[1], [train_end, validation_end])

# Save the files
FILE_TRAIN = 'abalone.train'
FILE_VALIDATION = 'abalone.validation'
FILE_TEST = 'abalone.test'
dump_svmlight_file(train_data_x, train_data_y, FILE_TRAIN)
dump_svmlight_file(validation_data_x, validation_data_y, FILE_VALIDATION)
dump_svmlight_file(test_data_x, test_data_y, FILE_TEST)

# S3 keys
train_s3_file = os.path.join(prefix, 'train', FILE_TRAIN)
validation_s3_file = os.path.join(prefix, 'train', FILE_VALIDATION)
test_s3_file = os.path.join(prefix, 'train', FILE_TEST)

# Upload the three files to Amazon S3
s3_client = boto3.client('s3')
s3_client.upload_file(FILE_TRAIN, bucket, train_s3_file)
s3_client.upload_file(FILE_VALIDATION, bucket, validation_s3_file)
s3_client.upload_file(FILE_TEST, bucket, test_s3_file)

# S3 URIs
train_s3_file = 's3://{}/{}'.format(bucket, train_s3_file)
validation_s3_file = 's3://{}/{}'.format(bucket, validation_s3_file)
test_s3_file = 's3://{}/{}'.format(bucket, test_s3_file)
output_s3 = 's3://{}/{}/{}/'.format(bucket, prefix, 'output')
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Configure the Amazon SageMaker estimator
xgb = sagemaker.estimator.Estimator(
    get_image_uri(region, 'xgboost', repo_version='0.90-2'),
    sagemaker_execution_role,
    train_instance_count=1,
    train_instance_type='ml.m4.4xlarge',
    output_path=output_s3,
    sagemaker_session=session
)

xgb.set_hyperparameters(
    objective='reg:linear',
    num_round=50,
    max_depth=5,
    eta=0.2,
    gamma=4,
    min_child_weight=6,
    subsample=0.7,
    silent=0
)
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Build a machine learning workflow

You can use a workflow to create a machine learning pipeline. The AWS Step Functions Data Science SDK provides several SageMaker workflow steps that you can use to construct an ML pipeline. In this tutorial you will use the Train and Transform steps.

* [**TrainingStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.TrainingStep) - Starts a SageMaker training job and outputs the model artifacts to S3.
* [**ModelStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.ModelStep) - Creates a model on SageMaker using the model artifacts from S3.
* [**TransformStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.TransformStep) - Starts a SageMaker transform job.
* [**EndpointConfigStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.EndpointConfigStep) - Defines an endpoint configuration on SageMaker.
* [**EndpointStep**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.EndpointStep) - Deploys the trained model to the configured endpoint.

Define the input schema for a workflow execution

The [**ExecutionInput**](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/placeholders.html#stepfunctions.inputs.ExecutionInput) API defines the options to dynamically pass information to a workflow at runtime.

The following cell defines the fields that must be passed to your workflow when starting an execution. While the workflow is usually static after it is defined, you may want to pass values dynamically to the steps in your workflow. To help with this, the SDK provides a way to create placeholders when you define your workflow; these placeholders can be assigned values dynamically when you execute your workflow.

ExecutionInput values are accessible to each step of your workflow. You can define a schema for this placeholder collection, as shown in the cell below. When you execute your workflow, the SDK verifies that the dynamic input conforms to the schema you defined.
# SageMaker expects unique names for each job, model and endpoint.
# If these names are not unique the execution will fail. Pass these
# dynamically for each execution using placeholders.
execution_input = ExecutionInput(schema={
    'JobName': str,
    'ModelName': str,
    'EndpointName': str
})
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Create the training step

In the following cell we create the training step and pass the estimator we defined above. See [TrainingStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.TrainingStep) in the AWS Step Functions Data Science SDK documentation.
training_step = steps.TrainingStep(
    'Train Step',
    estimator=xgb,
    data={
        'train': sagemaker.s3_input(train_s3_file, content_type='libsvm'),
        'validation': sagemaker.s3_input(validation_s3_file, content_type='libsvm')
    },
    job_name=execution_input['JobName']
)
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Create the model step

In the following cell we define a model step that will create a model in SageMaker using the artifacts created during the TrainingStep. See [ModelStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.ModelStep) in the AWS Step Functions Data Science SDK documentation.

The model creation step typically follows the training step. The Step Functions SDK provides the [get_expected_model](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.TrainingStep.get_expected_model) method in the TrainingStep class to provide a reference for the trained model artifacts. Please note that this method is only useful when the ModelStep directly follows the TrainingStep.
model_step = steps.ModelStep(
    'Save model',
    model=training_step.get_expected_model(),
    model_name=execution_input['ModelName']
)
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Create the transform step

In the following cell we create the transform step. See [TransformStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.TransformStep) in the AWS Step Functions Data Science SDK documentation.
transform_step = steps.TransformStep(
    'Transform Input Dataset',
    transformer=xgb.transformer(
        instance_count=1,
        instance_type='ml.m5.large'
    ),
    job_name=execution_input['JobName'],
    model_name=execution_input['ModelName'],
    data=test_s3_file,
    content_type='text/libsvm'
)
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Create an endpoint configuration step

In the following cell we create an endpoint configuration step. See [EndpointConfigStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.EndpointConfigStep) in the AWS Step Functions Data Science SDK documentation.
endpoint_config_step = steps.EndpointConfigStep(
    "Create Endpoint Config",
    endpoint_config_name=execution_input['ModelName'],
    model_name=execution_input['ModelName'],
    initial_instance_count=1,
    instance_type='ml.m5.large'
)
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Create an endpoint

In the following cell we create a step to deploy the trained model to an endpoint in Amazon SageMaker. See [EndpointStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.sagemaker.EndpointStep) in the AWS Step Functions Data Science SDK documentation.
endpoint_step = steps.EndpointStep(
    "Create Endpoint",
    endpoint_name=execution_input['EndpointName'],
    endpoint_config_name=execution_input['ModelName']
)
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Chain together steps for your workflow

Create your workflow definition by chaining the steps together. See [Chain](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/sagemaker.html#stepfunctions.steps.states.Chain) in the AWS Step Functions Data Science SDK documentation.
workflow_definition = steps.Chain([
    training_step,
    model_step,
    transform_step,
    endpoint_config_step,
    endpoint_step
])
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Create your workflow using the workflow definition above, and render the graph with [render_graph](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.render_graph).
from time import strftime, gmtime
timestamp = strftime('%d-%H-%M-%S', gmtime())

# NOTE: workflow_execution_role must be set to the ARN of an IAM role
# that AWS Step Functions can assume (a workflow execution role).
workflow = Workflow(
    name='{}-{}'.format('MyTrainTransformDeploy_v1', timestamp),
    definition=workflow_definition,
    role=workflow_execution_role,
    execution_input=execution_input
)

workflow.render_graph()
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Create the workflow in AWS Step Functions with [create](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.create).
workflow.create()
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Run the workflow with [execute](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.execute).
execution = workflow.execute(
    inputs={
        'JobName': 'regression-{}'.format(uuid.uuid1().hex),       # Each SageMaker job requires a unique name
        'ModelName': 'regression-{}'.format(uuid.uuid1().hex),     # Each model requires a unique name
        'EndpointName': 'regression-{}'.format(uuid.uuid1().hex)   # Each endpoint requires a unique name
    }
)
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Render workflow progress with [render_progress](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Execution.render_progress). This generates a snapshot of the current state of your workflow as it executes. This is a static image; run the cell again to check progress.
execution.render_progress()
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Use [list_events](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Execution.list_events) to list all events in the workflow execution.
execution.list_events(html=True)
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Use [list_executions](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.list_executions) to list all executions for a specific workflow.
workflow.list_executions(html=True)
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Use [list_workflows](https://aws-step-functions-data-science-sdk.readthedocs.io/en/latest/workflow.html#stepfunctions.workflow.Workflow.list_workflows) to list all workflows in your AWS account.
Workflow.list_workflows(html=True)

template = workflow.get_cloudformation_template()

with open('workflow.json', 'w') as f:
    f.write(template)

!cat workflow.json
_____no_output_____
Apache-2.0
step-functions-data-science-sdk/machine_learning_workflow_abalone/machine_learning_workflow_abalone.ipynb
juliensimon/amazon-sagemaker-examples
Kubernetes Jobs & Images

This topic describes running a Kubernetes-based job using shared data, and building custom container images.

Define a New Function and its Dependencies

Define a single serverless function with two `handlers`, one for training and one for validation.
import mlrun
> 2021-01-24 00:04:38,841 [warning] Failed resolving version info. Ignoring and using defaults > 2021-01-24 00:04:40,691 [warning] Unable to parse server or client version. Assuming compatible: {'server_version': 'unstable', 'client_version': 'unstable'}
Apache-2.0
docs/runtimes/mlrun_jobs.ipynb
jasonnIguazio/ghpages-mlrun
Use the `%nuclio` magic commands to set package dependencies and configuration:
%nuclio cmd -c pip install pandas

import time
import pandas as pd
from mlrun.artifacts import get_model, update_model


def training(
    context,
    p1: int = 1,
    p2: int = 2
) -> None:
    """Train a model.

    :param context: The runtime context object.
    :param p1:      A model parameter.
    :param p2:      Another model parameter.
    """
    # access input metadata, values, and inputs
    print(f'Run: {context.name} (uid={context.uid})')
    print(f'Params: p1={p1}, p2={p2}')
    context.logger.info('started training')

    # <insert training code here>

    # log the run results (scalar values)
    context.log_result('accuracy', p1 * 2)
    context.log_result('loss', p1 * 3)

    # add a label/tag to this run
    context.set_label('category', 'tests')

    # log a simple artifact + label the artifact
    # If you want to upload a local file to the artifact repo add src_path=<local-path>
    context.log_artifact('somefile',
                         body=b'abc is 123',
                         local_path='myfile.txt')

    # create a dataframe artifact
    df = pd.DataFrame([{'A': 10, 'B': 100}, {'A': 11, 'B': 110}, {'A': 12, 'B': 120}])
    context.log_dataset('mydf', df=df)

    # Log an ML Model artifact, add metrics, params, and labels to it
    # and place it in a subdir ('models') under the artifacts path
    context.log_model('mymodel', body=b'abc is 123',
                      model_file='model.txt',
                      metrics={'accuracy': 0.85}, parameters={'xx': 'abc'},
                      labels={'framework': 'xgboost'},
                      artifact_path=context.artifact_subpath('models'))


def validation(
    context,
    model: mlrun.DataItem
) -> None:
    """Model validation.

    Dummy validation function.

    :param context: The runtime context object.
    :param model:   The estimated model object.
    """
    # access input metadata, values, files, and secrets (passwords)
    print(f'Run: {context.name} (uid={context.uid})')
    context.logger.info('started validation')

    # get the model file, class (metadata), and extra_data (dict of key: DataItem)
    model_file, model_obj, _ = get_model(model)

    # update model object elements and data
    update_model(model_obj, parameters={'one_more': 5})

    print(f'path to local copy of model file - {model_file}')
    print('parameters:', model_obj.parameters)
    print('metrics:', model_obj.metrics)
    context.log_artifact('validation',
                         body=b'<b> validated </b>',
                         format='html')
_____no_output_____
Apache-2.0
docs/runtimes/mlrun_jobs.ipynb
jasonnIguazio/ghpages-mlrun
The following end-code annotation tells ```nuclio``` to stop parsing the notebook from this cell. _**Do not remove this cell**_:
# mlrun: end-code
_____no_output_____
Apache-2.0
docs/runtimes/mlrun_jobs.ipynb
jasonnIguazio/ghpages-mlrun
______________________________________________

Convert the Code to a Serverless Job

Create a ```function``` that defines the runtime environment (type, code, image, ..) and ```run()``` a job or experiment using that function. In each run you can specify the function, inputs, parameters/hyper-parameters, etc.

Use the ```job``` runtime for running container jobs, or alternatively use another distributed runner like MpiJob, Spark, Dask, and Nuclio.

**Setting up the environment**
project_name, artifact_path = mlrun.set_environment(project='jobs-demo', artifact_path='./data/{{run.uid}}')
_____no_output_____
Apache-2.0
docs/runtimes/mlrun_jobs.ipynb
jasonnIguazio/ghpages-mlrun
**Define the cluster jobs and build images**

To use the function in a cluster you need to package the code and its dependencies. The ```code_to_function``` call automatically generates a ```function``` object from the current notebook (or a specified file) with its list of dependencies and runtime configuration.
# create an ML function from the notebook, attach it to the Iguazio data fabric (v3io)
trainer = mlrun.code_to_function(name='my-trainer', kind='job', image='mlrun/mlrun')
_____no_output_____
Apache-2.0
docs/runtimes/mlrun_jobs.ipynb
jasonnIguazio/ghpages-mlrun
The functions need a shared storage medium (file or object) to pass and store artifacts.

You can add _**Kubernetes**_ resources like volumes, environment variables, secrets, cpu/mem/gpu, etc. to a function. ```mlrun``` uses _**KubeFlow**_ modifiers (apply) to configure resources. You can build your own resources or use predefined resources, e.g. [AWS resources](https://github.com/kubeflow/pipelines/blob/master/sdk/python/kfp/aws.py).

_**Option 1: Using file volumes for artifacts**_

If you're using the [MLOps platform](https://www.iguazio.com/), use the `mount_v3io()` auto-mount modifier. If you're using another k8s PVC volume, use the `mlrun.platforms.mount_pvc(..)` modifier with the required parameters.

This example uses the `auto_mount()` modifier. It auto-selects between the k8s PVC volume and the Iguazio data fabric. You can set the PVC volume configuration with the env var below or with the auto_mount params:

```
MLRUN_PVC_MOUNT=<pvc-name>:<mount-path>
```

If you apply `mount_v3io()` or `auto_mount()` when running the function in the MLOps platform, it attaches the function to Iguazio's real-time data fabric (mounted by default to the _**home**_ of the current user).

**Note**: If the notebook is not on the managed platform (it's running remotely) you may need to use secrets. For the current ```training``` function, run:
# for PVC volumes set the env var MLRUN_PVC_MOUNT=<pvc-name>:<mount-path>, or pass the relevant parameters
from mlrun.platforms import auto_mount
trainer.apply(auto_mount())
_____no_output_____
Apache-2.0
docs/runtimes/mlrun_jobs.ipynb
jasonnIguazio/ghpages-mlrun
_**Option 2: Using AWS S3 for artifacts**_

When using AWS, you can use S3. You need a `secret` with AWS credentials. Create the AWS secret with the following command (placeholders shown in angle brackets):

`kubectl create -n <namespace> secret generic my-aws --from-literal=AWS_ACCESS_KEY_ID=<access-key-id> --from-literal=AWS_SECRET_ACCESS_KEY=<secret-access-key>`

To use the secret:
# from kfp.aws import use_aws_secret
# trainer.apply(use_aws_secret(secret_name='my-aws'))
# out = 's3://<your-bucket-name>/jobs/{{run.uid}}'
_____no_output_____
Apache-2.0
docs/runtimes/mlrun_jobs.ipynb
jasonnIguazio/ghpages-mlrun
______________________________________________

Deploy (build) the Function Container

The `deploy()` command builds a custom container image (creates a cluster build job) from the outlined function dependencies. If a pre-built container image already exists, pass the `image` name instead. _**Note that the code and params can be updated per run without building a new image**_.

The image is stored in a container repository. By default it uses the repository configured on the MLRun API service. You can specify your own docker registry by first creating a secret, and adding that secret name to the build configuration (placeholders shown in angle brackets):

`kubectl create -n <namespace> secret docker-registry my-docker --docker-server=https://index.docker.io/v1/ --docker-username=<username> --docker-password=<password> --docker-email=<email>`

And then run this:

`trainer.build_config(image='target/image:tag', secret='my_docker')`
trainer.deploy(with_mlrun=False)
> 2021-01-24 00:05:18,384 [info] starting remote build, image: .mlrun/func-jobs-demo-my-trainer-latest INFO[0020] Retrieving image manifest mlrun/mlrun:unstable INFO[0020] Retrieving image manifest mlrun/mlrun:unstable INFO[0021] Built cross stage deps: map[] INFO[0021] Retrieving image manifest mlrun/mlrun:unstable INFO[0021] Retrieving image manifest mlrun/mlrun:unstable INFO[0021] Executing 0 build triggers INFO[0021] Unpacking rootfs as cmd RUN pip install pandas requires it. INFO[0037] RUN pip install pandas INFO[0037] Taking snapshot of full filesystem... INFO[0050] cmd: /bin/sh INFO[0050] args: [-c pip install pandas] INFO[0050] Running: [/bin/sh -c pip install pandas] Requirement already satisfied: pandas in /usr/local/lib/python3.7/site-packages (1.2.0) Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/site-packages (from pandas) (2020.5) Requirement already satisfied: python-dateutil>=2.7.3 in /usr/local/lib/python3.7/site-packages (from pandas) (2.8.1) Requirement already satisfied: numpy>=1.16.5 in /usr/local/lib/python3.7/site-packages (from pandas) (1.19.5) Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas) (1.15.0) WARNING: You are using pip version 20.2.4; however, version 21.0 is available. You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command. INFO[0051] Taking snapshot of full filesystem...
Apache-2.0
docs/runtimes/mlrun_jobs.ipynb
jasonnIguazio/ghpages-mlrun
Run the Function on the Cluster

Use ```with_code``` to inject the latest code into the function (without requiring a new build).
trainer.with_code()

# run our training task with params
train_run = trainer.run(name='my-training', handler='training', params={'p1': 9})

# running validation, use the model result from the previous step
model = train_run.outputs['mymodel']
validation_run = trainer.run(name='validation', handler='validation', inputs={'model': model}, watch=True)
> 2021-01-24 00:09:21,259 [info] starting run validation uid=c757ffcdc36d4412b4bcba1df75f079d DB=http://mlrun-api:8080 > 2021-01-24 00:09:21,536 [info] Job is running in the background, pod: validation-dwd78 > 2021-01-24 00:09:25,570 [warning] Unable to parse server or client version. Assuming compatible: {'server_version': 'unstable', 'client_version': 'unstable'} Run: validation (uid=c757ffcdc36d4412b4bcba1df75f079d) > 2021-01-24 00:09:25,719 [info] started validation path to local copy of model file - /User/data/30b8131285a74f87b16d957fabc5fac3/models/model.txt parameters: {'xx': 'abc', 'one_more': 5} metrics: {'accuracy': 0.85} > 2021-01-24 00:09:25,873 [info] run executed, status=completed final state: completed
Apache-2.0
docs/runtimes/mlrun_jobs.ipynb
jasonnIguazio/ghpages-mlrun
Create and Run a Kubeflow Pipeline

Kubeflow pipelines are used for workflow automation, creating a graph of functions and their specified parameters, inputs, and outputs. You can chain the outputs and inputs of the pipeline steps, as illustrated below.
import kfp
from kfp import dsl
from mlrun import run_pipeline


@dsl.pipeline(
    name='job test',
    description='demonstrating mlrun usage'
)
def job_pipeline(
    p1: int = 9
) -> None:
    """Define our pipeline.

    :param p1: A model parameter.
    """
    train = trainer.as_step(handler='training',
                            params={'p1': p1},
                            outputs=['mymodel'])

    validate = trainer.as_step(handler='validation',
                               inputs={'model': train.outputs['mymodel']},
                               outputs=['validation'])
_____no_output_____
Apache-2.0
docs/runtimes/mlrun_jobs.ipynb
jasonnIguazio/ghpages-mlrun
Running the pipeline

Pipeline results are stored at the `artifact_path` location. You can generate a unique folder per workflow by adding ```/{{workflow.uid}}``` to the path; ```mlrun``` substitutes the workflow UID at runtime.
artifact_path = 'v3io:///users/admin/kfp/{{workflow.uid}}/'
arguments = {'p1': 8}
run_id = run_pipeline(job_pipeline, arguments, experiment='my-job', artifact_path=artifact_path)

from mlrun import wait_for_pipeline_completion, get_run_db
wait_for_pipeline_completion(run_id)
db = get_run_db().list_runs(project=project_name, labels=f'workflow={run_id}').show()
_____no_output_____
Apache-2.0
docs/runtimes/mlrun_jobs.ipynb
jasonnIguazio/ghpages-mlrun
Gradient descent algorithm for Scenario 2

In this part, we implement a gradient descent algorithm to optimize the objective loss function in Scenario 2:

$$\min F := \min \frac{1}{2(n-1000)} \sum_{i=1000}^{n} \bigl(\mathrm{fbpredict}(i) + a\,\mathrm{tby}(i) + b\,\mathrm{ffr}(i) + c\,\mathrm{fta}(i) - \mathrm{asp}(i)\bigr)^2$$

Gradient descent:

$$ \beta_k = \beta_{k-1} - \delta\, \nabla F, $$

where $\delta$ controls how far each iteration goes.

Detailed plan

First, split the data as train and test with 80% and 20% respectively. For the training part we need the prophet() predicted price, which raises a couple of issues: prophet() cannot predict too far into the future, and we cannot call prophet() too many times because that takes a lot of time. So we use a sliding window strategy:

1. Split the train data into train_1 and train_2, where train_1 is used as a sliding window to fit prophet() and produce predictions on train_2. Train_2 is used to train the model proposed above.

2. After we get full-size (size of train_2) predictions from prophet(), we use gradient descent to fit the above model, extracting the coefficients of the features to make predictions on the test data.
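To make the update rule above concrete, here is a minimal, hedged gradient descent sketch. It is not the notebook's actual fitting code (further down the coefficients are fit with scikit-learn's LinearRegression); the function name `fit_coefficients`, the learning rate `delta`, the iteration count, and the stand-in arrays are all illustrative assumptions. `X` holds the feature columns, `fb_pred` the Prophet predictions, and `asp` the actual prices, assumed to be aligned NumPy arrays.

```python
import numpy as np

def fit_coefficients(X, fb_pred, asp, delta=0.1, n_iter=2000):
    """Gradient descent sketch for F = 1/(2m) * sum((fb_pred + X @ beta - asp)^2)."""
    m, d = X.shape
    beta = np.zeros(d)                       # coefficients a, b, c, ...
    for _ in range(n_iter):
        residual = fb_pred + X @ beta - asp  # model error on each observation
        grad = X.T @ residual / m            # gradient of F with respect to beta
        beta = beta - delta * grad           # descent step
    return beta

# usage sketch with random stand-in data (hypothetical, not the notebook's dataset)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
beta_true = np.array([1.5, -2.0, 0.5])
fb_pred = rng.normal(size=500)
asp = fb_pred + X @ beta_true + 0.01 * rng.normal(size=500)
print(fit_coefficients(X, fb_pred, asp))  # should approach beta_true
```

With features on very different scales (as in the real data below), the learning rate would need to be tuned or the features normalized for a loop like this to converge.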
import pandas as pd
import numpy as np

from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import FunctionTransformer
from numpy import meshgrid

## For plotting
import matplotlib.pyplot as plt
from matplotlib import style
import datetime as dt
import seaborn as sns
sns.set_style("whitegrid")

df = pd.read_csv('df7.csv', parse_dates=['Date'])
df = df.rename(columns={"Date": "ds", "Close": "y"})
df
# len(df)
df.columns

from datetime import datetime

p = 0.9  # train on around 90% of the dataset
cutoff = int((p * len(df) // 100) * 100)
df_train = df[:cutoff].copy()
df_test = df.drop(df_train.index).copy()
print(df_train, df_test)
ds y tby_sqsq une_div_eps_vix_fta 0 2005-06-20 1216.10 285.343042 8.989853e+10 1 2005-06-21 1213.61 271.709069 9.032286e+10 2 2005-06-22 1213.88 243.438006 8.984219e+10 3 2005-06-23 1200.73 245.912579 9.111354e+10 4 2005-06-24 1191.57 236.126249 9.202165e+10 ... ... ... ... ... 3495 2019-06-20 2954.18 16.322408 6.065473e+10 3496 2019-06-21 2950.46 18.360368 6.235005e+10 3497 2019-06-24 2945.35 16.649664 6.137054e+10 3498 2019-06-25 2917.38 16.000000 6.287749e+10 3499 2019-06-26 2913.78 17.661006 6.207108e+10 [3500 rows x 4 columns] ds y tby_sqsq une_div_eps_vix_fta 3500 2019-06-27 2924.92 16.322408 6.038335e+10 3501 2019-06-28 2941.76 16.000000 5.888314e+10 3502 2019-07-01 2964.33 16.981817 5.581583e+10 3503 2019-07-02 2973.01 15.369536 5.327365e+10 3504 2019-07-03 2995.82 14.757891 5.215275e+10 ... ... ... ... ... 3893 2021-01-25 3855.36 1.215506 1.826285e+11 3894 2021-01-26 3849.62 1.215506 1.787428e+11 3895 2021-01-27 3750.77 1.169859 2.331807e+11 3896 2021-01-28 3787.38 1.310796 2.151189e+11 3897 2021-01-29 3714.24 1.518070 2.317696e+11 [398 rows x 4 columns]
MIT
scratch work/Yuqing-Data-Merge/Scenario2-v9.ipynb
thinkhow/Market-Prediction-with-Macroeconomics-features
Use prophet() to make predictions. We split the training set into train_1 and train_2 with a ratio of 40% vs 60%: train_1 is used to fit prophet(), which then predicts on train_2. Once we have those predictions, we feed the data into the Scenario 2 model and train again to get the parameters a, b, c, ....
# prophet part
from fbprophet import Prophet

start = 1000      # the number of initial data points for training
pred_size = 100   # predicted periods
num_winds = int((df_train.shape[0] - start) / pred_size)
pro_pred = []

# use accumulated data to predict the next pred_size data points
for i in range(num_winds):
    tmp_train = df_train.iloc[: start + i*pred_size].copy()
    fbp = Prophet(daily_seasonality=True)
    # fit close price using the fbprophet model
    fbp.fit(tmp_train[['ds', 'y']])
    # predict pred_size periods ahead and get the forecast price
    fut = fbp.make_future_dataframe(periods=pred_size)
    tmp_forecast = fbp.predict(fut)
    # only keep the forecast on the part beyond the temporary training data
    pred = tmp_forecast[start + i*pred_size:].yhat
    pro_pred.append(pred)

pro_pred
flat_pro_pred = [item for l1 in pro_pred for item in l1]

df.columns
df = pd.read_csv('df7.csv', parse_dates=['Date'])
df = df.rename(columns={"Date": "ds", "Close": "y"})
df['tby_sqsq'] = df['tby']**2
# df['eps_sqrt'] = np.sqrt(df['eps'])
df['une_div_vix'] = df['une'] * df['div'] * df['vix']
df = df.drop(columns=['tby', 'ffr', 'div', 'une', 'vix'])
df.columns

possible_features = ['fta', 'eps', 'tby_sqsq', 'une_div_vix']
df_train = df[:cutoff].copy()
df_test = df[cutoff:].copy()

from sklearn.linear_model import LinearRegression
reg = LinearRegression(fit_intercept=False, normalize=True, copy_X=True)
reg.fit(df_train[start:cutoff][possible_features], df_train[start:cutoff]['y'] - flat_pro_pred)

coef = []
for i in range(len(possible_features)):
    coef.append(np.round(reg.coef_[i], 5))
print(coef)

# Forecast the Test Data
from fbprophet import Prophet
test_time = int((1 - p) * len(df))
fbp = Prophet(daily_seasonality=True)
fbp.fit(df_train[['ds', 'y']])
fut = fbp.make_future_dataframe(periods=test_time)
forecast = fbp.predict(fut)
pred_test = forecast[cutoff:cutoff + test_time].yhat
pred_test = pred_test.ravel()
len(pred_test)

pp_test = pred_test.copy()          # predicted price on testing data
pp_train = np.array(flat_pro_pred)  # predicted price on training data (as an array so feature terms add element-wise)
for i in range(len(possible_features)):
    pp_test += coef[i] * df_test[df_test.columns[i+2]][:test_time].ravel()
    pp_train += coef[i] * df_train[df_train.columns[i+2]][start:].ravel()

from sklearn.metrics import mean_squared_error as MSE
# MSE for test data
# Actual close price: df_test[:test_time].y
# Predicted price by prophet: pred_test
# Predicted price by the tuned model: pp_test
mse1 = MSE(df_test[:test_time].y, pred_test)
mse2 = MSE(df_test[:test_time].y, pp_test)
print(mse1, mse2)

# MSE for train data
mse3 = MSE(df_train[start:].y, flat_pro_pred)
mse4 = MSE(df_train[start:].y, pp_train)
print(mse3, mse4)

train_pred_yhat = [np.nan for i in range(start)] + flat_pro_pred
train_pp_train = [np.nan for i in range(start)] + pp_train.tolist()
train_date = df_train[['ds']].to_numpy().ravel()
train_date
fc_train = pd.DataFrame(data={'ds': train_date, 'fbsp': train_pred_yhat, 'imsp': train_pp_train})
fc_train

m = len(forecast) - cutoff
test_pred_yhat = forecast.loc[cutoff:].yhat.copy().to_numpy().ravel()
test_date = df_test[['ds']][:m].to_numpy().ravel()
fc_test = pd.DataFrame(data={'ds': test_date, 'fbsp': test_pred_yhat, 'imsp': pp_test.tolist()})
fc_test

plt.figure(figsize=(18, 10))
# plot the training data
plt.plot(df_train.ds, df_train.y, 'b', label="Training Data")
plt.plot(df_train.ds, fc_train.imsp, 'g-', label="Improved Fitted Values")
# plot the fit
plt.plot(df_train.ds, fc_train.fbsp, 'r-', label="FB Fitted Values")
# plot the forecast
plt.plot(df_test[:m].ds, fc_test.fbsp, 'r--', label="FB Forecast")
plt.plot(df_test[:m].ds, fc_test.imsp, 'g--', label="Improved Forecast")
plt.plot(df_test[:m].ds, df_test[:m].y, 'b--', label="Test Data")

plt.legend(fontsize=14)
plt.xlabel("Date", fontsize=16)
plt.ylabel("SP&500 Close Price", fontsize=16)
plt.show()
_____no_output_____
MIT
scratch work/Yuqing-Data-Merge/Scenario2-v9.ipynb
thinkhow/Market-Prediction-with-Macroeconomics-features
Data Analysis in Python

In this session we will learn how to properly utilize python's [pandas](https://pandas.pydata.org/) library for data transforming, cleaning, filtering and exploratory data analysis.

Pandas

Python's Data Analysis Library

Python has long been great for data munging and preparation, but less so for data analysis and modeling. *Pandas* helps fill this gap, enabling you to carry out your entire data analysis workflow in Python. Pandas is built on top of *numpy*, aiming at providing higher-level functionality as well as a new data structure that works well with tabular data with heterogeneously-typed columns (e.g. Excel spreadsheets, SQL tables).

Data Structures

Pandas introduces two new data structures to Python: the **Series** and the **DataFrame**, both of which are built on top of NumPy.

Series

A **series** in *pandas* is a one-dimensional *ndarray* with axis labels. The axis labels are collectively referred to as the **index**. The labels allow us to refer to the elements in the series either by their position (like in a list or an array) or by their label (like in a dictionary).

The basic method to create a `pd.Series` is to call:

```python
s = pd.Series(data, index=index)
```

where *data* is most commonly a dictionary (where the keys will be used as the `index` and the values as the elements) or a `numpy.array`, and `index` is a *list* of labels.
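As a small illustration of the dictionary case mentioned above (this example is mine, not part of the original notebook), the dict keys become the index labels and the dict values become the data:

```python
import pandas as pd

# keys -> index labels, values -> data
prices = pd.Series({'apple': 1.2, 'banana': 0.5, 'cherry': 3.0})
print(prices['banana'])  # access by label -> 0.5
```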
from __future__ import print_function

import pandas as pd  # for simplicity we usually refer to pandas as pd
import numpy as np

# here we pass an explicit index (labels); without it pandas would create a default integer index
s = pd.Series([1,3,5,np.nan,6,8], index=['a', 'b', 'c', 'd', 'e', 'f'])
print(s)
a 1.0 b 3.0 c 5.0 d NaN e 6.0 f 8.0 dtype: float64
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Like arrays, a series can only have one `dtype` (in this case `float64`). As we mentioned previously, indexing elements in the *Series* can be done either through their position or through their label.
print(s[4])    # position
print(s['e'])  # label
6.0 6.0
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
If we don't set an `index` during the creation of the *Series*, the labels will be set to the position of each element.
s = pd.Series([1,3,5,np.nan,6,8])
print(s)
0 1.0 1 3.0 2 5.0 3 NaN 4 6.0 5 8.0 dtype: float64
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
The latter is the most common way a series is used. We can easily extract the underlying `np.array` containing just the values of the *Series*.
s.values # a np.array with the values of the Series
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
A **DataFrame** is a two-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet. It is organized in such a way that it is essentially a collection of `pd.Series`, where each series is a column. This way each column must have a **single** data type, but the data type can **differ from column to column**.

A *DataFrame* can have labels for both its rows and its columns, however we usually prefer to label **only the columns** and leave the rows to have their position as their labels. The easiest way to create a *DataFrame* is to pass in a dictionary of objects.
df = pd.DataFrame({'A' : 1,                                     # repeats the integer for the length of the dataframe
                   'B' : pd.Timestamp('20190330'),              # timestamp, repeated for the length of the dataframe
                   'C' : pd.Series(range(4), dtype='float32'),  # creates a series and uses it as a column
                   'D' : np.array([3] * 4, dtype='int32'),      # np.array as a column
                   'E' : pd.Categorical(["test","train","test","train"]),  # categorical data type
                   'F' : 'foo'})                                # string, repeated for the length of the data frame

df  # renders better in jupyter if we don't use print
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
DataFrame inspection

In most cases *DataFrames* are thousands of rows long, so we can't view all the data at once.

- Look at the **first** entries.
df.head() # prints first entries (by default 5)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
- Look at the **last** entries.
df.tail(3) # prints last 3 entries
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
- Look at entries at **random**.
df.sample(2) # prints two random entries
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Information about the *DataFrame*

The two main attributes of a *DataFrame* are:

- Its `shape`. *DataFrames* are always two-dimensional, so the only information this provides is the **number of rows and columns**.
- Its `dtypes`, which shows the data type of each of the columns.
print('shape:', df.shape)  # prints the shape of the dataframe
print(df.dtypes)           # prints the data type of each column
shape: (4, 6) A int64 B datetime64[ns] C float32 D int32 E category F object dtype: object
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Another important attribute of the *DataFrame* is the labelling on its rows and columns.
print('Row names: ', df.index)
print('Column names:', df.columns)
Row names: RangeIndex(start=0, stop=4, step=1) Column names: Index(['A', 'B', 'C', 'D', 'E', 'F'], dtype='object')
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Statistical summary of numeric columns

We can also easily view a statistical description of our data (only the columns with numeric data types).
df.describe() # only numerical features appear when doing this
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Indexing data

Since *DataFrames* support both indexing through labels and through position, we have two main ways of getting an item.

**Positional** indexing.

This is done through `.iloc`, which requires two arguments: the position of the desired element's row and the position of its column. `.iloc` essentially allows us to use the *DataFrame* as an array.
df.iloc[3, 2] # element in the 4th row of the 3rd column
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Slicing works the same way it does in *numpy*.
df.iloc[::2, -3:] # odd rows, last three columns
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
As does indexing through lists.
df.iloc[[0, 3], [1, 3, 4]] # 1st and 4th row; 2nd, 4th and 5th columns
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Indexing with labels

We can use the row and column labels to access an element through `.loc`. Remember, if we haven't assigned any labels to the rows, their labels will be the same as their position.
df.loc[3, 'C'] # element in the row with the label 3 and the column with the label 'C'
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Slicing also works!
df.loc[::2, 'B':'D'] # odd rows, columns 'B' through 'D'
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
And even indexing through lists.
df.loc[[0, 3], ['B', 'D', 'E']] # 1st and 4th row; columns 'B', 'D', and 'E'
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Note that `.loc` **included** `'D'` in its slice!

Without locators

Columns

Pandas offers an easier way of slicing one or more columns from a *DataFrame*.
df['B']              # get the column 'B'
df[['B', 'D', 'E']]  # get a slice of the columns 'B', 'D' and 'E'
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Note that if we slice a single column it will return a `pd.Series`, but if we slice more we'll get a `pd.DataFrame`.

If we wanted to get a `pd.DataFrame` with a single column we could use this syntax:
df[['B']] # get a dataframe containing only the column 'B'
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Pandas also allows us to slice columns with this syntax:

```python
df.B  # gets the column 'B'
# Equivalent to:
df['B']
```

However, it is **not** recommended!

Slicing rows

We can easily slice rows like this:

```python
df[:2]   # first two rows
df[-3:]  # last three rows
df[1:2]  # second row
```

However, if we try to index a single row, it will raise an error (because it will be looking for a column named 2).

```python
df[2]  # KeyError
# Instead use
df.loc[2]
# or
df.iloc[2]
```

Filtering

Pandas allows us to easily apply filters on the *DataFrame* with the same syntax we saw in the previous tutorial. Here it is a bit more intuitive, due to the naming scheme!

Single condition

Like in *numpy*, operations here (even logical) are performed element-wise and, if necessary, with broadcasting.
df['E'] == 'test'
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
If we use the result of the logical condition above as an index, pandas will filter the rows based on the `True` or `False` value.
df[df['E'] == 'test'] # keeps the rows that have a value equal to 'test' in column 'E'
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
This leads to a very intuitive and syntactically simple application of filters.

Combining multiple conditions

To combine the outcome of more than one logical condition we have to use the following symbols:

```python
(cond1) & (cond2)  # logical AND
(cond1) | (cond2)  # logical OR
~ (cond1)          # logical NOT
```

**Don't forget the parentheses!**
df[(df['C'] > 1) | (df['E'] == 'test')] # keeps the rows that have a value equal to 'test' # in column 'E' or a value larger than 1 in column 'C'
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Adding / Deleting Rows

To add a new row, we can use `.append()`.
# Adds a fifth row to the DataFrame:
df.append({'A': 3, 'B': pd.Timestamp('20190331'), 'C': 4.0, 'D': -3, 'E': 'train', 'F': 'bar'},
          ignore_index=True)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Note that the length and the data types should be compatible! Because this syntax isn't very convenient we usually **avoid using it** altogether.

Keep in mind that this operation **isn't performed inplace**. Instead it returns a copy of the *DataFrame*! If we want to make the append permanent, we can always assign it to itself.
df = df.append({'A': 3, 'B': pd.Timestamp('20190331'), 'C': 4.0, 'D': -3, 'E': 'train', 'F': 'bar'},
               ignore_index=True)
df
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Another option would be to add the row through `.loc`:

```python
df.loc[len(df)] = [3, pd.Timestamp('20190331'), 4.0, -3, 'train', 'bar']
```

To delete a row from a *DataFrame* we can use `.drop()`:

```python
# row_label: label of the row we want to delete

# Doesn't overwrite df, instead returns a copy:
df.drop(row_label)

# Overwrites df:
df = df.drop(row_label)
df.drop(row_label, inplace=True)
```
df = df.drop(2) # drops the third row from the dataframe
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Columns

We can add a new column in the *DataFrame* like we would an element in a dictionary. Just keep in mind that the dimensions must be compatible (e.g. we can't add 3 elements to a *DataFrame* with four rows).
df['G'] = [10, 22, -8, 13]
df
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
To delete a column we can, again, use `.drop(col_label, axis=1)`. The parameter `axis=1` tells pandas that we are looking to drop a column and that it should look for the key `col_label` in the columns.
df = df.drop('A', axis=1)  # drops the column with the label 'A'
df
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Sorting and rearranging

Transposing

This works exactly like in *numpy*.
df.T # not inplace
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Sorting

- By **value**
df = df.sort_values(by='G')  # sorts the DataFrame according to the values in column 'G'
df
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
**Caution**: when performing operations that rearrange the rows, the row labels will **no longer match** the row positions!

To solve this issue, we can reset the labels to match the positions:

```python
df.reindex()
```

This won't rearrange the *DataFrame* in any way; it will just **change the labelling of the rows**.

- By **index**
df = df.sort_index()
df
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
This **rearranged** the *DataFrame* so that the row labels are sorted!

By adding the argument `axis=1` we can perform these operations on the columns instead.
df.sort_index(axis=1, ascending=False) # sort columns so that their names are descending
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Statistical information

These work only for numerical values. A sample of them is presented below, while there are [many more](https://pandas.pydata.org/pandas-docs/stable/api.html#api-dataframe-stats) available.
print('Sum:')
print(df.sum())   # sum of each column
print('\nMean:')
print(df.mean())  # mean of each column
print('\nMin:')
print(df.min())   # minimum element of each column
print('\nMax:')
print(df.max())   # maximum element of each column
print('\nStandard deviation:')
print(df.std())   # standard deviation of each column
print('\nVariance:')
print(df.var())   # variance of each column
Sum: C 8.0 D 6.0 G 37.0 dtype: float64 Mean: C 2.00 D 1.50 G 9.25 dtype: float64 Min: B 2019-03-30 00:00:00 C 0 D -3 E test F bar G -8 dtype: object Max: B 2019-03-31 00:00:00 C 4 D 3 E train F foo G 22 dtype: object Standard deviation: C 1.825742 D 3.000000 G 12.579746 dtype: float64 Variance: C 3.333333 D 9.000000 G 158.250000 dtype: float64
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Keep in mind that, contrary to *numpy*, *pandas* by default ignores `np.nan` values when performing operations. HistogramsAnother very convenient functionality offered by *pandas* is to find the unique values of a *Series* and count each value's number of occurrences. ```pythonSeries.unique() returns an array of the unique values in a pd.SeriesSeries.value_counts() returns the unique values along with their number of occurrences```
df['E'].unique()
df['E'].value_counts()
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Applying functions

One of the most powerful methods offered is `.apply()`. There are actually two different things that can be done by this method, depending on whether it's called from a *DataFrame* or a *Series*.

*DataFrame.apply()*

When called from a *DataFrame*, `.apply()` applies a function to each of the *DataFrame's* columns **independently**. The built-in methods we saw previously produce similar results: the application of a function (e.g. `max`, `min`, `sum`) to every *DataFrame* column.

For example, how many **unique** values does each column have?
df['C'].unique()
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
The `len()` of this array shows *how many* unique values we have.
len(df['C'].unique())
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Now, can we apply this function to every column in the *DataFrame*?
# First, we need to write a function
def num_unique(series):
    # function that takes a series and returns the number of unique values it has
    return len(series.unique())

# Then apply it to each of the columns of the DataFrame
df.apply(num_unique)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
It is common to write simple functions like these as **lambda functions** to save space.
df.apply(lambda s: len(s.unique()))
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
*Series.apply()*

By calling `.apply()` from a *Series*, it applies the function to **each element** of the *Series* **independently**.

For example:
df['C'].apply(lambda x: x**x)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
The above line applies the function $f(x) = x^x$ to every element $x$ of `df['C']`. This can be used to create **more complicated** filters!

Advanced filtering with `.apply()`

To do this, all we have to do is create a function that returns `bool` values. For example, say we want to filter `df['B']` so that we keep entries whose day of month is `30`. First, we'll create a function that checks if an entry has day `30` or not.
df['B'].apply(lambda x: x.day == 30)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
The above is equivalent to:

```python
# Write a function that returns a bool value based on the condition we want to filter the dataframe with
def has_30_days(x):
    # returns True if x has day 30
    return x.day == 30

# Apply the function on column 'B'
df['B'].apply(has_30_days)
```

Once we have created the function, all we have to do is index the *DataFrame* with the result of the `.apply()`.
df[df['B'].apply(lambda x: x.day == 30)]
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Dealing with missing data

This is a very interesting topic, which we will revisit in more detail in a future tutorial. In short, there are a few easy ways we can quickly deal with missing data. The two main options are:

- Dropping missing data.
- Filling missing data.

Since *pandas* is built on top of *numpy*, missing data is represented with `np.nan` values. If missing values are encoded differently, they'll have to be converted to `np.nan`.

Let's first download a sample *DataFrame* and fill it with missing values.
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'  # where to download the data from
data = np.genfromtxt(url, delimiter=',', dtype='float', usecols=[0,1,2,3])        # load it into a numpy array
data[np.random.randint(150, size=20), np.random.randint(4, size=20)] = np.nan     # replace some values at random with np.nan
data = pd.DataFrame(data, columns=['A', 'B', 'C', 'D'])                           # load it into a dataframe
data.shape
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
This is a $150 \times 4$ *DataFrame* with several missing values. How can we tell how many there are and where they are?

Inspecting missing values

This can be done with `.isna()` or `.isnull()`. What's the difference between the two? Nothing at all ([here](https://datascience.stackexchange.com/a/37879/34269) is an explanation).

`DataFrame.isna()` checks every value one by one to see if it is `np.nan` or not. The only thing we have to do is aggregate the resulting *DataFrame*.
data.isna().any()                    # checks if a column has at least one missing value or not
data.isna().sum()                    # how many missing values per column
data.isna().sum() / len(data) * 100  # percentage of values missing per column
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Dropping missing values

There are two ways to drop a missing value:

- Drop its **row**.
- Drop its **column**.

Both can be accomplished through `.dropna()`.
tmp = data.dropna()        # drops rows with missing values
print(tmp.shape)
tmp = data.dropna(axis=1)  # drops columns with missing values
print(tmp.shape)
(131, 4) (150, 0)
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Note that these operations are **not inplace**! If we wanted to overwrite the original *DataFrame* we'd have to write:

```python
data = data.dropna()
# or
data.dropna(inplace=True)
```

This method also offers many more parameters for

- dropping rows that have missing values **only in specific columns** (`subset`)
- dropping rows that have **multiple missing values** (more than a threshold `thresh`)
- dropping rows (or columns) that have **all their values missing** (`how='all'`)

Filling missing values

This process is often referred to as **imputation**. In *pandas* it is done with `.fillna()` and can be accomplished in two ways: either fill the whole *DataFrame* with a single value or fill each column with its own value. The first is the easiest to implement.
tmp = data.fillna(999)  # fills any missing value in the DataFrame with 999
print('Mean values for the original DataFrame:\n', data.mean())
print('\nMean values for the imputed DataFrame:\n', tmp.mean())
Mean values for the original DataFrame: A 5.846622 B 3.053793 C 3.709859 D 1.180822 dtype: float64 Mean values for the imputed DataFrame: A 19.088667 B 36.252000 C 56.792000 D 27.789333 dtype: float64
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
The second way is a bit more interesting. We'll first need to create a dictionary (or something equivalent) telling *pandas* which value to use for each column.
fill_values = {'A': -999, 'B': 0, 'D': 999}  # note that we purposely ignored column 'C'
tmp = data.fillna(fill_values)
print('Mean values for the original DataFrame:\n', data.mean())
print('\nMean values for the imputed DataFrame:\n', tmp.mean())
print('\nNumber of missing values of the original DataFrame:\n', data.isna().sum())
print('\nNumber of missing values of the imputed DataFrame:\n', tmp.isna().sum())
Mean values for the original DataFrame: A 5.846622 B 3.053793 C 3.709859 D 1.180822 dtype: float64 Mean values for the imputed DataFrame: A -7.551333 B 2.952000 C 3.709859 D 27.789333 dtype: float64 Number of missing values of the original DataFrame: A 2 B 5 C 8 D 4 dtype: int64 Number of missing values of the imputed DataFrame: A 0 B 0 C 8 D 0 dtype: int64
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
One interesting thing we can do is impute the missing values based on a statistic. For example, impute each missing value with its column's mean.
tmp = data.fillna(data.mean())
print('Mean values for the original DataFrame:\n', data.mean())
print('\nMean values for the imputed DataFrame:\n', tmp.mean())
Mean values for the original DataFrame: A 5.846622 B 3.053793 C 3.709859 D 1.180822 dtype: float64 Mean values for the imputed DataFrame: A 5.846622 B 3.053793 C 3.709859 D 1.180822 dtype: float64
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Encoding data

Encoding is the process of converting columns containing alphanumeric values (`str`) to numeric ones (`int` or `float`). This, too, will be covered in more detail in a later tutorial (*why is it necessary? what ways are there? what are the benefits of each?*). However, we'll show two easy ways this can be accomplished through *pandas*.

Label encoding

This essentially means mapping each `str` value to an `int` one. One way to do this is to create a dictionary that maps each `str` to an `int` and use the built-in method `.map()`.
mapping_dict = {'train': 0, 'test': 1}
df['E'].map(mapping_dict)  # this is NOT inplace
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Or we could use `.apply()`.
df['E'].apply(lambda x: mapping_dict[x]) # NOT inplace
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
If we wanted to make the operations inplace we could simply write:

```python
mapping_dict = {'train': 0, 'test': 1}
df['E'] = df['E'].map(mapping_dict)                 # using map
# or
df['E'] = df['E'].apply(lambda x: mapping_dict[x])  # using apply
```

One-hot encoding

Also known as **dummy encoding**, this technique is a bit more complicated. To one-hot encode a column, we have to create as many new columns as there are unique values in the original column. Each of those represents one of the unique values. For each entry, we check the original value and set the corresponding new column to $1$, while the rest are set to $0$. An illustration of the process can be seen in the figure below.

![](https://i.imgur.com/mtimFxh.png)

The good news is that in *pandas* it is easier than it looks!
pd.get_dummies(df) # only columns 'E' and 'F' need to be encoded
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
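`pd.get_dummies()` also takes a few useful optional arguments. A hedged sketch: `columns` restricts the encoding to specific columns, `prefix` controls how the new columns are named, and `drop_first` removes one dummy per encoded column (the dropped value is implied by the others being $0$).

```python
# Sketch: one-hot encode only column 'E', prefix the new columns with 'E',
# and drop the first dummy to avoid a redundant column (NOT inplace).
pd.get_dummies(df, columns=['E'], prefix='E', drop_first=True)
```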
Again, this operation is **not** in place.

## Pivot tables

Pivot tables can provide important insight into the relationship between two or more variables. *Pandas* actually offers two ways to generate pivot tables: a dedicated function, `pd.pivot_table()`, and the *DataFrame* method `.pivot()`. The first is **highly recommended** because it can aggregate duplicate values.
df2 = pd.DataFrame({'A': ['foo'] * 6 + ['bar'] * 4,
                    'B': ['one'] * 4 + ['two'] * 2 + ['one'] * 2 + ['two'] * 2,
                    'C': ['small', 'large'] * 5,
                    'D': [1, 2, 2, 2, 3, 3, 4, 5, 6, 7]})
df2
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
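To see why `pd.pivot_table()` is recommended over `.pivot()`, note that `df2` contains several rows with the same `('A', 'B')` combination. A hedged sketch of the difference: `.pivot()` can't decide which of the duplicate values to place in a cell and raises an error, while `pd.pivot_table()` aggregates the duplicates.

```python
# Sketch: .pivot() fails on duplicate (index, columns) pairs; pivot_table aggregates them.
try:
    df2.pivot(index='A', columns='B', values='D')
except ValueError as err:
    print('pivot failed:', err)

pd.pivot_table(df2, index='A', columns='B', values='D')  # duplicates are averaged by default
```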
A pivot table requires 3 things:
- `index`: a column whose values become the **rows** of the pivot table.
- `columns`: a column whose values become the **columns** of the pivot table.
- `values`: a column whose values are **aggregated** and placed into the grid defined by the rows and columns of the pivot table.
pd.pivot_table(df2, index='A', columns='B', values='D')
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
The default aggregation function is `np.mean`. How is each position in the grid calculated?

The first element in the pivot table corresponds to `A == 'bar'` and `B == 'one'`. How many values match these criteria?
df2[(df2['A'] == 'bar') & (df2['B'] == 'one')][['D']]
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
We said that by default *pandas* uses `np.mean` as its aggregator, so:
df2[(df2['A'] == 'bar') & (df2['B'] == 'one')]['D'].mean()
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Similarly, the second element in the pivot table has `A == 'bar'` and `B == 'two'`. So its value will be:
df2[(df2['A'] == 'bar') & (df2['B'] == 'two')]['D'].mean()
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Now, what if we want to change the aggregation function to something else, say `np.sum`?
pd.pivot_table(df2, index='A', columns='B', values='D', aggfunc=np.sum)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
This simply sums the values of `'D'` that correspond to each position in the pivot table.

Another interesting choice for an aggregator is `len`. This will **count** the number of values in each position of the grid **instead of aggregating their values**, which means the `values` argument is essentially irrelevant when using `aggfunc=len`.
pd.pivot_table(df2, index='A', columns='B', values='D', aggfunc=len)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Creating custom functions for aggregation is also an option. For instance, if we want to count the number of **unique values** per position:
pd.pivot_table(df2, index='A', columns='B', values='D', aggfunc=lambda x: len(x.unique()))
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
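The same result can also be obtained without a lambda by passing a ready-made function such as `pd.Series.nunique` as the aggregator; a small sketch:

```python
# Sketch: count the unique values of 'D' per cell using nunique instead of a lambda.
pd.pivot_table(df2, index='A', columns='B', values='D', aggfunc=pd.Series.nunique)
```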
Multi-index pivot tables are also an option, but we won't go into any more detail here.
pd.pivot_table(df2, index=['A', 'B'], columns='C', values='D', aggfunc=np.sum)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
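One more option worth a brief, hedged sketch: `margins=True` adds an `All` row and column containing the aggregate over everything, which is convenient for totals.

```python
# Sketch: add grand totals (the 'All' row/column) to the multi-index pivot table.
# numpy is assumed to be imported as np, as in the cells above.
pd.pivot_table(df2, index=['A', 'B'], columns='C', values='D', aggfunc=np.sum, margins=True)
```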
## Merging DataFrames

This is the process of combining two or more *DataFrames* into one. *Pandas* offers multiple ways of performing such a merger. Let's first create two *DataFrames* that share **only some** of their rows and columns.
df3 = pd.DataFrame({'A': ['df3'] * 4,
                    'B': ['df3'] * 4,
                    'C': ['df3'] * 4,
                    'D': ['df3'] * 4})
df3
df4 = pd.DataFrame({'B': ['df4'] * 4,
                    'D': ['df4'] * 4,
                    'F': ['df4'] * 4},
                   index=[2, 3, 6, 7])
df4
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
`df3` and `df4` have only two columns (`'B'` and `'D'`) and two rows (`2` and `3`) in common.

### Concatenation

Concatenating these two *DataFrames* is the simplest option and can be performed with `pd.concat()`. As we saw in the previous tutorial, there are two ways we can perform the concatenation:
- along the **rows** (`axis=0`), which would produce a *DataFrame* with $4 + 4 = 8$ rows
- along the **columns** (`axis=1`), which would produce a *DataFrame* with $4 + 3 = 7$ columns

Let's try the first.
pd.concat([df3, df4], sort=False)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
This concatenation did append the rows of the second *DataFrame* under the first one, but the columns are out of alignment. Why is this?

It happens because *pandas* used the column names to decide which columns to join: `df4['B']` went under `df3['B']` and `df4['D']` went under `df3['D']`, but the remaining columns don't match. *Pandas* resolved this by adding column `'F'` to `df3` and columns `'A'` and `'C'` to `df4`, filling them with `nan` values, and then performing the merger as if both *DataFrames* were $4 \times 5$. This type of merger is called an **outer join**, and it is the default for `pd.concat()`. Also note that the rows with labels `2` and `3` now appear twice in the result. In contrast, an **inner join** keeps only the columns that exist in **both** *DataFrames* and discards the rest.
pd.concat([df3, df4], join='inner', sort=False)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
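If the duplicated row labels (`2` and `3`) are a problem, a couple of hedged options: `ignore_index=True` gives the concatenated result a fresh `0..n-1` index, while `keys=` records which *DataFrame* each row came from in a hierarchical index.

```python
# Sketch: avoid ambiguous row labels after a row-wise concatenation.
pd.concat([df3, df4], sort=False, ignore_index=True)    # fresh 0..7 index
pd.concat([df3, df4], sort=False, keys=['df3', 'df4'])  # hierarchical (source, label) index
```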
The same things can be said about concatenating along the columns.
pd.concat([df3, df4], axis=1, sort=False)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Again, the rows that didn't exist in each *DataFrame* (i.e. `6` and `7` in `df3`, and `0` and `1` in `df4`) were created, the columns now have duplicate names (`'B'` and `'D'` appear twice), and all non-existing values were set to `nan`.

An inner join would look like this:
pd.concat([df3, df4], join='inner', axis=1, sort=False)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
What if we just wanted to concatenate the *DataFrames* like we did in *numpy*, i.e. join the rows regardless of their labels? To do this, we'd have to change the row labels of `df4` to match those of `df3`.
tmp = df4.copy()       # create a temporary DataFrame so that we don't overwrite df4
tmp.index = df3.index  # change the index of df4 so that it's identical to df3
pd.concat([df3, tmp], axis=1, sort=False)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
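An equivalent, slightly tidier sketch (assuming we don't care about the original row labels) is to reset both indices to the default `0..n-1` labels instead of copying one index onto the other.

```python
# Sketch: numpy-style column-wise concatenation by discarding both indices first.
pd.concat([df3.reset_index(drop=True), df4.reset_index(drop=True)], axis=1, sort=False)
```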
### SQL-type joins

As we might have guessed from the previous step, *pandas* supports SQL-type joins. The merger is performed on specific columns of both *DataFrames* (referred to as *keys*) or on the row labels (like we did before). There are four types of joins:
- **outer**, which, as we saw before, uses the **union of the keys** of the two *DataFrames*. The rows of the merger will be the rows that exist in both *DataFrames* (i.e. `2` and `3`), the rows that exist only in the first *DataFrame* (i.e. `0` and `1`) and the rows that exist only in the second *DataFrame* (i.e. `6` and `7`).
- **inner**, which, like before, uses the **intersection of the keys** of the two *DataFrames*. Here the rows of the merger are only those existing in both *DataFrames* (i.e. `2` and `3`).
- **left**, which only keeps the keys of the **first** *DataFrame*. The rows will be the keys of the first *DataFrame* (i.e. `0`, `1`, `2` and `3`).
- **right**, which only keeps the keys of the **second** *DataFrame*. The rows will be the keys of the second *DataFrame* (i.e. `2`, `3`, `6` and `7`).

In all cases, by default, **all columns are kept**. They will, however, be renamed if necessary so that there aren't any duplicate column names.
# the last two parameters instruct pandas to use the row labels as the keys
pd.merge(df3, df4, how='outer', left_index=True, right_index=True)
pd.merge(df3, df4, how='inner', left_index=True, right_index=True)
pd.merge(df3, df4, how='left', left_index=True, right_index=True)
pd.merge(df3, df4, how='right', left_index=True, right_index=True)
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
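For index-based joins like these, the *DataFrame* method `.join()` is a common shorthand. A hedged sketch: because `df3` and `df4` share the column names `'B'` and `'D'`, the overlapping columns need explicit suffixes.

```python
# Sketch: .join() merges on the index; lsuffix/rsuffix disambiguate the shared columns.
df3.join(df4, how='outer', lsuffix='_df3', rsuffix='_df4')
```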
"Group By" processBy “group by” we are referring to a process involving one or more of the following steps:- **Splitting** the data into groups based on some criteria.- **Applying** a function to each group independently. - **Aggregation**: compute a statistical summary of each group. - **Transformation**: perform an operation that alters the values in one or more groups. - **Filtration**: disregard some groups based on a group-wise computation.- **Combining** the results into a data structure.We'll use `df2` to illustrate this process.
df2
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
### Splitting the data

This step **partitions** the data into **subsets**, based on the values of a column.
grouped = df2.groupby(['A'])
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
Since `df2['A']` can take only two values (`'foo'` and `'bar'`), this is roughly equivalent to:
df2[df2['A'] == 'foo']
df2[df2['A'] == 'bar']
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
However, `groupby` **doesn't** actually perform the partitioning up front; it does so only when required by the next steps. How can we access the groups?
grouped.groups
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
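Another way to see the lazy grouping in action is to iterate over the groupby object; each iteration yields a `(key, sub-DataFrame)` pair. A small sketch:

```python
# Sketch: iterate over the groups created by groupby.
for name, group in grouped:
    print(name, '->', len(group), 'rows')
```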
This returns a dictionary with the unique values of `'A'` as its keys and the row indices that correspond to each key as its values. If we know which key we want to use, we can manually extract the corresponding partition of the data.
grouped.get_group('foo')
_____no_output_____
MIT
notebooks/16_pandas.ipynb
sniafas/python_ml_tutorial
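To close the loop on the split-apply-combine idea described above, a hedged sketch of the "apply" and "combine" steps: aggregate column `'D'` within each group and let *pandas* combine the results into a single object.

```python
# Sketch: apply an aggregation to each group and combine the results.
grouped['D'].mean()                                   # mean of 'D' per value of 'A'
df2.groupby('A')['D'].agg(['mean', 'sum', 'count'])   # several statistics at once
```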