# In-Class Coding Lab: Lists

The goals of this lab are to help you understand:

- List indexing and slicing
- List methods such as insert, append, find, delete
- How to iterate over lists with loops

## Python Lists Work Like Real-Life Lists

In real life, we make lists all the time. To-do lists. Shopping lists. Reading lists. These lists are collections of items. For example, here's my shopping list:

```
Milk, Eggs, Bread, Beer
```

There are 4 items in this list. Likewise, we can make a similar list in Python and count the number of items in the list using the `len()` function:
shopping_list = ['Milk', 'Eggs', 'Bread', 'Beer']
item_count = len(shopping_list)
print("List: %s has %d items" % (shopping_list, item_count))
List: ['Milk', 'Eggs', 'Bread', 'Beer'] has 4 items
MIT
content/lessons/09/Class-Coding-Lab/CCL-Lists.ipynb
jvrecca-su/ist256project
## Enumerating Your List Items

In real life, we *enumerate* lists all the time. We go through the items on our list one at a time and make a decision, for example: "Did I add that to my shopping cart yet?" In Python we go through the items in our lists with the `for` loop. We use `for` because the number of items is pre-determined, and thus a **definite** loop is the appropriate choice. Here's an example:
for item in shopping_list:
    print("I need to buy some %s " % (item))
I need to buy some Milk I need to buy some Eggs I need to buy some Bread I need to buy some Beer
MIT
content/lessons/09/Class-Coding-Lab/CCL-Lists.ipynb
jvrecca-su/ist256project
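Python also has a built-in `enumerate()` function that pairs each item with its index as you loop, which matches the real-life idea of "enumerating" a list. A minimal sketch, not part of the original lab, using the same shopping list:

```python
# enumerate() yields (index, item) pairs
shopping_list = ['Milk', 'Eggs', 'Bread', 'Beer']
for i, item in enumerate(shopping_list):
    print("Item %d on my list is %s" % (i, item))
```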
## Now You Try It!

Write code in the space below to print each stock on its own line.
stocks = ['IBM', 'AAPL', 'GOOG', 'MSFT', 'TWTR', 'FB']
#TODO: Write code here
for item in stocks:
    print(item)
IBM AAPL GOOG MSFT TWTR FB
MIT
content/lessons/09/Class-Coding-Lab/CCL-Lists.ipynb
jvrecca-su/ist256project
## Indexing Lists

Sometimes we refer to our items by their place in the list. For example, "Milk is the first item on the list" or "Beer is the last item on the list." We can also do this in Python, and it is called *indexing* the list. **IMPORTANT:** The first item in a Python list is at index **0**.
print("The first item in the list is:", shopping_list[0]) print("The last item in the list is:", shopping_list[3]) print("This is also the last item in the list:", shopping_list[-1]) print("This is the second to last item in the list:", shopping_list[-2])
The first item in the list is: Milk The last item in the list is: Beer This is also the last item in the list: Beer This is the second to last item in the list: Bread
MIT
content/lessons/09/Class-Coding-Lab/CCL-Lists.ipynb
jvrecca-su/ist256project
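The lab goals also mention *slicing*, which is not shown above. Slicing uses `list[start:stop]` to pull out a sub-list (the `stop` index is not included). A small sketch, not part of the original lab, using the same shopping list:

```python
shopping_list = ['Milk', 'Eggs', 'Bread', 'Beer']
print(shopping_list[0:2])   # ['Milk', 'Eggs'] -- items at index 0 and 1
print(shopping_list[1:])    # ['Eggs', 'Bread', 'Beer'] -- from index 1 to the end
print(shopping_list[:-1])   # ['Milk', 'Eggs', 'Bread'] -- everything but the last item
```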
## For Loop with Index

You can also loop through your Python list using an index. In this case we use the `range()` function to determine how many times we should loop:
for i in range(len(shopping_list)):
    print("I need to buy some %s " % (shopping_list[i]))
I need to buy some Milk I need to buy some Eggs I need to buy some Bread I need to buy some Beer
MIT
content/lessons/09/Class-Coding-Lab/CCL-Lists.ipynb
jvrecca-su/ist256project
## Now You Try It!

Write code to print the 2nd and 4th stocks in the list variable `stocks`. For example: `AAPL MSFT`
#TODO: Write code here print("This is the second stock in the list:", stocks[1]) print("This is the fourth stock in the list:", stocks[3])
This is the second stock in the list: AAPL This is the fourth stock in the list: MSFT
MIT
content/lessons/09/Class-Coding-Lab/CCL-Lists.ipynb
jvrecca-su/ist256project
## Lists are Mutable

Unlike strings, lists are mutable. This means we can change a value in the list. For example, I want `'Craft Beer'`, not just `'Beer'`:
print(shopping_list)
shopping_list[-1] = 'Craft Beer'
print(shopping_list)
['Milk', 'Eggs', 'Bread', 'Beer'] ['Milk', 'Eggs', 'Bread', 'Craft Beer']
MIT
content/lessons/09/Class-Coding-Lab/CCL-Lists.ipynb
jvrecca-su/ist256project
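To see the contrast with strings, which are *immutable*, you can try the same kind of item assignment on a string; it raises an error instead of changing the value. A quick sketch, not part of the original lab:

```python
word = 'Beer'
try:
    word[0] = 'D'   # strings cannot be changed in place
except TypeError as e:
    print("Strings are immutable:", e)
```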
## List Methods

In your readings and class lecture, you encountered some list methods. These allow us to manipulate the list by adding or removing items.
print("Shopping List: %s" %(shopping_list)) print("Adding 'Cheese' to the end of the list...") shopping_list.append('Cheese') #add to end of list print("Shopping List: %s" %(shopping_list)) print("Adding 'Cereal' to position 0 in the list...") shopping_list.insert(0,'Cereal') # add to the beginning of the list (position 0) print("Shopping List: %s" %(shopping_list)) print("Removing 'Cheese' from the list...") shopping_list.remove('Cheese') # remove 'Cheese' from the list print("Shopping List: %s" %(shopping_list)) print("Removing item from position 0 in the list...") del shopping_list[0] # remove item at position 0 print("Shopping List: %s" %(shopping_list))
Shopping List: ['Milk', 'Eggs', 'Bread', 'Craft Beer'] Adding 'Cheese' to the end of the list... Shopping List: ['Milk', 'Eggs', 'Bread', 'Craft Beer', 'Cheese'] Adding 'Cereal' to position 0 in the list... Shopping List: ['Cereal', 'Milk', 'Eggs', 'Bread', 'Craft Beer', 'Cheese'] Removing 'Cheese' from the list... Shopping List: ['Cereal', 'Milk', 'Eggs', 'Bread', 'Craft Beer'] Removing item from position 0 in the list... Shopping List: ['Milk', 'Eggs', 'Bread', 'Craft Beer']
MIT
content/lessons/09/Class-Coding-Lab/CCL-Lists.ipynb
jvrecca-su/ist256project
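The lab goals mention a "find" operation; in Python that is the `index()` method, and `pop()` is another common way to remove an item by position. A brief sketch of both, not part of the original lab code:

```python
groceries = ['Milk', 'Eggs', 'Bread', 'Craft Beer']
position = groceries.index('Bread')   # "find": returns 2, or raises ValueError if not found
removed = groceries.pop(position)     # removes and returns the item at that position
print(position, removed, groceries)
```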
## Now You Try It!

Write a program to remove the following stocks: `IBM` and `TWTR`. Then add the stock `NFLX` to the end and the stock `TSLA` to the beginning. Print your list when you are done. It should look like this:

`['TSLA', 'AAPL', 'GOOG', 'MSFT', 'FB', 'NFLX']`
# TODO: Write Code here
print("Stocks: %s" % (stocks))

print('Removing Stocks: IBM, TWTR')
stocks.remove('IBM')
stocks.remove('TWTR')
print("Stocks: %s" % (stocks))

print('Adding Stock to End:NFLX')
stocks.append('NFLX')
print("Stocks: %s" % (stocks))

print('Adding Stock to Beginning:TSLA')
stocks.insert(0, 'TSLA')
print("Final Stocks: %s" % (stocks))
Stocks: ['IBM', 'AAPL', 'GOOG', 'MSFT', 'TWTR', 'FB'] Removing Stocks: IBM, TWTR Stocks: ['AAPL', 'GOOG', 'MSFT', 'FB'] Adding Stock to End:NFLX Stocks: ['AAPL', 'GOOG', 'MSFT', 'FB', 'NFLX'] Adding Stock to Beginning:TSLA Final Stocks: ['TSLA', 'AAPL', 'GOOG', 'MSFT', 'FB', 'NFLX']
MIT
content/lessons/09/Class-Coding-Lab/CCL-Lists.ipynb
jvrecca-su/ist256project
## Sorting

Since lists are mutable, you can use the `sort()` method to re-arrange the items in the list alphabetically (or numerically, if it's a list of numbers).
print("Before Sort:", shopping_list) shopping_list.sort() print("After Sort:", shopping_list)
Before Sort: ['Milk', 'Eggs', 'Bread', 'Craft Beer'] After Sort: ['Bread', 'Craft Beer', 'Eggs', 'Milk']
MIT
content/lessons/09/Class-Coding-Lab/CCL-Lists.ipynb
jvrecca-su/ist256project
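Note that `sort()` re-arranges the list in place and returns `None`, while the built-in `sorted()` function leaves the original list alone and returns a new sorted list. A small sketch of the difference, not part of the original lab:

```python
items = ['Eggs', 'Milk', 'Bread']
print(sorted(items))       # ['Bread', 'Eggs', 'Milk'] -- a new sorted list
print(items)               # ['Eggs', 'Milk', 'Bread'] -- original order unchanged
items.sort(reverse=True)   # sorts in place, descending
print(items)               # ['Milk', 'Eggs', 'Bread']
```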
## Putting It All Together

Winning lotto numbers: when the lotto numbers are drawn they come out in any order, but when they are presented they're always sorted. Let's write a program to input 5 numbers and then output them sorted.

```
1. for i in range(5)
2. input a number
3. append the number you input to the lotto_numbers list
4. sort the lotto_numbers list
5. print the lotto_numbers list like this: 'today's winning numbers are [1, 5, 17, 34, 56]'
```
## TODO: Write program here:
lotto_numbers = []   # start with an empty list
for i in range(5):
    number = int(input("Enter a number: "))
    lotto_numbers.append(number)
lotto_numbers.sort()
print("Today's winning lotto numbers are", lotto_numbers)
Enter a number: 12 Enter a number: 15 Enter a number: 22 Enter a number: 9 Enter a number: 4 Today's winning lotto numbers are [4, 9, 12, 15, 22]
MIT
content/lessons/09/Class-Coding-Lab/CCL-Lists.ipynb
jvrecca-su/ist256project
# Composing a pipeline from reusable, pre-built, and lightweight components

This tutorial describes how to build a Kubeflow pipeline from reusable, pre-built, and lightweight components. The following summarizes the steps involved in creating and using a reusable component:

- Write the program that contains your component’s logic. The program must use files and command-line arguments to pass data to and from the component.
- Containerize the program.
- Write a component specification in YAML format that describes the component for the Kubeflow Pipelines system.
- Use the Kubeflow Pipelines SDK to load your component, use it in a pipeline, and run that pipeline.

Then we will compose a pipeline from a reusable component, a pre-built component, and a lightweight component. The pipeline will perform the following steps:

- Train an MNIST model and export it to Google Cloud Storage.
- Deploy the exported TensorFlow model on the AI Platform Prediction service.
- Test the deployment by calling the endpoint with test data.

Note: If you want to build the image locally, ensure that you have Docker installed by running `which docker`. The result should be something like `/usr/bin/docker`.
import kfp
import kfp.gcp as gcp
import kfp.dsl as dsl
import kfp.compiler as compiler
import kfp.components as comp
import datetime

import kubernetes as k8s

# Required Parameters
PROJECT_ID = '<ADD GCP PROJECT HERE>'
GCS_BUCKET = 'gs://<ADD STORAGE LOCATION HERE>'
_____no_output_____
Apache-2.0
samples/tutorials/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
eedorenko/pipelines
## Create client

If you run this notebook **outside** of a Kubeflow cluster, create the client as follows, providing:

- `host`: The URL of your Kubeflow Pipelines instance, for example "https://<...>.endpoints.<...>.cloud.goog/pipeline"
- `client_id`: The client ID used by Identity-Aware Proxy
- `other_client_id`: The client ID used to obtain the auth codes and refresh tokens.
- `other_client_secret`: The client secret used to obtain the auth codes and refresh tokens.

```python
client = kfp.Client(host, client_id, other_client_id, other_client_secret)
```

If you run this notebook **within** a Kubeflow cluster, use:

```python
client = kfp.Client()
```

You'll need to create OAuth client ID credentials of type `Other` to get `other_client_id` and `other_client_secret`. Learn more about [creating OAuth credentials](https://cloud.google.com/iap/docs/authentication-howto#authenticating_from_a_desktop_app).
# Optional Parameters, but required for running outside Kubeflow cluster # The host for 'AI Platform Pipelines' ends with 'pipelines.googleusercontent.com' # The host for pipeline endpoint of 'full Kubeflow deployment' ends with '/pipeline' # Examples are: # https://7c021d0340d296aa-dot-us-central2.pipelines.googleusercontent.com # https://kubeflow.endpoints.kubeflow-pipeline.cloud.goog/pipeline HOST = '<ADD HOST NAME TO TALK TO KUBEFLOW PIPELINE HERE>' # For 'full Kubeflow deployment' on GCP, the endpoint is usually protected through IAP, therefore the following # will be needed to access the endpoint. CLIENT_ID = '<ADD OAuth CLIENT ID USED BY IAP HERE>' OTHER_CLIENT_ID = '<ADD OAuth CLIENT ID USED TO OBTAIN AUTH CODES HERE>' OTHER_CLIENT_SECRET = '<ADD OAuth CLIENT SECRET USED TO OBTAIN AUTH CODES HERE>' # This is to ensure the proper access token is present to reach the end point for 'AI Platform Pipelines' # If you are not working with 'AI Platform Pipelines', this step is not necessary ! gcloud auth print-access-token # Create kfp client in_cluster = True try: k8s.config.load_incluster_config() except: in_cluster = False pass if in_cluster: client = kfp.Client() else: if HOST.endswith('googleusercontent.com'): CLIENT_ID = None OTHER_CLIENT_ID = None OTHER_CLIENT_SECRET = None client = kfp.Client(host=HOST, client_id=CLIENT_ID, other_client_id=OTHER_CLIENT_ID, other_client_secret=OTHER_CLIENT_SECRET)
_____no_output_____
Apache-2.0
samples/tutorials/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
eedorenko/pipelines
## Build reusable components

### Writing the program code

The following cell creates a file `app.py` that contains a Python script. The script downloads the MNIST dataset, trains a neural-network-based classification model, writes the training log, and exports the trained model to Google Cloud Storage.

Your component can create outputs that downstream components can use as inputs. Each output must be a string, and the container image must write each output to a separate local text file. For example, if a training component needs to output the path of the trained model, the component writes the path into a local file such as `/output.txt`.
%%bash # Create folders if they don't exist. mkdir -p tmp/reuse_components_pipeline/mnist_training # Create the Python file that lists GCS blobs. cat > ./tmp/reuse_components_pipeline/mnist_training/app.py <<HERE import argparse from datetime import datetime import tensorflow as tf parser = argparse.ArgumentParser() parser.add_argument( '--model_path', type=str, required=True, help='Name of the model file.') parser.add_argument( '--bucket', type=str, required=True, help='GCS bucket name.') args = parser.parse_args() bucket=args.bucket model_path=args.model_path model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(512, activation=tf.nn.relu), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10, activation=tf.nn.softmax) ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) print(model.summary()) mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 callbacks = [ tf.keras.callbacks.TensorBoard(log_dir=bucket + '/logs/' + datetime.now().date().__str__()), # Interrupt training if val_loss stops improving for over 2 epochs tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'), ] model.fit(x_train, y_train, batch_size=32, epochs=5, callbacks=callbacks, validation_data=(x_test, y_test)) from tensorflow import gfile gcs_path = bucket + "/" + model_path # The export require the folder is new if gfile.Exists(gcs_path): gfile.DeleteRecursively(gcs_path) tf.keras.experimental.export_saved_model(model, gcs_path) with open('/output.txt', 'w') as f: f.write(gcs_path) HERE
_____no_output_____
Apache-2.0
samples/tutorials/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
eedorenko/pipelines
## Create a Docker container

Create your own container image that includes your program.

### Creating a Dockerfile

Now create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The `FROM` statement specifies the base image from which you are building. `WORKDIR` sets the working directory. When you assemble the Docker image, `COPY` copies the required files and directories (for example, `app.py`) into the file system of the container. `RUN` executes a command (for example, installing dependencies) and commits the results.
%%bash

# Create Dockerfile.
# AI Platform only supports TensorFlow 1.14
cat > ./tmp/reuse_components_pipeline/mnist_training/Dockerfile <<EOF
FROM tensorflow/tensorflow:1.14.0-py3
WORKDIR /app
COPY . /app
EOF
_____no_output_____
Apache-2.0
samples/tutorials/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
eedorenko/pipelines
### Build the Docker image

Now that we have created the Dockerfile, we need to build the image and push it to a registry that will host it. There are three possible options:

- Use `kfp.containers.build_image_from_working_dir` to build the image and push it to the Container Registry (GCR). This requires [kaniko](https://cloud.google.com/blog/products/gcp/introducing-kaniko-build-container-images-in-kubernetes-and-google-container-builder-even-without-root-access), which is auto-installed with a 'full Kubeflow deployment' but not with 'AI Platform Pipelines'.
- Use [Cloud Build](https://cloud.google.com/cloud-build), which requires setting up a GCP project and enabling the corresponding API. If you are working with GCP 'AI Platform Pipelines' in a running GCP project, Cloud Build is recommended.
- Use [Docker](https://www.docker.com/get-started) installed locally and push to a registry such as GCR.

**Note**: If you run this notebook **within a Kubeflow cluster**, **with Kubeflow version >= 0.7**, and are exploring the **kaniko option**, you need to ensure that valid credentials are created within your notebook's namespace.

- With Kubeflow version >= 0.7, the credential is supposed to be copied automatically while creating the notebook through `Configurations`, which didn't work properly at the time this notebook was created.
- You can also add credentials to the new namespace by either [copying credentials from an existing Kubeflow namespace, or by creating a new service account](https://www.kubeflow.org/docs/gke/authentication/kubeflow-v0-6-and-before-gcp-service-account-key-as-secret).
- The following demonstrates how to copy the default secret to your own namespace:

```bash
%%bash
NAMESPACE=
SOURCE=kubeflow
NAME=user-gcp-sa
SECRET=$(kubectl get secrets \${NAME} -n \${SOURCE} -o jsonpath="{.data.\${NAME}\.json}" | base64 -D)
kubectl create -n \${NAMESPACE} secret generic \${NAME} --from-literal="\${NAME}.json=\${SECRET}"
```
IMAGE_NAME="mnist_training_kf_pipeline" TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)" GCR_IMAGE="gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}".format( PROJECT_ID=PROJECT_ID, IMAGE_NAME=IMAGE_NAME, TAG=TAG ) APP_FOLDER='./tmp/reuse_components_pipeline/mnist_training/' # In the following, for the purpose of demonstration # Cloud Build is choosen for 'AI Platform Pipelines' # kaniko is choosen for 'full Kubeflow deployment' if HOST.endswith('googleusercontent.com'): # kaniko is not pre-installed with 'AI Platform Pipelines' import subprocess # ! gcloud builds submit --tag ${IMAGE_NAME} ${APP_FOLDER} cmd = ['gcloud', 'builds', 'submit', '--tag', GCR_IMAGE, APP_FOLDER] build_log = (subprocess.run(cmd, stdout=subprocess.PIPE).stdout[:-1].decode('utf-8')) print(build_log) else: if kfp.__version__ <= '0.1.36': # kfp with version 0.1.36+ introduce broken change that will make the following code not working import subprocess builder = kfp.containers._container_builder.ContainerBuilder( gcs_staging=GCS_BUCKET + "/kfp_container_build_staging" ) kfp.containers.build_image_from_working_dir( image_name=GCR_IMAGE, working_dir=APP_FOLDER, builder=builder ) else: raise("Please build the docker image use either [Docker] or [Cloud Build]")
_____no_output_____
Apache-2.0
samples/tutorials/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
eedorenko/pipelines
### If you want to use Docker to build the image

Run the following in a cell:

```bash
%%bash -s "{PROJECT_ID}"

IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"

# Create script to build docker image and push it.
cat > ./tmp/components/mnist_training/build_image.sh <<HERE
PROJECT_ID="${1}"
IMAGE_NAME="${IMAGE_NAME}"
TAG="${TAG}"
GCR_IMAGE="gcr.io/\${PROJECT_ID}/\${IMAGE_NAME}:\${TAG}"
docker build -t \${IMAGE_NAME} .
docker tag \${IMAGE_NAME} \${GCR_IMAGE}
docker push \${GCR_IMAGE}
docker image rm \${IMAGE_NAME}
docker image rm \${GCR_IMAGE}
HERE

cd tmp/components/mnist_training
bash build_image.sh
```
image_name = GCR_IMAGE
_____no_output_____
Apache-2.0
samples/tutorials/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
eedorenko/pipelines
## Writing your component definition file

To create a component from your containerized program, you must write a component specification in YAML that describes the component for the Kubeflow Pipelines system.

For the complete definition of a Kubeflow Pipelines component, see the [component specification](https://www.kubeflow.org/docs/pipelines/reference/component-spec/). However, for this tutorial you don’t need to know the full schema of the component specification. The notebook provides enough information to complete the tutorial.

Start writing the component definition (`component.yaml`) by specifying your container image in the component’s implementation section:
%%bash -s "{image_name}" GCR_IMAGE="${1}" echo ${GCR_IMAGE} # Create Yaml # the image uri should be changed according to the above docker image push output cat > mnist_pipeline_component.yaml <<HERE name: Mnist training description: Train a mnist model and save to GCS inputs: - name: model_path description: 'Path of the tf model.' type: String - name: bucket description: 'GCS bucket name.' type: String outputs: - name: gcs_model_path description: 'Trained model path.' type: GCSPath implementation: container: image: ${GCR_IMAGE} command: [ python, /app/app.py, --model_path, {inputValue: model_path}, --bucket, {inputValue: bucket}, ] fileOutputs: gcs_model_path: /output.txt HERE import os mnist_train_op = kfp.components.load_component_from_file(os.path.join('./', 'mnist_pipeline_component.yaml')) mnist_train_op.component_spec
_____no_output_____
Apache-2.0
samples/tutorials/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
eedorenko/pipelines
Define deployment operation on AI Platform
mlengine_deploy_op = comp.load_component_from_url( 'https://raw.githubusercontent.com/kubeflow/pipelines/2df775a28045bda15372d6dd4644f71dcfe41bfe/components/gcp/ml_engine/deploy/component.yaml') def deploy( project_id, model_uri, model_id, runtime_version, python_version): return mlengine_deploy_op( model_uri=model_uri, project_id=project_id, model_id=model_id, runtime_version=runtime_version, python_version=python_version, replace_existing_version=True, set_default=True)
_____no_output_____
Apache-2.0
samples/tutorials/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
eedorenko/pipelines
A Kubeflow serving deployment component is available as an option. **Note that the deployed endpoint URI is not available as an output of this component.**

```python
kubeflow_deploy_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/2df775a28045bda15372d6dd4644f71dcfe41bfe/components/gcp/ml_engine/deploy/component.yaml')

def deploy_kubeflow(
    model_dir,
    tf_server_name):
    return kubeflow_deploy_op(
        model_dir=model_dir,
        server_name=tf_server_name,
        cluster_name='kubeflow',
        namespace='kubeflow',
        pvc_name='',
        service_type='ClusterIP')
```

## Create a lightweight component for testing the deployment
def deployment_test(project_id: str, model_name: str, version: str) -> str: model_name = model_name.split("/")[-1] version = version.split("/")[-1] import googleapiclient.discovery def predict(project, model, data, version=None): """Run predictions on a list of instances. Args: project: (str), project where the Cloud ML Engine Model is deployed. model: (str), model name. data: ([[any]]), list of input instances, where each input instance is a list of attributes. version: str, version of the model to target. Returns: Mapping[str: any]: dictionary of prediction results defined by the model. """ service = googleapiclient.discovery.build('ml', 'v1') name = 'projects/{}/models/{}'.format(project, model) if version is not None: name += '/versions/{}'.format(version) response = service.projects().predict( name=name, body={ 'instances': data }).execute() if 'error' in response: raise RuntimeError(response['error']) return response['predictions'] import tensorflow as tf import json mnist = tf.keras.datasets.mnist (x_train, y_train),(x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 result = predict( project=project_id, model=model_name, data=x_test[0:2].tolist(), version=version) print(result) return json.dumps(result) # # Test the function with already deployed version # deployment_test( # project_id=PROJECT_ID, # model_name="mnist", # version='ver_bb1ebd2a06ab7f321ad3db6b3b3d83e6' # previous deployed version for testing # ) deployment_test_op = comp.func_to_container_op( func=deployment_test, base_image="tensorflow/tensorflow:1.15.0-py3", packages_to_install=["google-api-python-client==1.7.8"])
_____no_output_____
Apache-2.0
samples/tutorials/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
eedorenko/pipelines
## Create your workflow as a Python function

Define your pipeline as a Python function. `@kfp.dsl.pipeline` is a required decorator, and it must include `name` and `description` properties. Then compile the pipeline function. After the compilation is completed, a pipeline file is created.
# Define the pipeline @dsl.pipeline( name='Mnist pipeline', description='A toy pipeline that performs mnist model training.' ) def mnist_reuse_component_deploy_pipeline( project_id: str = PROJECT_ID, model_path: str = 'mnist_model', bucket: str = GCS_BUCKET ): train_task = mnist_train_op( model_path=model_path, bucket=bucket ).apply(gcp.use_gcp_secret('user-gcp-sa')) deploy_task = deploy( project_id=project_id, model_uri=train_task.outputs['gcs_model_path'], model_id="mnist", runtime_version="1.14", python_version="3.5" ).apply(gcp.use_gcp_secret('user-gcp-sa')) deploy_test_task = deployment_test_op( project_id=project_id, model_name=deploy_task.outputs["model_name"], version=deploy_task.outputs["version_name"], ).apply(gcp.use_gcp_secret('user-gcp-sa')) return True
_____no_output_____
Apache-2.0
samples/tutorials/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
eedorenko/pipelines
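The markdown above mentions that compiling the pipeline function produces a pipeline file, while the next cell submits the run directly from the function. As a hedged sketch only, compilation with the SDK module already imported above (`kfp.compiler as compiler`) would look roughly like this; the output filename is an assumption:

```python
# compile the pipeline function into a package file (filename chosen for illustration)
pipeline_filename = mnist_reuse_component_deploy_pipeline.__name__ + '.pipeline.zip'
compiler.Compiler().compile(mnist_reuse_component_deploy_pipeline, pipeline_filename)
```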
Submit a pipeline run
pipeline_func = mnist_reuse_component_deploy_pipeline
experiment_name = 'mnist_kubeflow'

arguments = {"model_path": "mnist_model",
             "bucket": GCS_BUCKET}

run_name = pipeline_func.__name__ + ' run'

# Submit pipeline directly from pipeline function
run_result = client.create_run_from_pipeline_func(pipeline_func,
                                                  experiment_name=experiment_name,
                                                  run_name=run_name,
                                                  arguments=arguments)
_____no_output_____
Apache-2.0
samples/tutorials/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
eedorenko/pipelines
# Notes for Using Python for Research (HarvardX PH526x)

## Part 4: Randomness and Time

1. Simulating Randomness
2. Examples Involving Randomness
3. Using the NumPy Random Module
4. Measuring Time
5. Random Walks (RW)

$$x(t=k) = x(t=0) + \Delta x(t=1) + \ldots + \Delta x(t=k)$$

Code for the "Simulating Randomness" section
import random

random.choice(["H", "T"])                # simulate a coin flip
random.choice([0, 1])
random.choice([1, 2, 3, 4, 5, 6])        # simulate rolling a die
random.choice(range(1, 7))               # same roll, using a range
random.choice([range(1, 7)])             # careful: picks the range object itself, not a number
random.choice(random.choice([range(1, 7), range(1, 9), range(1, 11)]))  # pick a die, then roll it
_____no_output_____
MulanPSL-1.0
python_notes/using_python_for_research_ph256x_harvard/week2/4randomness_n_time.ipynb
ZaynChen/notes
Code for the "Examples Involving Randomness" section
import random
import matplotlib.pyplot as plt
import numpy as np

# histogram of 100,000 single die rolls
rolls = []
for k in range(100000):
    rolls.append(random.choice([1, 2, 3, 4, 5, 6]))
plt.hist(rolls, bins=np.linspace(0.5, 6.5, 7));

# histogram of the sum of 10 dice, repeated 100,000 times
ys = []
for rep in range(100000):
    y = 0
    for k in range(10):
        x = random.choice([1, 2, 3, 4, 5, 6])
        y = y + x
    ys.append(y)
plt.hist(ys);
_____no_output_____
MulanPSL-1.0
python_notes/using_python_for_research_ph256x_harvard/week2/4randomness_n_time.ipynb
ZaynChen/notes
Code for the "Using the NumPy Random Module" section
import numpy as np

np.random.random()              # one uniform random number in [0, 1)
np.random.random(5)             # array of 5
np.random.random((5, 3))        # 5x3 array
np.random.normal(0, 1)          # one standard normal draw
np.random.normal(0, 1, 5)
np.random.normal(0, 1, (2, 5))

import matplotlib.pyplot as plt

# vectorized version of the dice-sum experiment
X = np.random.randint(1, 7, (100000, 10))
Y = np.sum(X, axis=1)
plt.hist(Y);
_____no_output_____
MulanPSL-1.0
python_notes/using_python_for_research_ph256x_harvard/week2/4randomness_n_time.ipynb
ZaynChen/notes
Code for the "Measuring Time" section
import time
import random
import numpy as np

# time the pure-Python version
start_time = time.time()
ys = []
for rep in range(1000000):
    y = 0
    for k in range(10):
        x = random.choice([1, 2, 3, 4, 5, 6])
        y = y + x
    ys.append(y)
end_time = time.time()
print(end_time - start_time)

# time the NumPy version
start_time = time.time()
X = np.random.randint(1, 7, (1000000, 10))
Y = np.sum(X, axis=1)
end_time = time.time()
print(end_time - start_time)
_____no_output_____
MulanPSL-1.0
python_notes/using_python_for_research_ph256x_harvard/week2/4randomness_n_time.ipynb
ZaynChen/notes
Code for the "Random Walks" section
import numpy as np
import matplotlib.pyplot as plt

# 5 random displacement vectors in 2D
delta_X = np.random.normal(0, 1, (2, 5))
plt.plot(delta_X[0], delta_X[1], "go")

# cumulative sum turns displacements into positions
X = np.cumsum(delta_X, axis=1)
X

# random walk of 100 steps starting at the origin
X_0 = np.array([[0], [0]])
delta_X = np.random.normal(0, 1, (2, 100))
X = np.concatenate((X_0, np.cumsum(delta_X, axis=1)), axis=1)
plt.plot(X[0], X[1], "ro-")
# plt.savefig("rw.pdf")
_____no_output_____
MulanPSL-1.0
python_notes/using_python_for_research_ph256x_harvard/week2/4randomness_n_time.ipynb
ZaynChen/notes
# Mask R-CNN Demo

A quick intro to using the pre-trained model to detect and segment objects.
import os os.environ["CUDA_VISIBLE_DEVICES"] = "-1" import sys import random import math import numpy as np import skimage.io import matplotlib import matplotlib.pyplot as plt # Root directory of the project ROOT_DIR = os.path.abspath("../") # Import Mask RCNN sys.path.append(ROOT_DIR) # To find local version of the library from mrcnn import utils import mrcnn.model as modellib from mrcnn import visualize # Import COCO config sys.path.append(os.path.join(ROOT_DIR, "samples/coco/")) # To find local version import coco %matplotlib inline # Directory to save logs and trained model MODEL_DIR = os.path.join(ROOT_DIR, "logs") # Local path to trained weights file COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5") # Download COCO trained weights from Releases if needed if not os.path.exists(COCO_MODEL_PATH): utils.download_trained_weights(COCO_MODEL_PATH) # Directory of images to run detection on IMAGE_DIR = os.path.join(ROOT_DIR, "images")
Using TensorFlow backend.
MIT
samples/demo.ipynb
jaewon-jun9/Mask_RCNN
## Configurations

We'll be using a model trained on the MS-COCO dataset. The configurations of this model are in the ```CocoConfig``` class in ```coco.py```.

For inference, modify the configurations a bit to fit the task. To do so, sub-class the ```CocoConfig``` class and override the attributes you need to change.
class InferenceConfig(coco.CocoConfig):
    # Set batch size to 1 since we'll be running inference on
    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

config = InferenceConfig()
config.display()
Configurations: BACKBONE resnet101 BACKBONE_STRIDES [4, 8, 16, 32, 64] BATCH_SIZE 1 BBOX_STD_DEV [0.1 0.1 0.2 0.2] COMPUTE_BACKBONE_SHAPE None DETECTION_MAX_INSTANCES 100 DETECTION_MIN_CONFIDENCE 0.7 DETECTION_NMS_THRESHOLD 0.3 FPN_CLASSIF_FC_LAYERS_SIZE 1024 GPU_COUNT 1 GRADIENT_CLIP_NORM 5.0 IMAGES_PER_GPU 1 IMAGE_CHANNEL_COUNT 3 IMAGE_MAX_DIM 1024 IMAGE_META_SIZE 93 IMAGE_MIN_DIM 800 IMAGE_MIN_SCALE 0 IMAGE_RESIZE_MODE square IMAGE_SHAPE [1024 1024 3] LEARNING_MOMENTUM 0.9 LEARNING_RATE 0.001 LOSS_WEIGHTS {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0} MASK_POOL_SIZE 14 MASK_SHAPE [28, 28] MAX_GT_INSTANCES 100 MEAN_PIXEL [123.7 116.8 103.9] MINI_MASK_SHAPE (56, 56) NAME coco NUM_CLASSES 81 POOL_SIZE 7 POST_NMS_ROIS_INFERENCE 1000 POST_NMS_ROIS_TRAINING 2000 PRE_NMS_LIMIT 6000 ROI_POSITIVE_RATIO 0.33 RPN_ANCHOR_RATIOS [0.5, 1, 2] RPN_ANCHOR_SCALES (32, 64, 128, 256, 512) RPN_ANCHOR_STRIDE 1 RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2] RPN_NMS_THRESHOLD 0.7 RPN_TRAIN_ANCHORS_PER_IMAGE 256 STEPS_PER_EPOCH 1000 TOP_DOWN_PYRAMID_SIZE 256 TRAIN_BN False TRAIN_ROIS_PER_IMAGE 200 USE_MINI_MASK True USE_RPN_ROIS True VALIDATION_STEPS 50 WEIGHT_DECAY 0.0001
MIT
samples/demo.ipynb
jaewon-jun9/Mask_RCNN
Create Model and Load Trained Weights
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)

# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
_____no_output_____
MIT
samples/demo.ipynb
jaewon-jun9/Mask_RCNN
## Class Names

The model classifies objects and returns class IDs, which are integer values that identify each class. Some datasets assign integer values to their classes and some don't. For example, in the MS-COCO dataset, the 'person' class is 1 and 'teddy bear' is 88. The IDs are often sequential, but not always. The COCO dataset, for example, has classes associated with class IDs 70 and 72, but not 71.

To improve consistency, and to support training on data from multiple sources at the same time, our ```Dataset``` class assigns its own sequential integer IDs to each class. For example, if you load the COCO dataset using our ```Dataset``` class, the 'person' class would get class ID = 1 (just like COCO) and the 'teddy bear' class is 78 (different from COCO). Keep that in mind when mapping class IDs to class names.

To get the list of class names, you'd load the dataset and then use the ```class_names``` property like this:

```
# Load COCO dataset
dataset = coco.CocoDataset()
dataset.load_coco(COCO_DIR, "train")
dataset.prepare()

# Print class names
print(dataset.class_names)
```

We don't want to require you to download the COCO dataset just to run this demo, so we're including the list of class names below. The index of the class name in the list represents its ID (first class is 0, second is 1, third is 2, ...etc.)
# COCO Class names # Index of the class in the list is its ID. For example, to get ID of # the teddy bear class, use: class_names.index('teddy bear') class_names = ['BG', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus', 'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza', 'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table', 'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush']
_____no_output_____
MIT
samples/demo.ipynb
jaewon-jun9/Mask_RCNN
Run Object Detection
# Load a random image from the images folder
file_names = next(os.walk(IMAGE_DIR))[2]
image = skimage.io.imread(os.path.join(IMAGE_DIR, random.choice(file_names)))

# Run detection
results = model.detect([image], verbose=1)

# Visualize results
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            class_names, r['scores'])
_____no_output_____
MIT
samples/demo.ipynb
jaewon-jun9/Mask_RCNN
# Classes and subclasses

In this notebook, I will show you the basics of classes and subclasses in Python. As you've seen in the lectures from this week, `Trax` uses layer classes as building blocks for deep learning models, so it is important to understand how classes and subclasses behave in order to be able to build custom layers when needed.

By completing this notebook, you will:

- Be able to define classes and subclasses in Python
- Understand how inheritance works in subclasses
- Be able to work with instances

## Part 1: Parameters, methods and instances

First, let's define a class `My_Class`.
class My_Class:  # Definition of My_Class
    x = None
_____no_output_____
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
`My_Class` has one parameter `x` without any value. You can think of parameters as the variables that every object assigned to a class will have. So, at this point, any object of class `My_Class` would have a variable `x` equal to `None`. To check this, I'll create two instances of that class and get the value of `x` for both of them.
instance_a = My_Class()  # To create an instance from class "My_Class" you have to call "My_Class"
instance_b = My_Class()

print('Parameter x of instance_a: ' + str(instance_a.x))  # To get a parameter 'x' from an instance 'a', write 'a.x'
print('Parameter x of instance_b: ' + str(instance_b.x))
Parameter x of instance_a: None Parameter x of instance_b: None
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
For an existing instance you can assign new values for any of its parameters. In the next cell, assign a value of `5` to the parameter `x` of `instance_a`.
### START CODE HERE (1 line) ###
instance_a.x = 5
### END CODE HERE ###
print('Parameter x of instance_a: ' + str(instance_a.x))
Parameter x of instance_a: 5
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
## 1.1 The `__init__` method

When you want to assign values to the parameters of your class when an instance is created, it is necessary to define a special method: `__init__`. The `__init__` method is called when you create an instance of a class. It can have multiple arguments to initialize the parameters of your instance. In the next cell I will define `My_Class` with an `__init__` method that takes the instance (`self`) and an argument `y` as inputs.
class My_Class:
    def __init__(self, y):  # The __init__ method takes as input the instance to be initialized and a variable y
        self.x = y          # Sets parameter x to be equal to y
_____no_output_____
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
In this case, the parameter `x` of an instance from `My_Class` would take the value of an argument `y`. The argument `self` is used to pass information from the instance being created to the method `__init__`. In the next cell, create an instance `instance_c`, with `x` equal to `10`.
### START CODE HERE (1 line) ###
instance_c = My_Class(10)
### END CODE HERE ###
print('Parameter x of instance_c: ' + str(instance_c.x))
Parameter x of instance_c: 10
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
Note that in this case, you had to pass the argument `y` from the `__init__` method to create an instance of `My_Class`.

## 1.2 The `__call__` method

Another important method is the `__call__` method. It is performed whenever you call an initialized instance of a class. It can have multiple arguments and you can define it to do whatever you want, like:

- Change a parameter,
- Print a message,
- Create new variables, etc.

In the next cell, I'll define `My_Class` with the same `__init__` method as before and with a `__call__` method that adds `z` to parameter `x` and prints the result.
class My_Class:
    def __init__(self, y):  # The __init__ method takes as input the instance to be initialized and a variable y
        self.x = y          # Sets parameter x to be equal to y
    def __call__(self, z):  # __call__ method with self and z as arguments
        self.x += z         # Adds z to parameter x when called
        print(self.x)
_____no_output_____
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
Let’s create `instance_d` with `x` equal to 5.
instance_d = My_Class(5)
_____no_output_____
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
And now, see what happens when `instance_d` is called with argument `10`.
instance_d(10)
15
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
Now, you are ready to complete the following cell so any instance from `My_Class`:

- Is initialized taking two arguments `y` and `z` and assigns them to `x_1` and `x_2`, respectively. And,
- When called, takes the values of the parameters `x_1` and `x_2`, sums them, prints and returns the result.
class My_Class:
    def __init__(self, y, z):  # Initialization of x_1 and x_2 with arguments y and z
        ### START CODE HERE (2 lines) ###
        self.x_1 = y
        self.x_2 = z
        ### END CODE HERE ###
    def __call__(self):        # When called, adds the values of parameters x_1 and x_2, prints and returns the result
        ### START CODE HERE (1 line) ###
        result = self.x_1 + self.x_2
        ### END CODE HERE ###
        print("Addition of {} and {} is {}".format(self.x_1, self.x_2, result))
        return result
_____no_output_____
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
Run the next cell to check your implementation. If everything is correct, you shouldn't get any errors.
instance_e = My_Class(10, 15)

def test_class_definition():
    assert instance_e.x_1 == 10, "Check the value assigned to x_1"
    assert instance_e.x_2 == 15, "Check the value assigned to x_2"
    assert instance_e() == 25, "Check the __call__ method"
    print("\033[92mAll tests passed!")

test_class_definition()
Addition of 10 and 15 is 25 All tests passed!
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
## 1.3 Custom methods

In addition to the `__init__` and `__call__` methods, your classes can have custom-built methods to do whatever you want when called. To define a custom method, you have to indicate its input arguments, the instructions that you want it to perform and the values to return (if any). In the next cell, `My_Class` is defined with `my_method` that multiplies the values of `x_1` and `x_2`, sums that product with an input `w`, and returns the result.
class My_Class:
    def __init__(self, y, z):  # Initialization of x_1 and x_2 with arguments y and z
        self.x_1 = y
        self.x_2 = z
    def __call__(self):        # Performs an operation with x_1 and x_2, and returns the result
        a = self.x_1 - 2*self.x_2
        return a
    def my_method(self, w):    # Multiplies x_1 and x_2, adds argument w and returns the result
        result = self.x_1*self.x_2 + w
        return result
_____no_output_____
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
Create an instance `instance_f` of `My_Class` with any integer values that you want for `x_1` and `x_2`. For that instance, see the result of calling `my_method` with an argument `w` equal to `16`.
### START CODE HERE (1 line) ###
instance_f = My_Class(1, 10)
### END CODE HERE ###
print("Output of my_method:", instance_f.my_method(16))
Output of my_method: 26
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
As you can verify in the previous cell, to call a custom method `m` with arguments `args` for an instance `i`, you write `i.m(args)`. With that in mind, methods can call other methods within a class. In the following cell, try to define `new_method`, which calls `my_method` with `v` as its input argument. Try to do this on your own in the cell given below.
class My_Class:
    def __init__(self, y, z):  # Initialization of x_1 and x_2 with arguments y and z
        self.x_1 = None
        self.x_2 = None
    def __call__(self):        # Performs an operation with x_1 and x_2, and returns the result
        a = None
        return a
    def my_method(self, w):    # Multiplies x_1 and x_2, adds argument w and returns the result
        b = None
        return b
    def new_method(self, v):   # Calls my_method with argument v
        ### START CODE HERE (1 line) ###
        result = None
        ### END CODE HERE ###
        return result
_____no_output_____
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
SPOILER ALERT Solution:
# hidden-cell
class My_Class:
    def __init__(self, y, z):  # Initialization of x_1 and x_2 with arguments y and z
        self.x_1 = y
        self.x_2 = z
    def __call__(self):        # Performs an operation with x_1 and x_2, and returns the result
        a = self.x_1 - 2*self.x_2
        return a
    def my_method(self, w):    # Multiplies x_1 and x_2, adds argument w and returns the result
        b = self.x_1*self.x_2 + w
        return b
    def new_method(self, v):   # Calls my_method with argument v
        result = self.my_method(v)
        return result

instance_g = My_Class(1, 10)
print("Output of my_method:", instance_g.my_method(16))
print("Output of new_method:", instance_g.new_method(16))
Output of my_method: 26 Output of new_method: 26
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
## Part 2: Subclasses and Inheritance

`Trax` uses classes and subclasses to define layers. The base class in `Trax` is `layer`, which means that every layer from a deep learning model is defined as a subclass of the `layer` class. In this part of the notebook, you are going to see how subclasses work. To define a subclass `sub` from class `super`, you have to write `class sub(super):` and define any method and parameter that you want for your subclass. In the next cell, I define `sub_c` as a subclass of `My_Class` with only one method (`additional_method`).
class sub_c(My_Class):            # Subclass sub_c from My_Class
    def additional_method(self):  # Prints the value of parameter x_1
        print(self.x_1)
_____no_output_____
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
## 2.1 Inheritance

When you define a subclass `sub`, every method and parameter is inherited from the `super` class, including the `__init__` and `__call__` methods. This means that any instance from `sub` can use the methods defined in `super`. Run the following cell and see for yourself.
instance_sub_a = sub_c(1, 10)
print('Parameter x_1 of instance_sub_a: ' + str(instance_sub_a.x_1))
print('Parameter x_2 of instance_sub_a: ' + str(instance_sub_a.x_2))
print("Output of my_method of instance_sub_a:", instance_sub_a.my_method(16))
Parameter x_1 of instance_sub_a: 1 Parameter x_2 of instance_sub_a: 10 Output of my_method of instance_sub_a: 26
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
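Because `sub_c` inherits from `My_Class`, its instances are also instances of the parent class, which you can confirm with the built-in `isinstance()` and `issubclass()` functions. A quick sketch, not part of the original notebook:

```python
print(isinstance(instance_sub_a, sub_c))      # True
print(isinstance(instance_sub_a, My_Class))   # True -- a sub_c is also a My_Class
print(issubclass(sub_c, My_Class))            # True
```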
As you can see, `sub_c` does not define its own initialization method `__init__`; it is inherited from `My_Class`. However, you can override any method you want by defining it again in the subclass. For instance, in the next cell define a class `sub_c` with a redefined `my_method` that multiplies `x_1` and `x_2` but does not take any additional argument.
class sub_c(My_Class):    # Subclass sub_c from My_Class
    def my_method(self):  # Multiplies x_1 and x_2 and returns the result
        ### START CODE HERE (1 line) ###
        b = self.x_1*self.x_2
        ### END CODE HERE ###
        return b
_____no_output_____
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
To check your implementation run the following cell.
test = sub_c(3, 10)
assert test.my_method() == 30, "The method my_method should return the product between x_1 and x_2"

print("Output of overridden my_method of test:", test.my_method())  # notice we didn't pass any parameter to call my_method
# print("Output of overridden my_method of test:", test.my_method(16))  # try to see what happens if you call it with 1 argument
Output of overridden my_method of test: 30
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
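If a subclass overrides `__init__`, it can still run the parent's initialization by calling `super().__init__(...)`. This is not part of the original notebook, but a minimal sketch of the idea, using a hypothetical subclass `sub_d`, looks like this:

```python
class sub_d(My_Class):          # hypothetical subclass, for illustration only
    def __init__(self, y, z, w):
        super().__init__(y, z)  # reuse My_Class.__init__ to set x_1 and x_2
        self.x_3 = w            # then add a parameter of its own

instance_sub_d = sub_d(1, 10, 100)
print(instance_sub_d.x_1, instance_sub_d.x_2, instance_sub_d.x_3)
```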
In the next cell, two instances are created, one of `My_Class` and another one of `sub_c`. The instances are initialized with equal `x_1` and `x_2` parameters.
y, z = 1, 10
instance_sub_a = sub_c(y, z)
instance_a = My_Class(y, z)
print('My_method for an instance of sub_c returns: ' + str(instance_sub_a.my_method()))
print('My_method for an instance of My_Class returns: ' + str(instance_a.my_method(10)))
My_method for an instance of sub_c returns: 10 My_method for an instance of My_Class returns: 20
MIT
NLP/Sequences/1/NLP_C3_W1_lecture_nb_02_classes.ipynb
verneh/DataSci
# Interactive Data Exploration

This notebook demonstrates how the functions and techniques we covered in the first notebook can be combined to build interactive data exploration tools. The code in the cells below will generate two interactive panels. The first panel enables comparison of LIS output, SNODAS, and SNOTEL snow depth and snow water equivalent at SNOTEL site locations. The second panel enables exploration of LIS output using an interactive map.

**Note: some cells below take several minutes to run.**

## Import Libraries
import numpy as np import pandas as pd import geopandas import xarray as xr import fsspec import s3fs from datetime import datetime as dt from scipy.spatial import distance import holoviews as hv, geoviews as gv from geoviews import opts from geoviews import tile_sources as gvts from datashader.colors import viridis import datashader from holoviews.operation.datashader import datashade, shade, dynspread, spread, rasterize from holoviews.streams import Selection1D, Params import panel as pn import param as pm import hvplot.pandas import hvplot.xarray # create S3 filesystem object s3 = s3fs.S3FileSystem() # define S3 bucket name bucket = "s3://eis-dh-hydro/SNOWEX-HACKWEEK" # set holoviews backend to Bokeh gv.extension('bokeh')
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
## Load Data

### SNOTEL Sites info
# create dictionary linking state names and abbreviations snotel = {"AZ" : "arizona", "CO" : "colorado", "ID" : "idaho", "MT" : "montana", "NM" : "newmexico", "UT" : "utah", "WY" : "wyoming"} # load SNOTEL site metadata for sites in the given state def load_site(state): # define path to file key = f"SNOTEL/snotel_{state}.csv" # load csv into pandas DataFrame df = pd.read_csv(s3.open(f'{bucket}/{key}', mode='r')) return df
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
SNOTEL Depth & SWE
def load_snotel_txt(state, var): # define path to file key = f"SNOTEL/snotel_{state}{var}_20162020.txt" # open text file fh = s3.open(f"{bucket}/{key}") # read each line and note those that begin with '#' lines = fh.readlines() skips = sum(1 for ln in lines if ln.decode('ascii').startswith('#')) # load txt file into pandas DataFrame (skipping lines beginning with '#') df = pd.read_csv(s3.open(f"{bucket}/{key}"), skiprows=skips) # convert Date column from str to pandas datetime objects df['Date'] = pd.to_datetime(df['Date']) return df # load SNOTEL depth & swe into dictionaries # define empty dicts snotel_depth = {} snotel_swe = {} # loop over states and load SNOTEL data for state in snotel.keys(): print(f"Loading state {state}") snotel_depth[state] = load_snotel_txt(state, 'depth') snotel_swe[state] = load_snotel_txt(state, 'swe')
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
### SNODAS Depth & SWE

Like the LIS output we have been working with, a sample of SNODAS data is available in our S3 bucket in Zarr format. We can therefore load the SNODAS data just as we load the LIS data.
# load snodas depth data
key = "SNODAS/snodas_snowdepth_20161001_20200930.zarr"
snodas_depth = xr.open_zarr(s3.get_mapper(f"{bucket}/{key}"), consolidated=True)

# load snodas swe data
key = "SNODAS/snodas_swe_20161001_20200930.zarr"
snodas_swe = xr.open_zarr(s3.get_mapper(f"{bucket}/{key}"), consolidated=True)
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
### LIS Outputs

Next we'll load the LIS outputs. First, we'll define the helper function we saw in the previous notebook that adds `lat` and `lon` as coordinate variables. We'll use this immediately upon loading the data.
def add_latlon_coords(dataset: xr.Dataset)->xr.Dataset: """Adds lat/lon as dimensions and coordinates to an xarray.Dataset object.""" # get attributes from dataset attrs = dataset.attrs # get x, y resolutions dx = round(float(attrs['DX']), 3) dy = round(float(attrs['DY']), 3) # get grid cells in x, y dimensions ew_len = len(dataset['east_west']) ns_len = len(dataset['north_south']) # get lower-left lat and lon ll_lat = round(float(attrs['SOUTH_WEST_CORNER_LAT']), 3) ll_lon = round(float(attrs['SOUTH_WEST_CORNER_LON']), 3) # calculate upper-right lat and lon ur_lat = ll_lat + (dy * ns_len) ur_lon = ll_lon + (dx * ew_len) # define the new coordinates coords = { # create an arrays containing the lat/lon at each gridcell 'lat': np.linspace(ll_lat, ur_lat, ns_len, dtype=np.float32, endpoint=False), 'lon': np.linspace(ll_lon, ur_lon, ew_len, dtype=np.float32, endpoint=False) } # drop the original lat and lon variables dataset = dataset.rename({'lon': 'orig_lon', 'lat': 'orig_lat'}) # rename the grid dimensions to lat and lon dataset = dataset.rename({'north_south': 'lat', 'east_west': 'lon'}) # assign the coords above as coordinates dataset = dataset.assign_coords(coords) # reassign variable attributes dataset.lon.attrs = dataset.orig_lon.attrs dataset.lat.attrs = dataset.orig_lat.attrs return dataset
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
Load the LIS data and apply `add_latlon_coords()`:
# LIS surfacemodel DA_10km
key = "DA_SNODAS/SURFACEMODEL/LIS_HIST.d01.zarr"
lis_sf = xr.open_zarr(s3.get_mapper(f"{bucket}/{key}"), consolidated=True)

# (optional for 10km simulation?)
lis_sf = add_latlon_coords(lis_sf)

# drop off irrelevant variables
drop_vars = ['_history', '_eis_source_path', 'orig_lat', 'orig_lon']
lis_sf = lis_sf.drop(drop_vars)
lis_sf
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
Working with the full LIS output dataset can be slow and consume lots of memory. Here we temporally subset the data to a shorter window of time. The full dataset contains daily values from 10/1/2016 to 9/30/2018. Feel free to explore the full dataset by modifying the `time_range` variable below and re-running all cells that follow.
# subset LIS data to a shorter time window
time_range = slice('2016-10-01', '2017-04-30')
lis_sf = lis_sf.sel(time=time_range)
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
In the next cell, we extract the data variable names and timesteps from the LIS outputs. These will be used to define the widget options.
# gather metadata from LIS

# get variable names: string
vnames = list(lis_sf.data_vars)
print(vnames)

# get time-stamps: string
tstamps = list(np.datetime_as_string(lis_sf.time.values, 'D'))
print(len(tstamps), tstamps[0], tstamps[-1])
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
By default, the `holoviews` plotting library automatically adjusts the range of plot colorbars based on the range of values in the data being plotted. This may not be ideal when comparing data on different timesteps. In the next cell we extract the upper and lower bounds for each data variable which we'll later use to set a static colorbar range.**Note: this cell will take ~1m40s to run**
%%time
# pre-load min/max range for LIS variables
def get_cmap_range(vns):
    vals = [(lis_sf[x].sel(time='2016-12').min(skipna=True).values.item(),
             lis_sf[x].sel(time='2016-12').max(skipna=True).values.item()) for x in vns]
    return dict(zip(vns, vals))

cmap_lims = get_cmap_range(vnames)
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
## Interactive Widgets

### SNOTEL Site Map and Timeseries

The two cells that follow will create an interactive panel for comparing LIS, SNODAS, and SNOTEL snow depth and snow water equivalent. The SNOTEL site locations are plotted as points on an interactive map. Hover over the sites to view metadata and click on a site to generate a timeseries!

**Note: it will take some time for the timeseries to display.**
# get snotel depth def get_depth(state, site, ts, te): df = snotel_depth[state] # subset between time range mask = (df['Date'] >= ts) & (df['Date'] <= te) df = df.loc[mask] # extract timeseries for the site return pd.concat([df.Date, df.filter(like=site)], axis=1).set_index('Date') # get snotel swe def get_swe(state, site, ts, te): df = snotel_swe[state] # subset between time range mask = (df['Date'] >= ts) & (df['Date'] <= te) df = df.loc[mask] # extract timeseries for the site return pd.concat([df.Date, df.filter(like=site)], axis=1).set_index('Date') # co-locate site & LIS model cell def nearest_grid(pt): # pt : input point, tuple (longtitude, latitude) # output: # x_idx, y_idx loc_valid = df_loc.dropna() pts = loc_valid[['lon', 'lat']].to_numpy() idx = distance.cdist([pt], pts).argmin() return loc_valid['east_west'].iloc[idx], loc_valid['north_south'].iloc[idx] # get LIS variable def var_subset(dset, v, lon, lat, ts, te): return dset[v].sel(lon=lon, lat=lat, method="nearest").sel(time=slice(ts, te)).load() # line plots def line_callback(index, state, vname, ts_tag, te_tag): sites = load_site(snotel[state]) row = sites.iloc[0] tmp = var_subset(lis_sf, vname, row.lon, row.lat, ts_tag, te_tag) xr_sf = xr.zeros_like(tmp) xr_snodas = xr_sf ck = get_depth(state, row.site_name, ts_tag, te_tag).to_xarray().rename({'Date': 'time'}) xr_snotel = xr.zeros_like(ck) if not index: title='Var: -- Lon: -- Lat: --' return (xr_sf.hvplot(title=title, color='blue', label='LIS') \ * xr_snotel.hvplot(color='red', label='SNOTEL') \ * xr_snodas.hvplot(color='green', label='SNODAS')).opts(legend_position='right') else: sites = load_site(snotel[state]) first_index = index[0] row = sites.iloc[first_index] xr_sf = var_subset(lis_sf, vname, row.lon, row.lat, ts_tag, te_tag) vs = vname.split('_')[0] title=f'Var: {vs} Lon: {row.lon} Lat: {row.lat}' # update snotel data if 'depth' in vname.lower(): xr_snotel = get_depth(state, row.site_name, ts_tag, te_tag).to_xarray().rename({'Date': 'time'})*0.01 xr_snodas = var_subset(snodas_depth, 'SNOWDEPTH', row.lon, row.lat, ts_tag, te_tag)*0.001 if 'swe' in vname.lower(): xr_snotel = get_swe(state, row.site_name, ts_tag, te_tag).to_xarray().rename({'Date': 'time'}) xr_snodas = var_subset(snodas_swe, 'SWE', row.lon, row.lat, ts_tag, te_tag) return xr_sf.hvplot(title=title, color='blue', label='LIS') \ * xr_snotel.hvplot(color='red', label='SNOTEL') \ * xr_snodas.hvplot(color='green', label='SNODAS') # sites on map def plot_points(state): # dataframe to hvplot obj Points sites=load_site(snotel[state]) pts_opts=dict(size=12, nonselection_alpha=0.4,tools=['tap', 'hover']) site_points=sites.hvplot.points(x='lon', y='lat', c='elev', cmap='fire', geo=True, hover_cols=['site_name', 'ntwk', 'state', 'lon', 'lat']).opts(**pts_opts) return site_points # base map tiles = gvts.OSM() # state widget state_select = pn.widgets.Select(options=list(snotel.keys()), name="State") state_stream = Params(state_select, ['value'], rename={'value':'state'}) # variable widget var_select = pn.widgets.Select(options=['SnowDepth_tavg', 'SWE_tavg'], name="LIS Variable List") var_stream = Params(var_select, ['value'], rename={'value':'vname'}) # date range widget date_fmt = '%Y-%m-%d' sdate_input = pn.widgets.DatetimeInput(name='Start date', value=dt(2016,10,1),start=dt.strptime(tstamps[0], date_fmt), end=dt.strptime(tstamps[-1], date_fmt), format=date_fmt) sdate_stream = Params(sdate_input, ['value'], rename={'value':'ts_tag'}) edate_input = pn.widgets.DatetimeInput(name='End date', 
value=dt(2017,3,31),start=dt.strptime(tstamps[0], date_fmt), end=dt.strptime(tstamps[-1], date_fmt),format=date_fmt) edate_stream = Params(edate_input, ['value'], rename={'value':'te_tag'}) # generate site points as dynamic map # plots points and calls plot_points() when user selects a site site_dmap = hv.DynamicMap(plot_points, streams=[state_stream]).opts(height=400, width=600) # pick site select_stream = Selection1D(source=site_dmap) # link widgets to callback function line = hv.DynamicMap(line_callback, streams=[select_stream, state_stream, var_stream, sdate_stream, edate_stream]) # create panel layout pn.Row(site_dmap*tiles, pn.Column(state_select, var_select, pn.Row(sdate_input, edate_input), line))
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
Interactive LIS Output ExplorerThe cell below creates a `panel` layout for exploring LIS output rasters. Select a variable using the drop down and then use the date slider to scrub back and forth in time!
# date widget (slider & key in) # start and end dates date_fmt = '%Y-%m-%d' b = dt.strptime('2016-10-01', date_fmt) e = dt.strptime('2017-04-30', date_fmt) # define date widgets date_slider = pn.widgets.DateSlider(start=b, end=e, value=b, name="LIS Model Date") dt_input = pn.widgets.DatetimeInput(name='LIS Model Date Input', value=b, format=date_fmt) date_stream = Params(date_slider, ['value'], rename={'value':'date'}) # variable widget var_select = pn.widgets.Select(options=vnames, name="LIS Variable List") var_stream = Params(var_select, ['value'], rename={'value':'vname'}) # base map widget map_layer= pn.widgets.RadioButtonGroup( name='Base map layer', options=['Open Street Map', 'Satellite Imagery'], value='Satellite Imagery', button_type='primary', background='#f307eb') # lis output display callback function # returns plot of LIS output when date/variable is changed def var_layer(vname, date): t_stamp = dt.strftime(date, '%Y-%m-%d') dssm = lis_sf[vname].sel(time=t_stamp) image = dssm.hvplot(geo=True) clim = cmap_lims[vname] return image.opts(clim=clim) # watches date widget for updates @pn.depends(dt_input.param.value, watch=True) def _update_date(dt_input): date_slider.value=dt_input # updates basemap on widget change def update_map(maps): tile = gvts.OSM if maps=='Open Street Map' else gvts.EsriImagery return tile.opts(alpha=0.7) # link widgets to callback functions streams = dict(vname=var_select.param.value, date=date_slider.param.value) dmap = hv.DynamicMap(var_layer, streams=streams) dtile = hv.DynamicMap(update_map, streams=dict(maps=map_layer.param.value)) # create panel layout of widgets and plot pn.Column(var_select, date_slider, dt_input, map_layer, dtile*rasterize(dmap, aggregator=datashader.mean()).opts(cmap=viridis,colorbar=True,width=800, height=600))
_____no_output_____
MIT
book/tutorials/lis/2_interactive_data_exploration.ipynb
zachghiaccio/website
Data Science Unit 2 Sprint Challenge 4 — Model Validation Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. Predicting Blood DonationsOur dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive.The goal is to predict the last column, whether the donor made a donation in March 2007, using information about each donor's history. We'll measure success using recall score as the model evaluation metric.Good data-driven systems for tracking and predicting donations and supply needs can improve the entire supply chain, making sure that more patients get the blood transfusions they need. Run this cell to load the data:
# all imports here import pandas as pd from sklearn.metrics import accuracy_score import numpy as np from sklearn.model_selection import train_test_split from sklearn.feature_selection import f_regression, SelectKBest from sklearn.linear_model import LogisticRegression from sklearn.preprocessing import StandardScaler from sklearn.pipeline import make_pipeline from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV from sklearn.preprocessing import RobustScaler from sklearn.metrics import recall_score df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/blood-transfusion/transfusion.data') df = df.rename(columns={ 'Recency (months)': 'months_since_last_donation', 'Frequency (times)': 'number_of_donations', 'Monetary (c.c. blood)': 'total_volume_donated', 'Time (months)': 'months_since_first_donation', 'whether he/she donated blood in March 2007': 'made_donation_in_march_2007' }) df.head()
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Part 1.1 — Begin with baselinesWhat **accuracy score** would you get here with a **"majority class baseline"?** (You don't need to split the data into train and test sets yet. You can answer this question either with a scikit-learn function or with a pandas function.)
# determine the majority class
df['made_donation_in_march_2007'].value_counts(normalize=True)

# Guess the majority class for every prediction:
majority_class = 0
y_pred = [majority_class] * len(df['made_donation_in_march_2007'])

# the baseline accuracy equals the majority-class proportion (~0.76);
# no train/test split is needed to answer this question
accuracy_score(df['made_donation_in_march_2007'], y_pred)
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
What **recall score** would you get here with a **majority class baseline?**(You can answer this question either with a scikit-learn function or with no code, just your understanding of recall.)
#when it is actually yes, how often do you predict yes? 0, because always predicting no # recall = true_positive / actual_positive
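# A small addition (hedged): compute the baseline recall explicitly with
# scikit-learn's recall_score, reusing the all-zeros y_pred built above.
# The majority-class baseline never predicts the positive class, so there are
# no true positives and recall is 0.0.
recall_score(df['made_donation_in_march_2007'], y_pred)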
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Part 1.2 — Split dataIn this Sprint Challenge, you will use "Cross-Validation with Independent Test Set" for your model evaluation protocol.First, **split the data into `X_train, X_test, y_train, y_test`**, with random shuffle. (You can include 75% of the data in the train set, and hold out 25% for the test set.)
#split data X = df.drop(columns='made_donation_in_march_2007') y = df['made_donation_in_march_2007'] X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42) #validate 75% in train set X_train.shape #validate 25% in test set X_test.shape
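# An optional aside (not part of the original solution): with an imbalanced target
# (~76% / 24%), a stratified split keeps the class proportions similar in the train
# and test sets. Separate variable names are used so the split above is untouched.
Xtr_s, Xte_s, ytr_s, yte_s = train_test_split(X, y, test_size=0.25, random_state=42, stratify=y)
ytr_s.value_counts(normalize=True)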
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Part 2.1 — Make a pipelineMake a **pipeline** which includes:- Preprocessing with any scikit-learn [**Scaler**](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing)- Feature selection with **[`SelectKBest`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectKBest.html)([`f_classif`](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.f_classif.html))**- Classification with [**`LogisticRegression`**](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
# make a pipeline with the three required steps: scaler, feature selection, classifier
from sklearn.feature_selection import f_classif  # f_classif suits a classification target better than f_regression
kbest = SelectKBest(f_classif)
pipeline = Pipeline([('scale', StandardScaler()), ('kbest', kbest), ('lr', LogisticRegression(solver='lbfgs'))])

# equivalent pipeline built with make_pipeline (SelectKBest defaults to f_classif);
# this is the pipeline fitted by the grid search below
pipe = make_pipeline(RobustScaler(), SelectKBest(), LogisticRegression(solver='lbfgs'))
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Part 2.2 — Do Grid Search Cross-ValidationDo [**GridSearchCV**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) with your pipeline. Use **5 folds** and **recall score**.Include these **parameters for your grid:** `SelectKBest`- `k : 1, 2, 3, 4` `LogisticRegression`- `class_weight : None, 'balanced'`- `C : .0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0`**Fit** on the appropriate data.
param_grid = {'selectkbest__k':[1,2,3,4],'logisticregression__class_weight':[None,'balanced'],'logisticregression__C':[.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]} gs = GridSearchCV(pipe,param_grid,cv=5,scoring='recall') gs.fit(X_train, y_train) # grid_search = GridSearchCV(pipeline, { 'lr__class_weight': [None,'balanced'],'kbest__k': [1,2,3,4], 'lr__C': [.0001, .001, .01, .1, 1.0, 10.0, 100.00, 1000.0, 10000.0]},scoring='recall', cv=5,verbose=1) # grid_search.fit(X_train, y_train)
/usr/local/lib/python3.6/dist-packages/sklearn/model_selection/_search.py:841: DeprecationWarning: The default of the `iid` parameter will change from True to False in version 0.22 and will be removed in 0.24. This will change numeric results when test-set sizes are unequal. DeprecationWarning)
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Part 3 — Show best score and parametersDisplay your **best cross-validation score**, and the **best parameters** (the values of `k, class_weight, C`) from the grid search.(You're not evaluated here on how good your score is, or which parameters you find. You're only evaluated on being able to display the information. There are several ways you can get the information, and any way is acceptable.)
validation_score = gs.best_score_
print()
print('Cross-Validation Score:', validation_score)  # recall is a score (higher is better), so no sign flip is needed
print()
print('Best estimator:', gs.best_estimator_)
print()
gs.best_estimator_

# Cross-Validation Score: 0.784519402166461
# best parameters: k=1, C=0.0001, class_weight='balanced'
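# A small addition (hedged): best_params_ reports the chosen k, class_weight, and C
# directly, which is easier to read than the full best_estimator_ repr.
print('Best parameters:', gs.best_params_)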
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Part 4 — Calculate classification metrics from a confusion matrixSuppose this is the confusion matrix for your binary classification model:

|                 | Predicted Negative | Predicted Positive |
|-----------------|-------------------:|-------------------:|
| Actual Negative | 85 | 58 |
| Actual Positive | 8 | 36 |
true_negative = 85 false_positive = 58 false_negative = 8 true_positive = 36 predicted_positive = 58+36 actual_positive = 8 + 36
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Calculate accuracy
accuracy = (true_negative + true_positive) / (true_negative + false_positive +false_negative + true_positive) print ('Accuracy:', accuracy)
Accuracy: 0.6470588235294118
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Calculate precision
precision = true_positive / predicted_positive print ('Precision:', precision)
Precision: 0.3829787234042553
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Calculate recall
recall = true_positive / actual_positive print ('Recall:', recall)
Recall: 0.8181818181818182
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
BONUS — How you can earn a score of 3 Part 1Do feature engineering, to try improving your cross-validation score. Part 2Add transformations in your pipeline and parameters in your grid, to try improving your cross-validation score. Part 3Show names of selected features. Then do a final evaluation on the test set — what is the test score? Part 4Calculate F1 score and False Positive Rate.
# Which features were selected?
selector = gs.best_estimator_.named_steps['selectkbest']
all_names = X_train.columns
selected_mask = selector.get_support()
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]

print('Features selected:')
for name in selected_names:
    print(name)
print()
print('Features not selected:')
for name in unselected_names:
    print(name)

# Predict with X_test features (gs is the fitted GridSearchCV from Part 2.2)
y_pred = gs.predict(X_test)

# Compare predictions to y_test labels
test_score = recall_score(y_test, y_pred)
print('Test Score:', test_score)

# F1 and false positive rate from the Part 4 confusion-matrix values
f1 = 2*precision*recall/(precision+recall)
print('f1:', f1)

false_positive_rate = false_positive / (false_positive+true_negative)
print('False Positive Rate:', false_positive_rate)
_____no_output_____
MIT
DS_Unit_2_Sprint_Challenge_4_Model_Validation.ipynb
donw385/DS-Unit-2-Sprint-4-Model-Validation
Visualizing Chipotle's Data This time we are going to pull data directly from the internet. Special thanks to: https://github.com/justmarkham for sharing the dataset and materials. Step 1. Import the necessary libraries
import pandas as pd from collections import Counter import matplotlib.pyplot as plt # set this so the graphs open internally %matplotlib inline
_____no_output_____
BSD-3-Clause
07_Visualization/Chipotle/Exercises.ipynb
duongv/pandas_exercises
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv). Step 3. Assign it to a variable called chipo.
url = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv' chipo = pd.read_csv(url, sep = '\t')
_____no_output_____
BSD-3-Clause
07_Visualization/Chipotle/Exercises.ipynb
duongv/pandas_exercises
Step 4. See the first 10 entries
chipo.head(10)
_____no_output_____
BSD-3-Clause
07_Visualization/Chipotle/Exercises.ipynb
duongv/pandas_exercises
Step 5. Create a histogram of the top 5 items bought
# Create a Series of item names
x = chipo.item_name

# count how many times each item appears in the orders
letter_counts = Counter(x)

# convert the counts to a DataFrame and keep the 5 most ordered items
new_data = pd.DataFrame.from_dict(letter_counts, orient='index')
data = new_data.sort_values(0, ascending=False)[0:5]

data.plot(kind='bar')
plt.xlabel('Item')
plt.ylabel('The number of orders')
plt.title('Most ordered Chipotle items')
plt.show()
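# A hedged alternative: pandas' value_counts gives the same top-5 ranking directly,
# without building a Counter and an intermediate DataFrame by hand.
top5 = chipo['item_name'].value_counts().head(5)
top5.plot(kind='bar')
plt.show()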
_____no_output_____
BSD-3-Clause
07_Visualization/Chipotle/Exercises.ipynb
duongv/pandas_exercises
Figure 2: Illustration of graphical method for finding best adaptation strategy in uncorrelated environmentsGoal: illustration of the steps of the graphical method
import numpy as np import scipy.spatial %matplotlib inline import matplotlib.pyplot as plt plt.style.use(['transitions.mplstyle']) import matplotlib colors = matplotlib.rcParams['axes.prop_cycle'].by_key()['color'] from matplotlib import patches import sys sys.path.append('lib/') import evolimmune, plotting def paretofrontier(points): "Naive Pareto frontier calculation of a set of points where along every axis larger is better" paretopoints = [] for point in points: if not np.any(np.all(points - point > 0, axis=1)): paretopoints.append(point) paretopoints.sort(key=lambda row: row[0]) return np.asarray(paretopoints) fs = [] prng = np.random.RandomState(1234) while len(fs) < 20: f = prng.rand(2) a = 1.7 if f[1] < (1.0-f[0]**(1.0/a))**a and np.amin(f) > 0.04: if not fs or (np.amin(np.sum((f - np.asarray(fs))**2, axis=1)**.5) > 0.05): fs.append(f) fs = np.asarray(fs) pienvs = [0.3, 0.7] fig, axes = plt.subplots(figsize=(7, 2), ncols=4, subplot_kw=dict(aspect='equal')) # plot phenotype fitnesses for ax in [axes[0], axes[1]]: ax.scatter(fs[:, 0], fs[:, 1], color=colors[1]) # calculate and plot convex hull hull = scipy.spatial.ConvexHull(fs) p = patches.Polygon(fs[hull.vertices], alpha=0.5, color=colors[1]) axes[1].add_patch(p) # calc pareto pareto = [f for f in fs[hull.vertices] if f in paretofrontier(fs)] pareto.sort(key=lambda row: row[0]) pareto = np.asarray(pareto) # plot pareto boundaries for ax in [axes[1], axes[2]]: ax.plot(pareto[:, 0], pareto[:, 1], '-', c=colors[0], lw=2.0) for i in range(len(pareto)-1): N = 100 x, y = pareto[i:i+2, 0], pareto[i:i+2, 1] axes[3].plot(np.linspace(x[0], x[1], N), np.linspace(y[0], y[1], N), '-', c=colors[0], lw=2.0) for ax in [axes[1], axes[2], axes[3]]: ax.plot(pareto[:, 0], pareto[:, 1], 'o', c=colors[0], markeredgecolor=colors[0]) # calc optimal fitnesses for different pienvs copts = [] opts = [] for pienv in pienvs: for i in range(len(pareto)-1): pih = evolimmune.pihat(pienv, pareto[i], pareto[i+1]) if 0.0 < pih < 1.0: opt = pareto[i]*pih + pareto[i+1]*(1.0-pih) opts.append(opt) copts.append(pienv*np.log(opt[1]) + (1.0-pienv)*np.log(opt[0])) # plot isolines f0 = np.linspace(0.001, 0.999) handles = [None, None] for i, copt in enumerate(copts): pienv = pienvs[i] alpha = (1.0-pienv)/pienv for dc in [-0.2, 0.0, 0.2]: c = copt + dc for ax in [axes[2], axes[3]]: l, = ax.plot(f0, np.exp(c/pienv)/f0**alpha, '-', c=colors[i+2], lw=.75, alpha=.5) handles[i] = l axes[3].legend(handles, pienvs, title='$p(x=2)$') # plot opt for i, opt in enumerate(opts): for ax in [axes[2], axes[3]]: ax.plot(opt[0], opt[1], '*', c=colors[i+2], markeredgecolor=colors[i+2]) # axes limits, labels, etc. for ax in [axes[0], axes[1], axes[2]]: ax.set_xlim(0.0, 0.9) ax.set_ylim(0.0, 0.9) ax.set_xlabel('fitness in env. 1,\n$f(x=1)$') ax.set_ylabel('fitness in env. 2,\n$f(x=2)$') ax = axes[3] ax.set_xlim(0.03, 1.5) ax.set_ylim(0.03, 1.5) ax.set_xscale('log') ax.set_yscale('log') ax.set_xlabel('log-fitness in env. 1,\n$m(x=1)$') ax.set_ylabel('log-fitness in env. 2,\n$m(x=2)$') for ax in axes: plotting.despine(ax) ax.set_xticks([]) ax.set_yticks([]) plotting.label_axes(axes, xy=(-0.15, 0.95)) fig.tight_layout(pad=0.25) fig.savefig('svgs/graphicalmethod.svg')
_____no_output_____
MIT
graphicalmethod.ipynb
andim/transitions-paper
Test SKNW for Cahn-Hilliard dataset.
import glob
import os

import matplotlib.pyplot as plt
import numpy as np
import pandas
import sknw
from skimage.morphology import medial_axis

#os.chdir(r'/Users/devyanijivani/git/pygraspi/notebooks/data')
dest = "/Users/devyanijivani/git/pygraspi/notebooks/junctions"
myFiles = glob.glob('*.txt')
myFiles.sort()

for i, file in enumerate(myFiles):
    # read the morphology, skeletonize it, and build the skeleton graph
    morph = np.array(pandas.read_csv(file, delimiter=' ', header=None)).swapaxes(0, 1)
    skel, distance = medial_axis(morph, return_distance=True)
    graph = sknw.build_sknw(skel)

    # draw the skeleton branches (edges)
    for (s, e) in graph.edges():
        ps = graph[s][e]['pts']
        plt.plot(ps[:, 1], ps[:, 0], 'green', zorder=-1)

    # draw each node at its 'o' (centre) coordinate
    nodes = graph.nodes()
    ps = np.array([nodes[i]['o'] for i in nodes], dtype=int)
    plt.scatter(ps[:, 1], ps[:, 0], s=1, c='r')

    # title and show
    plt.title('Build Graph')
    plt.gca().set_aspect('equal')
    print(os.path.splitext(file)[0])
    file_loc = os.path.join(dest, os.path.splitext(file)[0]+'.png')
    #print(file_loc)
    #plt.savefig(file_loc, dpi=1200)
    plt.close()

pwd

def skeletonize(morph):
    skel, distance = medial_axis(morph, return_distance=True)
    return skel, distance

morph = np.array([[1,1,1],\
                  [1,1,1],\
                  [1,1,1]])
skel = skeletonize(morph)[0]
skel

def getEndJunction(graph):
    # count degree-1 (end) and degree-3 (junction) nodes
    l = [graph.degree[n] for n in graph.nodes()]
    return np.array([l.count(1), l.count(3)])

graph = sknw.build_sknw(skel)
getEndJunction(graph)

def getBranchLen(graph):
    # number of branches and their mean length (edge weights are branch lengths)
    b_l = [graph.edges[e]['weight'] for e in graph.edges()]
    return np.array([len(b_l), round(sum(b_l)/len(b_l), 2)])

getBranchLen(graph)
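# A sketch (assumes the same whitespace-delimited .txt morphology files read above):
# collect the skeleton descriptors defined in this cell into one table, one row per file.
rows = []
for file in myFiles:
    m = np.array(pandas.read_csv(file, delimiter=' ', header=None)).swapaxes(0, 1)
    s = skeletonize(m)[0]
    g = sknw.build_sknw(s)
    ends, junctions = getEndJunction(g)
    # guard against skeletons with no edges, where the mean branch length is undefined
    branches, mean_len = getBranchLen(g) if g.number_of_edges() else (0, 0.0)
    rows.append([os.path.splitext(file)[0], ends, junctions, branches, mean_len])

descriptors = pandas.DataFrame(rows, columns=['morphology', 'ends', 'junctions', 'branches', 'mean_branch_length'])
descriptors.head()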
_____no_output_____
MIT
notebooks/junctions_testcases.ipynb
devyanijivani/pygraspi
Exploring tabular data with pandasIn this notebook, we will explore a time series of water levels at the Point Atkinson lighthouse using pandas. This is a basic introduction to pandas and we touch on the following topics:* Reading a csv file* Simple plots* Indexing and subsetting* DatetimeIndex* Grouping* Time series methods Getting startedYou will need to have the python libraries pandas, numpy and matplotlib installed. These are all available through the Anaconda distribution of python.* https://store.continuum.io/cshop/anaconda/ResourcesThere is a wealth of information in the pandas documentation.* http://pandas.pydata.org/pandas-docs/stable/Water level data (7795-01-JAN-2000_slev.csv) is from Fisheries and Oceans Canada and is available at this website:* http://www.isdm-gdsi.gc.ca/isdm-gdsi/twl-mne/index-eng.htm
import pandas as pd import matplotlib.pyplot as plt import datetime import numpy as np %matplotlib inline
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Read the data It is helpful to understand the structure of your dataset before attempting to read it with pandas.
!head 7795-01-JAN-2000_slev.csv
Station_Name,Point Atkinson, B.C. Station_Number,7795 Latitude_Decimal_Degrees,49.337 Longitude_Decimal_Degrees,123.253 Datum,CD Time_zone,UTC SLEV=Observed Water Level Obs_date,SLEV(metres) 2000/01/01 08:00,2.95, 2000/01/01 09:00,3.34,
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
This dataset contains comma-separated values. It has a few rows of metadata (station name, longitude, latitude, etc.). The actual data begins with timestamps and water level records at row 9. We can read this data with the pandas function read_csv(). read_csv() has many arguments to help customize the reading of many different csv files. For this file, we will* skip the first 8 rows* use index_col=False so that the first column is treated as data and not an index* tell pandas to read the first column as dates (parse_dates=[0])* name the columns 'date' and 'wlev'.
data = pd.read_csv('7795-01-JAN-2000_slev.csv', skiprows = 8, index_col=False, parse_dates=[0], names=['date','wlev'])
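# A quick check (not in the original): confirm that parse_dates produced a true
# datetime64 column rather than strings; the date comparisons below rely on this.
data.dtypes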
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
data is a DataFrame object
type(data)
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Let's take a quick peak at the dataset.
data.head() data.tail() data.describe()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Notice that pandas did not apply the summary statistics to the date column. Simple Plots pandas has support for some simple plotting features, like line plots, scatter plots, box plots, etc. For a full overview of plots, visit http://pandas.pydata.org/pandas-docs/stable/visualization.html. Plotting is really easy. pandas even takes care of labels and legends.
data.plot('date','wlev') data.plot(kind='hist') data.plot(kind='box')
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Indexing and Subsetting We can index and subset the data in different ways.By row numberFor example, grab the first two rows.
data[0:2]
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Note that accessing a single row by the row number doesn't work!
data[0]
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
In that case, I would recommend using .iloc or slice for one row.
data.iloc[0] data[0:1]
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
By columnFor example, print the first few lines of the wlev column.
data['wlev'].head()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
By a conditionFor example, subset the data with date greater than Jan 1, 2008. We pass our condition into the square brackets of data.
data_20082009 = data[data['date']>datetime.datetime(2008,1,1)] data_20082009.plot('date','wlev')
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Multiple conditionsFor example, look for extreme water level events. That is, instances where the water level is above 5 m or below 0 m. Don't forget to put brackets () around each part of the condition.
data_extreme = data[(data['wlev']>5) | (data['wlev']<0)] data_extreme.head()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
ExerciseWhat was the maximum water level in 2006? Bonus: When?Solution Isolate the year 2006. Use describe to look up the max water level.
data_2006 = data[(data['date']>=datetime.datetime(2006,1,1)) & (data['date'] < datetime.datetime(2007,1,1))] data_2006.describe()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
The max water level is 5.49m. Use a condition to determine the date.
date_max = data_2006[data_2006['wlev']==5.49]['date'] print date_max
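# A more robust variant (an aside, not part of the original solution): look up the
# timestamp of the maximum directly with idxmax instead of hard-coding the 5.49 m value.
data_2006.loc[data_2006['wlev'].idxmax(), 'date']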
53399 2006-02-04 17:00:00 Name: date, dtype: datetime64[ns]
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Manipulating dates In the above example, it would have been convenient if we could access only the year part of the time stamp. But this doesn't work:
data['date'].year
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
We can use the pandas DatetimeIndex class to make this work. The DatetimeIndex allows us to easily access properties, like year, month, and day of each timestamp. We will use this to add new Year, Month, Day, Hour and DayOfYear columns to the dataframe.
date_index = pd.DatetimeIndex(data['date']) print date_index data['Day'] = date_index.day data['Month'] = date_index.month data['Year'] = date_index.year data['Hour'] = date_index.hour data['DayOfYear'] = date_index.dayofyear data.head() data.describe()
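# An equivalent shortcut (assuming pandas >= 0.15): the .dt accessor exposes the same
# datetime components directly on the column without building a DatetimeIndex first.
data['date'].dt.year.head()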
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Notice that now pandas applies the describe function to these new columns because it sees them as numerical data.Now, we can access a single year with a simpler conditional.
data_2006 = data[data['Year']==2006] data_2006.head()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Grouping Sometimes, it is convenient to group data with similar characteristics. We can do this with the groupby() method.For example, we might want to group by year.
data_annual = data.groupby(['Year']) data_annual['wlev'].describe().head(20)
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
Now the data is organized into groups based on the year of the observation.AggregatingOnce the data is grouped, we may want to summarize it in some way. We can do this with the apply() function. The argument of apply() is a function that we want to apply to each group. For example, we may want to calculate the mean sea level of each year.
annual_means = data_annual['wlev'].apply(np.mean) print annual_means
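# An equivalent, more direct spelling (a hedged aside): groupby objects have built-in
# aggregations, so apply(np.mean) can be written as .mean(), and several statistics
# can be collected at once with .agg().
data_annual['wlev'].mean()
data_annual['wlev'].agg(['mean', 'min', 'max'])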
Year 2000 3.067434 2001 3.057653 2002 3.078112 2003 3.112990 2004 3.104097 2005 3.127036 2006 3.142052 2007 3.095614 2008 3.070757 2009 3.080533 Name: wlev, dtype: float64
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson
It is also really easy to plot the aggregated data.
annual_means.plot()
_____no_output_____
MIT
Pandas Lesson.ipynb
nsoontie/pythonPandasLesson