Columns: markdown, code, output, license, path, repo_name
### Vertex constants

Set up the following constants for Vertex:

- `API_ENDPOINT`: The Vertex API service endpoint for the dataset, model, job, pipeline and endpoint services.
- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)

# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### AutoML constants

Set constants unique to AutoML datasets and training:

- Dataset Schemas: Tells the `Dataset` resource service which type of dataset it is.
- Data Labeling (Annotations) Schemas: Tells the `Dataset` resource service how the data is labeled (annotated).
- Dataset Training Schemas: Tells the `Pipeline` resource service the task (e.g., classification) to train the model for.
# Video Dataset type
DATA_SCHEMA = 'gs://google-cloud-aiplatform/schema/dataset/metadata/video_1.0.0.yaml'
# Video Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/video_action_recognition_io_format_1.0.0.yaml"
# Video Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_video_action_recognition_1.0.0.yaml"
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Hardware accelerators

Set the hardware accelerators (e.g., GPU), if any, for prediction.

Set the variables `DEPLOY_GPU`/`DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)

For GPU, available accelerators include:

- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100

Otherwise specify `(None, None)` to use a container image that runs on a CPU.
if os.getenv("IS_TESTING_DEPOLY_GPU"): DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, int(os.getenv("IS_TESTING_DEPOLY_GPU"))) else: DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Container (Docker) image

For AutoML batch prediction, the container image for the serving binary is predetermined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.

### Machine type

Next, set the machine type to use for prediction.

- Set the variable `DEPLOY_COMPUTE` to configure the compute resources for the VM you will use for prediction.
  - `machine type`
    - `n1-standard`: 3.75GB of memory per vCPU.
    - `n1-highmem`: 6.5GB of memory per vCPU.
    - `n1-highcpu`: 0.9GB of memory per vCPU.
  - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96\]

*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.*
if os.getenv("IS_TESTING_DEPLOY_MACHINE"): MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE") else: MACHINE_TYPE = 'n1-standard' VCPU = '4' DEPLOY_COMPUTE = MACHINE_TYPE + '-' + VCPU print('Deploy machine type', DEPLOY_COMPUTE)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
## Tutorial

Now you are ready to start creating your own AutoML video action recognition model.

### Set up clients

The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.

You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.

- Dataset Service for `Dataset` resources.
- Model Service for `Model` resources.
- Pipeline Service for training.
- Job Service for batch prediction and custom training.
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}


def create_dataset_client():
    client = aip.DatasetServiceClient(client_options=client_options)
    return client


def create_model_client():
    client = aip.ModelServiceClient(client_options=client_options)
    return client


def create_pipeline_client():
    client = aip.PipelineServiceClient(client_options=client_options)
    return client


def create_job_client():
    client = aip.JobServiceClient(client_options=client_options)
    return client


clients = {}
clients['dataset'] = create_dataset_client()
clients['model'] = create_model_client()
clients['pipeline'] = create_pipeline_client()
clients['job'] = create_job_client()

for client in clients.items():
    print(client)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Dataset

Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.

### Create a `Dataset` resource instance

Use the helper function `create_dataset` to create the instance of a `Dataset` resource. This function does the following:

1. Uses the dataset client service.
2. Creates a Vertex `Dataset` resource (`aip.Dataset`), with the following parameters:
   - `display_name`: The human-readable name you choose to give it.
   - `metadata_schema_uri`: The schema for the dataset type.
3. Calls the client dataset service method `create_dataset`, with the following parameters:
   - `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources.
   - `dataset`: The Vertex dataset object instance you created.
4. The method returns an `operation` object.

An `operation` object is how Vertex handles asynchronous calls for long-running operations. While this step usually goes fast, when you first use it in your project there is a longer delay due to provisioning.

You can use the `operation` object to get the status of the operation (e.g., creating the `Dataset` resource) or to cancel the operation, by invoking an operation method:

| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
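To make the table above concrete, here is a minimal sketch (not part of the original notebook) of polling a long-running operation with `done()`, `running()` and `cancel()` instead of blocking on `result()`. It assumes `operation` is the object returned by a call such as `clients['dataset'].create_dataset(...)`, as in the helper that follows.

```python
import time


def wait_for_operation(operation, poll_secs=10, max_polls=30):
    """Poll a long-running operation instead of blocking on result()."""
    for _ in range(max_polls):
        if operation.done():
            # result() returns the created resource, or raises if the operation failed.
            return operation.result()
        print("still running:", operation.running())
        time.sleep(poll_secs)
    # Give up and cancel; cancellation itself may take up to 30 seconds.
    operation.cancel()
    return None
```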
TIMEOUT = 90 def create_dataset(name, schema, labels=None, timeout=TIMEOUT): start_time = time.time() try: dataset = aip.Dataset(display_name=name, metadata_schema_uri=schema, labels=labels) operation = clients['dataset'].create_dataset(parent=PARENT, dataset=dataset) print("Long running operation:", operation.operation.name) result = operation.result(timeout=TIMEOUT) print("time:", time.time() - start_time) print("response") print(" name:", result.name) print(" display_name:", result.display_name) print(" metadata_schema_uri:", result.metadata_schema_uri) print(" metadata:", dict(result.metadata)) print(" create_time:", result.create_time) print(" update_time:", result.update_time) print(" etag:", result.etag) print(" labels:", dict(result.labels)) return result except Exception as e: print("exception:", e) return None result = create_dataset("golf-" + TIMESTAMP, DATA_SCHEMA)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Now save the unique dataset identifier for the `Dataset` resource instance you created.
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split('/')[-1]

print(dataset_id)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Data preparation

The Vertex `Dataset` resource for video has some requirements for your data.

- Videos must be stored in a Cloud Storage bucket.
- Each video file must be in a video format (MPG, AVI, ...).
- There must be an index file stored in your Cloud Storage bucket that contains the path and label for each video.
- The index file must be either CSV or JSONL.

### CSV

For video action recognition, the CSV index file has a few requirements:

- No heading.
- First column is the Cloud Storage path to the video.
- Second column is the time offset for the start of the video segment to analyze.
- Third column is the time offset for the end of the video segment to analyze.
- Fourth column is the label for the action (e.g., swing).
- Fifth column is the time offset for the recognized action.

### Location of Cloud Storage training data

Now set the variable `IMPORT_FILES` to the location of the CSV index files in Cloud Storage.
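To make the five-column layout concrete, here is a hedged sketch of what two rows of such an index file might look like. The bucket, file names, offsets and label are illustrative values, not rows from the actual Golf Swings files used below.

```python
import csv

# Illustrative rows only: video path, segment start, segment end, action label, action time offset.
rows = [
    ["gs://my-bucket/videos/clip_001.avi", "0.0", "5.0", "swing", "2.1"],
    ["gs://my-bucket/videos/clip_002.avi", "0.0", "5.0", "swing", "3.4"],
]

with open("index_example.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)  # no header row, per the requirements above
```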
IMPORT_FILES = [
    'gs://automl-video-demo-data/hmdb_golf_swing_train.csv',
    'gs://automl-video-demo-data/hmdb_golf_swing_test.csv',
]
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Quick peek at your data

You will use a version of the Golf Swings dataset that is stored in a public Cloud Storage bucket, using a CSV index file.

Start by taking a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.
if 'IMPORT_FILES' in globals():
    FILE = IMPORT_FILES[0]
else:
    FILE = IMPORT_FILE

count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))

print("First 10 rows")
! gsutil cat $FILE | head
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Import data

Now, import the data into your Vertex `Dataset` resource. Use the helper function `import_data` to import the data. The function does the following:

- Uses the `Dataset` client.
- Calls the client method `import_data`, with the following parameters:
  - `name`: The fully qualified identifier of the `Dataset` resource you created above.
  - `import_configs`: The import configuration.
- `import_configs`: A Python list containing a dictionary, with the key/value entries:
  - `gcs_source`: A list (`uris`) of one or more Cloud Storage paths to the index files.
  - `import_schema_uri`: The schema identifying the labeling type.

The `import_data()` method returns a long-running `operation` object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
def import_data(dataset, gcs_sources, schema): config = [{ 'gcs_source': {'uris': gcs_sources}, 'import_schema_uri': schema }] print("dataset:", dataset_id) start_time = time.time() try: operation = clients['dataset'].import_data(name=dataset_id, import_configs=config) print("Long running operation:", operation.operation.name) result = operation.result() print("result:", result) print("time:", int(time.time() - start_time), "secs") print("error:", operation.exception()) print("meta :", operation.metadata) print("after: running:", operation.running(), "done:", operation.done(), "cancelled:", operation.cancelled()) return operation except Exception as e: print("exception:", e) return None import_data(dataset_id, IMPORT_FILES, LABEL_SCHEMA)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Train the model

Now train an AutoML video action recognition model using your Vertex `Dataset` resource. To train the model, do the following steps:

1. Create a Vertex training pipeline for the `Dataset` resource.
2. Execute the pipeline to start the training.

### Create a training pipeline

You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:

1. Being reusable for subsequent training jobs.
2. Can be containerized and run as a batch job.
3. Can be distributed.
4. All the steps are associated with the same pipeline job for tracking progress.

Use this helper function `create_pipeline`, which takes the following parameters:

- `pipeline_name`: A human readable name for the pipeline job.
- `model_name`: A human readable name for the model.
- `dataset`: The Vertex fully qualified dataset identifier.
- `schema`: The dataset labeling (annotation) training schema.
- `task`: A dictionary describing the requirements for the training job.

The helper function calls the `Pipeline` client service's method `create_training_pipeline`, which takes the following parameters:

- `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources.
- `training_pipeline`: The full specification for the pipeline training job.

Let's now look deeper into the *minimal* requirements for constructing a `training_pipeline` specification:

- `display_name`: A human readable name for the pipeline job.
- `training_task_definition`: The dataset labeling (annotation) training schema.
- `training_task_inputs`: A dictionary describing the requirements for the training job.
- `model_to_upload`: A human readable name for the model.
- `input_data_config`: The dataset specification.
  - `dataset_id`: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
  - `fraction_split`: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
  - Note: for video, a validation split is not supported -- only training and test.
def create_pipeline(pipeline_name, model_name, dataset, schema, task): dataset_id = dataset.split('/')[-1] input_config = {'dataset_id': dataset_id, 'fraction_split': { 'training_fraction': 0.8, 'test_fraction': 0.2 }} training_pipeline = { "display_name": pipeline_name, "training_task_definition": schema, "training_task_inputs": task, "input_data_config": input_config, "model_to_upload": {"display_name": model_name}, } try: pipeline = clients['pipeline'].create_training_pipeline(parent=PARENT, training_pipeline=training_pipeline) print(pipeline) except Exception as e: print("exception:", e) return None return pipeline
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Construct the task requirements

Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the `task` field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the `json_format.ParseDict` method for the conversion.

The minimal field you need to specify is:

- `model_type`: The type of deployed model, e.g. CLOUD for deploying to Google Cloud.

Finally, create the pipeline by calling the helper function `create_pipeline`, which returns an instance of a training pipeline object.
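As a minimal sketch of the `ParseDict` conversion mentioned above (not a notebook cell), the round trip between a Python dict and the protobuf `Value`/Struct looks like this; the imports are assumed to come from the `protobuf` package that ships with the Vertex client library.

```python
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Value

# Plain Python dict describing the training task ...
task_dict = {"model_type": "CLOUD"}

# ... parsed into a protobuf Value (wrapping a Struct), which the pipeline service expects.
task = json_format.ParseDict(task_dict, Value())

# The conversion is reversible, which is handy for inspecting what will be sent.
print(json_format.MessageToDict(task))  # {'model_type': 'CLOUD'}
```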
PIPE_NAME = "golf_pipe-" + TIMESTAMP MODEL_NAME = "golf_model-" + TIMESTAMP task = json_format.ParseDict({'model_type': "CLOUD", }, Value()) response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Now save the unique identifier of the training pipeline you created.
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split('/')[-1]

print(pipeline_id)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Get information on a training pipeline

Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the pipeline client service's `get_training_pipeline` method, with the following parameter:

- `name`: The Vertex fully qualified pipeline identifier.

When the model is done training, the pipeline state will be `PIPELINE_STATE_SUCCEEDED`.
def get_training_pipeline(name, silent=False): response = clients['pipeline'].get_training_pipeline(name=name) if silent: return response print("pipeline") print(" name:", response.name) print(" display_name:", response.display_name) print(" state:", response.state) print(" training_task_definition:", response.training_task_definition) print(" training_task_inputs:", dict(response.training_task_inputs)) print(" create_time:", response.create_time) print(" start_time:", response.start_time) print(" end_time:", response.end_time) print(" update_time:", response.update_time) print(" labels:", dict(response.labels)) return response response = get_training_pipeline(pipeline_id)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Deployment

Training the above model may take upwards of 240 minutes.

Once your model is done training, you can calculate the actual time it took to train the model by subtracting `end_time` from `start_time`.

For your model, you will need to know the fully qualified Vertex `Model` resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field `model_to_upload.name`.
while True:
    response = get_training_pipeline(pipeline_id, True)
    if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        model_to_deploy_id = None
        if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
            raise Exception("Training Job Failed")
    else:
        model_to_deploy = response.model_to_upload
        model_to_deploy_id = model_to_deploy.name
        print("Training Time:", response.end_time - response.start_time)
        break
    time.sleep(60)

print("model to deploy:", model_to_deploy_id)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Model information

Now that your model is trained, you can get some information on your model.

### Evaluate the `Model` resource

Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.

### List evaluations for all slices

Use this helper function `list_model_evaluations`, which takes the following parameter:

- `name`: The Vertex fully qualified model identifier for the `Model` resource.

This helper function uses the model client service's `list_model_evaluations` method, passing that identifier as the `parent`. The response object from the call is a list, where each element is an evaluation metric.

For each evaluation -- you probably have only one -- you then print all the key names for each metric in the evaluation, and for a small set (`videoActionMetrics`) you print the result.
def list_model_evaluations(name): response = clients['model'].list_model_evaluations(parent=name) for evaluation in response: print("model_evaluation") print(" name:", evaluation.name) print(" metrics_schema_uri:", evaluation.metrics_schema_uri) metrics = json_format.MessageToDict(evaluation._pb.metrics) for metric in metrics.keys(): print(metric) print('videoActionMetrics', metrics['videoActionMetrics']) return evaluation.name last_evaluation = list_model_evaluations(model_to_deploy_id)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Model deployment for batch prediction

Now deploy the trained Vertex `Model` resource you created for batch prediction. This differs from deploying a `Model` resource for on-demand (online) prediction.

For online prediction, you:

1. Create an `Endpoint` resource for deploying the `Model` resource to.
2. Deploy the `Model` resource to the `Endpoint` resource.
3. Make online prediction requests to the `Endpoint` resource.

For batch prediction, you:

1. Create a batch prediction job.
2. The job service will provision resources for the batch prediction request.
3. The results of the batch prediction request are returned to the caller.
4. The job service will deprovision the resources for the batch prediction request.

### Make a batch prediction request

Now do a batch prediction to your deployed model.

### Get test item(s)

You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
import json

import_file = IMPORT_FILES[0]
test_items = ! gsutil cat $import_file | head -n2

cols = str(test_items[0]).split(',')
test_item_1 = str(cols[0])
test_label_1 = str(cols[-1])

cols = str(test_items[1]).split(',')
test_item_2 = str(cols[0])
test_label_2 = str(cols[-1])

print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Make a batch input file

Now make a batch input file, which you store in your Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For a JSONL file, you make one dictionary entry per line for each video. The dictionary contains the key/value pairs:

- `content`: The Cloud Storage path to the video.
- `mimeType`: The content type. In our example, it is an `avi` file.
- `timeSegmentStart`: The start timestamp in the video to do prediction on. *Note*: the timestamp must be specified as a string followed by s (second), m (minute) or h (hour).
- `timeSegmentEnd`: The end timestamp in the video to do prediction on.
import json import tensorflow as tf gcs_input_uri = BUCKET_NAME + '/test.jsonl' with tf.io.gfile.GFile(gcs_input_uri, 'w') as f: data = { "content": test_item_1, "mimeType": "video/avi", "timeSegmentStart": "0.0s", 'timeSegmentEnd': '5.0s' } f.write(json.dumps(data) + '\n') data = { "content": test_item_2, "mimeType": "video/avi", "timeSegmentStart": "0.0s", 'timeSegmentEnd': '5.0s' } f.write(json.dumps(data) + '\n') print(gcs_input_uri) ! gsutil cat $gcs_input_uri
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Compute instance scaling

You have several choices on scaling the compute instances for handling your batch prediction requests; a minimal sketch of the three options follows this list.

- Single instance: The batch prediction requests are processed on a single compute instance.
  - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.
- Manual scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specified.
  - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances is provisioned and batch prediction requests are evenly distributed across them.
- Auto scaling: The batch prediction requests are split across a scalable number of compute instances.
  - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed and to deprovision, and set the maximum (`MAX_NODES`) number of compute instances to provision, depending on load conditions.

The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count` in your subsequent deployment request.
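Here is that sketch. Only the first option is what this notebook actually uses in the next cell; the node counts in the other two options are illustrative, not values from the original tutorial.

```python
# 1) Single instance: everything runs on one node (the setting used below).
MIN_NODES, MAX_NODES = 1, 1

# 2) Manual scaling: a fixed pool of nodes, e.g. three (illustrative number).
# MIN_NODES, MAX_NODES = 3, 3

# 3) Auto scaling: start with one node and let the service scale up to an
#    illustrative maximum of five, depending on load.
# MIN_NODES, MAX_NODES = 1, 5
```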
MIN_NODES = 1
MAX_NODES = 1
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Make batch prediction request

Now that your batch of two test items is ready, let's do the batch request. Use this helper function `create_batch_prediction_job`, with the following parameters:

- `display_name`: The human readable name for the prediction job.
- `model_name`: The Vertex fully qualified identifier for the `Model` resource.
- `gcs_source_uri`: The Cloud Storage path to the input file -- which you created above.
- `gcs_destination_output_uri_prefix`: The Cloud Storage path that the service will write the predictions to.
- `parameters`: Additional filtering parameters for serving prediction results.

The helper function calls the job client service's `create_batch_prediction_job` method, with the following parameters:

- `parent`: The Vertex location root path for Dataset, Model and Pipeline resources.
- `batch_prediction_job`: The specification for the batch prediction job.

Let's now dive into the specification for the `batch_prediction_job`:

- `display_name`: The human readable name for the prediction batch job.
- `model`: The Vertex fully qualified identifier for the `Model` resource.
- `dedicated_resources`: The compute resources to provision for the batch prediction job.
  - `machine_spec`: The compute instance to provision. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated.
  - `starting_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`.
  - `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`.
- `model_parameters`: Additional filtering parameters for serving prediction results.
  - `confidenceThreshold`: The minimum confidence threshold on doing a prediction.
  - `maxPredictions`: The maximum number of predictions to return per action, sorted by confidence.
- `input_config`: The input source and format type for the instances to predict.
  - `instances_format`: The format of the batch prediction request file: `csv` or `jsonl`.
  - `gcs_source`: A list of one or more Cloud Storage paths to your batch prediction requests.
- `output_config`: The output destination and format for the predictions.
  - `predictions_format`: The format of the batch prediction response file: `jsonl` only.
  - `gcs_destination`: The output destination for the predictions.

You might ask, how does the confidence threshold affect the model accuracy? The threshold won't change the accuracy. What it changes is recall and precision.

- Precision: The higher the precision, the more likely the returned predictions are correct, but fewer predictions are returned. Increasing the confidence threshold increases precision.
- Recall: The higher the recall, the more likely a correct prediction is included in the results, but more incorrect predictions are returned as well. Decreasing the confidence threshold increases recall.

In this example, you will predict for precision. You set the confidence threshold to 0.5 and the maximum number of predictions for an action to two. Since all the confidence values across the classes must add up to one, there are only two possible outcomes:

1. There is a tie, both 0.5, and two predictions are returned.
2. One value is above 0.5 and the rest are below 0.5, and one prediction is returned.

This call is an asynchronous operation.
You will print a few select fields from the response object, including:

- `name`: The Vertex fully qualified identifier assigned to the batch prediction job.
- `display_name`: The human readable name for the prediction batch job.
- `model`: The Vertex fully qualified identifier for the `Model` resource.
- `generate_explanation`: Whether True/False explanations were provided with the predictions (explainability).
- `state`: The state of the prediction job (pending, running, etc.).

Since this call will take a few moments to execute, you will likely see `JobState.JOB_STATE_PENDING` for `state`.
BATCH_MODEL = "golf_batch-" + TIMESTAMP def create_batch_prediction_job(display_name, model_name, gcs_source_uri, gcs_destination_output_uri_prefix, parameters=None): if DEPLOY_GPU: machine_spec = { "machine_type": DEPLOY_COMPUTE, "accelerator_type": DEPLOY_GPU, "accelerator_count": DEPLOY_NGPU, } else: machine_spec = { "machine_type": DEPLOY_COMPUTE, "accelerator_count": 0, } batch_prediction_job = { "display_name": display_name, # Format: 'projects/{project}/locations/{location}/models/{model_id}' "model": model_name, "model_parameters": json_format.ParseDict(parameters, Value()), "input_config": { "instances_format": IN_FORMAT, "gcs_source": {"uris": [gcs_source_uri]}, }, "output_config": { "predictions_format": OUT_FORMAT, "gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix}, }, "dedicated_resources": { "machine_spec": machine_spec, "starting_replica_count": MIN_NODES, "max_replica_count": MAX_NODES } } response = clients['job'].create_batch_prediction_job( parent=PARENT, batch_prediction_job=batch_prediction_job ) print("response") print(" name:", response.name) print(" display_name:", response.display_name) print(" model:", response.model) try: print(" generate_explanation:", response.generate_explanation) except: pass print(" state:", response.state) print(" create_time:", response.create_time) print(" start_time:", response.start_time) print(" end_time:", response.end_time) print(" update_time:", response.update_time) print(" labels:", response.labels) return response IN_FORMAT = 'jsonl' OUT_FORMAT = 'jsonl' # [jsonl] response = create_batch_prediction_job(BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME, {'confidenceThreshold': 0.5, 'maxPredictions': 2})
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Now get the unique identifier for the batch prediction job you created.
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split('/')[-1]

print(batch_job_id)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Get information on a batch prediction job

Use this helper function `get_batch_prediction_job`, with the following parameter:

- `job_name`: The Vertex fully qualified identifier for the batch prediction job.

The helper function calls the job client service's `get_batch_prediction_job` method, with the following parameter:

- `name`: The Vertex fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- `batch_job_id`.

The helper function will return the Cloud Storage path to where the predictions are stored -- `gcs_destination`.
def get_batch_prediction_job(job_name, silent=False): response = clients['job'].get_batch_prediction_job(name=job_name) if silent: return response.output_config.gcs_destination.output_uri_prefix, response.state print("response") print(" name:", response.name) print(" display_name:", response.display_name) print(" model:", response.model) try: # not all data types support explanations print(" generate_explanation:", response.generate_explanation) except: pass print(" state:", response.state) print(" error:", response.error) gcs_destination = response.output_config.gcs_destination print(" gcs_destination") print(" output_uri_prefix:", gcs_destination.output_uri_prefix) return gcs_destination.output_uri_prefix, response.state predictions, state = get_batch_prediction_job(batch_job_id)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Get the predictions

When the batch prediction is done processing, the job state will be `JOB_STATE_SUCCEEDED`.

Finally, you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name `prediction`, and under that folder will be a file called `predictions*.jsonl`.

Now display (cat) the contents. You will see multiple JSON objects, one for each prediction.

For each prediction:

- `content`: The video that was input for the prediction request.
- `displayName`: The predicted action.
- `confidence`: The confidence in the prediction, between 0 and 1.
- `timeSegmentStart`/`timeSegmentEnd`: The time offsets of the start and end of the predicted action.
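As a hedged sketch of reading those fields programmatically (the exact nesting of the prediction JSON can vary by model type, so treat the key lookups as assumptions), the following helper parses the `prediction*.jsonl` files under a results folder; it assumes a `folder` value like the one computed by `get_latest_predictions` in the next cell.

```python
import json

import tensorflow as tf


def show_predictions(folder):
    """Print selected fields from each prediction line under `folder`."""
    for path in tf.io.gfile.glob(folder + "/prediction*.jsonl"):
        with tf.io.gfile.GFile(path, "r") as f:
            for line in f.readlines():
                p = json.loads(line)
                # Key names follow the field list above; adjust if your output nests them differently.
                print(p.get("content"), p.get("displayName"), p.get("confidence"),
                      p.get("timeSegmentStart"), p.get("timeSegmentEnd"))
```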
def get_latest_predictions(gcs_out_dir):
    ''' Get the latest prediction subfolder using the timestamp in the subfolder name'''
    folders = !gsutil ls $gcs_out_dir
    latest = ""
    for folder in folders:
        subfolder = folder.split('/')[-2]
        if subfolder.startswith('prediction-'):
            if subfolder > latest:
                latest = folder[:-1]
    return latest


while True:
    predictions, state = get_batch_prediction_job(batch_job_id, True)
    if state != aip.JobState.JOB_STATE_SUCCEEDED:
        print("The job has not completed:", state)
        if state == aip.JobState.JOB_STATE_FAILED:
            raise Exception("Batch Job Failed")
    else:
        folder = get_latest_predictions(predictions)
        ! gsutil ls $folder/prediction*.jsonl
        ! gsutil cat $folder/prediction*.jsonl
        break
    time.sleep(60)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
### Cleaning up

To clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.

Otherwise, you can delete the individual resources you created in this tutorial:

- Dataset
- Pipeline
- Model
- Endpoint
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
delete_dataset = True delete_pipeline = True delete_model = True delete_endpoint = True delete_batchjob = True delete_customjob = True delete_hptjob = True delete_bucket = True # Delete the dataset using the Vertex fully qualified identifier for the dataset try: if delete_dataset and 'dataset_id' in globals(): clients['dataset'].delete_dataset(name=dataset_id) except Exception as e: print(e) # Delete the training pipeline using the Vertex fully qualified identifier for the pipeline try: if delete_pipeline and 'pipeline_id' in globals(): clients['pipeline'].delete_training_pipeline(name=pipeline_id) except Exception as e: print(e) # Delete the model using the Vertex fully qualified identifier for the model try: if delete_model and 'model_to_deploy_id' in globals(): clients['model'].delete_model(name=model_to_deploy_id) except Exception as e: print(e) # Delete the endpoint using the Vertex fully qualified identifier for the endpoint try: if delete_endpoint and 'endpoint_id' in globals(): clients['endpoint'].delete_endpoint(name=endpoint_id) except Exception as e: print(e) # Delete the batch job using the Vertex fully qualified identifier for the batch job try: if delete_batchjob and 'batch_job_id' in globals(): clients['job'].delete_batch_prediction_job(name=batch_job_id) except Exception as e: print(e) # Delete the custom job using the Vertex fully qualified identifier for the custom job try: if delete_customjob and 'job_id' in globals(): clients['job'].delete_custom_job(name=job_id) except Exception as e: print(e) # Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job try: if delete_hptjob and 'hpt_job_id' in globals(): clients['job'].delete_hyperparameter_tuning_job(name=hpt_job_id) except Exception as e: print(e) if delete_bucket and 'BUCKET_NAME' in globals(): ! gsutil rm -r $BUCKET_NAME
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
## Tensor analysis using Amazon SageMaker Debugger

Looking at the distributions of activation inputs/outputs, gradients and weights per layer can give useful insights. For instance, it helps to understand whether the model runs into problems like neuron saturation, whether there are layers in your model that are not learning at all, or whether the network consists of too many layers, etc.

The following animation shows the distribution of gradients of a convolutional layer from an example application as the training progresses. We can see that it starts as a Gaussian distribution but then becomes more and more narrow. We can also see that the range of gradients starts very small (order of $1e-5$) and becomes even tinier as training progresses. If tiny gradients are observed from the start of training, it is an indication that we should check the hyperparameters of our model.

![](images/example.gif)

In this notebook we will train a poorly configured neural network and use Amazon SageMaker Debugger with custom rules to aggregate and analyse specific tensors. Before we proceed, let us install the smdebug binary, which allows us to perform interactive analysis in this notebook. After installing it, please restart the kernel, and when you come back, skip this cell.

### Installing smdebug
! python -m pip install smdebug
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
### Configuring the inputs for the training job

Now we'll call the SageMaker MXNet Estimator to kick off a training job. The `entry_point_script` points to the MXNet training script. Users can create a custom *SessionHook* in their training script. If they choose not to create such a hook in the training script (similar to the one we will be using in this example), Amazon SageMaker Debugger will create the appropriate *SessionHook* based on the specified *DebugHookConfig* parameters.

The `hyperparameters` are the parameters that will be passed to the training script. We choose `Uniform(1)` as initializer and a learning rate of `0.001`. This leads to the model not training well because the model is poorly initialized. A short sketch of what such initialization choices typically look like in MXNet Gluon follows.

The goal of a good initialization is

- to break the symmetry such that parameters do not receive the same gradients and updates
- to keep variance similar across layers

A bad initialization may lead to vanishing or exploding gradients and the model not training at all. Once the training is running we will look at the distributions of activation inputs/outputs, gradients and weights across the training to see how these hyperparameters influenced the training.
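The sketch below is not the notebook's `mnist.py` (which receives the initializer choice through the `initializer` hyperparameter); it only illustrates, under that assumption, what a poor uniform initialization versus a variance-preserving one typically looks like in MXNet Gluon.

```python
import mxnet as mx
from mxnet.gluon import nn

net = nn.HybridSequential()
net.add(nn.Conv2D(16, kernel_size=3, activation='relu'),
        nn.Dense(10))

# Poor choice: every weight drawn uniformly from [-1, 1]; the variance does not
# scale with layer width, so activations and gradients can saturate or explode.
net.initialize(mx.init.Uniform(scale=1.0), force_reinit=True)

# Better choice: Xavier keeps the variance roughly constant across layers.
net.initialize(mx.init.Xavier(), force_reinit=True)
```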
entry_point_script = 'mnist.py' bad_hyperparameters = {'initializer': 2, 'lr': 0.001} import sagemaker from sagemaker.mxnet import MXNet from sagemaker.debugger import DebuggerHookConfig, CollectionConfig import boto3 import os estimator = MXNet(role=sagemaker.get_execution_role(), base_job_name='mxnet', train_instance_count=1, train_instance_type='ml.m5.xlarge', train_volume_size=400, source_dir='src', entry_point=entry_point_script, hyperparameters=bad_hyperparameters, framework_version='1.6.0', py_version='py3', debugger_hook_config = DebuggerHookConfig( collection_configs=[ CollectionConfig( name="all", parameters={ "include_regex": ".*", "save_interval": "100" } ) ] ) )
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Start the training job
estimator.fit(wait=False)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
### Get S3 location of tensors

We can get information related to the training job:
job_name = estimator.latest_training_job.name
client = estimator.sagemaker_session.sagemaker_client
description = client.describe_training_job(TrainingJobName=job_name)
description
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
We can retrieve the S3 location of the tensors:
path = estimator.latest_job_debugger_artifacts_path()
print('Tensors are stored in: ', path)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
We can check the status of our training job, by executing `describe_training_job`:
job_name = estimator.latest_training_job.name
print('Training job name: {}'.format(job_name))

client = estimator.sagemaker_session.sagemaker_client
description = client.describe_training_job(TrainingJobName=job_name)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
We can access the tensors from S3 once the training job is in status `Training` or `Completed`. In the following code cell we check the job status:
import time

if description['TrainingJobStatus'] != 'Completed':
    while description['SecondaryStatus'] not in {'Training', 'Completed'}:
        description = client.describe_training_job(TrainingJobName=job_name)
        primary_status = description['TrainingJobStatus']
        secondary_status = description['SecondaryStatus']
        print('Current job status: [PrimaryStatus: {}, SecondaryStatus: {}]'.format(
            primary_status, secondary_status))
        time.sleep(15)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Once the job is in status `Training` or `Completed`, we can create the trial that allows us to access the tensors in Amazon S3.
from smdebug.trials import create_trial

trial1 = create_trial(path)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
We can check the available steps. A step represents one forward and backward pass.
trial1.steps()
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
As training progresses, more steps will become available. Next we will access specific tensors like weights, gradients and activation outputs and plot their distributions. We will use Amazon SageMaker Debugger and define custom rules to retrieve certain tensors. Rules are supposed to return True or False. However, in this notebook we will use custom rules to store dictionaries of aggregated tensors per layer and step, which we then plot afterwards.

A custom rule inherits from the smdebug `Rule` class and implements the function `invoke_at_step`. This function is called every time tensors of a new step become available:

```
from smdebug.rules.rule import Rule

class MyCustomRule(Rule):
    def __init__(self, base_trial):
        super().__init__(base_trial)

    def invoke_at_step(self, step):
        if np.max(self.base_trial.tensor('conv0_relu_output_0').value(step)) < 0.001:
            return True
        return False
```

The above example rule checks if the first convolutional layer outputs only small values. If so, the rule returns `True`, which corresponds to "Issue found"; otherwise it returns `False`, "No issue found".

### Activation outputs

This rule will use Amazon SageMaker Debugger to retrieve tensors from the ReLU output layers. It sums the activations across batch and steps. If there is a large fraction of ReLUs outputting 0 across many steps, it means that the neuron is dying.
from smdebug.trials import create_trial from smdebug.rules.rule_invoker import invoke_rule from smdebug.exceptions import NoMoreData from smdebug.rules.rule import Rule import numpy as np import utils import collections import os from IPython.display import Image class ActivationOutputs(Rule): def __init__(self, base_trial): super().__init__(base_trial) self.tensors = collections.OrderedDict() def invoke_at_step(self, step): for tname in self.base_trial.tensor_names(regex='.*relu_output'): if "gradients" not in tname: try: tensor = self.base_trial.tensor(tname).value(step) if tname not in self.tensors: self.tensors[tname] = collections.OrderedDict() if step not in self.tensors[tname]: self.tensors[tname][step] = 0 neg_values = np.where(tensor <= 0)[0] if len(neg_values) > 0: self.logger.info(f" Step {step} tensor {tname} has {len(neg_values)/tensor.size*100}% activation outputs which are smaller than 0 ") batch_over_sum = np.sum(tensor, axis=0)/tensor.shape[0] self.tensors[tname][step] += batch_over_sum except: self.logger.warning(f"Can not fetch tensor {tname}") return False rule = ActivationOutputs(trial1) try: invoke_rule(rule) except NoMoreData: print('The training has ended and there is no more data to be analyzed. This is expected behavior.')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(rule.tensors, filename='images/activation_outputs.gif')

Image(url='images/activation_outputs.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
### Activation inputs

In this rule we look at the inputs into the activation function, rather than the output. This can be helpful to understand if there are extreme negative or positive values that saturate the activation functions.
class ActivationInputs(Rule): def __init__(self, base_trial): super().__init__(base_trial) self.tensors = collections.OrderedDict() def invoke_at_step(self, step): for tname in self.base_trial.tensor_names(regex='.*relu_input'): if "gradients" not in tname: try: tensor = self.base_trial.tensor(tname).value(step) if tname not in self.tensors: self.tensors[tname] = {} if step not in self.tensors[tname]: self.tensors[tname][step] = 0 neg_values = np.where(tensor <= 0)[0] if len(neg_values) > 0: self.logger.info(f" Tensor {tname} has {len(neg_values)/tensor.size*100}% activation inputs which are smaller than 0 ") batch_over_sum = np.sum(tensor, axis=0)/tensor.shape[0] self.tensors[tname][step] += batch_over_sum except: self.logger.warning(f"Can not fetch tensor {tname}") return False rule = ActivationInputs(trial1) try: invoke_rule(rule) except NoMoreData: print('The training has ended and there is no more data to be analyzed. This is expected behavior.')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(rule.tensors, filename='images/activation_inputs.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
We can see that the second convolutional layer `conv1_relu_input_0` receives only negative input values, which means that all ReLUs in this layer output 0. A quick check of this claim against the stored tensors is sketched below.
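This is a minimal sketch using the smdebug trial API (`steps()` and `value()` on a tensor) to double-check the observation directly from the saved tensors, printing the fraction of non-positive inputs to that ReLU at each saved step.

```python
import numpy as np

tname = 'conv1_relu_input_0'
for step in trial1.tensor(tname).steps():
    t = trial1.tensor(tname).value(step)
    # Fraction of inputs that are <= 0, i.e. that the ReLU maps to zero.
    print("step", step, "fraction of non-positive inputs:", np.mean(t <= 0))
```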
Image(url='images/activation_inputs.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
### Gradients

The following code retrieves the gradients and plots their distribution. If the variance is tiny, that means that the model parameters do not get updated effectively with each training step, or that the training has converged to a minimum.
class GradientsLayer(Rule): def __init__(self, base_trial): super().__init__(base_trial) self.tensors = collections.OrderedDict() def invoke_at_step(self, step): for tname in self.base_trial.tensor_names(regex='.*gradient'): try: tensor = self.base_trial.tensor(tname).value(step) if tname not in self.tensors: self.tensors[tname] = {} self.logger.info(f" Tensor {tname} has gradients range: {np.min(tensor)} {np.max(tensor)} ") self.tensors[tname][step] = tensor except: self.logger.warning(f"Can not fetch tensor {tname}") return False rule = GradientsLayer(trial1) try: invoke_rule(rule) except NoMoreData: print('The training has ended and there is no more data to be analyzed. This is expected behavior.')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(rule.tensors, filename='images/gradients.gif')

Image(url='images/gradients.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
### Check variance across layers

This rule also retrieves gradients, but this time we compare the variance of the gradient distribution across layers. We want to identify if there is a large difference between the min and max variance per training step. For instance, very deep neural networks may suffer from vanishing gradients the deeper we go. By checking this ratio we can determine whether we run into such a situation.
class GradientsAcrossLayers(Rule): def __init__(self, base_trial, ): super().__init__(base_trial) self.tensors = collections.OrderedDict() def invoke_at_step(self, step): for tname in self.base_trial.tensor_names(regex='.*gradient'): try: tensor = self.base_trial.tensor(tname).value(step) if step not in self.tensors: self.tensors[step] = [np.inf, 0] variance = np.var(tensor.flatten()) if variance < self.tensors[step][0]: self.tensors[step][0] = variance elif variance > self.tensors[step][1]: self.tensors[step][1] = variance self.logger.info(f" Step {step} current ratio: {self.tensors[step][0]} {self.tensors[step][1]} Ratio: {self.tensors[step][1] / self.tensors[step][0]}") except: self.logger.warning(f"Can not fetch tensor {tname}") return False rule = GradientsAcrossLayers(trial1) try: invoke_rule(rule) except NoMoreData: print('The training has ended and there is no more data to be analyzed. This is expected behavior.')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Let's check the min and max variance of the gradients across layers:
for step in rule.tensors: print("Step", step, "variance of gradients: ", rule.tensors[step][0], " to ", rule.tensors[step][1])
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
### Distribution of weights

This rule retrieves the weight tensors and checks the variance. If the distribution does not change much across steps, it may indicate that the learning rate is too low, that gradients are too small, or that the training has converged to a minimum.
class WeightRatio(Rule): def __init__(self, base_trial, ): super().__init__(base_trial) self.tensors = collections.OrderedDict() def invoke_at_step(self, step): for tname in self.base_trial.tensor_names(regex='.*weight'): if "gradient" not in tname: try: tensor = self.base_trial.tensor(tname).value(step) if tname not in self.tensors: self.tensors[tname] = {} self.logger.info(f" Tensor {tname} has weights with variance: {np.var(tensor.flatten())} ") self.tensors[tname][step] = tensor except: self.logger.warning(f"Can not fetch tensor {tname}") return False rule = WeightRatio(trial1) try: invoke_rule(rule) except NoMoreData: print('The training has ended and there is no more data to be analyzed. This is expected behavior.')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(rule.tensors, filename='images/weights.gif')

Image(url='images/weights.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
### Inputs

This rule retrieves layer inputs excluding activation inputs.
class Inputs(Rule): def __init__(self, base_trial, ): super().__init__(base_trial) self.tensors = collections.OrderedDict() def invoke_at_step(self, step): for tname in self.base_trial.tensor_names(regex='.*input'): if "relu" not in tname: try: tensor = self.base_trial.tensor(tname).value(step) if tname not in self.tensors: self.tensors[tname] = {} self.logger.info(f" Tensor {tname} has inputs with variance: {np.var(tensor.flatten())} ") self.tensors[tname][step] = tensor except: self.logger.warning(f"Can not fetch tensor {tname}") return False rule = Inputs(trial1) try: invoke_rule(rule) except NoMoreData: print('The training has ended and there is no more data to be analyzed. This is expected behavior.')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(rule.tensors, filename='images/layer_inputs.gif')

Image(url='images/layer_inputs.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
### Layer outputs

This rule retrieves outputs of layers excluding activation outputs.
class Outputs(Rule): def __init__(self, base_trial, ): super().__init__(base_trial) self.tensors = collections.OrderedDict() def invoke_at_step(self, step): for tname in self.base_trial.tensor_names(regex='.*output'): if "relu" not in tname: try: tensor = self.base_trial.tensor(tname).value(step) if tname not in self.tensors: self.tensors[tname] = {} self.logger.info(f" Tensor {tname} has inputs with variance: {np.var(tensor.flatten())} ") self.tensors[tname][step] = tensor except: self.logger.warning(f"Can not fetch tensor {tname}") return False rule = Outputs(trial1) try: invoke_rule(rule) except NoMoreData: print('The training has ended and there is no more data to be analyzed. This is expected behavior.')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(rule.tensors, filename='images/layer_outputs.gif')

Image(url='images/layer_outputs.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
### Comparison

In the previous section we looked at the distributions of gradients, activation outputs and weights of a model that has not trained well due to poor initialization. Now we will compare some of these distributions with a model that has been well initialized.
entry_point_script = 'mnist.py' hyperparameters = {'lr': 0.01} estimator = MXNet(role=sagemaker.get_execution_role(), base_job_name='mxnet', train_instance_count=1, train_instance_type='ml.m5.xlarge', train_volume_size=400, source_dir='src', entry_point=entry_point_script, hyperparameters=hyperparameters, framework_version='1.6.0', py_version='py3', debugger_hook_config = DebuggerHookConfig( collection_configs=[ CollectionConfig( name="all", parameters={ "include_regex": ".*", "save_interval": "100" } ) ] ) )
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Start the training job
estimator.fit(wait=False)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Get S3 path where tensors have been stored
path = estimator.latest_job_debugger_artifacts_path()
print('Tensors are stored in: ', path)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Check the status of the training job:
job_name = estimator.latest_training_job.name
print('Training job name: {}'.format(job_name))

client = estimator.sagemaker_session.sagemaker_client
description = client.describe_training_job(TrainingJobName=job_name)

if description['TrainingJobStatus'] != 'Completed':
    while description['SecondaryStatus'] not in {'Training', 'Completed'}:
        description = client.describe_training_job(TrainingJobName=job_name)
        primary_status = description['TrainingJobStatus']
        secondary_status = description['SecondaryStatus']
        print('Current job status: [PrimaryStatus: {}, SecondaryStatus: {}]'.format(
            primary_status, secondary_status))
        time.sleep(15)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Now we create a new trial object `trial2`:
from smdebug.trials import create_trial

trial2 = create_trial(path)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
### Gradients

Let's compare the distribution of gradients of the convolutional layers of both trials. `trial1` is the trial object of the first training job, `trial2` is the trial object of the second training job. We can now easily compare tensors from both training jobs.
rule = GradientsLayer(trial1) try: invoke_rule(rule) except NoMoreData: print('The training has ended and there is no more data to be analyzed. This is expected behavior.') dict_gradients = {} dict_gradients['gradient/conv0_weight_bad_hyperparameters'] = rule.tensors['gradient/conv0_weight'] dict_gradients['gradient/conv1_weight_bad_hyperparameters'] = rule.tensors['gradient/conv1_weight']
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Second trial:
rule = GradientsLayer(trial2) try: invoke_rule(rule) except NoMoreData: print('The training has ended and there is no more data to be analyzed. This is expected behavior.') dict_gradients['gradient/conv0_weight_good_hyperparameters'] = rule.tensors['gradient/conv0_weight'] dict_gradients['gradient/conv1_weight_good_hyperparameters'] = rule.tensors['gradient/conv1_weight']
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(dict_gradients, filename='images/gradients_comparison.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
In the case of the poorly initialized model, the gradients fluctuate a lot, leading to very high variance.
Image(url='images/gradients_comparison.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
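To put a rough number on that observation (this snippet is not part of the original notebook — a minimal sketch that assumes the `dict_gradients` dictionary built above, whose values map step numbers to numpy arrays):

import numpy as np

# Average per-step variance of the conv0 gradients for the badly and well initialized runs
for name in ['gradient/conv0_weight_bad_hyperparameters', 'gradient/conv0_weight_good_hyperparameters']:
    step_variances = [np.var(tensor.flatten()) for tensor in dict_gradients[name].values()]
    print(name, '-> mean variance across steps:', np.mean(step_variances))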
Activation inputs Let's compare the distributions of activation inputs for both trials.
rule = ActivationInputs(trial1) try: invoke_rule(rule) except NoMoreData: print('The training has ended and there is no more data to be analyzed. This is expected behavior.') dict_activation_inputs = {} dict_activation_inputs['conv0_relu_input_0_bad_hyperparameters'] = rule.tensors['conv0_relu_input_0'] dict_activation_inputs['conv1_relu_input_0_bad_hyperparameters'] = rule.tensors['conv1_relu_input_0']
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Second trial
rule = ActivationInputs(trial2) try: invoke_rule(rule) except NoMoreData: print('The training has ended and there is no more data to be analyzed. This is expected behavior.') dict_activation_inputs['conv0_relu_input_0_good_hyperparameters'] = rule.tensors['conv0_relu_input_0'] dict_activation_inputs['conv1_relu_input_0_good_hyperparameters'] = rule.tensors['conv1_relu_input_0']
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(dict_activation_inputs, filename='images/activation_inputs_comparison.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
The distributions of activation inputs into the first activation layer `conv0_relu_input_0` look quite similar in both trials. For the second layer, however, they differ drastically.
Image(url='images/activation_inputs_comparison.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
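The same kind of quick summary makes the difference in the second layer concrete (again not part of the original notebook — a sketch that assumes the `dict_activation_inputs` dictionary built above):

# Compare the spread of the activation inputs at the last recorded step for each trial/layer
for name, tensors in dict_activation_inputs.items():
    last_step = max(tensors.keys())
    values = tensors[last_step].flatten()
    print(name, '-> min:', values.min(), 'max:', values.max(), 'std:', values.std())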
Copyright 2019 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Word embeddings View on TensorFlow.org Run in Google Colab View source on GitHub Download notebook Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that they are accurate or reflect the latest state of the [official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions for improving this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review community translations, contact the [[email protected] mailing list](https://groups.google.com/a/tensorflow.org/forum/!forum/docs-ja). This tutorial introduces word embeddings. It contains all the code needed to train word embeddings from scratch on a small dataset and to visualize the embedding vectors with the [Embedding Projector](http://projector.tensorflow.org) (shown in the figure below). Representing text as numbers Machine learning models take vectors (arrays of numbers) as input. When working with text, the first thing we must decide is a strategy to convert strings to numbers (or to "vectorize" the text) before feeding it to the model. In this section we will look at three strategies for doing so. One-hot encodings As a first idea, we might "one-hot" encode each word in our vocabulary. Consider the sentence "The cat sat on the mat". The vocabulary (the unique words) in this sentence is (cat, mat, on, sat, the). To represent each word, we create a zero vector with length equal to the vocabulary and place a 1 in the index that corresponds to the word, as shown in the figure below. To create a vector that encodes the sentence, we can then concatenate the one-hot vectors of each word. Key point: This approach is inefficient. A one-hot encoded vector is sparse (meaning most indices are zero). Imagine a vocabulary of 10,000 words: one-hot encoding each word means creating vectors in which 99.99% of the elements are zero. Encode each word with a unique number A second approach is to encode each word with a unique number. Continuing the example above, we could assign 1 to "cat", 2 to "mat", and so on. The sentence "The cat sat on the mat" could then be represented as a dense vector such as [5, 1, 4, 3, 5, 2]. This approach is efficient: instead of a sparse vector we get a dense one (all elements are filled in). However, it has two downsides: * The integer encoding is arbitrary (it does not capture any relationship between words). * An integer encoding is hard for a model to interpret. A linear classifier, for example, learns only a single weight per feature. Because there is no relationship between the similarity of two words and the similarity of their encodings, this feature-weight combination is not meaningful. Word embeddings Word embeddings give us an efficient, dense representation in which similar words have a similar encoding. Importantly, we do not have to specify this encoding by hand. An embedding is a dense vector of floating point values (the length of the vector is a parameter you specify). The values of the embedding vector are not specified by hand but learned (they are weights the model learns during training, in the same way a model learns the weights of a dense layer). It is common to see word embeddings that are 8-dimensional (for small datasets) and up to 1024-dimensional when working with large datasets. A higher-dimensional embedding can capture finer-grained relationships between words, but takes more data to learn. The figure above illustrates a word embedding: each word is represented as a 4-dimensional vector of floating point values. An embedding can also be thought of as a "lookup table": after the weights have been learned, each word can be encoded by looking up the dense vector it corresponds to in the table. Setup
from __future__ import absolute_import, division, print_function, unicode_literals try: # %tensorflow_version is only available in Colab !pip install tf-nightly except Exception: pass import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers import tensorflow_datasets as tfds tfds.disable_progress_bar()
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
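As a small illustration of the first two strategies described above (a sketch added for clarity, not part of the original tutorial), here is what one-hot and integer encodings of "The cat sat on the mat" look like:

import numpy as np

sentence = "the cat sat on the mat".split()
vocab = sorted(set(sentence))                        # ['cat', 'mat', 'on', 'sat', 'the']
word_to_index = {w: i for i, w in enumerate(vocab)}

# One-hot: one mostly-zero vector of length len(vocab) per word
one_hot = np.zeros((len(sentence), len(vocab)))
for pos, word in enumerate(sentence):
    one_hot[pos, word_to_index[word]] = 1

# Integer encoding: a single number per word
integer_encoded = [word_to_index[w] for w in sentence]
print(one_hot)
print(integer_encoded)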
Using the Embedding layer Keras makes it easy to use word embeddings. Let's take a look at the [Embedding](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer. The Embedding layer can be understood as a lookup table that maps integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you experiment with to find what works well for your problem, in exactly the same way you would experiment with the number of neurons in a Dense layer.
embedding_layer = layers.Embedding(1000, 5)
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
When you create an Embedding layer, its weights are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings roughly encode similarities between words (as learned for the specific problem your model was trained on). If you pass integers to an Embedding layer, the result replaces each integer with the corresponding vector from the embedding table.
result = embedding_layer(tf.constant([1,2,3])) result.numpy()
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
For text or sequence problems, the Embedding layer takes as input a 2D integer tensor of shape `(samples, sequence_length)`, where each entry is a sequence of integers. The layer can embed sequences of variable length. You could feed the embedding layer above batches with shapes `(32, 10)` (a batch of 32 sequences of length 10) or `(64, 15)` (a batch of 64 sequences of length 15). The returned tensor has one more axis than the input, and the embedding vectors are aligned along that new last axis. Passing a `(2, 3)` input batch yields a `(2, 3, N)` output.
result = embedding_layer(tf.constant([[0,1,2],[3,4,5]])) result.shape
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Given a batch of sequences as input, an Embedding layer returns a 3D floating point tensor of shape `(samples, sequence_length, embedding_dimensionality)`. There are various standard approaches to convert from this sequence of variable length to a fixed-length representation: you could use an RNN, attention, or a pooling layer before passing it to a Dense layer. Here we use pooling because it is the simplest; [Text classification with an RNN](https://github.com/tensorflow/docs/blob/master/site/ja/tutorials/text/text_classification_rnn.ipynb) is a good tutorial as a next step. Learning embeddings from scratch We will train a sentiment classifier on IMDB movie reviews, and in the process learn embeddings from scratch. Here we use a preprocessed dataset. To learn how to load a text dataset from scratch, see the [Loading text tutorial](../load_data/text.ipynb).
(train_data, test_data), info = tfds.load( 'imdb_reviews/subwords8k', split = (tfds.Split.TRAIN, tfds.Split.TEST), with_info=True, as_supervised=True)
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
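To illustrate the pooling idea mentioned above before applying it to real data (a minimal sketch that reuses the `embedding_layer` defined earlier; not part of the original tutorial), averaging over the sequence axis turns a `(batch, sequence, embedding)` tensor into a `(batch, embedding)` one:

# Pool a (2, 3, 5) embedded batch down to (2, 5)
sample = embedding_layer(tf.constant([[0, 1, 2], [3, 4, 5]]))
pooled = layers.GlobalAveragePooling1D()(sample)
print(sample.shape, '->', pooled.shape)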
Get the encoder (`tfds.features.text.SubwordTextEncoder`) and take a quick look at the vocabulary. The "\_" in the vocabulary represents spaces. Note how the vocabulary contains whole words (ending with "\_") as well as partial words that can be used to build up longer words.
encoder = info.features['text'].encoder encoder.subwords[:20]
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Movie reviews can be of different lengths. We will use the `padded_batch` method to standardize the lengths of the reviews.
train_batches = train_data.shuffle(1000).padded_batch(10) test_batches = test_data.shuffle(1000).padded_batch(10)
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
As imported, the text of the reviews is integer-encoded (each integer represents a specific word or word part in the vocabulary). Note the trailing zeros: they are the result of padding the batch to its longest example.
train_batch, train_labels = next(iter(train_batches)) train_batch.numpy()
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Create a simple model We will use the [Keras Sequential API](../../guide/keras) to define our model. In this case it is a "Continuous bag of words" style model. * The first layer is the Embedding layer. This layer takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. They add a dimension to the output array, so the resulting dimensions are `(batch, sequence, embedding)`. * Next, a GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length in the simplest possible way. * This fixed-length vector is piped through a fully-connected (Dense) layer with 16 hidden units. * The last layer is a Dense layer with a single output node. Using the sigmoid activation function, this value is between 0 and 1, representing the probability (or confidence) that the review is positive. Caution: This model does not use masking, so the zero-padding is treated as part of the input, and the padding length may therefore affect the output. To fix this, see the [masking and padding guide](../../guide/keras/masking_and_padding).
embedding_dim=16 model = keras.Sequential([ layers.Embedding(encoder.vocab_size, embedding_dim), layers.GlobalAveragePooling1D(), layers.Dense(16, activation='relu'), layers.Dense(1, activation='sigmoid') ]) model.summary()
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Compile and train the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy']) history = model.fit( train_batches, epochs=10, validation_data=test_batches, validation_steps=20)
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
With this approach our model reaches a validation accuracy of around 88% (note that the model is overfitting: the training accuracy is markedly higher).
import matplotlib.pyplot as plt history_dict = history.history acc = history_dict['accuracy'] val_acc = history_dict['val_accuracy'] loss = history_dict['loss'] val_loss = history_dict['val_loss'] epochs = range(1, len(acc) + 1) plt.figure(figsize=(12,9)) plt.plot(epochs, loss, 'bo', label='Training loss') plt.plot(epochs, val_loss, 'b', label='Validation loss') plt.title('Training and validation loss') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.show() plt.figure(figsize=(12,9)) plt.plot(epochs, acc, 'bo', label='Training acc') plt.plot(epochs, val_acc, 'b', label='Validation acc') plt.title('Training and validation accuracy') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend(loc='lower right') plt.ylim((0.5,1)) plt.show()
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
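To see the gap mentioned above as numbers rather than curves (a small sketch assuming the `history` object returned by `model.fit` above; not part of the original tutorial):

# Final training vs. validation accuracy from the last epoch
print('final training accuracy:  ', history.history['accuracy'][-1])
print('final validation accuracy:', history.history['val_accuracy'][-1])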
Retrieve the learned embeddings Next, let's retrieve the word embeddings learned during training. This will be a matrix of shape `(vocab_size, embedding-dimension)`.
e = model.layers[0] weights = e.get_weights()[0] print(weights.shape) # shape: (vocab_size, embedding_dim)
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
We will now write the weights to disk. To use the [Embedding Projector](http://projector.tensorflow.org), we upload two files in tab-separated format: a file of vectors (containing the embeddings) and a file of metadata (containing the words).
import io encoder = info.features['text'].encoder out_v = io.open('vecs.tsv', 'w', encoding='utf-8') out_m = io.open('meta.tsv', 'w', encoding='utf-8') for num, word in enumerate(encoder.subwords): vec = weights[num+1] # skip 0, it's padding out_m.write(word + "\n") out_v.write('\t'.join([str(x) for x in vec]) + "\n") out_v.close() out_m.close()
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
If you are running this tutorial in [Colaboratory](https://colab.research.google.com), you can use the code below to download these files to your local machine (or use the file browser: *View -> Table of contents -> Files*).
try: from google.colab import files except ImportError: pass else: files.download('vecs.tsv') files.download('meta.tsv')
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Unsupervised learning methods Principal component analysis Attention! Solving this task assumes that you have numpy version 1.16.4 or higher and scikit-learn version 0.21.2 or higher installed. We check this in the next cell. If you have older versions installed, please update them, or use the free service https://colab.research.google.com , where everything is already set up. The archive contains a guide to getting started with Colab.
import numpy as np import sklearn
_____no_output_____
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
In this assignment we will apply principal component analysis to high-dimensional data and try to find the optimal feature dimensionality for solving a classification task.
import pandas as pd import matplotlib.pyplot as plt import numpy as np %matplotlib inline
_____no_output_____
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
Data preparation The [source data](http://archive.ics.uci.edu/ml/machine-learning-databases/auslan2-mld/auslan.data.html) are readings from various sensors mounted on the hands of a person who communicates in sign language. The task is posed as follows: from the sensor readings (11 sensors per hand), determine the word that the person signed. How can we approach such a task? The sensor readings are represented as time series. Let's look at the readings for one of the "words".
# Load the sensor data df_database = pd.read_csv('sign_database.csv') # Load the class labels sign_classes = pd.read_csv('sign_classes.csv', index_col=0, header=0, names=['id', 'class']) # The id column contains the "word" identifiers # The time column is the timestamp # The remaining columns are the sensor readings for word id at time time df_database.head() # Pick one of the words, with identifier = 0 sign0 = df_database.query('id == 0').drop(['id'], axis=1).set_index('time') sign0.plot()
_____no_output_____
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
For each "word" we have a set of sensor readings from different parts of the hand at every point in time. The idea of our approach is the following: for each sensor we compute a set of characteristics (for example, the spread of values, the maximum, minimum and mean value, the number of "peaks", and so on) and use these new "features" to solve the classification task. Computing new features We will compute the features with the [tsfresh](http://tsfresh.readthedocs.io/en/latest/index.html) library. Generating new features can take a long time, so we saved the precomputed data, but you can repeat the computation if you wish.
## If you don't want to wait a long time, leave these lines commented out # from tsfresh.feature_extraction import extract_features # from tsfresh.feature_selection import select_features # from tsfresh.utilities.dataframe_functions import impute # from tsfresh.feature_extraction import ComprehensiveFCParameters, MinimalFCParameters, settings, EfficientFCParameters # sign_features = extract_features(df_database, column_id='id', column_sort='time', # default_fc_parameters=EfficientFCParameters(), # impute_function=impute) # sign_features_filtered = select_features(sign_features, sign_classes.loc[:, 'target']) # filepath = './tsfresh_features_filt.csv.gz' # sign_features_filtered.to_csv(filepath, compression='gzip') filepath = './tsfresh_features_filt.csv' sign_features_filtered = pd.read_csv(filepath) sign_features_filtered.shape sign_features_filtered.head()
_____no_output_____
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
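Before turning to tsfresh, a hand-rolled sketch of the idea (assuming the `sign0` DataFrame from the cell above; not part of the original assignment) shows what such per-sensor characteristics look like:

# A few simple per-sensor characteristics for the word with id == 0
manual_features = sign0.agg(['min', 'max', 'mean', 'std'])
print(manual_features.iloc[:, :4])  # show the first few sensors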
Baseline model We ended up with a great many features (as many as 10865). Let's apply principal component analysis to obtain a compressed feature representation while preserving the model's predictive power.
from sklearn.model_selection import cross_val_score from sklearn.model_selection import StratifiedKFold from sklearn.neighbors import KNeighborsClassifier from sklearn.decomposition import PCA from sklearn.preprocessing import StandardScaler from sklearn.pipeline import Pipeline from sklearn.preprocessing import LabelEncoder
_____no_output_____
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
Let's create a baseline without dimensionality reduction. The model hyperparameters were chosen arbitrarily.
# Prepare the data for the model # features X = sign_features_filtered.values # classes enc = LabelEncoder() enc.fit(sign_classes.loc[:, 'class']) sign_classes.loc[:, 'target'] = enc.transform(sign_classes.loc[:, 'class']) y = sign_classes.target.values # We will use 5-fold cross-validation cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=123) base_model = Pipeline([ ('scaler', StandardScaler()), ('clf', KNeighborsClassifier(n_neighbors=9)) ]) base_cv_scores = cross_val_score(base_model, X, y, cv=cv, scoring='accuracy') base_cv_scores.mean()
_____no_output_____
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
The quality of the baseline model should be around 92 percent. Principal component analysis * Add a principal component analysis step to the `base_model` pipeline. Starting from version 0.18, sklearn offers different solvers for PCA. Additionally, set the following parameters in the model: `svd_solver="randomized"` and `random_state=123`. * Leave the remaining model hyperparameters and the cross-validation scheme unchanged. * Find the smallest number of principal components such that the quality of the new pipeline exceeds 80%. * As the answer, give the share of explained variance with the PCA setting you found (to do this, fit PCA on all the data). Answer format: a number in the interval [0, 1] to two decimal places. *SOLUTION*
numbers = [i for i in range(9, 19)] scores = [] for n in numbers: base_model1 = Pipeline([ ('scaler', StandardScaler()), ('pca', PCA(n_components=n, svd_solver='randomized', random_state=123)), ('clf', KNeighborsClassifier(n_neighbors=9)) ]) scores.append(cross_val_score(base_model1, X, y, cv=cv, scoring='accuracy').mean()) best_pca = 14 X = StandardScaler().fit_transform(X) pca = PCA(n_components=best_pca, svd_solver='randomized', random_state=123) pca.fit(X) plt.plot(pca.explained_variance_ratio_) expl = pca.explained_variance_ratio_.sum()
_____no_output_____
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
Answer
print('{:.2f}'.format(expl))
0.39
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
Load libraries
!pip install -q -r requirements.txt import sys import os import numpy as np import pandas as pd from PIL import Image import torch import torch.nn as nn import torch.utils.data as D from torch.optim.lr_scheduler import ExponentialLR import torch.nn.functional as F from torch.autograd import Variable from torchvision import transforms from ignite.engine import Events from scripts.ignite import create_supervised_evaluator, create_supervised_trainer from ignite.metrics import Loss, Accuracy from ignite.contrib.handlers.tqdm_logger import ProgressBar from ignite.handlers import EarlyStopping, ModelCheckpoint from ignite.contrib.handlers import LinearCyclicalScheduler, CosineAnnealingScheduler from tqdm import tqdm_notebook from sklearn.model_selection import train_test_split from efficientnet_pytorch import EfficientNet from scripts.evaluate import eval_model, eval_model_10 import warnings warnings.filterwarnings('ignore')
_____no_output_____
Apache-2.0
my_notebooks/eval10_experiment5.ipynb
MichelML/ml-aging
Define dataset and model
img_dir = '../input/rxrxairgb512' path_data = '../input/rxrxaicsv' device = 'cuda' batch_size = 32 torch.manual_seed(0) model_name = 'efficientnet-b3' jitter = (0.6, 1.4) class ImagesDS(D.Dataset): # taken textbook from https://arxiv.org/pdf/1812.01187.pdf transform_train = transforms.Compose([ transforms.RandomResizedCrop(448), transforms.ColorJitter(brightness=jitter, contrast=jitter, saturation=jitter, hue=.1), transforms.RandomHorizontalFlip(p=0.5), # PCA Noise should go here, transforms.ToTensor(), transforms.Normalize(mean=(123.68, 116.779, 103.939), std=(58.393, 57.12, 57.375)) ]) transform_validation = transforms.Compose([ transforms.CenterCrop(448), transforms.ToTensor(), transforms.Normalize(mean=(123.68, 116.779, 103.939), std=(58.393, 57.12, 57.375)) ]) def __init__(self, df, img_dir=img_dir, mode='train', validation=False, site=1): self.records = df.to_records(index=False) self.site = site self.mode = mode self.img_dir = img_dir self.len = df.shape[0] self.validation = validation @staticmethod def _load_img_as_tensor(file_name, validation): with Image.open(file_name) as img: if not validation: return ImagesDS.transform_train(img) else: return ImagesDS.transform_validation(img) def _get_img_path(self, index, site=1): experiment, well, plate = self.records[index].experiment, self.records[index].well, self.records[index].plate return f'{self.img_dir}/{self.mode}/{experiment}_{plate}_{well}_s{site}.jpeg' def __getitem__(self, index): img1, img2 = [self._load_img_as_tensor(self._get_img_path(index, site), self.validation) for site in [1,2]] if self.mode == 'train': return img1, img2, int(self.records[index].sirna) else: return img1, img2, self.records[index].id_code def __len__(self): return self.len class TestImagesDS(D.Dataset): transform = transforms.Compose([ transforms.RandomCrop(448), transforms.ToTensor(), transforms.Normalize(mean=(123.68, 116.779, 103.939), std=(58.393, 57.12, 57.375)) ]) def __init__(self, df, img_dir=img_dir, mode='test', validation=False, site=1): self.records = df.to_records(index=False) self.site = site self.mode = mode self.img_dir = img_dir self.len = df.shape[0] self.validation = validation @staticmethod def _load_img_as_tensor(file_name): with Image.open(file_name) as img: return TestImagesDS.transform(img) def _get_img_path(self, index, site=1): experiment, well, plate = self.records[index].experiment, self.records[index].well, self.records[index].plate return f'{self.img_dir}/{self.mode}/{experiment}_{plate}_{well}_s{site}.jpeg' def get_image_pair(self, index): return [self._load_img_as_tensor(self._get_img_path(index, site)) for site in [1,2]] def __getitem__(self, index): image_pairs = [self.get_image_pair(index) for _ in range(20)] return image_pairs, self.records[index].id_code def __len__(self): return self.len # dataframes for training, cross-validation, and testing df_test = pd.read_csv(path_data+'/test.csv') # pytorch test dataset & loader ds_test = TestImagesDS(df_test, mode='test', validation=True) tloader = D.DataLoader(ds_test, batch_size=1, shuffle=False, num_workers=4) class EfficientNetTwoInputs(nn.Module): def __init__(self): super(EfficientNetTwoInputs, self).__init__() self.classes = 1108 model = EfficientNet.from_pretrained(model_name, num_classes=1108) num_ftrs = model._fc.in_features model._fc = nn.Identity() self.resnet = model self.fc = nn.Linear(num_ftrs * 2, self.classes) def forward(self, x1, x2): x1_out = self.resnet(x1) x2_out = self.resnet(x2) N, _, _, _ = x1.size() x1_out = x1_out.view(N, -1)
x2_out = x2_out.view(N, -1) out = torch.cat((x1_out, x2_out), 1) out = self.fc(out) return out model = EfficientNetTwoInputs()
Loaded pretrained weights for efficientnet-b3
Apache-2.0
my_notebooks/eval10_experiment5.ipynb
MichelML/ml-aging
Evaluate
model.cuda() eval_model_10(model, tloader, 'models/Model_efficientnet-b3_93.pth', path_data)
_____no_output_____
Apache-2.0
my_notebooks/eval10_experiment5.ipynb
MichelML/ml-aging
Introduction to the Research EnvironmentThe research environment is powered by IPython notebooks, which allow one to perform a great deal of data analysis and statistical validation. We'll demonstrate a few simple techniques here. Code Cells vs. Text CellsAs you can see, each cell can be either code or text. To select between them, choose from the 'Cell Type' dropdown menu on the top left. Executing a CommandA code cell will be evaluated when you press play, or when you press the shortcut, shift-enter. Evaluating a cell evaluates each line of code in sequence, and prints the results of the last line below the cell.
2 + 2
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Sometimes there is no result to be printed, as is the case with assignment.
X = 2
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Remember that only the result from the last line is printed.
2 + 2 3 + 3
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
However, you can print whichever lines you want using the `print` statement.
print(2 + 2) 3 + 3
4
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Knowing When a Cell is RunningWhile a cell is running, a `[*]` will display on the left. When a cell has yet to be executed, `[ ]` will display. When it has been run, a number will display indicating the order in which it was run during the execution of the notebook `[5]`. Try it on the following cell and watch it happen.
#Take some time to run something c = 0 for i in range(10000000+1): c = c + i print(c)
50000005000000
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Example 1: Arithmetic progression with common difference 1 $\frac{n\cdot(n+1)}{2}=1+2+3+4+5+6+\cdots+n$
n = 10000000 print(int(n*(n+1)/2))
50000005000000
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Importing LibrariesThe vast majority of the time, you'll want to use functions from pre-built libraries. You can't import every library on Quantopian due to security issues, but you can import most of the common scientific ones. Here I import numpy and pandas, the two most common and useful libraries in quant finance. I recommend copying this import statement to every new notebook.Notice that you can rename libraries to whatever you want after importing. The `as` statement allows this. Here we use `np` and `pd` as aliases for `numpy` and `pandas`. This is a very common aliasing and will be found in most code snippets around the web. The point behind this is to allow you to type fewer characters when you are frequently accessing these libraries.
import numpy as np import pandas as pd # This is a plotting library for pretty pictures. import matplotlib.pyplot as plt
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Tab AutocompletePressing tab will give you a list of IPython's best guesses for what you might want to type next. This is incredibly valuable and will save you a lot of time. If there is only one possible option for what you could type next, IPython will fill that in for you. Try pressing tab very frequently, it will seldom fill in anything you don't want, as if there is ambiguity a list will be shown. This is a great way to see what functions are available in a library.Try placing your cursor after the `.` and pressing tab.
np.random
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Getting Documentation HelpPlacing a question mark after a function and executing that line of code will give you the documentation IPython has for that function. It's often best to do this in a new cell, as you avoid re-executing other code and running into bugs.
np.random.normal?
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Example 2: Get a prime number between 1 and 100
def is_prime(number): if number <= 1: return False elif number <= 3: return True if number%2==0 or number%3==0: return False i = 5 while i*i <= number: if number % i == 0 or number % (i+2) == 0: return False i += 6 return True n = 0 while True: n = np.random.randint(0, 100) if is_prime(n): break print(n, "Es un numero primo")
49 Es un numero primo
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
SamplingWe'll sample some random data using a function from `numpy`.
# Sample 100 points with a mean of 0 and an std of 1. This is a standard normal distribution. X = np.random.normal(0, 1, 100) X
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
PlottingWe can use the plotting library we imported as follows.
plt.plot(X)
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Squelching Line OutputYou might have noticed the annoying line of the form `[]` before the plots. This is because the `.plot` function actually produces output. Sometimes we wish not to display output; we can accomplish this with a semicolon, as follows.
plt.plot(X);
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021