markdown | code | output | license | path | repo_name
---|---|---|---|---|---
Ignore the above output; the main outputs are below. | print(datai_list)
data_total_GB = [[z / (8 * 1024 * 1024) for z in y] for y in datai_list]
print(data_total_GB)
fig = plt.figure(figsize=(12,12))
#figsize=(15,15)
plt.plot(range(49),data_total_GB[0],label="30deg")
plt.plot(range(49),data_total_GB[1],label="20deg")
plt.plot(range(49),data_total_GB[2],label="10deg")
plt.plot(range(49),data_total_GB[3],label="0deg")
plt.plot(range(49),data_total_GB[4],label="-10deg")
plt.plot(range(49),data_total_GB[5],label="-20deg")
plt.plot(range(49),data_total_GB[6],label="-30deg")
plt.plot(range(49),data_total_GB[7],label="-40deg")
plt.plot(range(49),data_total_GB[8],label="-50deg")
plt.plot(range(49),data_total_GB[9],label="-60deg")
plt.plot(range(49),data_total_GB[10],label="-70deg",linestyle="--")
plt.plot(range(49),data_total_GB[11],label="-80deg",linestyle="--")
plt.plot(range(49),data_total_GB[12],label="-90deg",linestyle="--")
plt.xlabel("12 days")  # each number represents one 12-day chunk
plt.ylabel("Total Data Transfer (GB)")
plt.legend()
plt.show()
fig.savefig("data.png",bbox_inches = "tight") | _____no_output_____ | MIT | LatitudeTable.ipynb | kssumanth27/notebooks |
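The thirteen near-identical `plt.plot` calls above can also be generated with a loop. Here is a minimal equivalent sketch, assuming the same `datai_list` (and hence `data_total_GB`) as above:

```python
import matplotlib.pyplot as plt

labels = ["30deg", "20deg", "10deg", "0deg", "-10deg", "-20deg", "-30deg",
          "-40deg", "-50deg", "-60deg", "-70deg", "-80deg", "-90deg"]

fig = plt.figure(figsize=(12, 12))
for series, label in zip(data_total_GB, labels):
    # dashed lines for the last three latitudes, as in the original plot
    style = "--" if label in ("-70deg", "-80deg", "-90deg") else "-"
    plt.plot(range(49), series, label=label, linestyle=style)
plt.xlabel("12 days")  # each x value is one 12-day chunk
plt.ylabel("Total Data Transfer (GB)")
plt.legend()
plt.show()
fig.savefig("data.png", bbox_inches="tight")
```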
Amazon SageMaker Debugger - Using a built-in rule [Amazon SageMaker](https://aws.amazon.com/sagemaker/) is a managed platform to build, train, and host machine learning models. Amazon SageMaker Debugger is a new feature which offers the capability to debug machine learning models during training by identifying and detecting problems with the models in near real-time. In this notebook you'll look at how to use a SageMaker-provided built-in rule during a TensorFlow training job. How does Amazon SageMaker Debugger work? Amazon SageMaker Debugger lets you go beyond just looking at scalars like losses and accuracies during training and gives you full visibility into all tensors 'flowing through the graph' during training. Furthermore, it helps you monitor your training in near real-time using rules and alerts you once it has detected an inconsistency in the training flow. Concepts * **Tensors**: These represent the state of the training network at intermediate points during its execution * **Debug Hook**: The hook is the construct with which Amazon SageMaker Debugger looks into the training process and captures the tensors requested at the desired step intervals * **Rule**: A logical construct, implemented as Python code, which helps analyze the tensors captured by the hook and report anomalies, if any. With these concepts in mind, let's understand the overall flow of things that Amazon SageMaker Debugger uses to orchestrate debugging. Saving tensors during training The tensors captured by the debug hook are stored in the S3 location specified by you. There are two ways you can configure Amazon SageMaker Debugger to save tensors: With no changes to your training script If you use one of the Amazon SageMaker provided [Deep Learning Containers](https://docs.aws.amazon.com/sagemaker/latest/dg/pre-built-containers-frameworks-deep-learning.html) for TensorFlow 1.15, then you don't need to make any changes to your training script for the tensors to be stored. Amazon SageMaker Debugger will use the configuration you provide through the Amazon SageMaker SDK's TensorFlow `Estimator` when creating your job to save the tensors in the fashion you specify. You can review the script we are going to use at [src/mnist_zerocodechange.py](src/mnist_zerocodechange.py). You will note that this is an untouched TensorFlow script which uses the `tf.estimator` interface. Please note that Amazon SageMaker Debugger only supports the `tf.keras`, `tf.Estimator` and `tf.MonitoredSession` interfaces. A full description of support is available at [Amazon SageMaker Debugger with TensorFlow](https://github.com/awslabs/sagemaker-debugger/tree/master/docs/tensorflow.md). Orchestrating your script to store tensors For other containers, you need to make a couple of lines of changes to your training script. Amazon SageMaker Debugger exposes a library, `smdebug`, which allows you to capture these tensors and save them for analysis. It is highly customizable and allows you to save the specific tensors you want at different frequencies and possibly with other configurations. Refer to the [DeveloperGuide](https://github.com/awslabs/sagemaker-debugger/tree/master/docs) for details on how to use the Debugger library with your choice of framework in your training script. Here we have an example script orchestrated at [src/mnist_byoc](src/mnist_byoc.py). You also need to ensure that your container has the `smdebug` library installed. Analysis of tensors Once the tensors are saved, Amazon SageMaker Debugger can be configured to run debugging ***Rules*** on them.
At a very broad level, a rule is Python code used to detect certain conditions during training. Some of the conditions that a data scientist training an algorithm may care about are monitoring for gradients getting too large or too small, detecting overfitting, and so on. Amazon SageMaker Debugger comes pre-packaged with certain first-party (1P) rules. Users can write their own rules using the Amazon SageMaker Debugger APIs. You can also analyze raw tensor data outside of the Rules construct in, say, a SageMaker notebook, using Amazon SageMaker Debugger's full set of APIs. This notebook will show you how to use a built-in SageMaker Rule with your training job as well as provide a sneak peek into these APIs for interactive exploration. Please refer to the [Analysis Developer Guide](https://github.com/awslabs/sagemaker-debugger/blob/master/docs/api.md) for more on these APIs. Setup Follow this one-time setup to get your notebook up and running to use Amazon SageMaker Debugger. This is only needed because we plan to perform interactive analysis using this library in the notebook. | ! pip install smdebug | Requirement already satisfied: smdebug in /opt/conda/lib/python3.7/site-packages (0.7.2)
Requirement already satisfied: boto3>=1.10.32 in /opt/conda/lib/python3.7/site-packages (from smdebug) (1.12.45)
Requirement already satisfied: protobuf>=3.6.0 in /opt/conda/lib/python3.7/site-packages (from smdebug) (3.11.3)
Requirement already satisfied: packaging in /opt/conda/lib/python3.7/site-packages (from smdebug) (20.1)
Requirement already satisfied: numpy<2.0.0,>1.16.0 in /opt/conda/lib/python3.7/site-packages (from smdebug) (1.18.1)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /opt/conda/lib/python3.7/site-packages (from boto3>=1.10.32->smdebug) (0.9.5)
Requirement already satisfied: botocore<1.16.0,>=1.15.45 in /opt/conda/lib/python3.7/site-packages (from boto3>=1.10.32->smdebug) (1.15.45)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /opt/conda/lib/python3.7/site-packages (from boto3>=1.10.32->smdebug) (0.3.3)
Requirement already satisfied: six>=1.9 in /opt/conda/lib/python3.7/site-packages (from protobuf>=3.6.0->smdebug) (1.14.0)
Requirement already satisfied: setuptools in /opt/conda/lib/python3.7/site-packages (from protobuf>=3.6.0->smdebug) (45.2.0.post20200210)
Requirement already satisfied: pyparsing>=2.0.2 in /opt/conda/lib/python3.7/site-packages (from packaging->smdebug) (2.4.6)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /opt/conda/lib/python3.7/site-packages (from botocore<1.16.0,>=1.15.45->boto3>=1.10.32->smdebug) (2.8.1)
Requirement already satisfied: docutils<0.16,>=0.10 in /opt/conda/lib/python3.7/site-packages (from botocore<1.16.0,>=1.15.45->boto3>=1.10.32->smdebug) (0.15.2)
Requirement already satisfied: urllib3<1.26,>=1.20; python_version != "3.4" in /opt/conda/lib/python3.7/site-packages (from botocore<1.16.0,>=1.15.45->boto3>=1.10.32->smdebug) (1.25.8)
| Apache-2.0 | aws_sagemaker_studio/sagemaker_debugger/tensorflow_builtin_rule/tf-mnist-builtin-rule.ipynb | fhirschmann/amazon-sagemaker-examples |
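For reference, the "orchestrating your script" path described above boils down to creating an `smdebug` hook inside the training script and handing it to the estimator's `train` call. The snippet below is only a minimal sketch of that idea, not the contents of `src/mnist_byoc.py`; the output directory and collection names are assumptions.

```python
# Minimal sketch (assumed names/paths): a training script creating an smdebug
# hook itself instead of relying on the SageMaker-managed configuration.
import smdebug.tensorflow as smd

# Where to write tensors locally; SageMaker can upload this directory to S3.
hook = smd.SessionHook(
    out_dir="/opt/ml/output/tensors",             # assumed output path
    include_collections=["losses", "gradients"],  # built-in collections to save
)

# With a tf.estimator-based script, the hook is passed like any SessionRunHook:
# mnist_classifier.train(input_fn=train_input_fn, hooks=[hook])
```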
With the setup out of the way, let's start training our TensorFlow model in SageMaker with the debugger enabled. Training TensorFlow models in SageMaker with Amazon SageMaker Debugger SageMaker TensorFlow as a framework We'll train a TensorFlow model in this notebook with Amazon SageMaker Debugger enabled and monitor the training job with Amazon SageMaker Debugger Rules. This will be done using the Amazon SageMaker [TensorFlow 1.15.0](https://docs.aws.amazon.com/sagemaker/latest/dg/pre-built-containers-frameworks-deep-learning.html) container as a framework. | import boto3
import os
import sagemaker
from sagemaker.tensorflow import TensorFlow | _____no_output_____ | Apache-2.0 | aws_sagemaker_studio/sagemaker_debugger/tensorflow_builtin_rule/tf-mnist-builtin-rule.ipynb | fhirschmann/amazon-sagemaker-examples |
Let's import the libraries needed for our demo of Amazon SageMaker Debugger. | from sagemaker.debugger import Rule, DebuggerHookConfig, TensorBoardOutputConfig, CollectionConfig, rule_configs | _____no_output_____ | Apache-2.0 | aws_sagemaker_studio/sagemaker_debugger/tensorflow_builtin_rule/tf-mnist-builtin-rule.ipynb | fhirschmann/amazon-sagemaker-examples |
Now we'll define the configuration for our training run. We'll use image recognition on the MNIST dataset as our training example. | # define the entrypoint script
entrypoint_script='src/mnist_zerocodechange.py'
hyperparameters = {
"num_epochs": 1
}
!pygmentize src/mnist_zerocodechange.py | [33m"""[39;49;00m
[33mThis script is a simple MNIST training script which uses Tensorflow's Estimator interface.[39;49;00m
[33mIt is designed to be used with SageMaker Debugger in an official SageMaker Framework container (i.e. AWS Deep Learning Container). You will notice that this script looks exactly like a normal TensorFlow training script.[39;49;00m
[33mThe hook needed by SageMaker Debugger to save tensors during training will be automatically added in those environments. [39;49;00m
[33mThe hook will load configuration from json configuration that SageMaker will put in the training container from the configuration provided using the SageMaker python SDK when creating a job.[39;49;00m
[33mFor more information, please refer to https://github.com/awslabs/sagemaker-debugger/blob/master/docs/sagemaker.md [39;49;00m
[33m"""[39;49;00m
[37m# Standard Library[39;49;00m
[34mimport[39;49;00m [04m[36margparse[39;49;00m
[34mimport[39;49;00m [04m[36mrandom[39;49;00m
[37m# Third Party[39;49;00m
[34mimport[39;49;00m [04m[36mnumpy[39;49;00m [34mas[39;49;00m [04m[36mnp[39;49;00m
[34mimport[39;49;00m [04m[36mtensorflow[39;49;00m [34mas[39;49;00m [04m[36mtf[39;49;00m
[34mimport[39;49;00m [04m[36mlogging[39;49;00m
logging.getLogger().setLevel(logging.INFO)
parser = argparse.ArgumentParser()
parser.add_argument([33m"[39;49;00m[33m--lr[39;49;00m[33m"[39;49;00m, [36mtype[39;49;00m=[36mfloat[39;49;00m, default=[34m0.001[39;49;00m)
parser.add_argument([33m"[39;49;00m[33m--random_seed[39;49;00m[33m"[39;49;00m, [36mtype[39;49;00m=[36mbool[39;49;00m, default=[34mFalse[39;49;00m)
parser.add_argument([33m"[39;49;00m[33m--num_epochs[39;49;00m[33m"[39;49;00m, [36mtype[39;49;00m=[36mint[39;49;00m, default=[34m5[39;49;00m, help=[33m"[39;49;00m[33mNumber of epochs to train for[39;49;00m[33m"[39;49;00m)
parser.add_argument(
[33m"[39;49;00m[33m--num_steps[39;49;00m[33m"[39;49;00m,
[36mtype[39;49;00m=[36mint[39;49;00m,
help=[33m"[39;49;00m[33mNumber of steps to train for. If this[39;49;00m[33m"[39;49;00m [33m"[39;49;00m[33mis passed, it overrides num_epochs[39;49;00m[33m"[39;49;00m,
)
parser.add_argument(
[33m"[39;49;00m[33m--num_eval_steps[39;49;00m[33m"[39;49;00m,
[36mtype[39;49;00m=[36mint[39;49;00m,
help=[33m"[39;49;00m[33mNumber of steps to evaluate for. If this[39;49;00m[33m"[39;49;00m
[33m"[39;49;00m[33mis passed, it doesnt evaluate over the full eval set[39;49;00m[33m"[39;49;00m,
)
parser.add_argument([33m"[39;49;00m[33m--model_dir[39;49;00m[33m"[39;49;00m, [36mtype[39;49;00m=[36mstr[39;49;00m, default=[33m"[39;49;00m[33m/tmp/mnist_model[39;49;00m[33m"[39;49;00m)
args = parser.parse_args()
[37m# these random seeds are only intended for test purpose.[39;49;00m
[37m# for now, 2,2,12 could promise no assert failure when running tests.[39;49;00m
[37m# if you wish to change the number, notice that certain steps' tensor value may be capable of variation[39;49;00m
[34mif[39;49;00m args.random_seed:
tf.set_random_seed([34m2[39;49;00m)
np.random.seed([34m2[39;49;00m)
random.seed([34m12[39;49;00m)
[34mdef[39;49;00m [32mcnn_model_fn[39;49;00m(features, labels, mode):
[33m"""Model function for CNN."""[39;49;00m
[37m# Input Layer[39;49;00m
input_layer = tf.reshape(features[[33m"[39;49;00m[33mx[39;49;00m[33m"[39;49;00m], [-[34m1[39;49;00m, [34m28[39;49;00m, [34m28[39;49;00m, [34m1[39;49;00m])
[37m# Convolutional Layer #1[39;49;00m
conv1 = tf.layers.conv2d(
inputs=input_layer, filters=[34m32[39;49;00m, kernel_size=[[34m5[39;49;00m, [34m5[39;49;00m], padding=[33m"[39;49;00m[33msame[39;49;00m[33m"[39;49;00m, activation=tf.nn.relu
)
[37m# Pooling Layer #1[39;49;00m
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[[34m2[39;49;00m, [34m2[39;49;00m], strides=[34m2[39;49;00m)
[37m# Convolutional Layer #2 and Pooling Layer #2[39;49;00m
conv2 = tf.layers.conv2d(
inputs=pool1, filters=[34m64[39;49;00m, kernel_size=[[34m5[39;49;00m, [34m5[39;49;00m], padding=[33m"[39;49;00m[33msame[39;49;00m[33m"[39;49;00m, activation=tf.nn.relu
)
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[[34m2[39;49;00m, [34m2[39;49;00m], strides=[34m2[39;49;00m)
[37m# Dense Layer[39;49;00m
pool2_flat = tf.reshape(pool2, [-[34m1[39;49;00m, [34m7[39;49;00m * [34m7[39;49;00m * [34m64[39;49;00m])
dense = tf.layers.dense(inputs=pool2_flat, units=[34m1024[39;49;00m, activation=tf.nn.relu)
dropout = tf.layers.dropout(
inputs=dense, rate=[34m0.4[39;49;00m, training=mode == tf.estimator.ModeKeys.TRAIN
)
[37m# Logits Layer[39;49;00m
logits = tf.layers.dense(inputs=dropout, units=[34m10[39;49;00m)
predictions = {
[37m# Generate predictions (for PREDICT and EVAL mode)[39;49;00m
[33m"[39;49;00m[33mclasses[39;49;00m[33m"[39;49;00m: tf.argmax([36minput[39;49;00m=logits, axis=[34m1[39;49;00m),
[37m# Add `softmax_tensor` to the graph. It is used for PREDICT and by the[39;49;00m
[37m# `logging_hook`.[39;49;00m
[33m"[39;49;00m[33mprobabilities[39;49;00m[33m"[39;49;00m: tf.nn.softmax(logits, name=[33m"[39;49;00m[33msoftmax_tensor[39;49;00m[33m"[39;49;00m),
}
[34mif[39;49;00m mode == tf.estimator.ModeKeys.PREDICT:
[34mreturn[39;49;00m tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
[37m# Calculate Loss (for both TRAIN and EVAL modes)[39;49;00m
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
[37m# Configure the Training Op (for TRAIN mode)[39;49;00m
[34mif[39;49;00m mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=args.lr)
train_op = optimizer.minimize(loss=loss, global_step=tf.train.get_global_step())
[34mreturn[39;49;00m tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
[37m# Add evaluation metrics (for EVAL mode)[39;49;00m
eval_metric_ops = {
[33m"[39;49;00m[33maccuracy[39;49;00m[33m"[39;49;00m: tf.metrics.accuracy(labels=labels, predictions=predictions[[33m"[39;49;00m[33mclasses[39;49;00m[33m"[39;49;00m])
}
[34mreturn[39;49;00m tf.estimator.EstimatorSpec(mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
[37m# Load training and eval data[39;49;00m
((train_data, train_labels), (eval_data, eval_labels)) = tf.keras.datasets.mnist.load_data()
train_data = train_data / np.float32([34m255[39;49;00m)
train_labels = train_labels.astype(np.int32) [37m# not required[39;49;00m
eval_data = eval_data / np.float32([34m255[39;49;00m)
eval_labels = eval_labels.astype(np.int32) [37m# not required[39;49;00m
mnist_classifier = tf.estimator.Estimator(model_fn=cnn_model_fn, model_dir=args.model_dir)
train_input_fn = tf.estimator.inputs.numpy_input_fn(
x={[33m"[39;49;00m[33mx[39;49;00m[33m"[39;49;00m: train_data}, y=train_labels, batch_size=[34m128[39;49;00m, num_epochs=args.num_epochs, shuffle=[34mTrue[39;49;00m
)
eval_input_fn = tf.estimator.inputs.numpy_input_fn(
x={[33m"[39;49;00m[33mx[39;49;00m[33m"[39;49;00m: eval_data}, y=eval_labels, num_epochs=[34m1[39;49;00m, shuffle=[34mFalse[39;49;00m
)
mnist_classifier.train(input_fn=train_input_fn, steps=args.num_steps)
mnist_classifier.evaluate(input_fn=eval_input_fn, steps=args.num_eval_steps)
| Apache-2.0 | aws_sagemaker_studio/sagemaker_debugger/tensorflow_builtin_rule/tf-mnist-builtin-rule.ipynb | fhirschmann/amazon-sagemaker-examples |
Setting up the Estimator Now it's time to set up our TensorFlow estimator. We've added new parameters to the estimator to enable your training job for debugging through Amazon SageMaker Debugger. These new parameters are explained below. * **debugger_hook_config**: This new parameter accepts a local path where you wish your tensors to be written to and also accepts the S3 URI where you wish your tensors to be uploaded to. SageMaker will take care of uploading these tensors transparently during execution. * **rules**: This new parameter accepts a list of rules you wish to evaluate against the tensors output by this training job. For rules, Amazon SageMaker Debugger supports two types: * **SageMaker Rules**: These are rules specially curated by the data science and engineering teams in Amazon SageMaker which you can opt to evaluate against your training job. * **Custom Rules**: You can optionally choose to write your own rule as a Python source file and have it evaluated against your training job. For Amazon SageMaker Debugger to evaluate such a rule, you would have to provide the S3 location of the rule source and the evaluator image. Using Amazon SageMaker Rules In this example we'll demonstrate how to use SageMaker rules evaluated against your training job. You can find the list of SageMaker rules and the configurations best suited for using them [here](https://github.com/awslabs/sagemaker-debugger-rulesconfig). The rules we'll use are **VanishingGradient** and **LossNotDecreasing**. As the names suggest, these rules evaluate whether there are vanishing gradients in the tensors captured by the debugging hook during training and whether the loss is not decreasing. | rules = [
Rule.sagemaker(rule_configs.vanishing_gradient()),
Rule.sagemaker(rule_configs.loss_not_decreasing())
]
estimator = TensorFlow(
role=sagemaker.get_execution_role(),
base_job_name='smdebugger-demo-mnist-tensorflow',
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
train_volume_size=400,
entry_point=entrypoint_script,
framework_version='1.15',
py_version='py3',
train_max_run=3600,
script_mode=True,
hyperparameters=hyperparameters,
## New parameter
rules = rules
) | _____no_output_____ | Apache-2.0 | aws_sagemaker_studio/sagemaker_debugger/tensorflow_builtin_rule/tf-mnist-builtin-rule.ipynb | fhirschmann/amazon-sagemaker-examples |
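The estimator above relies on the default hook configuration and only passes `rules`. To control where tensors go and which collections are saved (the **debugger_hook_config** parameter described earlier), an explicit configuration can be passed as well. This is only a sketch using the classes already imported above; the S3 path and the save interval are illustrative assumptions, not values used in this notebook.

```python
# Illustrative only: an explicit hook configuration with a custom save interval.
custom_hook_config = DebuggerHookConfig(
    s3_output_path="s3://<your-bucket>/smdebug-tensors",  # assumed bucket/prefix
    collection_configs=[
        CollectionConfig(name="losses", parameters={"save_interval": "50"}),
        CollectionConfig(name="gradients", parameters={"save_interval": "50"}),
    ],
)

# It would then be passed to the TensorFlow estimator alongside `rules`:
# estimator = TensorFlow(..., debugger_hook_config=custom_hook_config, rules=rules)
```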
*Note that Amazon Sagemaker Debugger is only supported for py_version='py3' currently.*Let's start the training by calling `fit()` on the TensorFlow estimator. | estimator.fit(wait=True) | 2020-04-27 23:56:40 Starting - Starting the training job...
2020-04-27 23:57:04 Starting - Launching requested ML instances
********* Debugger Rule Status *********
*
* VanishingGradient: InProgress
* LossNotDecreasing: InProgress
*
****************************************
...
2020-04-27 23:57:36 Starting - Preparing the instances for training.........
2020-04-27 23:59:10 Downloading - Downloading input data
2020-04-27 23:59:10 Training - Downloading the training image...
2020-04-27 23:59:30 Training - Training image download completed. Training in progress..[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/__init__.py:1473: The name tf.estimator.inputs is deprecated. Please use tf.compat.v1.estimator.inputs instead.
[0m
[34m2020-04-27 23:59:36,257 sagemaker-containers INFO Imported framework sagemaker_tensorflow_container.training[0m
[34m2020-04-27 23:59:36,263 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2020-04-27 23:59:36,707 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2020-04-27 23:59:36,727 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2020-04-27 23:59:36,746 sagemaker-containers INFO No GPUs detected (normal if no gpus installed)[0m
[34m2020-04-27 23:59:36,760 sagemaker-containers INFO Invoking user script
[0m
[34mTraining Env:
[0m
[34m{
"additional_framework_parameters": {},
"channel_input_dirs": {},
"current_host": "algo-1",
"framework_module": "sagemaker_tensorflow_container.training:main",
"hosts": [
"algo-1"
],
"hyperparameters": {
"model_dir": "s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model",
"num_epochs": 3
},
"input_config_dir": "/opt/ml/input/config",
"input_data_config": {},
"input_dir": "/opt/ml/input",
"is_master": true,
"job_name": "smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900",
"log_level": 20,
"master_hostname": "algo-1",
"model_dir": "/opt/ml/model",
"module_dir": "s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/source/sourcedir.tar.gz",
"module_name": "mnist_zerocodechange",
"network_interface_name": "eth0",
"num_cpus": 4,
"num_gpus": 0,
"output_data_dir": "/opt/ml/output/data",
"output_dir": "/opt/ml/output",
"output_intermediate_dir": "/opt/ml/output/intermediate",
"resource_config": {
"current_host": "algo-1",
"hosts": [
"algo-1"
],
"network_interface_name": "eth0"
},
"user_entry_point": "mnist_zerocodechange.py"[0m
[34m}
[0m
[34mEnvironment variables:
[0m
[34mSM_HOSTS=["algo-1"][0m
[34mSM_NETWORK_INTERFACE_NAME=eth0[0m
[34mSM_HPS={"model_dir":"s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model","num_epochs":3}[0m
[34mSM_USER_ENTRY_POINT=mnist_zerocodechange.py[0m
[34mSM_FRAMEWORK_PARAMS={}[0m
[34mSM_RESOURCE_CONFIG={"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"}[0m
[34mSM_INPUT_DATA_CONFIG={}[0m
[34mSM_OUTPUT_DATA_DIR=/opt/ml/output/data[0m
[34mSM_CHANNELS=[][0m
[34mSM_CURRENT_HOST=algo-1[0m
[34mSM_MODULE_NAME=mnist_zerocodechange[0m
[34mSM_LOG_LEVEL=20[0m
[34mSM_FRAMEWORK_MODULE=sagemaker_tensorflow_container.training:main[0m
[34mSM_INPUT_DIR=/opt/ml/input[0m
[34mSM_INPUT_CONFIG_DIR=/opt/ml/input/config[0m
[34mSM_OUTPUT_DIR=/opt/ml/output[0m
[34mSM_NUM_CPUS=4[0m
[34mSM_NUM_GPUS=0[0m
[34mSM_MODEL_DIR=/opt/ml/model[0m
[34mSM_MODULE_DIR=s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/source/sourcedir.tar.gz[0m
[34mSM_TRAINING_ENV={"additional_framework_parameters":{},"channel_input_dirs":{},"current_host":"algo-1","framework_module":"sagemaker_tensorflow_container.training:main","hosts":["algo-1"],"hyperparameters":{"model_dir":"s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model","num_epochs":3},"input_config_dir":"/opt/ml/input/config","input_data_config":{},"input_dir":"/opt/ml/input","is_master":true,"job_name":"smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900","log_level":20,"master_hostname":"algo-1","model_dir":"/opt/ml/model","module_dir":"s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/source/sourcedir.tar.gz","module_name":"mnist_zerocodechange","network_interface_name":"eth0","num_cpus":4,"num_gpus":0,"output_data_dir":"/opt/ml/output/data","output_dir":"/opt/ml/output","output_intermediate_dir":"/opt/ml/output/intermediate","resource_config":{"current_host":"algo-1","hosts":["algo-1"],"network_interface_name":"eth0"},"user_entry_point":"mnist_zerocodechange.py"}[0m
[34mSM_USER_ARGS=["--model_dir","s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model","--num_epochs","3"][0m
[34mSM_OUTPUT_INTERMEDIATE_DIR=/opt/ml/output/intermediate[0m
[34mSM_HP_MODEL_DIR=s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model[0m
[34mSM_HP_NUM_EPOCHS=3[0m
[34mPYTHONPATH=/opt/ml/code:/usr/local/bin:/usr/lib/python36.zip:/usr/lib/python3.6:/usr/lib/python3.6/lib-dynload:/usr/local/lib/python3.6/dist-packages:/usr/lib/python3/dist-packages
[0m
[34mInvoking script with the following command:
[0m
[34m/usr/bin/python3 mnist_zerocodechange.py --model_dir s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model --num_epochs 3
[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/__init__.py:1473: The name tf.estimator.inputs is deprecated. Please use tf.compat.v1.estimator.inputs instead.
[0m
[34mDownloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz[0m
[34m#015 8192/11490434 [..............................] - ETA: 0s#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#015 106496/11490434 [..............................] - ETA: 5s#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#015 737280/11490434 [>.............................] - ETA: 1s#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#015 5210112/11490434 [============>.................] - ETA: 0s#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#015 6987776/11490434 [=================>............] - ETA: 0s#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#010#01511493376/11490434 [==============================] - 0s 0us/step[0m
[34mINFO:tensorflow:Using default config.[0m
[34mINFO:tensorflow:Using config: {'_model_dir': 's3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': allow_soft_placement: true[0m
[34mgraph_options {
rewrite_options {
meta_optimizer_iterations: ONE
}[0m
[34m}[0m
[34m, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_experimental_max_worker_delay_secs': None, '_session_creation_timeout_secs': 7200, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7fb77f42d1d0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:114: The name tf.estimator.inputs.numpy_input_fn is deprecated. Please use tf.compat.v1.estimator.inputs.numpy_input_fn instead.
[0m
[34m[2020-04-27 23:59:39.646 ip-10-0-201-124.us-east-2.compute.internal:26 INFO json_config.py:90] Creating hook from json_config at /opt/ml/input/config/debughookconfig.json.[0m
[34m[2020-04-27 23:59:39.647 ip-10-0-201-124.us-east-2.compute.internal:26 INFO hook.py:183] tensorboard_dir has not been set for the hook. SMDebug will not be exporting tensorboard summaries.[0m
[34m[2020-04-27 23:59:39.647 ip-10-0-201-124.us-east-2.compute.internal:26 INFO hook.py:228] Saving to /opt/ml/output/tensors[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/training_util.py:236: Variable.initialized_value (from tensorflow.python.ops.variables) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/inputs/queues/feeding_queue_runner.py:62: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mTo construct input pipelines, use the `tf.data` module.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/inputs/queues/feeding_queue_runner.py:62: QueueRunner.__init__ (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mTo construct input pipelines, use the `tf.data` module.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/inputs/queues/feeding_functions.py:500: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mTo construct input pipelines, use the `tf.data` module.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/inputs/queues/feeding_functions.py:500: add_queue_runner (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mTo construct input pipelines, use the `tf.data` module.[0m
[34mINFO:tensorflow:Calling model_fn.[0m
[34mINFO:tensorflow:Calling model_fn.[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:54: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse `tf.keras.layers.Conv2D` instead.[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:54: conv2d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse `tf.keras.layers.Conv2D` instead.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/layers/convolutional.py:424: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mPlease use `layer.__call__` method instead.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/layers/convolutional.py:424: Layer.apply (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mPlease use `layer.__call__` method instead.[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:58: max_pooling2d (from tensorflow.python.layers.pooling) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse keras.layers.MaxPooling2D instead.[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:58: max_pooling2d (from tensorflow.python.layers.pooling) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse keras.layers.MaxPooling2D instead.[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:68: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse keras.layers.Dense instead.[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:68: dense (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse keras.layers.Dense instead.[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:70: dropout (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse keras.layers.dropout instead.[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:70: dropout (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse keras.layers.dropout instead.[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:88: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.
[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:88: The name tf.losses.sparse_softmax_cross_entropy is deprecated. Please use tf.compat.v1.losses.sparse_softmax_cross_entropy instead.
[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/losses/losses_impl.py:121: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse tf.where in 2.0, which has the same broadcast rule as np.where[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/losses/losses_impl.py:121: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse tf.where in 2.0, which has the same broadcast rule as np.where[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:92: The name tf.train.GradientDescentOptimizer is deprecated. Please use tf.compat.v1.train.GradientDescentOptimizer instead.
[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:92: The name tf.train.GradientDescentOptimizer is deprecated. Please use tf.compat.v1.train.GradientDescentOptimizer instead.
[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:93: The name tf.train.get_global_step is deprecated. Please use tf.compat.v1.train.get_global_step instead.
[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:93: The name tf.train.get_global_step is deprecated. Please use tf.compat.v1.train.get_global_step instead.
[0m
[34mINFO:tensorflow:Done calling model_fn.[0m
[34mINFO:tensorflow:Done calling model_fn.[0m
[34mINFO:tensorflow:Create CheckpointSaverHook.[0m
[34mINFO:tensorflow:Create CheckpointSaverHook.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/__init__.py:117: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/__init__.py:117: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/__init__.py:143: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/__init__.py:143: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
[0m
[34m[2020-04-27 23:59:40.256 ip-10-0-201-124.us-east-2.compute.internal:26 INFO hook.py:364] Monitoring the collections: gradients, losses, sm_metrics, metrics[0m
[34mINFO:tensorflow:Graph was finalized.[0m
[34mINFO:tensorflow:Graph was finalized.[0m
[34mINFO:tensorflow:Running local_init_op.[0m
[34mINFO:tensorflow:Running local_init_op.[0m
[34mINFO:tensorflow:Done running local_init_op.[0m
[34mINFO:tensorflow:Done running local_init_op.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/monitored_session.py:888: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mTo construct input pipelines, use the `tf.data` module.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/monitored_session.py:888: start_queue_runners (from tensorflow.python.training.queue_runner_impl) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mTo construct input pipelines, use the `tf.data` module.[0m
[34mINFO:tensorflow:Saving checkpoints for 0 into s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model/model.ckpt.[0m
[34mINFO:tensorflow:Saving checkpoints for 0 into s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model/model.ckpt.[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/smdebug/tensorflow/session.py:304: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse `tf.compat.v1.graph_util.extract_sub_graph`[0m
[34mWARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/smdebug/tensorflow/session.py:304: extract_sub_graph (from tensorflow.python.framework.graph_util_impl) is deprecated and will be removed in a future version.[0m
[34mInstructions for updating:[0m
[34mUse `tf.compat.v1.graph_util.extract_sub_graph`[0m
[34mINFO:tensorflow:loss = 2.3208826, step = 1[0m
[34mINFO:tensorflow:loss = 2.3208826, step = 1[0m
[34mERROR:root:'NoneType' object has no attribute 'write'[0m
[34mINFO:tensorflow:global_step/sec: 7.73273[0m
[34mINFO:tensorflow:global_step/sec: 7.73273[0m
[34mINFO:tensorflow:loss = 2.2985032, step = 101 (12.933 sec)[0m
[34mINFO:tensorflow:loss = 2.2985032, step = 101 (12.933 sec)[0m
[34mINFO:tensorflow:global_step/sec: 8.0584[0m
[34mINFO:tensorflow:global_step/sec: 8.0584[0m
[34mINFO:tensorflow:loss = 2.2796118, step = 201 (12.409 sec)[0m
[34mINFO:tensorflow:loss = 2.2796118, step = 201 (12.409 sec)[0m
[34mINFO:tensorflow:global_step/sec: 8.16216[0m
[34mINFO:tensorflow:global_step/sec: 8.16216[0m
[34mINFO:tensorflow:loss = 2.2400365, step = 301 (12.252 sec)[0m
[34mINFO:tensorflow:loss = 2.2400365, step = 301 (12.252 sec)[0m
[34mINFO:tensorflow:global_step/sec: 8.20902[0m
[34mINFO:tensorflow:global_step/sec: 8.20902[0m
[34mINFO:tensorflow:loss = 2.244422, step = 401 (12.182 sec)[0m
[34mINFO:tensorflow:loss = 2.244422, step = 401 (12.182 sec)[0m
[34mINFO:tensorflow:global_step/sec: 8.29027[0m
[34mINFO:tensorflow:global_step/sec: 8.29027[0m
[34mINFO:tensorflow:loss = 2.2057943, step = 501 (12.062 sec)[0m
[34mINFO:tensorflow:loss = 2.2057943, step = 501 (12.062 sec)[0m
[34mINFO:tensorflow:global_step/sec: 8.12505[0m
[34mINFO:tensorflow:global_step/sec: 8.12505[0m
[34mINFO:tensorflow:loss = 2.1722574, step = 601 (12.308 sec)[0m
[34mINFO:tensorflow:loss = 2.1722574, step = 601 (12.308 sec)[0m
[34mINFO:tensorflow:global_step/sec: 8.21211[0m
[34mINFO:tensorflow:global_step/sec: 8.21211[0m
[34mINFO:tensorflow:loss = 2.126483, step = 701 (12.177 sec)[0m
[34mINFO:tensorflow:loss = 2.126483, step = 701 (12.177 sec)[0m
[34mINFO:tensorflow:global_step/sec: 8.54074[0m
[34mINFO:tensorflow:global_step/sec: 8.54074[0m
[34mINFO:tensorflow:loss = 2.0739117, step = 801 (11.708 sec)[0m
[34mINFO:tensorflow:loss = 2.0739117, step = 801 (11.708 sec)[0m
[34mINFO:tensorflow:global_step/sec: 8.60594[0m
[34mINFO:tensorflow:global_step/sec: 8.60594[0m
[34mINFO:tensorflow:loss = 2.023419, step = 901 (11.620 sec)[0m
[34mINFO:tensorflow:loss = 2.023419, step = 901 (11.620 sec)[0m
[34mINFO:tensorflow:global_step/sec: 8.60855[0m
[34mINFO:tensorflow:global_step/sec: 8.60855[0m
[34mINFO:tensorflow:loss = 1.9700434, step = 1001 (11.791 sec)[0m
[34mINFO:tensorflow:loss = 1.9700434, step = 1001 (11.791 sec)[0m
[34mINFO:tensorflow:global_step/sec: 8.33646[0m
[34mINFO:tensorflow:global_step/sec: 8.33646[0m
[34mINFO:tensorflow:loss = 1.8422208, step = 1101 (11.821 sec)[0m
[34mINFO:tensorflow:loss = 1.8422208, step = 1101 (11.821 sec)[0m
[34mINFO:tensorflow:global_step/sec: 8.58772[0m
[34mINFO:tensorflow:global_step/sec: 8.58772[0m
[34mINFO:tensorflow:loss = 1.7151158, step = 1201 (11.644 sec)[0m
[34mINFO:tensorflow:loss = 1.7151158, step = 1201 (11.644 sec)[0m
[34mINFO:tensorflow:global_step/sec: 8.54481[0m
[34mINFO:tensorflow:global_step/sec: 8.54481[0m
[34mINFO:tensorflow:loss = 1.4826751, step = 1301 (11.703 sec)[0m
[34mINFO:tensorflow:loss = 1.4826751, step = 1301 (11.703 sec)[0m
2020-04-28 00:02:39 Uploading - Uploading generated training model[34mINFO:tensorflow:global_step/sec: 8.40044[0m
[34mINFO:tensorflow:global_step/sec: 8.40044[0m
[34mINFO:tensorflow:loss = 1.3823929, step = 1401 (11.904 sec)[0m
[34mINFO:tensorflow:loss = 1.3823929, step = 1401 (11.904 sec)[0m
[34mINFO:tensorflow:Saving checkpoints for 1407 into s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model/model.ckpt.[0m
[34mINFO:tensorflow:Saving checkpoints for 1407 into s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model/model.ckpt.[0m
[34mINFO:tensorflow:Loss for final step: 1.3139015.[0m
[34mINFO:tensorflow:Loss for final step: 1.3139015.[0m
[34mINFO:tensorflow:Calling model_fn.[0m
[34mINFO:tensorflow:Calling model_fn.[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:98: The name tf.metrics.accuracy is deprecated. Please use tf.compat.v1.metrics.accuracy instead.
[0m
[34mWARNING:tensorflow:From mnist_zerocodechange.py:98: The name tf.metrics.accuracy is deprecated. Please use tf.compat.v1.metrics.accuracy instead.
[0m
[34mINFO:tensorflow:Done calling model_fn.[0m
[34mINFO:tensorflow:Done calling model_fn.[0m
[34mINFO:tensorflow:Starting evaluation at 2020-04-28T00:02:32Z[0m
[34mINFO:tensorflow:Starting evaluation at 2020-04-28T00:02:32Z[0m
[34m[2020-04-28 00:02:32.878 ip-10-0-201-124.us-east-2.compute.internal:26 INFO hook.py:364] Monitoring the collections: gradients, losses, sm_metrics, metrics[0m
[34mINFO:tensorflow:Graph was finalized.[0m
[34mINFO:tensorflow:Graph was finalized.[0m
[34mINFO:tensorflow:Restoring parameters from s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model/model.ckpt-1407[0m
[34mINFO:tensorflow:Restoring parameters from s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model/model.ckpt-1407[0m
[34mINFO:tensorflow:Running local_init_op.[0m
[34mINFO:tensorflow:Running local_init_op.[0m
[34mINFO:tensorflow:Done running local_init_op.[0m
[34mINFO:tensorflow:Done running local_init_op.[0m
[34mINFO:tensorflow:Finished evaluation at 2020-04-28-00:02:36[0m
[34mINFO:tensorflow:Finished evaluation at 2020-04-28-00:02:36[0m
[34mINFO:tensorflow:Saving dict for global step 1407: accuracy = 0.7942, global_step = 1407, loss = 1.2718687[0m
[34mINFO:tensorflow:Saving dict for global step 1407: accuracy = 0.7942, global_step = 1407, loss = 1.2718687[0m
[34mINFO:tensorflow:Saving 'checkpoint_path' summary for global step 1407: s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model/model.ckpt-1407[0m
[34mINFO:tensorflow:Saving 'checkpoint_path' summary for global step 1407: s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/model/model.ckpt-1407[0m
[34m[2020-04-28 00:02:37.390 ip-10-0-201-124.us-east-2.compute.internal:26 INFO utils.py:25] The end of training job file will not be written for jobs running under SageMaker.[0m
[34m2020-04-28 00:02:37,661 sagemaker_tensorflow_container.training WARNING No model artifact is saved under path /opt/ml/model. Your training job will not save any model files to S3.[0m
[34mFor details of how to construct your training script see:[0m
[34mhttps://sagemaker.readthedocs.io/en/stable/using_tf.html#adapting-your-local-tensorflow-script[0m
[34m2020-04-28 00:02:37,662 sagemaker-containers INFO Reporting training SUCCESS[0m
2020-04-28 00:03:10 Completed - Training job completed
********* Debugger Rule Status *********
*
* VanishingGradient: NoIssuesFound
* LossNotDecreasing: NoIssuesFound
*
****************************************
Training seconds: 241
Billable seconds: 241
| Apache-2.0 | aws_sagemaker_studio/sagemaker_debugger/tensorflow_builtin_rule/tf-mnist-builtin-rule.ipynb | fhirschmann/amazon-sagemaker-examples |
Result As a result of calling `fit()`, Amazon SageMaker Debugger kicked off two rule evaluation jobs, to monitor vanishing gradients and loss decrease, in parallel with the training job. The rule evaluation statuses are visible in the training logs at regular intervals. As you can see in the summary above, no step of the training reported vanishing gradients in the tensors, and the LossNotDecreasing rule likewise finished with no issues found. | estimator.latest_training_job.rule_job_summary() | _____no_output_____ | Apache-2.0 | aws_sagemaker_studio/sagemaker_debugger/tensorflow_builtin_rule/tf-mnist-builtin-rule.ipynb | fhirschmann/amazon-sagemaker-examples |
Let's try to look at the logs of the rule job for loss not decreasing. To do that, we'll use this utility function to get a link to the rule job logs. | def _get_rule_job_name(training_job_name, rule_configuration_name, rule_job_arn):
"""Helper function to get the rule job name with correct casing"""
return "{}-{}-{}".format(
training_job_name[:26], rule_configuration_name[:26], rule_job_arn[-8:]
)
def _get_cw_url_for_rule_job(rule_job_name, region):
return "https://{}.console.aws.amazon.com/cloudwatch/home?region={}#logStream:group=/aws/sagemaker/ProcessingJobs;prefix={};streamFilter=typeLogStreamPrefix".format(region, region, rule_job_name)
def get_rule_jobs_cw_urls(estimator):
region = boto3.Session().region_name
training_job = estimator.latest_training_job
training_job_name = training_job.describe()["TrainingJobName"]
rule_eval_statuses = training_job.describe()["DebugRuleEvaluationStatuses"]
result={}
for status in rule_eval_statuses:
if status.get("RuleEvaluationJobArn", None) is not None:
rule_job_name = _get_rule_job_name(training_job_name, status["RuleConfigurationName"], status["RuleEvaluationJobArn"])
result[status["RuleConfigurationName"]] = _get_cw_url_for_rule_job(rule_job_name, region)
return result
get_rule_jobs_cw_urls(estimator) | _____no_output_____ | Apache-2.0 | aws_sagemaker_studio/sagemaker_debugger/tensorflow_builtin_rule/tf-mnist-builtin-rule.ipynb | fhirschmann/amazon-sagemaker-examples |
Data Analysis - Interactive ExplorationNow that we have trained a job, and looked at automated analysis through rules, let us also look at another aspect of Amazon SageMaker Debugger. It allows us to perform interactive exploration of the tensors saved in real time or after the job. Here we focus on after-the-fact analysis of the above job. We import the `smdebug` library, which defines a concept of Trial that represents a single training run. Note how we fetch the path to debugger artifacts for the above job. | from smdebug.trials import create_trial
trial = create_trial(estimator.latest_job_debugger_artifacts_path()) | [2020-04-28 00:07:09.068 f8455ab5c5ab:546 INFO s3_trial.py:42] Loading trial debug-output at path s3://sagemaker-us-east-2-441510144314/smdebugger-demo-mnist-tensorflow-2020-04-27-23-56-39-900/debug-output
| Apache-2.0 | aws_sagemaker_studio/sagemaker_debugger/tensorflow_builtin_rule/tf-mnist-builtin-rule.ipynb | fhirschmann/amazon-sagemaker-examples |
We can list all the tensors that were recorded to know what we want to plot. Each one of these names is the name of a tensor, which is auto-assigned by TensorFlow. In some frameworks where such names are not available, we try to create a name based on the layer's name and whether it is weight, bias, gradient, input or output. | trial.tensor_names() | [2020-04-28 00:07:11.217 f8455ab5c5ab:546 INFO trial.py:198] Training has ended, will refresh one final time in 1 sec.
[2020-04-28 00:07:12.236 f8455ab5c5ab:546 INFO trial.py:210] Loaded all steps
| Apache-2.0 | aws_sagemaker_studio/sagemaker_debugger/tensorflow_builtin_rule/tf-mnist-builtin-rule.ipynb | fhirschmann/amazon-sagemaker-examples |
We can also retrieve tensors by some default collections that `smdebug` creates from your training job. Here we are interested in the losses collection, so we can retrieve the names of tensors in losses collection as follows. Amazon SageMaker Debugger creates default collections such as weights, gradients, biases, losses automatically. You can also create custom collections from your tensors. | trial.tensor_names(collection="losses")
import matplotlib.pyplot as plt
import re
# Define a function that, for the given tensor name, walks through all
# the iterations for which we have data and fetches the value.
# Returns the set of steps and the values
def get_data(trial, tname):
tensor = trial.tensor(tname)
steps = tensor.steps()
vals = [tensor.value(s) for s in steps]
return steps, vals
def plot_tensors(trial, collection_name, ylabel=''):
"""
Takes a `trial` and plots all tensors that match the given regex.
"""
plt.figure(
num=1, figsize=(8, 8), dpi=80,
facecolor='w', edgecolor='k')
tensors = trial.tensor_names(collection=collection_name)
for tensor_name in sorted(tensors):
steps, data = get_data(trial, tensor_name)
plt.plot(steps, data, label=tensor_name)
plt.legend(bbox_to_anchor=(1.04,1), loc='upper left')
plt.xlabel('Iteration')
plt.ylabel(ylabel)
plt.show()
plot_tensors(trial, "losses", ylabel="Loss") | _____no_output_____ | Apache-2.0 | aws_sagemaker_studio/sagemaker_debugger/tensorflow_builtin_rule/tf-mnist-builtin-rule.ipynb | fhirschmann/amazon-sagemaker-examples |
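The `plot_tensors` helper is not limited to losses: the other default collections mentioned above (gradients, weights, and so on) can be explored the same way. A small usage sketch on the same `trial` (the tensor picked at the end is arbitrary):

```python
# Reuse the helper on another default collection of this trial.
# Gradient tensors can be numerous, so the legend may get busy.
plot_tensors(trial, "gradients", ylabel="Gradient value")

# Raw values are also available without plotting, e.g. the steps recorded
# for the first gradient tensor and its value at the last step:
grad_name = trial.tensor_names(collection="gradients")[0]
last_step = trial.tensor(grad_name).steps()[-1]
print(grad_name, trial.tensor(grad_name).value(last_step))
```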
Recursion Introduction **Objectives**: - Understand that complex problems which may be hard to solve with the "usual techniques" can have a simple recursive solution, - Learn to formulate programs recursively, - Understand and apply the three laws of recursion, - Understand recursion as a form of iteration, - Implement a recursive solution to a problem, - Understand how recursion works at a low level. Recursion is a *problem-solving method*. It consists of **splitting the problem, then the resulting sub-problems ... until we obtain sub-problems so small that they can be solved directly**. Ordinarily, recursion requires that a function call **itself**. Example: the sum of a list of numbers To illustrate the point, let us consider the classic problem of *computing the sum of a list of numbers*. Here is its "classic" solution: | def sommer(nbs):
somme = 0 # accu
for nb in nbs:
somme = somme + nb # ou somme += nb
return somme
assert sommer([5, 4, 7]) == 16 | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
The summation then proceeds as follows: $$(\underbrace{ (\underbrace{ (\underbrace{(0+5)}_{\text{it. 1}} +4) }_{\text{it. 2}}+7)}_{\text{it. 3}})$$ But suppose for a moment that we had neither a `while` loop nor a `for` loop. Since, mathematically, $$(((5) + 4)+7)=5+4+7=(5+(4+(7)))$$ we can use the last expression to simply split the problem into two parts: 1. Take the *first* number of the list: `5`, 2. compute the sum of the *remaining* numbers: `sommer([4, 7])`, 3. add the values obtained in the first two steps: `5 + sommer([4, 7])`. Steps 1. and 3. are elementary, and **step 2.** is a problem **similar** to the initial one **but smaller**. We can then reapply **the same** splitting to this sub-problem ... until we obtain a sub-problem **so small** that it can be *solved directly*: > if the list to sum contains only one number, its sum is simply that number. For our example this gives: ```sommer([5, 4, 7]) = 5 + sommer([4, 7]) sommer([4, 7]) = 4 + sommer([7]) sommer([7]) = 7 = 4 + 7 = 5 + 11 = 16``` or, "flattening" it: ```sommer([5, 4, 7]) = 5 + sommer([4, 7]) = 5 + (4 + sommer([7])) = 5 + (4 + (7)) = 5 + (4 + 7) = 5 + 11 = 16``` Generalizing, this gives: ``` sommer(nbs) = nbs[0] if nbs has size 1, nbs[0] + sommer(nbs[1:]) otherwise ``` or, paraphrasing the "recursive" step (the second one): > To sum some numbers, add the first one to the *sum* of the remaining ones. Python (like most programming languages) allows a function to *call itself*. We will explain later how this is possible. So here is a **recursive solution** to the summation problem: | def sommer(nbs):
# cas où le problème est suffisemment petit
if len(nbs) == 1:
return nbs[0]
# si le problème est trop gros
else:
# découpage
premier, *reste = nbs # ou premier = nbs[0]; reste = nbs[1:]
# résolution du sous-pb en appelant **cette** fonction
somme_reste = sommer(reste)
# combinaison
return premier + somme_reste
assert sommer([5, 4, 7]) == 16 | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
Using **composition** and **slices**, we can express this more concisely: | def sommer(nbs):
if len(nbs) == 1: return nbs[0] # cas de base
return nbs[0] + sommer(nbs[1:]) # appel récursif
assert sommer([5, 4, 7]) == 16 | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
voir dans [Python Tutor](http://pythontutor.com/visualize.htmlcode=def%20sommer%28nbs%29%3A%0A%20%20%20%20if%20len%28nbs%29%20%3D%3D%201%3A%20return%20nbs%5B0%5D%0A%20%20%20%20return%20nbs%5B0%5D%20%2B%20sommer%28nbs%5B1%3A%5D%29%0A%0Aprint%28sommer%28%5B5,%204,%207%5D%29%29&cumulative=false&curInstr=0&heapPrimitives=false&mode=display&origin=opt-frontend.js&py=3&rawInputLstJSON=%5B%5D&textReferences=false) Encore plus court en utilisant l'opérateur ternaire `expr1 if cond else expr2` | def sommer(nbs):
return nbs[0] + sommer(nbs[1:]) if len(nbs) > 1 else nbs[0]
assert sommer([5, 4, 7]) == 16 | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
Here are the **key points** of this code: 1. We start by checking whether we are in the **base case**: that of a problem simple enough to be solved directly. This is our safeguard...! 2. **Recursion**: If we are not in the base case, our function *calls itself* - this is called a **recursive call** - with an argument that expresses a simpler problem - `[4, 7]` - compared to the initial argument - `[5, 4, 7]`. Illustrations The circled numbers indicate the order of events: Here is another one, as "Russian dolls": Visualizing the calls and returns To visualize this from the code, we can use `print` statements: | def sommer_voir(nbs):
print(f'appel de sommer({nbs})')
if len(nbs) == 1:
print(f'retour de sommer({nbs}): -> {nbs[0]}')
return nbs[0]
reste = sommer_voir(nbs[1:])
print(f'retour de sommer({nbs}): {nbs[0]} + {reste} -> {nbs[0] + reste}')
return nbs[0] + reste
sommer_voir([5, 4, 7]) == 16 | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
We can see the nesting of the calls even better by indenting the displayed text according to the call depth: | def sommer_voir(nbs, n=0):
dec = ' ' * n # niveau de décalage
print(f'{dec}appel de sommer({nbs})')
if len(nbs) == 1:
print(f'{dec}retour de sommer({nbs}): -> {nbs[0]}')
return nbs[0]
reste = sommer_voir(nbs[1:], n+1)
print(f'{dec}retour de sommer({nbs}): {nbs[0]} + {reste} -> {nbs[0] + reste}')
return nbs[0] + reste
sommer_voir([4, 7, 36, 12, 28]) | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
Synthesis A **recursive algorithm** must respect the following three laws: 1. It must have one (or several) **base case(s)**: problem(s) so simple that they can be solved directly, 2. It must change its state so as to **progress** towards one of the base cases: **splitting** of the problem into sub-problems, 3. It must call itself, that is, **recursively**. For our example: 1. **base case**: a list of size 1, the result is its single element, 2. **splitting**: the initial list is split into: - its *first element*, - the *remaining* elements: they are fewer than initially, so we get closer to the base case (*progress*). 3. **calls itself**: the function is called recursively on the *remaining* elements. Exercises Exercise 1 The **factorial of a non-negative integer** $n$ is defined as the product of all integers from $1$ to $n$ (inclusive). In mathematics, it is written $n!$ (read "factorial of $n$"). For example: $4!=1\times 2\times 3\times 4=24$. The factorial of an integer can also be defined **recursively**: $$n!=\left\{\begin{array}{l}1 \text{ if } n\in\{0;1\}\cr n\times (n-1)! \text{ otherwise}\end{array}\right.$$ For example: $$\begin{eqnarray}4!&=&4\times 3!\cr&=& 4\times (3\times 2!)\cr&=& 4\times (3\times (2\times 1!))\cr&=& 4\times (3\times (2\times 1))\cr&=& 24\end{eqnarray}$$ Define a *recursive* function `fact` that takes a positive integer as argument and returns its factorial. | def fact(n):
pass | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
**Solution** | def fact(n):
if n > 1: return n * fact(n - 1)
return 1
fact(4) | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
*** Exercise 2 Define `puissance(x, n)` recursively so that it computes $x^n$ (how do you go from $x^{n-1}$ to $x^n$?) | def puissance(x, n):
pass | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
**Solution** | def puissance(x, n):
if n > 0: return x * puissance(x, n-1)
    return 1 # x^0 = 1 for any x
assert puissance(2, 10) == 1024 | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
*** Exercise 3 Define `maximum(nbs)` recursively so that it returns the largest value in the list `nbs`. | def maximum(nbs):
pass | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
**Solution** | def maximum(nbs):
if len(nbs) == 1: return nbs[0]
prem, *reste = nbs
m = maximum(reste)
return m if m > prem else prem
assert maximum([2, 5, -1, 12, 3]) == 12 | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
*** Exercise 4 1. Define `base2(n)` recursively so that it returns the base-2 representation of the non-negative integer $n$ (as a character string). | def base2(n):
pass | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
**Solution** | def base2(n):
if n in [0, 1]: return str(n)
q, r = n // 2, n % 2
return base2(q) + str(r)
assert base2(13) == '1101' # 1 eight + 1 four + 0 twos + 1 one | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
2. Similarly, define `base16(n)` recursively. | def base16(n):
pass | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
**Solution** | def base16(n):
if n in list(range(10)): return str(n)
d = {10: "A", 11: "B", 12: "C", 13: "D", 14: "E", 15: "F"}
if n in d: return d[n]
q, r = n // 16, n % 16
return base16(q) + base16(r)
assert base16(43) == '2B' # 2 sixteens + B (eleven) ones | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
*** Exercise 5 Using the following observation:> if $q$ and $r$ are respectively the quotient and the remainder of the Euclidean division of $n$ by $2$, then $n=2q+r$ and: >>$$x^n=x^{2q+r}=x^{2q}x^r=(x^{q})^{2}x^r$$ redefine the function `puissance(x, n)` from Exercise 2 recursively. | def puissance_bis(x, n):
pass
assert puissance_bis(2, 10) == 1024
assert puissance_bis(5, 3) == 125 | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
**Solution** | def puissance_bis(x, n):
if n == 0: return 1
q, r = n // 2, n % 2
tmp = puissance_bis(x, q)
return tmp * tmp * (1 if r == 0 else x)
assert puissance_bis(2, 10) == 1024
assert puissance_bis(5, 3) == 125 | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
Now run the cells that follow. *Note*: `%timeit` is a special notebook directive that measures the average time a function takes to run. | %timeit puissance_bis(2, 1024)
%timeit puissance(2, 1024)
%timeit 23 ** 50 | _____no_output_____ | CC0-1.0 | 1_recursivite/1_recursivite.ipynb | efloti/cours-nsi-terminale |
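To make the timing difference concrete, one option (not part of the original notebook) is to count recursive calls: `puissance` makes one call per unit of the exponent, while `puissance_bis` halves the exponent at each step, so it only needs on the order of log2(n) calls. A minimal sketch with instrumented copies of the two functions:

```python
# Instrumented copies of the two functions, counting recursive calls.
calls_linear = 0
def puissance_count(x, n):
    global calls_linear
    calls_linear += 1
    if n > 0:
        return x * puissance_count(x, n - 1)
    return 1  # x^0 = 1

calls_fast = 0
def puissance_bis_count(x, n):
    global calls_fast
    calls_fast += 1
    if n == 0:
        return 1
    q, r = n // 2, n % 2
    tmp = puissance_bis_count(x, q)
    return tmp * tmp * (1 if r == 0 else x)

puissance_count(2, 1024)
puissance_bis_count(2, 1024)
print(calls_linear, calls_fast)  # 1025 calls versus 12 calls
```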
Senegal* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Senegal.ipynb) | import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview("Senegal", weeks=5);
overview("Senegal");
compare_plot("Senegal", normalise=True);
# load the data
cases, deaths = get_country_data("Senegal")
# get population of the region for future normalisation:
inhabitants = population("Senegal")
print(f'Population of "Senegal": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("max_rows", 1000)
# display the table
table | _____no_output_____ | CC-BY-4.0 | ipynb/Senegal.ipynb | oscovida/oscovida.github.io |
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Senegal.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))-------------------- | print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
| _____no_output_____ | CC-BY-4.0 | ipynb/Senegal.ipynb | oscovida/oscovida.github.io |
Using Debuggingbook Code in your own ProgramsThis notebook has instructions on how to use the `debuggingbook` code in your own programs. In short, there are three ways:1. Simply run the notebooks in your browser, using the "mybinder" environment. Choose "Resources->Edit as Notebook" in any of the `debuggingbook.org` pages; this will lead you to a preconfigured Jupyter Notebook environment where you can toy around at your leisure.2. Import the code for your own Python programs. Using `pip install debuggingbook`, you can install all code and start using it from your own code. See "Can I import the code for my own Python projects?", below.3. Download or check out the code and/or the notebooks from the project site. This allows you to edit and run all things locally. However, be sure to also install the required packages; see below for details. Can I import the code for my own Python projects?Yes, you can! (If you like Python, that is.) We provide a `debuggingbook` Python package that you can install using the `pip` package manager:```shell$ pip install debuggingbook```As of `debuggingbook 1.1`, this is set up such that additional required Python packages are also installed. However, also see "Install Additional Non-Python Packages" below. Once `pip` is complete, you can import individual classes, constants, or functions from each notebook using```python>>> from debuggingbook.<notebook> import <identifier>```where `<identifier>` is the name of the class, constant, or function to use, and `<notebook>` is the name of the respective notebook. (If you read this at debuggingbook.org, then the notebook name is the identifier preceding `".html"` in the URL). Here is an example importing `Debugger` from [the chapter on debuggers](Debugger.ipynb), whose notebook name is `Debugger`:```python>>> from debuggingbook.Debugger import Debugger>>> with Debugger(): function_to_be_observed()```The "Synopsis" section at the beginning of a chapter gives a short survey on useful code features you can use. Which OS and Python versions are required?As of `debuggingbook 1.1`, Python 3.9 and later is required. Specifically, we use Python 3.9.7 for development and testing. This is also the version to be used if you check out the code from git, and the version you get if you use the debugging book within the "mybinder" environment.To use the `debuggingbook` code with earlier Python versions, use```shell$ pip install 'debuggingbook==1.0.1'```Our notebooks generally assume a Unix-like environment; the code is tested on Linux and macOS. System-independent code may also run on Windows. Can I use the code from within a Jupyter notebook?Yes, you can! First, you install the `debuggingbook` package (as above); you can then access all code right from your notebook.Another way to use the code is to _import the notebooks directly_. Download the notebooks from the menu. Then, add your own notebooks into the same folder. After importing `bookutils`, you can then simply import the code from other notebooks, just as our own notebooks do.Here is again the above example, importing `Debugger` from [the chapter on debuggers](Debugger.ipynb) – but now from a notebook: | import bookutils
from Debugger import Debugger
with Debugger():
x = 1 + 1 | _____no_output_____ | MIT | docs/notebooks/Importing.ipynb | TheV1rtuoso/debuggingbook |
!pip install superimport
!git clone --depth 1 https://github.com/probml/pyprobml &> /dev/null
!pip install flax
%run pyprobml/scripts/vb_gauss_cholesky_biclusters_demo.py
plt.show()
| _____no_output_____ | MIT | notebooks/vb_gauss_biclusters_demo.ipynb | susnato/probml-notebooks |
|
Aggression linear word oh | report_results("linear_word_oh_aggression_prediction_results.csv") | AUC score 0.9434281464773387
| Apache-2.0 | Fatma_dataset/results/Aggression_results_analysis.ipynb | Nintendofan885/Detect_Cyberbullying_from_socialmedia |
Aggression linear char oh | report_results("linear_char_oh_aggression_prediction_results.csv") | AUC score 0.9173706304487989
| Apache-2.0 | Fatma_dataset/results/Aggression_results_analysis.ipynb | Nintendofan885/Detect_Cyberbullying_from_socialmedia |
Aggression mlp word oh | report_results("mlp_word_oh_aggression_prediction_results.csv") | AUC score 0.9413535359427857
| Apache-2.0 | Fatma_dataset/results/Aggression_results_analysis.ipynb | Nintendofan885/Detect_Cyberbullying_from_socialmedia |
Aggression mlp char oh | report_results("mlp_char_oh_aggression_prediction_results.csv") | AUC score 0.9377764851936341
| Apache-2.0 | Fatma_dataset/results/Aggression_results_analysis.ipynb | Nintendofan885/Detect_Cyberbullying_from_socialmedia |
Agression lstm word | report_results("lstm_word_oh_aggression_prediction_results.csv") | AUC score 0.9555933152450244
| Apache-2.0 | Fatma_dataset/results/Aggression_results_analysis.ipynb | Nintendofan885/Detect_Cyberbullying_from_socialmedia |
Aggression lstm char | report_results("lstm_char_oh_aggression_prediction_results.csv") | AUC score 0.7929897055078095
| Apache-2.0 | Fatma_dataset/results/Aggression_results_analysis.ipynb | Nintendofan885/Detect_Cyberbullying_from_socialmedia |
aggression conv-lstm word | report_results("conv_lstm_word_oh_aggression_prediction_results.csv") | AUC score 0.9002429773190748
| Apache-2.0 | Fatma_dataset/results/Aggression_results_analysis.ipynb | Nintendofan885/Detect_Cyberbullying_from_socialmedia |
aggression conv-lstm char | report_results("conv_lstm_char_oh_aggression_prediction_results.csv") | AUC score 0.9298119174163423
| Apache-2.0 | Fatma_dataset/results/Aggression_results_analysis.ipynb | Nintendofan885/Detect_Cyberbullying_from_socialmedia |
Individual Classifiers Gaussian Naive Bayes | from sklearn.naive_bayes import GaussianNB
estimator = GaussianNB()
param_grid = {}
gnb_best_score_, gnb_best_params_ = parameterTune(estimator, param_grid)
gnb_df = test_eval(GaussianNB, gnb_best_params_)
print('best_score_:',gnb_best_score_,'\nbest_params_:',gnb_best_params_) | best_score_: 0.7677044755508129
best_params_: {}
| MIT | 01-Titanic_Machine_Learning_from_Disaster/05_ensembling.ipynb | L-ashwin/Exploring-ml |
Logistic Regression | from sklearn.linear_model import LogisticRegression
estimator = LogisticRegression(tol=1e-4, solver='liblinear', random_state=1)
param_grid = {
'max_iter' : [1000, 2000, 3000],
'penalty' : ['l1', 'l2'],
'solver' : ['liblinear']
}
lrc_best_score_, lrc_best_params_ = parameterTune(estimator, param_grid)
lrc_df = test_eval(LogisticRegression, lrc_best_params_)
print('best_score_:',lrc_best_score_,'\nbest_params_:',lrc_best_params_) | best_score_: 0.8260247316552632
best_params_: {'max_iter': 1000, 'penalty': 'l1', 'solver': 'liblinear'}
| MIT | 01-Titanic_Machine_Learning_from_Disaster/05_ensembling.ipynb | L-ashwin/Exploring-ml |
K-Neighbors Classifier | from sklearn.neighbors import KNeighborsClassifier
estimator = KNeighborsClassifier()
param_grid = {
'n_neighbors' : [3, 5, 7, 10],
'weights' : ['uniform', 'distance'],
'p' : [1, 2]
}
knn_best_score_, knn_best_params_ = parameterTune(estimator, param_grid)
knn_df = test_eval(KNeighborsClassifier, knn_best_params_)
print('best_score_:',knn_best_score_,'\nbest_params_:',knn_best_params_) | best_score_: 0.8282907538760906
best_params_: {'n_neighbors': 10, 'p': 1, 'weights': 'uniform'}
| MIT | 01-Titanic_Machine_Learning_from_Disaster/05_ensembling.ipynb | L-ashwin/Exploring-ml |
Support Vector Classifier | from sklearn.svm import SVC
estimator = SVC()
param_grid = [
{ 'kernel' : ['linear'],
'C' : [0.1, 1, 10, 100]},
{ 'kernel' : ['rbf'],
'C' : [0.1, 1, 10, 100],
'gamma' : ['scale', 'auto', 1e-1, 1e-2, 1e-3, 1e-4],},
]
svc_best_score_, svc_best_params_ = parameterTune(estimator, param_grid)
svc_df = test_eval(SVC, svc_best_params_)
print('best_score_:',svc_best_score_,'\nbest_params_:',svc_best_params_) | best_score_: 0.8372418555018518
best_params_: {'C': 100, 'gamma': 0.01, 'kernel': 'rbf'}
| MIT | 01-Titanic_Machine_Learning_from_Disaster/05_ensembling.ipynb | L-ashwin/Exploring-ml |
Ensembles 1. Bagging Random Forest Classifier | from sklearn.ensemble import RandomForestClassifier
estimator = RandomForestClassifier()
param_grid = {
'n_estimators' : [50, 100, 250, 500, 750, 1000],
'criterion' : ["gini", "entropy"],
'max_depth' : [2,5,10,15,20],
'max_features' : ["auto","sqrt"],
}
rfc_best_score_, rfc_best_params_ = parameterTune(estimator, param_grid)
rfc_df = test_eval(RandomForestClassifier, rfc_best_params_)
print('best_score_:',rfc_best_score_,'\nbest_params_:',rfc_best_params_) | best_score_: 0.8338773460548616
best_params_: {'criterion': 'gini', 'max_depth': 5, 'max_features': 'auto', 'n_estimators': 500}
| MIT | 01-Titanic_Machine_Learning_from_Disaster/05_ensembling.ipynb | L-ashwin/Exploring-ml |
2. Boosting AdaBoostClassifier | from sklearn.ensemble import AdaBoostClassifier
estimator = AdaBoostClassifier()
param_grid = {
'n_estimators' : [20, 50, 100, 250],
}
adb_best_score_, adb_best_params_ = parameterTune(estimator, param_grid)
adb_df = test_eval(AdaBoostClassifier, adb_best_params_)
print('best_score_:',adb_best_score_,'\nbest_params_:',adb_best_params_) | best_score_: 0.8249513527085558
best_params_: {'n_estimators': 50}
| MIT | 01-Titanic_Machine_Learning_from_Disaster/05_ensembling.ipynb | L-ashwin/Exploring-ml |
GradientBoostingClassifier | from sklearn.ensemble import GradientBoostingClassifier
estimator = GradientBoostingClassifier()
param_grid = {
'loss' : ['deviance', 'exponential'],
'learning_rate' : [0.1, 0.01],
'n_estimators' : [100, 250, 500],
'subsample' : [0.75, 0.9, 1.0],
'max_depth' : [1, 2, 3, 5, 7],
}
gdb_best_score_, gdb_best_params_ = parameterTune(estimator, param_grid)
gdb_df = test_eval(GradientBoostingClassifier, gdb_best_params_)
print('best_score_:',gdb_best_score_,'\nbest_params_:',gdb_best_params_) | best_score_: 0.8473667691921412
best_params_: {'learning_rate': 0.1, 'loss': 'deviance', 'max_depth': 2, 'n_estimators': 250, 'subsample': 0.9}
| MIT | 01-Titanic_Machine_Learning_from_Disaster/05_ensembling.ipynb | L-ashwin/Exploring-ml |
Submission File | pd.DataFrame({
'GaussianNB' : gnb_best_score_,
'LogisticRegression' : lrc_best_score_,
'KNeighborsClassifier' : knn_best_score_,
'SVC' : svc_best_score_,
'RandomForestClassifier' : rfc_best_score_,
'AdaBoostClassifier' : adb_best_score_,
'GradientBoostingClassifier' : gdb_best_score_
}, index=['Accuracy'])
best_params = {
'GaussianNB' : gnb_best_params_,
'LogisticRegression' : lrc_best_params_,
'KNeighborsClassifier' : knn_best_params_,
'SVC' : svc_best_params_,
'RandomForestClassifier' : rfc_best_params_,
'AdaBoostClassifier' : adb_best_params_,
'GradientBoostingClassifier' : gdb_best_params_
}
with open("./results/05_.json", 'w') as file:
json.dump(best_params, file)
#adb_df.to_csv('./results/05_01_adb.csv', index=None) #0.76315
#gdb_df.to_csv('./results/05_02_gdb.csv', index=None) #0.77033 | _____no_output_____ | MIT | 01-Titanic_Machine_Learning_from_Disaster/05_ensembling.ipynb | L-ashwin/Exploring-ml |
Types of Sampling | import numpy as np
import matplotlib.pyplot as plt
import scipy.signal as signal
import warnings
warnings.filterwarnings("ignore") | _____no_output_____ | MIT | md_scripts/tipos_de_muestreo.ipynb | AgustinSolano/SyS_scriptsbook |
Sampling Functions | def mIdeal(senal,t_senal,Ts,Fs_orig):
    # take one sample every Ts seconds and store them in a vector
senal_mues1 = senal[np.arange(0,len(senal),int(np.round(Ts*Fs_orig)))]
    # create the time vector associated with the samples
t_ideal = t_senal[np.arange(0,len(senal),int(np.round(Ts*Fs_orig)))]
#t_ideal = np.arange(0,len(senal_mues1)*Ts,Ts)
return t_ideal, senal_mues1 | _____no_output_____ | MIT | md_scripts/tipos_de_muestreo.ipynb | AgustinSolano/SyS_scriptsbook |
Fourier Transform Function | def TFourier(signal,fs,unidadesx): # unidadesx = 0 for Hz, 1 for rad/s
FFT = abs(np.fft.fftshift(np.fft.fft(signal)))
nFFT = len(FFT)
fFFT = np.arange(-nFFT/2,nFFT/2)*(fs/nFFT)
if unidadesx == 1:
fFFT= fFFT*2*np.pi
return fFFT,FFT | _____no_output_____ | MIT | md_scripts/tipos_de_muestreo.ipynb | AgustinSolano/SyS_scriptsbook |
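A quick sanity check, not in the original script: feeding `TFourier` a pure 50 Hz sine sampled at 1000 Hz should produce spectral peaks at ±50 Hz on the returned frequency axis.

```python
import numpy as np

fs_test = 1000                                 # Hz
t_test = np.arange(0, 1, 1/fs_test)            # 1 second of samples
x_test = np.sin(2*np.pi*50*t_test)             # pure 50 Hz sine
f_test, X_test = TFourier(x_test, fs_test, 0)  # 0 -> frequency axis in Hz
print(f_test[np.argmax(X_test)])               # expected: -50.0 or 50.0
```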
Signals | # Load the signal from a comma-separated (.csv) file
path_ECG = './external_files/ECG.csv'
senal_ECG = np.genfromtxt(path_ECG, delimiter=',')
fs_ECG = 1000 # Hz: sampling frequency of the original data, which here stands in for the 'analog' signal
t_ECG = np.arange(0,len(senal_ECG)/fs_ECG,1/fs_ECG)
# Compute the Fourier transform
fFFT_ECG,FFT_ECG = TFourier(senal_ECG,fs_ECG,0)
# Plot the signal and its spectrum
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(16, 8))
ax1.plot(t_ECG,senal_ECG)
ax1.set_title("Señal Original de ECG")
ax1.set_xlabel("Tiempo [s]")
ax1.set_ylabel("Amplitud")
ax1.grid(linestyle='--')
ax2.plot(fFFT_ECG,FFT_ECG)
ax2.set_xlim([-60,60])
ax2.set_title("Transformada de Fourier de señal de ECG")
ax2.set_xlabel("Frecuencia [Hz]")
ax2.set_ylabel("Amplitud")
ax2.grid(linestyle='--')
plt.tight_layout() | _____no_output_____ | MIT | md_scripts/tipos_de_muestreo.ipynb | AgustinSolano/SyS_scriptsbook |
Ideal Sampling | # Identify the maximum frequency of the signal
fmax_ECG = 60 # from the spectrum, the maximum frequency is about 60 Hz
fs_muest_ECG = 2*fmax_ECG*1.25
Ts_ECG = 1/fs_muest_ECG
# Perform the ideal sampling
t_ideal, sign_ideal = mIdeal(senal_ECG,t_ECG,Ts_ECG,fs_ECG)
t_min = -0.1
t_max = 3.0
# Plotting
fig, (ax1,ax2) = plt.subplots(2, 1, figsize=(16, 12))
ax1.stem(t_ideal,sign_ideal)
ax1.set_xlim(t_min,t_max)
ax1.set_title("Señal muestreada idealmente")
ax1.set_xlabel("Tiempo [s]")
ax1.set_ylabel("Amplitud")
ax1.grid(linestyle='--')
ax2.stem(t_ideal,sign_ideal)
ax2.plot(t_ECG,senal_ECG,'--r')
ax2.set_xlim(t_min,t_max)
ax2.set_title("Señal Muestreada y Original")
ax2.set_xlabel("Tiempo [s]")
ax2.set_ylabel("Amplitud")
ax2.grid(linestyle='--')
ax2.legend(['Señal Original','Muestreo ideal'])
fig2, (ax3) = plt.subplots(1, 1, figsize=(16, 12))
ax3.stem(t_ideal,sign_ideal)
ax3.plot(t_ECG,senal_ECG,'--r')
ax3.set_xlim(0.5,1.1)
ax3.set_title("Señal Muestreada y Original")
ax3.set_xlabel("Tiempo [s]")
ax3.set_ylabel("Amplitud")
ax3.grid(linestyle='--')
ax3.legend(['Señal Original','Muestreo ideal'])
plt.show() | _____no_output_____ | MIT | md_scripts/tipos_de_muestreo.ipynb | AgustinSolano/SyS_scriptsbook |
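As a side note (not in the original script), the sampling rate used above follows directly from the Nyquist criterion: with a maximum signal frequency of about 60 Hz, the theoretical minimum rate is 120 Hz, and the script adds a 25% safety margin.

```python
fmax = 60.0                      # Hz, maximum frequency read off the ECG spectrum
nyquist_rate = 2 * fmax          # 120 Hz, theoretical minimum sampling rate
fs_chosen = 1.25 * nyquist_rate  # 150 Hz, the rate used above (fs_muest_ECG)
Ts_chosen = 1 / fs_chosen        # ~0.0067 s between samples (Ts_ECG)
print(nyquist_rate, fs_chosen, Ts_chosen)
```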
Histogram | x = np.random.normal(size = 2000)
plt.hist(x, bins=40, color='yellowgreen')
plt.gca().set(title='Histogram', ylabel='Frequency')
plt.show()
x = np.random.rand(2000)
plt.hist(x, bins=30 ,color='#D4AC0D')
plt.gca().set(title='Histogram', ylabel='Frequency')
plt.show()
# Using Edge Color for readability
plt.figure(figsize=(10,8))
x = np.random.normal(size = 2000)
plt.hist(x, bins=40, color='yellowgreen' , edgecolor="#6A9662")
plt.gca().set(title='Histogram', ylabel='Frequency')
plt.show() | _____no_output_____ | MIT | Data Visualization/Matplotlib/4. Histogram.ipynb | shreejitverma/Data-Scientist |
Binning | # Binning
plt.figure(figsize=(10,8))
x = np.random.normal(size = 2000)
plt.hist(x, bins=30, color='yellowgreen' , edgecolor="#6A9662")
plt.gca().set(title='Histogram', ylabel='Frequency')
plt.show()
plt.figure(figsize=(10,8))
plt.hist(x, bins=20, color='yellowgreen' , edgecolor="#6A9662")
plt.gca().set(title='Histogram', ylabel='Frequency')
plt.show()
plt.figure(figsize=(10,8))
plt.hist(x, bins=10, color='yellowgreen' , edgecolor="#6A9662")
plt.gca().set(title='Histogram', ylabel='Frequency')
plt.show()
| _____no_output_____ | MIT | Data Visualization/Matplotlib/4. Histogram.ipynb | shreejitverma/Data-Scientist |
Plotting Multiple Histograms | plt.figure(figsize=(8,11))
x = np.random.normal(-4,1,size = 800)
y = np.random.normal(0,1.5,size = 800)
z = np.random.normal(3.5,1,size = 800)
plt.hist(x, bins=30, color='yellowgreen' , alpha=0.6)
plt.hist(y, bins=30, color='#FF8F00' , alpha=0.6)
plt.hist(z, bins=30, color='blue' , alpha=0.6)
plt.gca().set(title='Histogram', ylabel='Frequency')
plt.show()
# Using Histogram to plot a cumulative distribution function
plt.figure(figsize=(10,8))
x = np.random.rand(2000)
plt.hist(x, bins=30 ,color='#ffa41b' , edgecolor="#639a67",cumulative=True)
plt.gca().set(title='Histogram', ylabel='Frequency')
plt.show() | _____no_output_____ | MIT | Data Visualization/Matplotlib/4. Histogram.ipynb | shreejitverma/Data-Scientist |
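As an aside, not in the original notebook: instead of hand-picking the bin count as in the binning examples above, matplotlib can delegate the choice to NumPy's binning rules with `bins='auto'`.

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.random.normal(size=2000)
plt.hist(x, bins='auto', color='yellowgreen', edgecolor='#6A9662')  # bin edges chosen by NumPy
plt.gca().set(title='Histogram (automatic binning)', ylabel='Frequency')
plt.show()
```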
Linear regression from scratchPowerful ML libraries can eliminate repetitive work, but if you rely too much on abstractions, you might never learn how neural networks really work under the hood. So for this first example, let's get our hands dirty and build everything from scratch, relying only on autograd and NDArray. First, we'll import the same dependencies as in the [autograd chapter](../chapter01_crashcourse/autograd.ipynb). We'll also import the powerful `gluon` package but in this chapter, we'll only be using it for data loading. | from __future__ import print_function
import mxnet as mx
from mxnet import nd, autograd, gluon
mx.random.seed(1) | _____no_output_____ | Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
Set the contextWe'll also want to specify the contexts where computation should happen. This tutorial is so simple that you could probably run it on a calculator watch. But, to develop good habits we're going to specify two contexts: one for data and one for our models. | data_ctx = mx.cpu()
model_ctx = mx.cpu() | _____no_output_____ | Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
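If a GPU is available, the model context can point at it instead; a sketch, assuming a recent MXNet build where `mx.context.num_gpus()` exists (the rest of this notebook works unchanged on CPU):

```python
import mxnet as mx

# Fall back to the CPU when no GPU is detected.
model_ctx = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu()
print(model_ctx)
```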
Linear regressionTo get our feet wet, we'll start off by looking at the problem of regression.This is the task of predicting a *real valued target* $y$ given a data point $x$.In linear regression, the simplest and still perhaps the most useful approach,we assume that prediction can be expressed as a *linear* combination of the input features (thus giving the name *linear* regression):$$\hat{y} = w_1 \cdot x_1 + ... + w_d \cdot x_d + b$$Given a collection of data points $X$, and corresponding target values $\boldsymbol{y}$, we'll try to find the *weight* vector $\boldsymbol{w}$ and bias term $b$ (also called an *offset* or *intercept*)that approximately associate data points $\boldsymbol{x}_i$ with their corresponding labels ``y_i``. Using slightly more advanced math notation, we can express the predictions $\boldsymbol{\hat{y}}$corresponding to a collection of datapoints $X$ via the matrix-vector product:$$\boldsymbol{\hat{y}} = X \boldsymbol{w} + b$$Before we can get going, we will need two more things * Some way to measure the quality of the current model * Some way to manipulate the model to improve its quality Square lossIn order to say whether we've done a good job, we need some way to measure the quality of a model. Generally, we will define a *loss function*that says *how far* are our predictions from the correct answers.For the classical case of linear regression, we usually focus on the squared error.Specifically, our loss will be the sum, over all examples, of the squared error $(y_i-\hat{y})^2)$ on each:$$\ell(y, \hat{y}) = \sum_{i=1}^n (\hat{y}_i-y_i)^2.$$For one-dimensional data, we can easily visualize the relationship between our single feature and the target variable. It's also easy to visualize a linear predictor and it's error on each example. Note that squared loss *heavily penalizes outliers*. For the visualized predictor below, the lone outlier would contribute most of the loss. Manipulating the modelFor us to minimize the error,we need some mechanism to alter the model.We do this by choosing values of the *parameters*$\boldsymbol{w}$ and $b$.This is the only job of the learning algorithm.Take training data ($X$, $y$) and the functional form of the model $\hat{y} = X\boldsymbol{w} + b$.Learning then consists of choosing the best possible $\boldsymbol{w}$ and $b$ based on the available evidence. Historical noteYou might reasonably point out that linear regression is a classical statistical model.[According to Wikipedia](https://en.wikipedia.org/wiki/Regression_analysisHistory), Legendre first developed the method of least squares regression in 1805,which was shortly thereafter rediscovered by Gauss in 1809. Presumably, Legendre, who had Tweeted about the paper several times,was peeved that Gauss failed to cite his arXiv preprint. Matters of provenance aside, you might wonder - if Legendre and Gauss worked on linear regression, does that mean there were the original deep learning researchers?And if linear regression doesn't wholly belong to deep learning, then why are we presenting a linear model as the first example in a tutorial series on neural networks? Well it turns out that we can express linear regression as the simplest possible (useful) neural network. A neural network is just a collection of nodes (aka neurons) connected by directed edges. In most networks, we arrange the nodes into layers with each feeding its output into the layer above. 
To calculate the value of any node, we first perform a weighted sum of the inputs (according to weights ``w``) and then apply an *activation function*. For linear regression, we only have two layers, one corresponding to the input (depicted in orange) and a one-node layer (depicted in green) correspnding to the ouput.For the output node the activation function is just the identity function.While you certainly don't have to view linear regression through the lens of deep learning, you can (and we will!).To ground the concepts that we just discussed in code, let's actually code up a neural network for linear regression from scratch.To get going, we will generate a simple synthetic dataset by sampling random data points ``X[i]`` and corresponding labels ``y[i]`` in the following manner. Out inputs will each be sampled from a random normal distribution with mean $0$ and variance $1$. Our features will be independent. Another way of saying this is that they will have diagonal covariance. The labels will be generated accoding to the *true* labeling function `y[i] = 2 * X[i][0]- 3.4 * X[i][1] + 4.2 + noise` where the noise is drawn from a random gaussian with mean ``0`` and variance ``.01``. We could express the labeling function in mathematical notation as:$$y = X \cdot w + b + \eta, \quad \text{for } \eta \sim \mathcal{N}(0,\sigma^2)$$ | num_inputs = 2
num_outputs = 1
num_examples = 10000
def real_fn(X):
return 2 * X[:, 0] - 3.4 * X[:, 1] + 4.2
X = nd.random_normal(shape=(num_examples, num_inputs), ctx=data_ctx)
noise = .1 * nd.random_normal(shape=(num_examples,), ctx=data_ctx)
y = real_fn(X) + noise | _____no_output_____ | Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
Notice that each row in ``X`` consists of a 2-dimensional data point and that each row in ``Y`` consists of a 1-dimensional target value. | print(X[0])
print(y[0]) |
[-1.22338355 2.39233518]
<NDArray 2 @cpu(0)>
[-6.09602737]
<NDArray 1 @cpu(0)>
| Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
Note that because our synthetic features `X` live on `data_ctx` and because our noise also lives on `data_ctx`, the labels `y`, produced by combining `X` and `noise` in `real_fn` also live on `data_ctx`. We can confirm that for any randomly chosen point, a linear combination with the (known) optimal parameters produces a prediction that is indeed close to the target value | print(2 * X[0, 0] - 3.4 * X[0, 1] + 4.2) |
[-6.38070679]
<NDArray 1 @cpu(0)>
| Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
We can visualize the correspondence between our second feature (``X[:, 1]``) and the target values ``Y`` by generating a scatter plot with the Python plotting package ``matplotlib``. Make sure that ``matplotlib`` is installed. Otherwise, you may install it by running ``pip2 install matplotlib`` (for Python 2) or ``pip3 install matplotlib`` (for Python 3) on your command line. In order to plot with ``matplotlib`` we'll just need to convert ``X`` and ``y`` into NumPy arrays by using the `.asnumpy()` function. | import matplotlib.pyplot as plt
plt.scatter(X[:, 1].asnumpy(),y.asnumpy())
plt.show() | _____no_output_____ | Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
Data iteratorsOnce we start working with neural networks, we're going to need to iterate through our data points quickly. We'll also want to be able to grab batches of ``k`` data points at a time, to shuffle our data. In MXNet, data iterators give us a nice set of utilities for fetching and manipulating data. In particular, we'll work with the simple ``DataLoader`` class, that provides an intuitive way to use an ``ArrayDataset`` for training models.We can load `X` and `y` into an ArrayDataset, by calling `gluon.data.ArrayDataset(X, y)`. It's ok for `X` to be a multi-dimensional input array (say, of images) and `y` to be just a one-dimensional array of labels. The one requirement is that they have equal lengths along the first axis, i.e., `len(X) == len(y)`. Given an `ArrayDataset`, we can create a DataLoader which will grab random batches of data from an `ArrayDataset`. We'll want to specify two arguments. First, we'll need to say the `batch_size`, i.e., how many examples we want to grab at a time. Second, we'll want to specify whether or not to shuffle the data between iterations through the dataset. | batch_size = 4
train_data = gluon.data.DataLoader(gluon.data.ArrayDataset(X, y),
batch_size=batch_size, shuffle=True) | _____no_output_____ | Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
Once we've initialized our DataLoader (``train_data``), we can easily fetch batches by iterating over `train_data` just as if it were a Python list. You can use your favorite iterating techniques like foreach loops: `for data, label in train_data` or enumerations: `for i, (data, label) in enumerate(train_data)`. First, let's just grab one batch and break out of the loop. | for i, (data, label) in enumerate(train_data):
print(data, label)
break |
[[-0.14732301 -1.32803488]
[-0.56128627 0.48301753]
[ 0.75564283 -0.12659997]
[-0.96057719 -0.96254188]]
<NDArray 4x2 @cpu(0)>
[ 8.25711536 1.30587864 6.15542459 5.48825312]
<NDArray 4 @cpu(0)>
| Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
If we run that same code again you'll notice that we get a different batch. That's because we instructed the `DataLoader` that `shuffle=True`. | for i, (data, label) in enumerate(train_data):
print(data, label)
break |
[[-0.59027743 -1.52694809]
[-0.00750104 2.68466949]
[ 1.50308061 0.54902577]
[ 1.69129586 0.32308948]]
<NDArray 4x2 @cpu(0)>
[ 8.28844357 -5.07566643 5.3666563 6.52408457]
<NDArray 4 @cpu(0)>
| Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
Finally, if we actually pass over the entire dataset, and count the number of batches, we'll find that there are 2500 batches. We expect this because our dataset has 10,000 examples and we configured the `DataLoader` with a batch size of 4. | counter = 0
for i, (data, label) in enumerate(train_data):
pass
print(i+1) | 2500
| Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
Model parametersNow let's allocate some memory for our parameters and set their initial values. We'll want to initialize these parameters on the `model_ctx`. | w = nd.random_normal(shape=(num_inputs, num_outputs), ctx=model_ctx)
b = nd.random_normal(shape=num_outputs, ctx=model_ctx)
params = [w, b] | _____no_output_____ | Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
In the succeeding cells, we're going to update these parameters to better fit our data. This will involve taking the gradient (a multi-dimensional derivative) of some *loss function* with respect to the parameters. We'll update each parameter in the direction that reduces the loss. But first, let's just allocate some memory for each gradient. | for param in params:
param.attach_grad() | _____no_output_____ | Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
Neural networksNext we'll want to define our model. In this case, we'll be working with linear models, the simplest possible *useful* neural network. To calculate the output of the linear model, we simply multiply a given input with the model's weights (``w``), and add the offset ``b``. | def net(X):
return mx.nd.dot(X, w) + b | _____no_output_____ | Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
Ok, that was easy. Loss functionTraining a model means making it better and better over the course of a period of training. But in order for this goal to make any sense at all, we first need to define what *better* means in the first place. In this case, we'll use the squared distance between our prediction and the true value. | def square_loss(yhat, y):
return nd.mean((yhat - y) ** 2) | _____no_output_____ | Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
OptimizerIt turns out that linear regression actually has a closed-form solution. However, most interesting models that we'll care about cannot be solved analytically. So we'll solve this problem by stochastic gradient descent. At each step, we'll estimate the gradient of the loss with respect to our weights, using one batch randomly drawn from our dataset. Then, we'll update our parameters a small amount in the direction that reduces the loss. The size of the step is determined by the *learning rate* ``lr``. | def SGD(params, lr):
for param in params:
param[:] = param - lr * param.grad | _____no_output_____ | Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
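The optimizer section above notes that linear regression has a closed-form solution. As a cross-check (not part of the tutorial's training path), here is a minimal NumPy sketch of that solution via least squares on the synthetic data, with a column of ones appended so the bias is estimated together with the weights:

```python
import numpy as np

X_np = X.asnumpy()                                    # synthetic features from above
y_np = y.asnumpy()
X_aug = np.concatenate([X_np, np.ones((len(X_np), 1))], axis=1)
theta, *_ = np.linalg.lstsq(X_aug, y_np, rcond=None)  # returns [w1, w2, b]
print(theta)                                          # should be close to [2, -3.4, 4.2]
```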
Execute training loopNow that we have all the pieces, we just need to wire them together by writing a training loop. First we'll define ``epochs``, the number of passes to make over the dataset. Then for each pass, we'll iterate through ``train_data``, grabbing batches of examples and their corresponding labels. For each batch, we'll go through the following ritual: * Generate predictions (``yhat``) and the loss (``loss``) by executing a forward pass through the network.* Calculate gradients by making a backwards pass through the network (``loss.backward()``). * Update the model parameters by invoking our SGD optimizer. | epochs = 10
learning_rate = .0001
num_batches = num_examples/batch_size
for e in range(epochs):
cumulative_loss = 0
# inner loop
for i, (data, label) in enumerate(train_data):
data = data.as_in_context(model_ctx)
label = label.as_in_context(model_ctx).reshape((-1, 1))
with autograd.record():
output = net(data)
loss = square_loss(output, label)
loss.backward()
SGD(params, learning_rate)
cumulative_loss += loss.asscalar()
print(cumulative_loss / num_batches) | 24.6606138554
9.09776815639
3.36058844271
1.24549788469
0.465710770596
0.178157229481
0.0721970594548
0.0331197250206
0.0186954441286
0.0133724625537
| Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
Visualizing our training progressIn the succeeding chapters, we'll introduce more realistic data, fancier models, more complicated loss functions, and more. But the core ideas are the same and the training loop will look remarkably familiar. Because these tutorials are self-contained, you'll get to know this ritual quite well. In addition to updating our model, we'll often want to do some bookkeeping. Among other things, we might want to keep track of training progress and visualize it graphically. We demonstrate one slightly more sophisticated training loop below. | ############################################
# Re-initialize parameters because they
# were already trained in the first loop
############################################
w[:] = nd.random_normal(shape=(num_inputs, num_outputs), ctx=model_ctx)
b[:] = nd.random_normal(shape=num_outputs, ctx=model_ctx)
############################################
# Script to plot the losses over time
############################################
def plot(losses, X, sample_size=100):
xs = list(range(len(losses)))
f, (fg1, fg2) = plt.subplots(1, 2)
fg1.set_title('Loss during training')
fg1.plot(xs, losses, '-r')
fg2.set_title('Estimated vs real function')
fg2.plot(X[:sample_size, 1].asnumpy(),
net(X[:sample_size, :]).asnumpy(), 'or', label='Estimated')
fg2.plot(X[:sample_size, 1].asnumpy(),
real_fn(X[:sample_size, :]).asnumpy(), '*g', label='Real')
fg2.legend()
plt.show()
learning_rate = .0001
losses = []
plot(losses, X)
for e in range(epochs):
cumulative_loss = 0
for i, (data, label) in enumerate(train_data):
data = data.as_in_context(model_ctx)
label = label.as_in_context(model_ctx).reshape((-1, 1))
with autograd.record():
output = net(data)
loss = square_loss(output, label)
loss.backward()
SGD(params, learning_rate)
cumulative_loss += loss.asscalar()
print("Epoch %s, batch %s. Mean loss: %s" % (e, i, cumulative_loss/num_batches))
losses.append(cumulative_loss/num_batches)
plot(losses, X) | _____no_output_____ | Apache-2.0 | Training/Tutorial - Gluon MXNet - The Straight Dope Master/chapter02_supervised-learning/linear-regression-scratch.ipynb | farhadrclass/DataScience-Lab |
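One extra check, not in the original notebook: after training, the learned parameters should be close to the ones used to generate the data ($w = [2, -3.4]$, $b = 4.2$), up to the added noise.

```python
print("learned w:", params[0].asnumpy().flatten())  # expect roughly [ 2.  -3.4]
print("learned b:", params[1].asnumpy())            # expect roughly [ 4.2]
```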
[](https://githubtocolab.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/NER_PT.ipynb) **Detect entities in Portuguese text** 1. Colab Setup | # Install PySpark and Spark NLP
! pip install -q pyspark==3.1.2 spark-nlp
# Install Spark NLP Display lib
! pip install --upgrade -q spark-nlp-display | _____no_output_____ | Apache-2.0 | tutorials/streamlit_notebooks/NER_PT.ipynb | fcivardi/spark-nlp-workshop |
2. Start the Spark session | import json
import pandas as pd
import numpy as np
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
from sparknlp.annotator import *
from sparknlp.base import *
import sparknlp
from sparknlp.pretrained import PretrainedPipeline
spark = sparknlp.start() | _____no_output_____ | Apache-2.0 | tutorials/streamlit_notebooks/NER_PT.ipynb | fcivardi/spark-nlp-workshop |
3. Select the DL model | # If you change the model, re-run all the cells below.
# Applicable models: wikiner_840B_300, wikiner_6B_300, wikiner_6B_100
MODEL_NAME = "wikiner_840B_300" | _____no_output_____ | Apache-2.0 | tutorials/streamlit_notebooks/NER_PT.ipynb | fcivardi/spark-nlp-workshop |
4. Some sample examples | # Enter examples to be transformed as strings in this list
text_list = [
"""William Henry Gates III (nascido em 28 de outubro de 1955) é um magnata americano de negócios, desenvolvedor de software, investidor e filantropo. Ele é mais conhecido como co-fundador da Microsoft Corporation. Durante sua carreira na Microsoft, Gates ocupou os cargos de presidente, diretor executivo (CEO), presidente e diretor de arquitetura de software, além de ser o maior acionista individual até maio de 2014. Ele é um dos empreendedores e pioneiros mais conhecidos da revolução dos microcomputadores nas décadas de 1970 e 1980. Nascido e criado em Seattle, Washington, Gates co-fundou a Microsoft com o amigo de infância Paul Allen em 1975, em Albuquerque, Novo México; tornou-se a maior empresa de software de computador pessoal do mundo. Gates liderou a empresa como presidente e CEO até deixar o cargo em janeiro de 2000, mas ele permaneceu como presidente e tornou-se arquiteto-chefe de software. No final dos anos 90, Gates foi criticado por suas táticas de negócios, que foram consideradas anticompetitivas. Esta opinião foi confirmada por várias decisões judiciais. Em junho de 2006, Gates anunciou que iria passar para um cargo de meio período na Microsoft e trabalhar em período integral na Fundação Bill & Melinda Gates, a fundação de caridade privada que ele e sua esposa, Melinda Gates, estabeleceram em 2000. [ 9] Ele gradualmente transferiu seus deveres para Ray Ozzie e Craig Mundie. Ele deixou o cargo de presidente da Microsoft em fevereiro de 2014 e assumiu um novo cargo como consultor de tecnologia para apoiar a recém-nomeada CEO Satya Nadella.""",
"""A Mona Lisa é uma pintura a óleo do século XVI, criada por Leonardo. É realizada no Louvre, em Paris."""
] | _____no_output_____ | Apache-2.0 | tutorials/streamlit_notebooks/NER_PT.ipynb | fcivardi/spark-nlp-workshop |
5. Define Spark NLP pipeline | document_assembler = DocumentAssembler() \
.setInputCol('text') \
.setOutputCol('document')
tokenizer = Tokenizer() \
.setInputCols(['document']) \
.setOutputCol('token')
# The wikiner_840B_300 is trained with glove_840B_300, so the embeddings in the
# pipeline should match. Same applies for the other available models.
if MODEL_NAME == "wikiner_840B_300":
embeddings = WordEmbeddingsModel.pretrained('glove_840B_300', lang='xx') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
elif MODEL_NAME == "wikiner_6B_300":
embeddings = WordEmbeddingsModel.pretrained('glove_6B_300', lang='xx') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
elif MODEL_NAME == "wikiner_6B_100":
embeddings = WordEmbeddingsModel.pretrained('glove_100d') \
.setInputCols(['document', 'token']) \
.setOutputCol('embeddings')
ner_model = NerDLModel.pretrained(MODEL_NAME, 'pt') \
.setInputCols(['document', 'token', 'embeddings']) \
.setOutputCol('ner')
ner_converter = NerConverter() \
.setInputCols(['document', 'token', 'ner']) \
.setOutputCol('ner_chunk')
nlp_pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
embeddings,
ner_model,
ner_converter
]) | glove_840B_300 download started this may take some time.
Approximate size to download 2.3 GB
[OK!]
wikiner_840B_300 download started this may take some time.
Approximate size to download 14.5 MB
[OK!]
| Apache-2.0 | tutorials/streamlit_notebooks/NER_PT.ipynb | fcivardi/spark-nlp-workshop |
6. Run the pipeline | empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
df = spark.createDataFrame(pd.DataFrame({'text': text_list}))
result = pipeline_model.transform(df) | _____no_output_____ | Apache-2.0 | tutorials/streamlit_notebooks/NER_PT.ipynb | fcivardi/spark-nlp-workshop |
7. Visualize results | from sparknlp_display import NerVisualizer
NerVisualizer().display(
result = result.collect()[0],
label_col = 'ner_chunk',
document_col = 'document'
) | _____no_output_____ | Apache-2.0 | tutorials/streamlit_notebooks/NER_PT.ipynb | fcivardi/spark-nlp-workshop |
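Besides the HTML visualization, the detected chunks can also be listed as plain rows; a minimal sketch using standard Spark SQL functions (this cell is not part of the original notebook and only shows the chunk text, not the entity label):

```python
import pyspark.sql.functions as F

# Each row's `ner_chunk` column is an array of annotations; explode it to one entity per row.
result.select(F.explode('ner_chunk.result').alias('entity')).show(truncate=False)
```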
The test function is $y = 5x^2+10x-8$. The BFGS method is implemented to find the minimum of the test function. The user should be able to find the x and y of the minimum as well as access the Jacobian at each optimization step. First, we instantiate a `Number(5)` as the initial guess ($x_0$) for the location of the minimum. The `bfgs()` method takes the test function and the initial guess. Second, the function and its derivative are evaluated at $x_0$. BFGS requires a speculated Hessian, and the initial guess is usually an identity matrix, or in the scalar case, 1. The initial guess of the Hessian is stored in $b_0$. Then an intermediate $s_0$ is determined by solving $b_0s_0=-\nabla func(x_0)$. Third, $x_1$'s value is set to $x_0+s_0$. Fourth, another intermediate $y_0$'s value is set to $\nabla func(x_1)-\nabla func(x_0)$. Fifth, $b_1$ is updated and its value is $b_1=b_0+\Delta b_0$, where $\Delta b_0$ is equal to $\frac{y_0}{s_0}-b_0$. Sixth, the values of $b_0$ and $x_0$ are updated with $b_1$ and $x_1$, respectively. This process repeats until the Jacobian becomes 0. Note, in our example, the function is scalar, so the approximate Hessian is a single number and the update in the fifth step reduces to $b_1=\frac{y_0}{s_0}$. | import sys
sys.path.append('..')
import autodiff.operations as operations
from autodiff.structures import Number
import numpy as np
def func(x):
return 5 * x ** 2 + 10 * x - 8
def bfgs(func, initial_guess):
#bfgs for scalar functions
x0 = initial_guess
#initial guess of hessian
b0 = 1
fxn0 = func(x0)
fpxn0 = fxn0.jacobian(x0)
jacobians = []
jacobians.append(fpxn0)
while(np.abs(fpxn0)>1*10**-7):
fxn0 = func(x0)
fpxn0 = fxn0.jacobian(x0)
s0 = -fpxn0/b0
x1=x0+s0
fxn1 = func(x1)
fpxn1 = fxn1.jacobian(x1)
y0 = fpxn1-fpxn0
if y0 == 0:
break
#delta_b = y0**2/(y0*s0)-b0*s0**2*b0/(s0*b0*s0)
delta_b = y0/s0-b0
b1 = b0 + delta_b
x0 = x1
b0 = b1
jacobians.append(fpxn1)
return x0,func(x0),jacobians
x0 = Number(5)
xstar,minimum,jacobians = bfgs(func,x0)
print("The jacobians at 1st, 2nd and final steps are:",jacobians,'. The jacobian value is 0 in the last step, indicating completion of the optimization process.')
print()
print("The x* is", xstar ) | The jacobians at 1st, 2nd and final steps are: [60, -540.0, 0.0] . The jacobian value is 0 in the last step, indicating completion of the optimization process.
The x* is Number(val=-1.0)
| MIT | docs/bfgs.ipynb | rocketscience0/cs207-FinalProject |
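The result can also be verified by hand, independently of the automatic differentiation: for $f(x)=5x^2+10x-8$, setting $f'(x)=10x+10=0$ gives $x^*=-1$ and $f(-1)=5-10-8=-13$.

```python
# Analytic check of the optimizer's answer for f(x) = 5x^2 + 10x - 8.
x_star = -10 / (2 * 5)                   # -b / (2a) for a quadratic a*x^2 + b*x + c
f_star = 5 * x_star**2 + 10 * x_star - 8
print(x_star, f_star)                    # -1.0 -13.0
```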
Commity | class Commity():
def __init__(self,M) -> None:
self.M = M
self.model = [LinearRegression(basis_function="polynomial",deg=3) for _ in range(M)]
def fit(self,X,y):
n = len(X)
sample = int(n*0.8)
for i in range(self.M):
idx = np.random.randint(0,n,sample)
X_bootstrap,y_bootstrap = X[idx],y[idx]
self.model[i].fit(X_bootstrap,y_bootstrap)
def predict(self,X):
y_pred = self.model[0].predict(X)
for i in range(self.M-1):
y_pred += self.model[i+1].predict(X)
return y_pred/self.M
def f(x):
return np.sin(x) + x
generator = RegressionDataGenerator(f)
X,y = generator(n=100,std=0.6)
# in many cases, a committee model gives worse results, possibly because each member model is trained on less data than a single (non-committee) model
lr = LinearRegression(basis_function="polynomial",deg=3)
lr.fit(X,y)
plot_regression1D(X,y,lr,"Linear Regression",f)
com = Commity(M=5)
com.fit(X,y)
plot_regression1D(X,y,com,"Commity",f) | RMSE : 0.14445858781232257
| MIT | notebook/chapter14_combining_models.ipynb | hedwig100/PRML |
AdaBoost | class AdaBoost(Classifier):
"""AdaBoost
    The weak learner is a decision stump.
Attributes:
        M (int): number of weak learners
        weak_learner (list): list of data describing each weak learner
"""
def __init__(self,M=5) -> None:
"""__init__
Args:
            M (int): number of weak learners
"""
super(AdaBoost,self).__init__()
self.M = M
def fit(self,X,y):
"""fit
only accept N_dim = 2 data
Args:
X (2-D array): shape = (N_samples,2),
            y (1-D array or 2-D array) : if 1-D array, y should be label-encoded; if 2-D array, y should be one-hot-encoded. Should be 2-class data.
"""
y = self._onehot_to_label(y)
y[y == 0.0] = -1.0
y = y.astype("int")
N = len(X)
sort_idx = np.argsort(X,axis=0)
weight = np.ones(N)/N
weak_learner = [None]*self.M
for i in range(self.M):
x_border,x_more_or_less,x_score = self._weak_learn(X[:,0],sort_idx[:,0],y,weight)
y_border,y_more_or_less,y_score = self._weak_learn(X[:,1],sort_idx[:,1],y,weight)
if x_score < y_score:
ax = "x"
border,more_or_less = x_border,x_more_or_less
else:
ax = "y"
border,more_or_less = y_border,y_more_or_less
miss = self._miss_idx(X,y,ax,border,more_or_less)
eps = np.sum(miss*weight)/np.sum(weight)
alpha = np.log((1 - eps)/eps)
weight *= np.exp(alpha*miss)
weak_learner[i] = {
"ax":ax,
"border":border,
"more_or_less":more_or_less,
"alpha":alpha
}
self.weak_learner = weak_learner
def _weak_learn(self,X,sort_idx,y,weight):
weight_sum = weight.sum()
        more_score = weight[y != 1].sum() # score when all data is assigned 1
border,more_or_less,score = X[sort_idx[0]]-1,"more",more_score
for i in range(len(X)):
if y[sort_idx[i]] == 1:
more_score += weight[sort_idx[i]]
else:
more_score -= weight[sort_idx[i]]
less_score = weight_sum - more_score
if more_score < score:
border,more_or_less,score = X[sort_idx[i]],"more",more_score
if less_score < score:
border,more_or_less,score = X[sort_idx[i]],"less",less_score
return border,more_or_less,score
def _miss_idx(self,X,y,ax,border,more_or_less):
y_pred = self._predict(X,ax,border,more_or_less)
return (y_pred != y).astype("int")
def _predict(self,X,ax,border,more_or_less):
if more_or_less == "more":
if ax == "x":
class1 = X[:,0] > border
elif ax == "y":
class1 = X[:,1] > border
elif more_or_less == "less":
if ax == "x":
class1 = X[:,0] <= border
elif ax == 'y':
class1 = X[:,1] <= border
pred = np.zeros(len(X)) - 1
pred[class1] = 1
return pred
def predict(self,X):
"""predict
Args:
X (2-D array) : explanatory variable, shape = (N_samples,2)
Returns:
            y (1-D array or 2-D array) : if 1-D array, y is label-encoded; if 2-D array, y is one-hot-encoded. This depends on the parameter y used when fitting.
"""
y_pred = np.zeros(len(X))
for i in range(self.M):
pred = self._predict(
X,
self.weak_learner[i]["ax"],
self.weak_learner[i]["border"],
self.weak_learner[i]["more_or_less"],
)
y_pred += self.weak_learner[i]["alpha"]*pred
y_pred = np.sign(y_pred)
return self._inverse_transform(y_pred)
generator = ClassificationDataGenerator2(f=np.sin)
X,y = generator(n=100,x_lower=0,x_upper=2*np.pi,y_lower=-1.2,y_upper=1.2)
ab = AdaBoost(M=20)
ab.fit(X,y)
plot_classifier(X,y,ab,title="AdaBoost") | _____no_output_____ | MIT | notebook/chapter14_combining_models.ipynb | hedwig100/PRML |
CART | class CARTRegressor():
"""CARTRegressor
Attributes:
        lamda (float): regularization parameter
        tree (list): list of node dictionaries describing the fitted tree
"""
def __init__(self,lamda=1e-2):
"""__init__
Args:
            lamda (float): regularization parameter
"""
self.lamda = lamda
def fit(self,X,y):
"""fit
Args:
X (2-D array) : explanatory variable,shape = (N_samples,N_dim)
y (1-D array) : target variable, shape = (N_samples)
"""
N = len(X)
leaves = np.zeros(N)
num_nodes = 1
num_leaves = 1
tree = []
while True:
if num_leaves == 0:
break
for leaf in range(num_nodes-num_leaves,num_nodes):
idx = np.arange(N)[leaf == leaves]
if len(idx) == 1:
num_leaves -= 1
tree.append({
"border": None,
"target": y[idx][0]
}) # has no child
continue
ax,border,score,more_index,less_index = -1,None,1e20,None,None
for m in range(X.shape[1]):
now_border,now_score,now_more_index,now_less_index = self._find_boundry(idx,X[idx,m],y[idx])
if now_score < score:
ax,border,score,more_index,less_index = m,now_border,now_score,now_more_index,now_less_index
if border is None:
num_leaves -= 1
tree.append({
"border": None,
"target": y[idx].mean()
}) # has no child
continue
tree.append({
"left_index": num_nodes,
"right_index": num_nodes+1,
"border": border,
"ax": ax
})
leaves[less_index] = num_nodes
leaves[more_index] = num_nodes+1
num_nodes += 2
num_leaves += 1
self.tree = tree
def _find_boundry(self,idx,X,y):
n = len(idx)
sort_idx = np.argsort(X)
all_sum = np.sum(y)
right_sum = all_sum
# when all data is in one leaf
score_now = self._error_function(y,right_sum/n) + self.lamda
border_index,score = None,score_now
pred = np.zeros(n)
for i in range(n-1):
right_sum -= y[sort_idx[i]]
left_sum = all_sum - right_sum
pred[sort_idx[i+1:]] = right_sum/(n-i-1)
pred[sort_idx[:i+1]] = left_sum/(i+1)
score_now = self._error_function(y,pred) + self.lamda*2
if score_now < score:
border_index,score = i,score_now
if border_index is None: # no division
return None,1e20,None,None
border = X[sort_idx[border_index]]
more_index = idx[sort_idx[border_index+1:]]
less_index = idx[sort_idx[:border_index+1]]
return border,score,more_index,less_index
def _error_function(self,y,pred):
return np.mean((y-pred)**2)
def _predict(self,X,p_id=0):
if self.tree[p_id]["border"] is None:
return np.zeros(len(X)) + self.tree[p_id]["target"]
ax = self.tree[p_id]["ax"]
border = self.tree[p_id]["border"]
y = np.zeros(len(X))
y[X[:,ax] > border] = self._predict(X[X[:,ax] > border],p_id=self.tree[p_id]["right_index"])
y[X[:,ax] <= border] = self._predict(X[X[:,ax] <= border],p_id=self.tree[p_id]["left_index"])
return y
def predict(self,X):
"""predict
Args:
X (2-D array) : explanatory variable, shape = (N_samples,N_dim)
Returns:
y (1-D array) : predictive value
"""
y = self._predict(X)
return y
generator = RegressionDataGenerator(f=np.sin)
X,y = generator(n=100,std=0.2)
cart = CARTRegressor(lamda=1e-2)
cart.fit(X,y.ravel())
plot_regression1D(X,y,cart,title="CART Regressor",f=np.sin) | RMSE : 1.00482444289275
| MIT | notebook/chapter14_combining_models.ipynb | hedwig100/PRML |
Linear Mixture | class LinearMixture(Regression):
"""LinearMixture
Attributes:
        K (int): number of mixture models
max_iter (int): max iteration
threshold (float): threshold for EM algorithm
        pi (1-D array): mixing coefficients (the probability that each model is chosen)
weight (2-D array): shape = (K,M), M is dimension of feature space, weight
beta (float): precision parameter
"""
def __init__(self,K=3,max_iter=100,threshold=1e-3,basis_function="gauss",mu=None,s=None,deg=None):
super(LinearMixture,self).__init__(basis_function,mu,s,deg)
self.K = K
self.max_iter = max_iter
self.threshold = threshold
def _gauss(self,x,mu,beta):
return (beta/2*np.pi)**0.5 * np.exp(-beta/2*(x-mu)**2)
def fit(self,X,y):
"""fit
Args:
X (2-D array) : explanatory variable,shape = (N_samples,N_dim)
y (1-D array) : target variable, shape = (N_samples)
"""
design_mat = self.make_design_mat(X)
N,M = design_mat.shape
gamma = np.random.rand(N,self.K) + 1
gamma /= gamma.sum(axis=1,keepdims=True)
for _ in range(self.max_iter):
# M step
pi = gamma.mean(axis = 0)
R = gamma.T.reshape(self.K,N,1)
weight = np.linalg.inv(design_mat.T@(R*design_mat))@design_mat.T@(R*y.reshape(-1,1))
weight = weight.reshape((self.K,M))
beta = N/np.sum(gamma*(y.reshape(-1,1) - [email protected])**2)
# E step
gauss = pi*np.exp(-beta/2*(y.reshape(-1,1) - [email protected])**2) + 1e-10
new_gamma = gauss/gauss.sum(axis=1,keepdims=True)
if np.mean((new_gamma - gamma)**2)**0.5 < self.threshold:
gamma = new_gamma
break
gamma = new_gamma
self.pi = pi
self.weight = weight
self.beta = beta
def predict(self,X):
"""predict
Args:
X (2-D array) : data,shape = (N_samples,N_dim)
Returns:
y (1-D array) : predicted value, shape = (N_samples)
"""
design_mat = self.make_design_mat(X)
return np.dot([email protected],self.pi)
generator = RegressionDataGenerator(f=np.sin)
X,y = generator(n=100,std=0.2)
linMix = LinearMixture(K=5,basis_function="polynomial",deg=4)
linMix.fit(X,y.ravel())
plot_regression1D(X,y,linMix,title="Linear Mixture",f=np.sin) | RMSE : 0.9774354228059758
| MIT | notebook/chapter14_combining_models.ipynb | hedwig100/PRML |
Linear Regression - Run in Google Colab - The Limits of k-Nearest Neighbors | import numpy as np
perch_length = np.array(
[8.4, 13.7, 15.0, 16.2, 17.4, 18.0, 18.7, 19.0, 19.6, 20.0,
21.0, 21.0, 21.0, 21.3, 22.0, 22.0, 22.0, 22.0, 22.0, 22.5,
22.5, 22.7, 23.0, 23.5, 24.0, 24.0, 24.6, 25.0, 25.6, 26.5,
27.3, 27.5, 27.5, 27.5, 28.0, 28.7, 30.0, 32.8, 34.5, 35.0,
36.5, 36.0, 37.0, 37.0, 39.0, 39.0, 39.0, 40.0, 40.0, 40.0,
40.0, 42.0, 43.0, 43.0, 43.5, 44.0]
)
perch_weight = np.array(
[5.9, 32.0, 40.0, 51.5, 70.0, 100.0, 78.0, 80.0, 85.0, 85.0,
110.0, 115.0, 125.0, 130.0, 120.0, 120.0, 130.0, 135.0, 110.0,
130.0, 150.0, 145.0, 150.0, 170.0, 225.0, 145.0, 188.0, 180.0,
197.0, 218.0, 300.0, 260.0, 265.0, 250.0, 250.0, 300.0, 320.0,
514.0, 556.0, 840.0, 685.0, 700.0, 700.0, 690.0, 900.0, 650.0,
820.0, 850.0, 900.0, 1015.0, 820.0, 1100.0, 1000.0, 1100.0,
1000.0, 1000.0]
)
from sklearn.model_selection import train_test_split
# Split into a training set and a test set
train_input, test_input, train_target, test_target = train_test_split(
perch_length, perch_weight, random_state=42)
# Reshape the training set and test set into 2-D arrays
train_input = train_input.reshape(-1, 1)
test_input = test_input.reshape(-1, 1)
from sklearn.neighbors import KNeighborsRegressor
knr = KNeighborsRegressor(n_neighbors=3)
# Train the k-nearest neighbors regression model
knr.fit(train_input, train_target)
print(knr.predict([[50]]))
import matplotlib.pyplot as plt
# Find the neighbors of a 50 cm perch
distances, indexes = knr.kneighbors([[50]])
# Draw a scatter plot of the training set
plt.scatter(train_input, train_target)
# Re-plot only the neighbor samples from the training set
plt.scatter(train_input[indexes], train_target[indexes], marker='D')
# The 50 cm perch data point
plt.scatter(50, 1033, marker='^')
plt.show()
print(np.mean(train_target[indexes]))
print(knr.predict([[100]]))
# Find the neighbors of a 100 cm perch
distances, indexes = knr.kneighbors([[100]])
# Draw a scatter plot of the training set
plt.scatter(train_input, train_target)
# Re-plot only the neighbor samples from the training set
plt.scatter(train_input[indexes], train_target[indexes], marker='D')
# The 100 cm perch data point
plt.scatter(100, 1033, marker='^')
plt.show() | _____no_output_____ | MIT | ch03_regression/3-2.ipynb | CaptLWM/AI |
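A minimal sketch of the limitation this section illustrates (assuming the fitted `knr` model from the cells above is still in scope): k-nearest neighbors averages the targets of the closest training samples, so once the query length is past the largest training length the same three neighbors are always selected and the prediction stops growing.

```python
# k-NN predictions plateau outside the training range:
# lengths beyond the training data all reuse the same nearest neighbors.
for length in [45, 50, 100]:
    print(length, knr.predict([[length]]))
```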
Linear Regression | from sklearn.linear_model import LinearRegression
lr = LinearRegression()
# Train the linear regression model
lr.fit(train_input, train_target)
# Prediction for a 50 cm perch
print(lr.predict([[50]]))
print(lr.coef_, lr.intercept_)
# Draw a scatter plot of the training set
plt.scatter(train_input, train_target)
# Draw the first-degree (straight-line) fit from 15 to 50
plt.plot([15, 50], [15*lr.coef_+lr.intercept_, 50*lr.coef_+lr.intercept_])
# The 50 cm perch data point
plt.scatter(50, 1241.8, marker='^')
plt.show()
print(lr.score(train_input, train_target))
print(lr.score(test_input, test_target)) | 0.9398463339976039
0.8247503123313558
| MIT | ch03_regression/3-2.ipynb | CaptLWM/AI |
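A hedged follow-up (the exact coefficients depend on the train/test split, so treat the result as illustrative): because the fitted straight line typically has a large negative intercept on this data, it can return physically impossible negative weights for very short perch, which motivates the polynomial model in the next section.

```python
# The straight-line model may predict a negative weight for a very short perch
# (the actual value depends on lr.coef_ and lr.intercept_ above).
print(lr.predict([[5]]))
```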
Polynomial Regression | train_poly = np.column_stack((train_input ** 2, train_input))
test_poly = np.column_stack((test_input ** 2, test_input))
print(train_poly.shape, test_poly.shape)
lr = LinearRegression()
lr.fit(train_poly, train_target)
print(lr.predict([[50**2, 50]]))
print(lr.coef_, lr.intercept_)
# Create an integer array from 15 to 49 to draw the curve piecewise
point = np.arange(15, 50)
# Draw a scatter plot of the training set
plt.scatter(train_input, train_target)
# Draw the second-degree (quadratic) fit from 15 to 49
plt.plot(point, 1.01*point**2 - 21.6*point + 116.05)
# The 50 cm perch data point
plt.scatter([50], [1574], marker='^')
plt.show()
print(lr.score(train_poly, train_target))
print(lr.score(test_poly, test_target)) | 0.9706807451768623
0.9775935108325122
| MIT | ch03_regression/3-2.ipynb | CaptLWM/AI |
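Reading off the coefficients used for the plot above (they correspond to `lr.coef_` and `lr.intercept_` of the quadratic fit), the learned model is roughly

$$
\widehat{\text{weight}} \approx 1.01\,\text{length}^{2} - 21.6\,\text{length} + 116.05 .
$$

Both $R^2$ scores are about 0.97, and the test score is slightly higher than the training score, which can be read as a sign of slight underfitting rather than overfitting.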
Orbit Computation This tutorial demonstrates how to generate satellite orbits using various models. Setup | import numpy as np
import pandas as pd
import plotly.graph_objs as go
from ostk.physics.units import Length
from ostk.physics.units import Angle
from ostk.physics.time import Scale
from ostk.physics.time import Instant
from ostk.physics.time import Duration
from ostk.physics.time import Interval
from ostk.physics.time import DateTime
from ostk.physics.coordinate.spherical import LLA
from ostk.physics.coordinate import Frame
from ostk.physics import Environment
from ostk.physics.environment.objects.celestial_bodies import Earth
from ostk.astrodynamics import Trajectory
from ostk.astrodynamics.trajectory import Orbit
from ostk.astrodynamics.trajectory.orbit.models import Kepler
from ostk.astrodynamics.trajectory.orbit.models.kepler import COE
from ostk.astrodynamics.trajectory.orbit.models import SGP4
from ostk.astrodynamics.trajectory.orbit.models.sgp4 import TLE | _____no_output_____ | Apache-2.0 | notebooks/Orbit Computation/Orbit Computation.ipynb | open-space-collective/open-space-toolk |
--- SGP4 Computation | environment = Environment.default() | _____no_output_____ | Apache-2.0 | notebooks/Orbit Computation/Orbit Computation.ipynb | open-space-collective/open-space-toolk |
Create a Classical Orbital Element (COE) set: | a = Length.kilometers(7000.0)
e = 0.0001
i = Angle.degrees(35.0)
raan = Angle.degrees(40.0)
aop = Angle.degrees(45.0)
nu = Angle.degrees(50.0)
coe = COE(a, e, i, raan, aop, nu) | _____no_output_____ | Apache-2.0 | notebooks/Orbit Computation/Orbit Computation.ipynb | open-space-collective/open-space-toolk |
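As a quick sanity check on these elements (an editorial addition, not from the original notebook), Kepler's third law with Earth's gravitational parameter $\mu_\oplus \approx 3.986\times10^{14}\,\mathrm{m^3/s^2}$ gives the period for $a = 7000\ \mathrm{km}$:

$$
T = 2\pi\sqrt{\frac{a^{3}}{\mu_\oplus}} = 2\pi\sqrt{\frac{(7.0\times10^{6}\,\mathrm{m})^{3}}{3.986\times10^{14}\,\mathrm{m^{3}\,s^{-2}}}} \approx 5.8\times10^{3}\,\mathrm{s} \approx 97\ \text{minutes},
$$

so the one-day propagation at a one-minute step used below covers roughly 15 revolutions.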
Setup a Keplerian orbital model: | epoch = Instant.date_time(DateTime(2018, 1, 1, 0, 0, 0), Scale.UTC)
earth = environment.access_celestial_object_with_name("Earth")
keplerian_model = Kepler(coe, epoch, earth, Kepler.PerturbationType.No) | _____no_output_____ | Apache-2.0 | notebooks/Orbit Computation/Orbit Computation.ipynb | open-space-collective/open-space-toolk |
Create a Two-Line Element (TLE) set: | tle = TLE(
"1 39419U 13066D 18260.77424112 .00000022 00000-0 72885-5 0 9996",
"2 39419 97.6300 326.6556 0013847 175.2842 184.8495 14.93888428262811"
) | _____no_output_____ | Apache-2.0 | notebooks/Orbit Computation/Orbit Computation.ipynb | open-space-collective/open-space-toolk |
Setup a SGP4 orbital model: | sgp4_model = SGP4(tle) | _____no_output_____ | Apache-2.0 | notebooks/Orbit Computation/Orbit Computation.ipynb | open-space-collective/open-space-toolk |
Setup the orbit: | # orbit = Orbit(keplerian_model, environment.access_celestial_object_with_name("Earth"))
orbit = Orbit(sgp4_model, environment.access_celestial_object_with_name("Earth")) | _____no_output_____ | Apache-2.0 | notebooks/Orbit Computation/Orbit Computation.ipynb | open-space-collective/open-space-toolk |
Now that the orbit is set, we can compute the satellite position: | start_instant = Instant.date_time(DateTime(2018, 9, 5, 0, 0, 0), Scale.UTC)
end_instant = Instant.date_time(DateTime(2018, 9, 6, 0, 0, 0), Scale.UTC)
interval = Interval.closed(start_instant, end_instant)
step = Duration.minutes(1.0) | _____no_output_____ | Apache-2.0 | notebooks/Orbit Computation/Orbit Computation.ipynb | open-space-collective/open-space-toolk |
Generate a time grid: | instants = interval.generate_grid(step)
states = [[instant, orbit.get_state_at(instant)] for instant in instants]
def convert_state (instant, state):
lla = LLA.cartesian(state.get_position().in_frame(Frame.ITRF(), state.get_instant()).get_coordinates(), Earth.equatorial_radius, Earth.flattening)
return [
repr(instant),
float(instant.get_modified_julian_date(Scale.UTC)),
*state.get_position().get_coordinates().transpose()[0].tolist(),
*state.get_velocity().get_coordinates().transpose()[0].tolist(),
float(lla.get_latitude().in_degrees()),
float(lla.get_longitude().in_degrees()),
float(lla.get_altitude().in_meters())
]
orbit_data = [convert_state(instant, state) for [instant, state] in states]
orbit_df = pd.DataFrame(orbit_data, columns=['$Time^{UTC}$', '$MJD^{UTC}$', '$x_{x}^{ECI}$', '$x_{y}^{ECI}$', '$x_{z}^{ECI}$', '$v_{x}^{ECI}$', '$v_{y}^{ECI}$', '$v_{z}^{ECI}$', '$Latitude$', '$Longitude$', '$Altitude$']) | _____no_output_____ | Apache-2.0 | notebooks/Orbit Computation/Orbit Computation.ipynb | open-space-collective/open-space-toolk |
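The notebook imports `plotly.graph_objs` as `go` but the visible dump ends here; a minimal sketch (an assumption, not the original notebook's plotting code) of how the resulting DataFrame could be rendered as a ground track using the column names defined above:

```python
# Plot the computed geodetic latitude/longitude as a ground track.
figure = go.Figure(
    data=go.Scattergeo(
        lat=orbit_df['$Latitude$'],
        lon=orbit_df['$Longitude$'],
        mode='lines',
        line=dict(width=1, color='red'),
    )
)
figure.show()
```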