markdown | code | output | license | path | repo_name
---|---|---|---|---|---|
For more complicated models or fits it may be better to use the `estimate_line_parameters` function instead of manually creating e.g. a `Gaussian1D` model and setting the center. An example of this pattern is given below.

Note that we provided a default `Gaussian1D` model to the `estimate_line_parameters` function above. This function makes reasonable guesses for `Gaussian1D`, `Voigt1D`, and `Lorentz1D`, the most common line profiles used for spectral lines, but may or may not work for other models. See [the relevant docs section](https://specutils.readthedocs.io/en/latest/fitting.html#parameter-estimation) for more details.

In this example we also show a *joint* fit of all three lines at the same time. While the difference may seem subtle, in cases of blended lines this typically provides much better fits: | halpha_line_estimates = []
for line in halpha_lines:
line_region = SpectralRegion(line['line_center']-3*u.angstrom,
line['line_center']+3*u.angstrom)
line_spectrum = extract_region(sdss_halpha_contsub, line_region)
line_estimate = fitting.estimate_line_parameters(line_spectrum, models.Gaussian1D())
halpha_line_estimates.append(line_estimate)
# this could be done more flexibly with a for loop but we are explicit here for simplicity
combined_model_estimate = halpha_line_estimates[0] + halpha_line_estimates[1] + halpha_line_estimates[2]
combined_model_estimate
combined_model = fitting.fit_lines(sdss_halpha_contsub, combined_model_estimate)
plt.step(sdss_halpha_contsub.spectral_axis, sdss_halpha_contsub.flux, where='mid')
plt.plot(sdss_halpha_contsub.spectral_axis,
combined_model(sdss_halpha_contsub.spectral_axis))
combined_model | _____no_output_____ | BSD-3-Clause | aas_233_workshop/09b-Specutils/Specutils_analysis.ipynb | astropy/astropy-workshops |
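As a hedged follow-up sketch (assuming the joint fit above converged and kept units attached), the per-line parameters could be read back from the fitted compound model, since indexing a compound model by position returns its `Gaussian1D` submodels in order:

```python
# Inspect the three fitted Gaussian components of the compound model.
for i in range(3):
    g = combined_model[i]
    print('Line {}: center = {}, amplitude = {}, stddev = {}'.format(
        i, g.mean.quantity, g.amplitude.quantity, g.stddev.quantity))
```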
Keras simple CNN

2020/11/11 Ryutaro Hashimoto

___

Table of Contents

1 Setup
1.1 Launching a Sagemaker session
1.2 Prepare the dataset for training
2 Train the model
2.1 Specifying the Instance Type
2.2 Setting for hyperparameters
2.3 Metrics
2.4 Tags
2.5 Setting for estimator
2.6 Specify data input and output
2.7 Execute Training
2.8 Checking the accuracy of a model with TensorBoard
3 Predict by trained Model
3.1 Deploy the trained model
3.2 Invoke the endpoint
3.3 Download the dataset for prediction
3.4 Prediction
3.5 Accuracy
3.6 Confusion Matrix
4 Cleanup

Setup

Launching a Sagemaker session | import sagemaker
sagemaker_session = sagemaker.Session()
role = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxx' # ← your iam role ARN | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
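If this notebook runs inside SageMaker itself (a notebook instance or Studio), the role can be fetched programmatically instead of pasting the ARN; a small sketch using the standard `get_execution_role` helper:

```python
from sagemaker import get_execution_role

# Only works when the notebook itself runs on SageMaker; outside of it,
# the IAM role ARN still has to be supplied explicitly as above.
role = get_execution_role()
```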
Prepare the dataset for training

Skip the next code cell if you have already downloaded the data. | !python generate_cifar10_tfrecords.py --data-dir ./data | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
Next, we upload the data to Amazon S3: | from sagemaker.s3 import S3Uploader
bucket = 'sagemaker-tutorial-hashimoto'
dataset_uri = S3Uploader.upload('data', 's3://{}/tf-cifar10-example/data'.format(bucket))
display(dataset_uri) | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
Train the model Specifying the Instance Type | instance_type = 'ml.p2.xlarge' | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
Setting for hyperparameters | hyperparameters = {'epochs': 10, 'batch-size': 256} | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
Metrics | metric_definitions = [
{'Name': 'train:loss', 'Regex': '.*loss: ([0-9\\.]+) - accuracy: [0-9\\.]+.*'},
{'Name': 'train:accuracy', 'Regex': '.*loss: [0-9\\.]+ - accuracy: ([0-9\\.]+).*'},
{'Name': 'validation:accuracy', 'Regex': '.*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: ([0-9\\.]+).*'},
{'Name': 'validation:loss', 'Regex': '.*step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: ([0-9\\.]+) - val_accuracy: [0-9\\.]+.*'},
{'Name': 'sec/steps', 'Regex': '.* - \d+s (\d+)[mu]s/step - loss: [0-9\\.]+ - accuracy: [0-9\\.]+ - val_loss: [0-9\\.]+ - val_accuracy: [0-9\\.]+'}
] | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
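To see what these regexes extract, here is a small illustrative check against a fabricated Keras-style progress line (the numbers are made up; only the line shape matters):

```python
import re

# A log line shaped like the Keras progress output these regexes target.
sample = ('100/100 - 5s 50ms/step - loss: 0.8123 - accuracy: 0.7150'
          ' - val_loss: 0.9001 - val_accuracy: 0.6800')
for m in metric_definitions:
    match = re.search(m['Regex'], sample)
    print(m['Name'], '->', match.group(1) if match else 'no match')
```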
Tags | tags = [{'Key': 'Project', 'Value': 'cifar10'}, {'Key': 'TensorBoard', 'Value': 'file'}] | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
Setting for estimator | import subprocess
from sagemaker.tensorflow import TensorFlow
estimator = TensorFlow(entry_point='cifar10_keras_main.py',
source_dir='source_dir',
metric_definitions=metric_definitions,
hyperparameters=hyperparameters,
role=role,
framework_version='1.15.2',
py_version='py3',
instance_count=1,
instance_type=instance_type,
base_job_name='cifar10-tf',
tags=tags)
help(TensorFlow) | Help on class TensorFlow in module sagemaker.tensorflow.estimator:
class TensorFlow(sagemaker.estimator.Framework)
| TensorFlow(py_version=None, framework_version=None, model_dir=None, image_uri=None, distribution=None, **kwargs)
|
| Handle end-to-end training and deployment of user-provided TensorFlow code.
|
| Method resolution order:
| TensorFlow
| sagemaker.estimator.Framework
| sagemaker.estimator.EstimatorBase
| builtins.object
|
| Methods defined here:
|
| __init__(self, py_version=None, framework_version=None, model_dir=None, image_uri=None, distribution=None, **kwargs)
| Initialize a ``TensorFlow`` estimator.
|
| Args:
| py_version (str): Python version you want to use for executing your model training
| code. Defaults to ``None``. Required unless ``image_uri`` is provided.
| framework_version (str): TensorFlow version you want to use for executing your model
| training code. Defaults to ``None``. Required unless ``image_uri`` is provided.
| List of supported versions:
| https://github.com/aws/sagemaker-python-sdk#tensorflow-sagemaker-estimators.
| model_dir (str): S3 location where the checkpoint data and models can be exported to
| during training (default: None). It will be passed in the training script as one of
| the command line arguments. If not specified, one is provided based on
| your training configuration:
|
| * *distributed training with SMDistributed or MPI with Horovod* - ``/opt/ml/model``
| * *single-machine training or distributed training without MPI* - ``s3://{output_path}/model``
| * *Local Mode with local sources (file:// instead of s3://)* - ``/opt/ml/shared/model``
|
| To disable having ``model_dir`` passed to your training script,
| set ``model_dir=False``.
| image_uri (str): If specified, the estimator will use this image for training and
| hosting, instead of selecting the appropriate SageMaker official image based on
| framework_version and py_version. It can be an ECR url or dockerhub image and tag.
|
| Examples:
| 123.dkr.ecr.us-west-2.amazonaws.com/my-custom-image:1.0
| custom-image:latest.
|
| If ``framework_version`` or ``py_version`` are ``None``, then
| ``image_uri`` is required. If also ``None``, then a ``ValueError``
| will be raised.
| distribution (dict): A dictionary with information on how to run distributed training
| (default: None). Currently, the following are supported:
| distributed training with parameter servers, SageMaker Distributed (SMD) Data
| and Model Parallelism, and MPI. SMD Model Parallelism can only be used with MPI.
| To enable parameter server use the following setup:
|
| .. code:: python
|
| {
| "parameter_server": {
| "enabled": True
| }
| }
|
| To enable MPI:
|
| .. code:: python
|
| {
| "mpi": {
| "enabled": True
| }
| }
|
| To enable SMDistributed Data Parallel or Model Parallel:
|
| .. code:: python
|
| {
| "smdistributed": {
| "dataparallel": {
| "enabled": True
| },
| "modelparallel": {
| "enabled": True,
| "parameters": {}
| }
| }
| }
|
| **kwargs: Additional kwargs passed to the Framework constructor.
|
| .. tip::
|
| You can find additional parameters for initializing this class at
| :class:`~sagemaker.estimator.Framework` and
| :class:`~sagemaker.estimator.EstimatorBase`.
|
| create_model(self, role=None, vpc_config_override='VPC_CONFIG_DEFAULT', entry_point=None, source_dir=None, dependencies=None, **kwargs)
| Create a ``TensorFlowModel`` object that can be used for creating
| SageMaker model entities, deploying to a SageMaker endpoint, or
| starting SageMaker Batch Transform jobs.
|
| Args:
| role (str): The ``TensorFlowModel``, which is also used during transform jobs.
| If not specified, the role from the Estimator is used.
| vpc_config_override (dict[str, list[str]]): Optional override for VpcConfig set on the
| model. Default: use subnets and security groups from this Estimator.
|
| * 'Subnets' (list[str]): List of subnet ids.
| * 'SecurityGroupIds' (list[str]): List of security group ids.
|
| entry_point (str): Path (absolute or relative) to the local Python source file which
| should be executed as the entry point to training. If ``source_dir`` is specified,
| then ``entry_point`` must point to a file located at the root of ``source_dir``.
| If not specified and ``endpoint_type`` is 'tensorflow-serving',
| no entry point is used. If ``endpoint_type`` is also ``None``,
| then the training entry point is used.
| source_dir (str): Path (absolute or relative or an S3 URI) to a directory with any other
| serving source code dependencies aside from the entry point file (default: None).
| dependencies (list[str]): A list of paths to directories (absolute or relative) with
| any additional libraries that will be exported to the container (default: None).
| **kwargs: Additional kwargs passed to
| :class:`~sagemaker.tensorflow.model.TensorFlowModel`.
|
| Returns:
| sagemaker.tensorflow.model.TensorFlowModel: A ``TensorFlowModel`` object.
| See :class:`~sagemaker.tensorflow.model.TensorFlowModel` for full details.
|
| hyperparameters(self)
| Return hyperparameters used by your custom TensorFlow code during model training.
|
| transformer(self, instance_count, instance_type, strategy=None, assemble_with=None, output_path=None, output_kms_key=None, accept=None, env=None, max_concurrent_transforms=None, max_payload=None, tags=None, role=None, volume_kms_key=None, entry_point=None, vpc_config_override='VPC_CONFIG_DEFAULT', enable_network_isolation=None, model_name=None)
| Return a ``Transformer`` that uses a SageMaker Model based on the training job. It
| reuses the SageMaker Session and base job name used by the Estimator.
|
| Args:
| instance_count (int): Number of EC2 instances to use.
| instance_type (str): Type of EC2 instance to use, for example, 'ml.c4.xlarge'.
| strategy (str): The strategy used to decide how to batch records in a single request
| (default: None). Valid values: 'MultiRecord' and 'SingleRecord'.
| assemble_with (str): How the output is assembled (default: None). Valid values: 'Line'
| or 'None'.
| output_path (str): S3 location for saving the transform result. If not specified,
| results are stored to a default bucket.
| output_kms_key (str): Optional. KMS key ID for encrypting the transform output
| (default: None).
| accept (str): The accept header passed by the client to
| the inference endpoint. If it is supported by the endpoint,
| it will be the format of the batch transform output.
| env (dict): Environment variables to be set for use during the transform job
| (default: None).
| max_concurrent_transforms (int): The maximum number of HTTP requests to be made to
| each individual transform container at one time.
| max_payload (int): Maximum size of the payload in a single HTTP request to the
| container in MB.
| tags (list[dict]): List of tags for labeling a transform job. If none specified, then
| the tags used for the training job are used for the transform job.
| role (str): The IAM Role ARN for the ``TensorFlowModel``, which is also used
| during transform jobs. If not specified, the role from the Estimator is used.
| volume_kms_key (str): Optional. KMS key ID for encrypting the volume attached to the ML
| compute instance (default: None).
| entry_point (str): Path (absolute or relative) to the local Python source file which
| should be executed as the entry point to training. If ``source_dir`` is specified,
| then ``entry_point`` must point to a file located at the root of ``source_dir``.
| If not specified and ``endpoint_type`` is 'tensorflow-serving',
| no entry point is used. If ``endpoint_type`` is also ``None``,
| then the training entry point is used.
| vpc_config_override (dict[str, list[str]]): Optional override for
| the VpcConfig set on the model.
| Default: use subnets and security groups from this Estimator.
|
| * 'Subnets' (list[str]): List of subnet ids.
| * 'SecurityGroupIds' (list[str]): List of security group ids.
|
| enable_network_isolation (bool): Specifies whether container will
| run in network isolation mode. Network isolation mode restricts
| the container access to outside networks (such as the internet).
| The container does not make any inbound or outbound network
| calls. If True, a channel named "code" will be created for any
| user entry script for inference. Also known as Internet-free mode.
| If not specified, this setting is taken from the estimator's
| current configuration.
| model_name (str): Name to use for creating an Amazon SageMaker
| model. If not specified, the estimator generates a default job name
| based on the training image name and current timestamp.
|
| ----------------------------------------------------------------------
| Data and other attributes defined here:
|
| __abstractmethods__ = frozenset()
|
| ----------------------------------------------------------------------
| Methods inherited from sagemaker.estimator.Framework:
|
| training_image_uri(self)
| Return the Docker image to use for training.
|
| The :meth:`~sagemaker.estimator.EstimatorBase.fit` method, which does
| the model training, calls this method to find the image to use for model
| training.
|
| Returns:
| str: The URI of the Docker image.
|
| ----------------------------------------------------------------------
| Class methods inherited from sagemaker.estimator.Framework:
|
| attach(training_job_name, sagemaker_session=None, model_channel_name='model') from abc.ABCMeta
| Attach to an existing training job.
|
| Create an Estimator bound to an existing training job, each subclass
| is responsible to implement
| ``_prepare_init_params_from_job_description()`` as this method delegates
| the actual conversion of a training job description to the arguments
| that the class constructor expects. After attaching, if the training job
| has a Complete status, it can be ``deploy()`` ed to create a SageMaker
| Endpoint and return a ``Predictor``.
|
| If the training job is in progress, attach will block until the training job
| completes, but logs of the training job will not display. To see the logs
| content, please call ``logs()``
|
| Examples:
| >>> my_estimator.fit(wait=False)
| >>> training_job_name = my_estimator.latest_training_job.name
| Later on:
| >>> attached_estimator = Estimator.attach(training_job_name)
| >>> attached_estimator.logs()
| >>> attached_estimator.deploy()
|
| Args:
| training_job_name (str): The name of the training job to attach to.
| sagemaker_session (sagemaker.session.Session): Session object which
| manages interactions with Amazon SageMaker APIs and any other
| AWS services needed. If not specified, the estimator creates one
| using the default AWS configuration chain.
| model_channel_name (str): Name of the channel where pre-trained
| model data will be downloaded (default: 'model'). If no channel
| with the same name exists in the training job, this option will
| be ignored.
|
| Returns:
| Instance of the calling ``Estimator`` Class with the attached
| training job.
|
| ----------------------------------------------------------------------
| Data and other attributes inherited from sagemaker.estimator.Framework:
|
| CONTAINER_CODE_CHANNEL_SOURCEDIR_PATH = '/opt/ml/input/data/code/sourc...
|
| INSTANCE_TYPE = 'sagemaker_instance_type'
|
| LAUNCH_MPI_ENV_NAME = 'sagemaker_mpi_enabled'
|
| LAUNCH_PS_ENV_NAME = 'sagemaker_parameter_server_enabled'
|
| LAUNCH_SM_DDP_ENV_NAME = 'sagemaker_distributed_dataparallel_enabled'
|
| MPI_CUSTOM_MPI_OPTIONS = 'sagemaker_mpi_custom_mpi_options'
|
| MPI_NUM_PROCESSES_PER_HOST = 'sagemaker_mpi_num_of_processes_per_host'
|
| ----------------------------------------------------------------------
| Methods inherited from sagemaker.estimator.EstimatorBase:
|
| compile_model(self, target_instance_family, input_shape, output_path, framework=None, framework_version=None, compile_max_run=900, tags=None, target_platform_os=None, target_platform_arch=None, target_platform_accelerator=None, compiler_options=None, **kwargs)
| Compile a Neo model using the input model.
|
| Args:
| target_instance_family (str): Identifies the device that you want to
| run your model after compilation, for example: ml_c5. For allowed
| strings see
| https://docs.aws.amazon.com/sagemaker/latest/dg/API_OutputConfig.html.
| input_shape (dict): Specifies the name and shape of the expected
| inputs for your trained model in json dictionary form, for
| example: {'data':[1,3,1024,1024]}, or {'var1': [1,1,28,28],
| 'var2':[1,1,28,28]}
| output_path (str): Specifies where to store the compiled model
| framework (str): The framework that is used to train the original
| model. Allowed values: 'mxnet', 'tensorflow', 'keras', 'pytorch',
| 'onnx', 'xgboost'
| framework_version (str): The version of the framework
| compile_max_run (int): Timeout in seconds for compilation (default:
| 3 * 60). After this amount of time Amazon SageMaker Neo
| terminates the compilation job regardless of its current status.
| tags (list[dict]): List of tags for labeling a compilation job. For
| more, see
| https://docs.aws.amazon.com/sagemaker/latest/dg/API_Tag.html.
| target_platform_os (str): Target Platform OS, for example: 'LINUX'.
| For allowed strings see
| https://docs.aws.amazon.com/sagemaker/latest/dg/API_OutputConfig.html.
| It can be used instead of target_instance_family.
| target_platform_arch (str): Target Platform Architecture, for example: 'X86_64'.
| For allowed strings see
| https://docs.aws.amazon.com/sagemaker/latest/dg/API_OutputConfig.html.
| It can be used instead of target_instance_family.
| target_platform_accelerator (str, optional): Target Platform Accelerator,
| for example: 'NVIDIA'. For allowed strings see
| https://docs.aws.amazon.com/sagemaker/latest/dg/API_OutputConfig.html.
| It can be used instead of target_instance_family.
| compiler_options (dict, optional): Additional parameters for compiler.
| Compiler Options are TargetPlatform / target_instance_family specific. See
| https://docs.aws.amazon.com/sagemaker/latest/dg/API_OutputConfig.html for details.
| **kwargs: Passed to invocation of ``create_model()``.
| Implementations may customize ``create_model()`` to accept
| ``**kwargs`` to customize model creation during deploy. For
| more, see the implementation docs.
|
| Returns:
| sagemaker.model.Model: A SageMaker ``Model`` object. See
| :func:`~sagemaker.model.Model` for full details.
|
| delete_endpoint = func(*args, **kwargs)
|
| deploy(self, initial_instance_count, instance_type, serializer=None, deserializer=None, accelerator_type=None, endpoint_name=None, use_compiled_model=False, wait=True, model_name=None, kms_key=None, data_capture_config=None, tags=None, **kwargs)
| Deploy the trained model to an Amazon SageMaker endpoint and return a
| ``sagemaker.Predictor`` object.
|
| More information:
| http://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-training.html
|
| Args:
| initial_instance_count (int): Minimum number of EC2 instances to
| deploy to an endpoint for prediction.
| instance_type (str): Type of EC2 instance to deploy to an endpoint
| for prediction, for example, 'ml.c4.xlarge'.
| serializer (:class:`~sagemaker.serializers.BaseSerializer`): A
| serializer object, used to encode data for an inference endpoint
| (default: None). If ``serializer`` is not None, then
| ``serializer`` will override the default serializer. The
| default serializer is set by the ``predictor_cls``.
| deserializer (:class:`~sagemaker.deserializers.BaseDeserializer`): A
| deserializer object, used to decode data from an inference
| endpoint (default: None). If ``deserializer`` is not None, then
| ``deserializer`` will override the default deserializer. The
| default deserializer is set by the ``predictor_cls``.
| accelerator_type (str): Type of Elastic Inference accelerator to
| attach to an endpoint for model loading and inference, for
| example, 'ml.eia1.medium'. If not specified, no Elastic
| Inference accelerator will be attached to the endpoint. For more
| information:
| https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html
| endpoint_name (str): Name to use for creating an Amazon SageMaker
| endpoint. If not specified, the name of the training job is
| used.
| use_compiled_model (bool): Flag to select whether to use compiled
| (optimized) model. Default: False.
| wait (bool): Whether the call should wait until the deployment of
| model completes (default: True).
| model_name (str): Name to use for creating an Amazon SageMaker
| model. If not specified, the estimator generates a default job name
| based on the training image name and current timestamp.
| kms_key (str): The ARN of the KMS key that is used to encrypt the
| data on the storage volume attached to the instance hosting the
| endpoint.
| data_capture_config (sagemaker.model_monitor.DataCaptureConfig): Specifies
| configuration related to Endpoint data capture for use with
| Amazon SageMaker Model Monitoring. Default: None.
| tags(List[dict[str, str]]): Optional. The list of tags to attach to this specific
| endpoint. Example:
| >>> tags = [{'Key': 'tagname', 'Value': 'tagvalue'}]
| For more information about tags, see
| https://boto3.amazonaws.com/v1/documentation /api/latest/reference/services/sagemaker.html#SageMaker.Client.add_tags
| **kwargs: Passed to invocation of ``create_model()``.
| Implementations may customize ``create_model()`` to accept
| ``**kwargs`` to customize model creation during deploy.
| For more, see the implementation docs.
|
| Returns:
| sagemaker.predictor.Predictor: A predictor that provides a ``predict()`` method,
| which can be used to send requests to the Amazon SageMaker
| endpoint and obtain inferences.
|
| disable_profiling(self)
| Update the current training job in progress to disable profiling.
|
| Debugger stops collecting the system and framework metrics
| and turns off the Debugger built-in monitoring and profiling rules.
|
| enable_default_profiling(self)
| Update training job to enable Debugger monitoring.
|
| This method enables Debugger monitoring with
| the default ``profiler_config`` parameter to collect system
| metrics and the default built-in ``profiler_report`` rule.
| Framework metrics won't be saved.
| To update training job to emit framework metrics, you can use
| :class:`~sagemaker.estimator.Estimator.update_profiler`
| method and specify the framework metrics you want to enable.
|
| This method is callable when the training job is in progress while
| Debugger monitoring is disabled.
|
| enable_network_isolation(self)
| Return True if this Estimator will need network isolation to run.
|
| Returns:
| bool: Whether this Estimator needs network isolation or not.
|
| fit(self, inputs=None, wait=True, logs='All', job_name=None, experiment_config=None)
| Train a model using the input training dataset.
|
| The API calls the Amazon SageMaker CreateTrainingJob API to start
| model training. The API uses configuration you provided to create the
| estimator and the specified input training data to send the
| CreatingTrainingJob request to Amazon SageMaker.
|
| This is a synchronous operation. After the model training
| successfully completes, you can call the ``deploy()`` method to host the
| model using the Amazon SageMaker hosting services.
|
| Args:
| inputs (str or dict or sagemaker.inputs.TrainingInput): Information
| about the training data. This can be one of three types:
|
| * (str) the S3 location where training data is saved, or a file:// path in
| local mode.
| * (dict[str, str] or dict[str, sagemaker.inputs.TrainingInput]) If using multiple
| channels for training data, you can specify a dict mapping channel names to
| strings or :func:`~sagemaker.inputs.TrainingInput` objects.
| * (sagemaker.inputs.TrainingInput) - channel configuration for S3 data sources
| that can provide additional information as well as the path to the training
| dataset.
| See :func:`sagemaker.inputs.TrainingInput` for full details.
| * (sagemaker.session.FileSystemInput) - channel configuration for
| a file system data source that can provide additional information as well as
| the path to the training dataset.
|
| wait (bool): Whether the call should wait until the job completes (default: True).
| logs ([str]): A list of strings specifying which logs to print. Acceptable
| strings are "All", "None", "Training", or "Rules". To maintain backwards
| compatibility, boolean values are also accepted and converted to strings.
| Only meaningful when wait is True.
| job_name (str): Training job name. If not specified, the estimator generates
| a default job name based on the training image name and current timestamp.
| experiment_config (dict[str, str]): Experiment management configuration.
| Dictionary contains three optional keys,
| 'ExperimentName', 'TrialName', and 'TrialComponentDisplayName'.
|
| get_vpc_config(self, vpc_config_override='VPC_CONFIG_DEFAULT')
| Returns VpcConfig dict either from this Estimator's subnets and
| security groups, or else validate and return an optional override value.
|
| Args:
| vpc_config_override:
|
| latest_job_debugger_artifacts_path(self)
| Gets the path to the DebuggerHookConfig output artifacts.
|
| Returns:
| str: An S3 path to the output artifacts.
|
| latest_job_profiler_artifacts_path(self)
| Gets the path to the profiling output artifacts.
|
| Returns:
| str: An S3 path to the output artifacts.
|
| latest_job_tensorboard_artifacts_path(self)
| Gets the path to the TensorBoardOutputConfig output artifacts.
|
| Returns:
| str: An S3 path to the output artifacts.
|
| logs(self)
| Display the logs for Estimator's training job.
|
| If the output is a tty or a Jupyter cell, it will be color-coded based
| on which instance the log entry is from.
|
| prepare_workflow_for_training(self, job_name=None)
| Calls _prepare_for_training. Used when setting up a workflow.
|
| Args:
| job_name (str): Name of the training job to be created. If not
| specified, one is generated, using the base name given to the
| constructor if applicable.
|
| register(self, content_types, response_types, inference_instances, transform_instances, image_uri=None, model_package_name=None, model_package_group_name=None, model_metrics=None, metadata_properties=None, marketplace_cert=False, approval_status=None, description=None, compile_model_family=None, model_name=None, **kwargs)
| Creates a model package for creating SageMaker models or listing on Marketplace.
|
| Args:
| content_types (list): The supported MIME types for the input data.
| response_types (list): The supported MIME types for the output data.
| inference_instances (list): A list of the instance types that are used to
| generate inferences in real-time.
| transform_instances (list): A list of the instance types on which a transformation
| job can be run or on which an endpoint can be deployed.
| image_uri (str): The container image uri for Model Package, if not specified,
| Estimator's training container image will be used (default: None).
| model_package_name (str): Model Package name, exclusive to `model_package_group_name`,
| using `model_package_name` makes the Model Package un-versioned (default: None).
| model_package_group_name (str): Model Package Group name, exclusive to
| `model_package_name`, using `model_package_group_name` makes the Model Package
| versioned (default: None).
| model_metrics (ModelMetrics): ModelMetrics object (default: None).
| metadata_properties (MetadataProperties): MetadataProperties (default: None).
| marketplace_cert (bool): A boolean value indicating if the Model Package is certified
| for AWS Marketplace (default: False).
| approval_status (str): Model Approval Status, values can be "Approved", "Rejected",
| or "PendingManualApproval" (default: "PendingManualApproval").
| description (str): Model Package description (default: None).
| compile_model_family (str): Instance family for compiled model, if specified, a compiled
| model will be used (default: None).
| model_name (str): User defined model name (default: None).
| **kwargs: Passed to invocation of ``create_model()``. Implementations may customize
| ``create_model()`` to accept ``**kwargs`` to customize model creation during
| deploy. For more, see the implementation docs.
|
| Returns:
| str: A string of SageMaker Model Package ARN.
|
| update_profiler(self, rules=None, system_monitor_interval_millis=None, s3_output_path=None, framework_profile_params=None, disable_framework_metrics=False)
| Update training jobs to enable profiling.
|
| This method updates the ``profiler_config`` parameter
| and initiates Debugger built-in rules for profiling.
|
| Args:
| rules (list[:class:`~sagemaker.debugger.ProfilerRule`]): A list of
| :class:`~sagemaker.debugger.ProfilerRule` objects to define
| rules for continuous analysis with SageMaker Debugger. Currently, you can
| only add new profiler rules during the training job. (default: ``None``)
| s3_output_path (str): The location in S3 to store the output. If profiler is enabled
| once, s3_output_path cannot be changed. (default: ``None``)
| system_monitor_interval_millis (int): How often profiling system metrics are
| collected; Unit: Milliseconds (default: ``None``)
| framework_profile_params (:class:`~sagemaker.debugger.FrameworkProfile`):
| A parameter object for framework metrics profiling. Configure it using
| the :class:`~sagemaker.debugger.FrameworkProfile` class.
| To use the default framework profile parameters, pass ``FrameworkProfile()``.
| For more information about the default values,
| see :class:`~sagemaker.debugger.FrameworkProfile`. (default: ``None``)
| disable_framework_metrics (bool): Specify whether to disable all the framework metrics.
| This won't update system metrics and the Debugger built-in rules for monitoring.
| To stop both monitoring and profiling,
| use the :class:`~sagemaker.estimator.Estimator.desable_profiling`
| method. (default: ``False``)
|
| .. attention::
|
| Updating the profiling configuration for TensorFlow dataloader profiling
| is currently not available. If you started a TensorFlow training job only with
| monitoring and want to enable profiling while the training job is running,
| the dataloader profiling cannot be updated.
|
| ----------------------------------------------------------------------
| Readonly properties inherited from sagemaker.estimator.EstimatorBase:
|
| model_data
| str: The model location in S3. Only set if Estimator has been
| ``fit()``.
|
| training_job_analytics
| Return a ``TrainingJobAnalytics`` object for the current training
| job.
|
| ----------------------------------------------------------------------
| Data descriptors inherited from sagemaker.estimator.EstimatorBase:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
| MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
Specify data input and output | inputs = {
'train': '{}/train'.format(dataset_uri),
'validation': '{}/validation'.format(dataset_uri),
'eval': '{}/eval'.format(dataset_uri),
} | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
Execute Training | estimator.fit(inputs) | 2021-02-08 06:06:01 Starting - Starting the training job...
2021-02-08 06:06:25 Starting - Launching requested ML instancesProfilerReport-1612764359: InProgress
......
2021-02-08 06:07:32 Starting - Preparing the instances for training......... | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
Checking the accuracy of a model with TensorBoard

Using the visualization tool [TensorBoard](https://www.tensorflow.org/tensorboard), we can compare our training jobs.

In a local setting, install TensorBoard with `pip install tensorboard`. Then run the command generated by the following code: | !python generate_tensorboard_command.py
! AWS_REGION=us-west-2 tensorboard --logdir file:"s3://sagemaker-us-west-2-005242542034/cifar10-tf-2021-02-08-04-01-54-836/model" | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
After running that command, we can access TensorBoard locally at http://localhost:6006. Based on the TensorBoard metrics, we can see that:

1. All jobs run for 10 epochs (0 - 9).
2. Both File Mode and Pipe Mode run for ~1 minute - Pipe Mode doesn't affect training performance.
3. Distributed training runs for only 45 seconds.
4. All of the training jobs resulted in similar validation accuracy.

This example uses a relatively small dataset (179 MB). For larger datasets, Pipe Mode can significantly reduce training time because it does not copy the entire dataset into local memory; a sketch of enabling Pipe Mode follows the deployment cell below.

Predict by trained Model

Deploy the trained model | predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge') | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
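Returning to the Pipe Mode point above, here is a minimal hedged sketch of how it could be enabled on this estimator. `input_mode` is a standard SageMaker estimator argument; whether `cifar10_keras_main.py` can consume streamed Pipe Mode channels (e.g. via `PipeModeDataset`) is an assumption here:

```python
# Hypothetical Pipe Mode variant of the estimator defined earlier.
# Assumes the entry script reads the TFRecord channels as streams.
pipe_estimator = TensorFlow(entry_point='cifar10_keras_main.py',
                            source_dir='source_dir',
                            role=role,
                            framework_version='1.15.2',
                            py_version='py3',
                            instance_count=1,
                            instance_type=instance_type,
                            input_mode='Pipe',  # stream data from S3 instead of copying it
                            base_job_name='cifar10-tf-pipe')
# pipe_estimator.fit(inputs)
```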
Invoke the endpoint

I'll try to generate a random matrix and see if the predictor is working. | import numpy as np
data = np.random.randn(1, 32, 32, 3)
print('Predicted class: {}'.format(np.argmax(predictor.predict(data)['predictions']))) | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
Download the dataset for prediction | from tensorflow.keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data() | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
Prediction | from tensorflow.keras.preprocessing.image import ImageDataGenerator
def predict(data):
predictions = predictor.predict(data)['predictions']
return predictions
predicted = []
actual = []
batches = 0
batch_size = 128
datagen = ImageDataGenerator()
for data in datagen.flow(x_test, y_test, batch_size=batch_size):
for i, prediction in enumerate(predict(data[0])):
predicted.append(np.argmax(prediction))
actual.append(data[1][i][0])
batches += 1
if batches >= len(x_test) / batch_size:
break | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
Accuracy | from sklearn.metrics import accuracy_score
accuracy = accuracy_score(y_pred=predicted, y_true=actual)
display('Average accuracy: {}%'.format(round(accuracy * 100, 2))) | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
Confusion Matrix | %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sn
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_pred=predicted, y_true=actual)
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
sn.set(rc={'figure.figsize': (11.7,8.27)})
sn.set(font_scale=1.4) # for label size
sn.heatmap(cm, annot=True, annot_kws={"size": 10}) # font size | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
Cleanup

To avoid incurring extra charges to your AWS account, let's delete the endpoint we created: | predictor.delete_endpoint() | _____no_output_____ | MIT | 2_training/Custom_Model/tensorflow/keras_script_mode_pipe_mode_horovod/keras_CNN_CIFAR10.ipynb | RyutaroHashimoto/aws_sagemaker |
We will use Naive Bayes to model the "Pima Indians Diabetes" data set. This model will predict which people are likely to develop diabetes.

This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether or not a patient has diabetes, based on certain diagnostic measurements included in the dataset. Several constraints were placed on the selection of these instances from a larger database. In particular, all patients here are females at least 21 years old of Pima Indian heritage.

Import Libraries | # data processing, CSV file I/O
import pandas as pd
# matplotlib.pyplot plots data
import matplotlib.pyplot as plt
| _____no_output_____ | MIT | Naive_Bayes_Diabetes/Naive_Bayes.ipynb | abhisngh/Data-Science |
Load and review data | # Check number of columns and rows in data frame
# To check first 5 rows of data set
# If there are any null values in data set
# Excluding Outcome column
# Histogram of first 8 columns
| _____no_output_____ | MIT | Naive_Bayes_Diabetes/Naive_Bayes.ipynb | abhisngh/Data-Science |
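The cell above contains only comments in this dump, so here is a minimal hedged sketch of what it describes; the filename `diabetes.csv` and the `df` variable name are assumptions:

```python
import pandas as pd

df = pd.read_csv('diabetes.csv')       # hypothetical filename
print(df.shape)                        # number of rows and columns
print(df.head())                       # first 5 rows
print(df.isnull().sum())               # any null values per column
df.iloc[:, :8].hist(figsize=(10, 8))   # histogram of first 8 columns (excluding Outcome)
```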
Identify Correlation in data | #show correlation matrix
# However we want to see correlation in graphical representation
| _____no_output_____ | MIT | Naive_Bayes_Diabetes/Naive_Bayes.ipynb | abhisngh/Data-Science |
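Again, only comments above; a hedged sketch of both the numeric and the graphical view of the correlation matrix (assuming the `df` from the previous sketch):

```python
import matplotlib.pyplot as plt
import seaborn as sn

print(df.corr())                   # show correlation matrix as numbers
plt.figure(figsize=(10, 8))
sn.heatmap(df.corr(), annot=True)  # graphical representation
plt.show()
```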
Calculate diabetes ratio of True/False from outcome variable

Splitting the data

Let's check the split of data

Now let's check the diabetes True/False ratio in the split data

Data Preparation

Check hidden missing values

We checked for missing values earlier and found none, but there can still be many entries with 0 values. We need to take care of those as well.

Replace 0s with serial mean

Train the Naive Bayes algorithm

Performance of our model with training data

Performance of our model with testing data

Let's check the confusion matrix and classification report

A sketch of these steps follows the cell below. | # Print Classification report
| _____no_output_____ | MIT | Naive_Bayes_Diabetes/Naive_Bayes.ipynb | abhisngh/Data-Science |
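Here is that sketch: the column names follow the standard Pima Indians Diabetes CSV layout, while the filename `diabetes.csv` and the split settings are assumptions:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix, classification_report

df = pd.read_csv('diabetes.csv')  # hypothetical filename

# Replace hidden missing values (0s) with the column mean ("serial mean").
zero_cols = ['Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI']
df[zero_cols] = df[zero_cols].replace(0, np.nan)
df[zero_cols] = df[zero_cols].fillna(df[zero_cols].mean())

X = df.drop(columns='Outcome')
y = df['Outcome']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

nb = GaussianNB().fit(X_train, y_train)
print('Train accuracy:', nb.score(X_train, y_train))
print('Test accuracy:', nb.score(X_test, y_test))
print(confusion_matrix(y_test, nb.predict(X_test)))
print(classification_report(y_test, nb.predict(X_test)))
```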
Exercise Solutions - List I

1. Create three variables and assign the following values: a=1, b=5.9 and c='teste'. Then, return the type of each of the variables. | # Creating the variables
a=1
b=5.9
c='teste'
# Returning the type of each variable
print("Variable types:\n>> Variable 'a' is of type {typea}."
      "\n>> Variable 'b' is of type {typeb}."
      "\n>> Variable 'c' is of type {typec}".format(typea=type(a),
                                                    typeb=type(b),
                                                    typec=type(c))) | Variable types:
>> Variable 'a' is of type <class 'int'>.
>> Variable 'b' is of type <class 'float'>.
>> Variable 'c' is of type <class 'str'>
| MIT | Aula1/ResolucaoExercicios_Aula01.ipynb | anablima/CursoUSP_PythonNLP |
2. Change the value of variable a to '1' and check whether the variable's type changed. | # Changing the variable
a='1'
# Returning the new type of the variable
print("The type of variable 'a' changed to ", type(a)) | The type of variable 'a' changed to  <class 'str'>
| MIT | Aula1/ResolucaoExercicios_Aula01.ipynb | anablima/CursoUSP_PythonNLP |
3. Add variable b to variable c. Interpret the output, both in the case of correct execution and in the case of an error. | print(b+c)
# We cannot perform arithmetic operations between variables of different types.
# Both variables must be of the same type, otherwise an error is raised. | _____no_output_____ | MIT | Aula1/ResolucaoExercicios_Aula01.ipynb | anablima/CursoUSP_PythonNLP |
4. Create a list with numbers from 0 to 9 (in any order) and:
* a) Add the number 6
* b) Insert the number 7 at the 3rd position of the list
* c) Remove the element 3 from the list
* d) Add the number 4
* e) Check the number of occurrences of the number 4 in the list | # Creating the list
l1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
l1
# a) Add the number 6
l1.append(6)
l1
# b) Insert the number 7 at the 3rd position of the list
l1.insert(2,7)
l1
# c) Remove the element 3 from the list
l1.remove(3)
l1
# d) Add the number 4
l1.append(4)
l1
# e) Check the number of occurrences of the number 4 in the list
print(l1.count(4)) | 2
| MIT | Aula1/ResolucaoExercicios_Aula01.ipynb | anablima/CursoUSP_PythonNLP |
5. Still using the list created in the previous question:
* a) Return the first 3 elements of the list
* b) Return the elements from the 3rd position to the 7th position of the list
* c) Return the elements of the list in steps of 3 elements
* d) Return the last 3 elements of the list
* e) Return all elements except the last 4 of the list | # a) Return the first 3 elements of the list
print('List:', l1)
print('\nFirst 3 elements of the list:', l1[:3])
# b) Return the elements from the 3rd position to the 7th position of the list
print('List:', l1)
print('\nElements from the 3rd to the 7th position of the list:', l1[2:7])
# c) Return the elements of the list in steps of 3 elements
print('Positions 1 to 3: ', l1[:3])
print('Positions 4 to 6: ', l1[3:6])
print('Positions 7 to 9: ', l1[6:9])
print('Positions 10 to 12: ', l1[9:12])
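# Note: if item c) is read as "every 3rd element" rather than chunks of 3,
# a step slice does it directly: l1[::3]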
# d) Return the last 3 elements of the list
print('List:', l1)
print('\nLast 3 elements of the list:', l1[-3:])
# e) Return all elements except the last 4 of the list
print('List:', l1)
print('\nAll elements except the last 4 of the list:', l1[:-4]) | List: [0, 1, 7, 2, 4, 5, 6, 7, 8, 9, 6, 4]
All elements except the last 4 of the list: [0, 1, 7, 2, 4, 5, 6, 7]
| MIT | Aula1/ResolucaoExercicios_Aula01.ipynb | anablima/CursoUSP_PythonNLP |
6. Using the list from the previous questions, return the 6th element of the list. | print('List:', l1)
print('\n6th position of the list:', l1[6]) | List: [0, 1, 2, 4, 4, 5, 6, 7, 9, 12]
6th position of the list: 7
| MIT | Aula1/ResolucaoExercicios_Aula01.ipynb | anablima/CursoUSP_PythonNLP |
7. Change the value of the 7th element of the list to the value 12. | print('List:', l1)
l1[6] = 12
print('\nList with the change:', l1) |
List with the change: [0, 1, 7, 2, 4, 5, 12, 9, 6, 4]
| MIT | Aula1/ResolucaoExercicios_Aula01.ipynb | anablima/CursoUSP_PythonNLP |
8. Reverse the order of the elements in the list. | print('List:', l1)
l1.reverse()
print('\nReversed list:', l1) |
Reversed list: [12, 9, 7, 6, 5, 4, 4, 2, 1, 0]
| MIT | Aula1/ResolucaoExercicios_Aula01.ipynb | anablima/CursoUSP_PythonNLP |
9. Sort the list | print('List:', l1)
l1.sort()
print('\nSorted list:', l1) |
Sorted list: [0, 1, 2, 4, 4, 5, 6, 7, 9, 12]
| MIT | Aula1/ResolucaoExercicios_Aula01.ipynb | anablima/CursoUSP_PythonNLP |
10. Create a tuple with numbers from 0 to 9 (in any order) and try to:
* a) Change the value of the 3rd element of the tuple to the value 10
* b) Check the index (position) of the value 5 in the tuple | # Creating the tuple
t1 = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
t1
# a) Change the value of the 3rd element of the tuple to the value 10
t1[3] = 10
t1
# Tuples are not mutable; only lists are.
# b) Check the index (position) of the value 5 in the tuple
print('Tuple: ', t1)
print('\nIndex of the number 5 is:', t1.index(5)) | Tuple:  (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
Index of the number 5 is: 5
| MIT | Aula1/ResolucaoExercicios_Aula01.ipynb | anablima/CursoUSP_PythonNLP |
Boltzmann Machines

A Boltzmann machine is a type of stochastic recurrent neural network. It is a Markov random field, i.e. an undirected graphical model: a set of random variables that has the *Markov property* (the conditional probability distribution of future states of the process, conditional on both past and present states, depends only upon the present state, not on the sequence of events that preceded it). They were one of the first neural networks capable of learning internal representations, and are able to represent and (given sufficient time) solve combinatoric problems.

They are named after the Boltzmann distribution in statistical mechanics, which is used in their sampling function. That's why they are called "energy based models" (EBM). They were invented in 1985 by Geoffrey Hinton, then a Professor at Carnegie Mellon University, and Terry Sejnowski, then a Professor at Johns Hopkins University.

[[1](https://en.wikipedia.org/wiki/File:Boltzmannexamplev1.png)]

> A graphical representation of an example Boltzmann machine. Each undirected edge represents dependency. In this example there are 3 hidden units and 4 visible units. This is not a restricted Boltzmann machine.

The units in the Boltzmann machine are divided into 'visible' units, $\mathbf{v}$, and 'hidden' units, $\mathbf{h}$. The visible units are those that receive information from the 'environment', i.e. the training set is a set of binary vectors over the set $\mathbf{v}$. The distribution over the training set is denoted $P^{+}(\mathbf{v})$. Note that all nodes form a complete graph (where all units are connected to all other units).

Restricted Boltzmann machine

A restricted Boltzmann machine (RBM) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs. RBMs are a variant of Boltzmann machines, with the restriction that their neurons must form a bipartite graph: a pair of nodes from each of the two groups of units (commonly referred to as the "visible" and "hidden" units respectively) may have a symmetric connection between them; and there are no connections between nodes within a group. By contrast, "unrestricted" Boltzmann machines may have connections between hidden units. This restriction allows for more efficient training algorithms than are available for the general class of Boltzmann machines, in particular the gradient-based contrastive divergence algorithm.

Restricted Boltzmann machines can also be used in deep learning networks. In particular, deep belief networks can be formed by "stacking" RBMs and optionally fine-tuning the resulting deep network with gradient descent and backpropagation.

[[2](https://en.wikipedia.org/wiki/File:Restricted_Boltzmann_machine.svg)]

> Diagram of a restricted Boltzmann machine with three visible units and four hidden units (no bias units).

Restricted Boltzmann machines (RBM) are unsupervised nonlinear feature learners based on a probabilistic model. The features extracted by an RBM or a hierarchy of RBMs often give good results when fed into a linear classifier such as a linear SVM or a perceptron.

The model makes assumptions regarding the distribution of inputs. At the moment, scikit-learn only provides `BernoulliRBM`, which assumes the inputs (and all units) are either binary values or values between 0 and 1, each encoding the probability that the specific feature would be turned on.

The RBM tries to maximize the likelihood of the data using a particular graphical model.
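For reference, the energy function and joint distribution that this family of models is built on can be written as follows (this is the standard RBM formulation, with $\mathbf{v}$, $\mathbf{h}$ the visible and hidden units, $W$ the weights, $a, b$ the biases, and $Z$ the partition function):

$$E(\mathbf{v}, \mathbf{h}) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i W_{ij} h_j, \qquad P(\mathbf{v}, \mathbf{h}) = \frac{e^{-E(\mathbf{v}, \mathbf{h})}}{Z}$$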
The parameter learning algorithm used (`Stochastic Maximum Likelihood`) prevents the representations from straying far from the input data, which makes them capture interesting regularities, but makes the model less useful for small datasets, and usually not useful for density estimation.

The time complexity of this implementation is $O(d^2)$ assuming $d \sim n_{features} \sim n_{components}$.

The method gained popularity for initializing deep neural networks with the weights of independent RBMs. This method is known as unsupervised pre-training.

Example: RBM features for digit classification

For greyscale image data where pixel values can be interpreted as degrees of blackness on a white background, like handwritten digit recognition, the Bernoulli Restricted Boltzmann machine model (`BernoulliRBM`) can perform effective non-linear feature extraction.

In order to learn good latent representations from a small dataset, we artificially generate more labeled data by perturbing the training data with linear shifts of 1 pixel in each direction.

This example shows how to build a classification pipeline with a BernoulliRBM feature extractor and a `LogisticRegression` classifier. The hyperparameters of the entire model (learning rate, hidden layer size, regularization) were optimized by grid search, but the search is not reproduced here because of runtime constraints.

Logistic regression on raw pixel values is presented for comparison. The example shows that the features extracted by the BernoulliRBM help improve the classification accuracy. | import numpy as np
from sklearn.neural_network import BernoulliRBM
X = np.array([[0.5, 0, 0], [0, 0.7, 1], [1, 0, 1], [1, 0.2, 1]])
rbm = BernoulliRBM(n_components=2)
rbm.fit(X)
print('Shape of X: {}'.format(X.shape))
X_r = rbm.transform(X)
print('Dimensionality reduced X : \n{}'.format(X_r))
import numpy as np
import matplotlib.pyplot as plt

from scipy.ndimage import convolve
from sklearn import linear_model, datasets, metrics
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.base import clone
# #############################################################################
# Setting up
def nudge_dataset(X, Y):
"""
This produces a dataset 5 times bigger than the original one,
by moving the 8x8 images in X around by 1px to left, right, down, up
"""
direction_vectors = [
[[0, 1, 0],
[0, 0, 0],
[0, 0, 0]],
[[0, 0, 0],
[1, 0, 0],
[0, 0, 0]],
[[0, 0, 0],
[0, 0, 1],
[0, 0, 0]],
[[0, 0, 0],
[0, 0, 0],
[0, 1, 0]]]
def shift(x, w):
return convolve(x.reshape((8, 8)), mode='constant', weights=w).ravel()
X = np.concatenate([X] +
[np.apply_along_axis(shift, 1, X, vector)
for vector in direction_vectors])
Y = np.concatenate([Y for _ in range(5)], axis=0)
return X, Y
# Load Data
X, y = datasets.load_digits(return_X_y=True)
X = np.asarray(X, 'float32')
X, Y = nudge_dataset(X, y)
X = (X - np.min(X, 0)) / (np.max(X, 0) + 0.0001) # 0-1 scaling
X_train, X_test, Y_train, Y_test = train_test_split(
X, Y, test_size=0.2, random_state=0)
# Models we will use
logistic = linear_model.LogisticRegression(solver='newton-cg', tol=1)
rbm = BernoulliRBM(random_state=0, verbose=True)
rbm_features_classifier = Pipeline(
steps=[('rbm', rbm), ('logistic', logistic)])
# #############################################################################
# Training
# Hyper-parameters. These were set by cross-validation,
# using a GridSearchCV. Here we are not performing cross-validation to
# save time.
rbm.learning_rate = 0.06
rbm.n_iter = 10
# More components tend to give better prediction performance, but larger
# fitting time
rbm.n_components = 100
logistic.C = 6000
# Training RBM-Logistic Pipeline
rbm_features_classifier.fit(X_train, Y_train)
# Training the Logistic regression classifier directly on the pixel
raw_pixel_classifier = clone(logistic)
raw_pixel_classifier.C = 100.
raw_pixel_classifier.fit(X_train, Y_train)
# #############################################################################
# Evaluation
Y_pred = rbm_features_classifier.predict(X_test)
print("Logistic regression using RBM features:\n%s\n" % (
metrics.classification_report(Y_test, Y_pred)))
Y_pred = raw_pixel_classifier.predict(X_test)
print("Logistic regression using raw pixel features:\n%s\n" % (
metrics.classification_report(Y_test, Y_pred)))
# #############################################################################
# Plotting
scale = 3.25
plt.figure(figsize=(4.2 * scale, 4 * scale))
for i, comp in enumerate(rbm.components_):
plt.subplot(10, 10, i + 1)
plt.imshow(comp.reshape((8, 8)), cmap=plt.cm.gray_r,
interpolation='nearest')
plt.xticks(())
plt.yticks(())
plt.suptitle('100 components extracted by RBM', fontsize=16)
plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.23)
plt.show() | [BernoulliRBM] Iteration 1, pseudo-likelihood = -25.39, time = 0.13s
[BernoulliRBM] Iteration 2, pseudo-likelihood = -23.77, time = 0.17s
[BernoulliRBM] Iteration 3, pseudo-likelihood = -22.94, time = 0.18s
[BernoulliRBM] Iteration 4, pseudo-likelihood = -21.91, time = 0.17s
[BernoulliRBM] Iteration 5, pseudo-likelihood = -21.69, time = 0.17s
[BernoulliRBM] Iteration 6, pseudo-likelihood = -21.06, time = 0.17s
[BernoulliRBM] Iteration 7, pseudo-likelihood = -20.89, time = 0.17s
[BernoulliRBM] Iteration 8, pseudo-likelihood = -20.64, time = 0.16s
[BernoulliRBM] Iteration 9, pseudo-likelihood = -20.36, time = 0.17s
[BernoulliRBM] Iteration 10, pseudo-likelihood = -20.09, time = 0.15s
Logistic regression using RBM features:
precision recall f1-score support
0 0.99 0.98 0.99 174
1 0.92 0.94 0.93 184
2 0.95 0.95 0.95 166
3 0.96 0.89 0.92 194
4 0.96 0.95 0.95 186
5 0.93 0.91 0.92 181
6 0.98 0.98 0.98 207
7 0.93 0.99 0.96 154
8 0.87 0.89 0.88 182
9 0.88 0.91 0.89 169
accuracy 0.94 1797
macro avg 0.94 0.94 0.94 1797
weighted avg 0.94 0.94 0.94 1797
Logistic regression using raw pixel features:
precision recall f1-score support
0 0.90 0.92 0.91 174
1 0.60 0.58 0.59 184
2 0.75 0.85 0.80 166
3 0.78 0.78 0.78 194
4 0.81 0.84 0.82 186
5 0.76 0.77 0.77 181
6 0.91 0.87 0.89 207
7 0.85 0.88 0.87 154
8 0.67 0.58 0.62 182
9 0.75 0.77 0.76 169
accuracy 0.78 1797
macro avg 0.78 0.78 0.78 1797
weighted avg 0.78 0.78 0.78 1797
| MIT | section_4/4-7.ipynb | PacktPublishing/Hands-On-Machine-Learning-with-Scikit-Learn-and-TensorFlow-2.0 |
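Beyond feature extraction, a fitted `BernoulliRBM` can also be sampled from via its public `gibbs` method (one full visible to hidden to visible step). A hedged sketch follows; binarizing the input first is an assumption made here to match the Bernoulli units:

```python
# Run one Gibbs sampling step on a test digit and compare with the input.
v = X_test[:1]
v_sampled = rbm.gibbs(v > 0.5)

plt.figure(figsize=(4, 2))
plt.subplot(1, 2, 1)
plt.imshow(v.reshape(8, 8), cmap=plt.cm.gray_r)
plt.title('input')
plt.subplot(1, 2, 2)
plt.imshow(v_sampled.reshape(8, 8), cmap=plt.cm.gray_r)
plt.title('1 Gibbs step')
plt.show()
```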
To run the code, you need to enable CUDA in the settings. You can enable it in the menu: `Runtime > Change runtime type` and choose GPU as the hardware accelerator. | # install shapefromprojections package
%cd /content
!git clone https://github.com/jakeoung/ShapeFromProjections
%cd ShapeFromProjections
!pip install -e .
import sys
import os
sys.path.append(os.getcwd())
# install CUDA kernels
%cd ctdr/cuda
!python build.py build_ext --inplace
%cd ../../run
import numpy as np
import matplotlib.pyplot as plt
import os
import torch
import h5py
import time
import ctdr
from parse_args import args, update_args
from ctdr.model.vanilla import Model
from ctdr.dataset import init_mesh
from ctdr.utils import util_mesh
from ctdr import optimize
import subprocess
#torch.backends.cudnn.benchmark=True
#------------------------------------------------
# load data
#------------------------------------------------
from ctdr.dataset import dataset
args.data='2starA'
# args.niter=3000
update_args(args)
if args.data.find("tomop") > 0:
args.nmaterials = int(args.data[-3:-1])+1
ds = dataset.SinoDataset(args.ddata, args.nmaterials, args.eta)
width_physical = ds.proj_geom['DetectorSpacingX']*ds.proj_geom['DetectorColCount']
height_physical = ds.proj_geom['DetectorSpacingY']*ds.proj_geom['DetectorRowCount']
physical_unit = min(width_physical, height_physical)
finit_obj = args.ddata+'/init.obj'
# if os.path.exists(finit_obj) == False:
if True:
init_mesh.save_init_mesh(finit_obj, args.data, args.nmaterials, physical_unit, args.subdiv)
else:
print(f"Use existing init file {finit_obj}")
use_center_param = False
mus = np.arange(ds.nmaterials) / (ds.nmaterials-1)
print(finit_obj)
# refine
model = Model(finit_obj, ds.proj_geom, args.nmaterials,
mus, args.nmu0, wlap=args.wlap, wflat=args.wflat).cuda()
def get_params(model, exclude_mus=False):
return model.parameters()
def run_simple(model, ds, niter, args):
print("@ model.mus", model.mus)
params = get_params(model)
opt = torch.optim.Adam(params, args.lr, betas=(0.9, 0.99))
idx_angles_full = torch.LongTensor(np.arange(ds.nangles))
p_full = ds.p.cuda()
ds_loader = [ [ idx_angles_full, p_full ] ]
mask_bg = ds.p < 1e-5
mask_bg = mask_bg.cuda()
print(f"@ statistics of mesh: {model.vertices.shape[0]}, {model.faces.shape[0]}\n")
#mask_bg = 1
ledge = 0
llap = 0.
lflat = 0.
for epoch in range(niter):
# if epoch % 20 == 0 or epoch == niter-1:
for idx_angles, p_batch in ds_loader:
displace_prev = model.displace.data.clone()
if args.b > 0:
p_batch = p_batch.cuda()
opt.zero_grad()
phat, mask_valid, edge_loss, lap_loss, flat_loss = model(idx_angles, args.wedge) # full angles
# phat[~mask_valid] = 0.0
# mask_valid = mask_valid + mask_bg
# l2 loss
data_loss = (p_batch - phat)[mask_valid].pow(2).mean()
loss = data_loss + args.wedge * edge_loss + args.wlap * lap_loss + args.wflat * flat_loss
loss.backward()
opt.step()
loss_now = loss.item()
model.mus.data.clamp_(min=0.0)
if epoch % 20 == 0 or epoch == niter-1:
if args.wedge > 0.:
ledge = edge_loss.item()
if args.wlap > 0.:
llap = lap_loss.item()
if args.wflat > 0.:
lflat = flat_loss.item()
plt.imshow(phat.detach().cpu().numpy()[1,:,:]); plt.show()
print(f'~ {epoch:03d} l2_loss: {data_loss.item():.8f} edge: {ledge:.6f} lap: {llap:.6f} flat: {lflat:.6f} mus: {str(model.mus.cpu().detach().numpy())}')
return phat
args.wlap = 10.0
args.wflat = 0.0
args.wedge = 1.0
phat = run_simple(model, ds, 200, args)
# Show the projection image of data and our estimation
plt.imshow(ds.p[1,:,:]); plt.show()
plt.imshow(phat.detach().cpu().numpy()[1,:,:]); plt.show()
# Optional: save the results
# vv = model.vertices.cpu()+model.displace.detach().cpu()
# ff = model.faces.cpu()
# labels_v, labels_f = model.labels_v_np, model.labels.cpu().numpy()
# # util_vis.save_vf_as_img_labels(args.dresult+f'{epoch:04d}_render.png', vv, ff, labels_v, labels_f)
# util_vis.save_sino_as_img(args.dresult+f'{epoch:04d}_sino.png', phat.detach().cpu().numpy())
# util_mesh.save_mesh(args.dresult+f'{epoch:04d}.obj', vv.numpy(), ff.numpy(), labels_v, labels_f)
# util_mesh.save_mesh(args.dresult+'mesh.obj', vv.numpy(), ff.numpy(), labels_v, labels_f)
# util_vis.save_sino_as_img(args.dresult+f'{epoch:04d}_data.png', ds.p.cuda()) | _____no_output_____ | MIT | ctdr_toy_example.ipynb | Aarya-Create/PBL-Mesh |
Find the comparables: extra_features.txtThe file `extra_features.txt` contains important property information like number and quality of pools, detached garages, outbuildings, canopies, and more. Let's load this file and grab a subset with the important columns to continue our study. | %load_ext autoreload
%autoreload 2
from pathlib import Path
import pickle
import pandas as pd
from src.definitions import ROOT_DIR
from src.data.utils import Table, save_pickle
extra_features_fn = ROOT_DIR / 'data/external/2016/Real_building_land/extra_features.txt'
assert extra_features_fn.exists()
extra_features = Table(extra_features_fn, '2016')
extra_features.get_header() | _____no_output_____ | BSD-3-Clause | notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb | RafaelPinto/hcad_pred |
Load accounts of interestLet's remove the account numbers that don't meet free-standing single-family home criteria that we found while processing the `building_res.txt` file. | skiprows = extra_features.get_skiprows()
extra_features_df = extra_features.get_df(skiprows=skiprows)
extra_features_df.head()
extra_features_df.l_dscr.value_counts().head(25) | _____no_output_____ | BSD-3-Clause | notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb | RafaelPinto/hcad_pred |
Grab slice of the extra features of interestWith the value counts on the extra feature description performed above, we can see that the majority of the features land in the top 15 categories. Let's keep only the rows in these categories and filter out the rest. | cols = extra_features_df.l_dscr.value_counts().head(15).index
cond0 = extra_features_df['l_dscr'].isin(cols)
extra_features_df = extra_features_df.loc[cond0, :] | _____no_output_____ | BSD-3-Clause | notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb | RafaelPinto/hcad_pred |
Build pivot tables for units and gradeThere appear to be two important values related to each extra feature: uts (unit area in square feet) and grade. Since a property can have multiple features of the same class, e.g. frame utility shed, let's aggregate them by adding the uts values, and also by taking the mean of the grades within the same feature class. Let's build individual pivot tables for each and merge them before saving them out. | extra_features_pivot_uts = extra_features_df.pivot_table(index='acct',
columns='l_dscr',
values='uts',
aggfunc='sum',
fill_value=0)
extra_features_pivot_uts.head()
extra_features_pivot_grade = extra_features_df.pivot_table(index='acct',
columns='l_dscr',
values='grade',
aggfunc='mean',
)
extra_features_pivot_grade.head()
extra_features_uts_grade = extra_features_pivot_uts.merge(extra_features_pivot_grade,
how='left',
left_index=True,
right_index=True,
suffixes=('_uts', '_grade'),
validate='one_to_one')
extra_features_uts_grade.head()
assert extra_features_uts_grade.index.is_unique | _____no_output_____ | BSD-3-Clause | notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb | RafaelPinto/hcad_pred |
add `acct` column to make the merging process ahead easier | extra_features_uts_grade.reset_index(inplace=True) | _____no_output_____ | BSD-3-Clause | notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb | RafaelPinto/hcad_pred
Fix column namesWe would like the column names to be all lower case, with no spaces nor non-alphanumeric characters. | from src.data.utils import fix_column_names
extra_features_uts_grade.columns
extra_features_uts_grade = fix_column_names(extra_features_uts_grade)
extra_features_uts_grade.columns | _____no_output_____ | BSD-3-Clause | notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb | RafaelPinto/hcad_pred |
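For reference, here is a plausible sketch of what a helper like `fix_column_names` might do. The real implementation lives in `src.data.utils`, so the version below is an assumption, not the project's actual code:
import re

def fix_column_names_sketch(df):
    # lowercase every column name and replace runs of non-alphanumeric characters with underscores
    df = df.copy()
    df.columns = [re.sub(r'[^0-9a-zA-Z]+', '_', str(col)).strip('_').lower() for col in df.columns]
    return df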
Find duplicated rows | cond0 = extra_features_uts_grade.duplicated()
extra_features_uts_grade.loc[cond0, :] | _____no_output_____ | BSD-3-Clause | notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb | RafaelPinto/hcad_pred |
Describe | extra_features_uts_grade.info()
extra_features_uts_grade.describe() | _____no_output_____ | BSD-3-Clause | notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb | RafaelPinto/hcad_pred |
Export extra_features_uts_grade | save_fn = ROOT_DIR / 'data/raw/2016/extra_features_uts_grade_comps.pickle'
save_pickle(extra_features_uts_grade, save_fn) | _____no_output_____ | BSD-3-Clause | notebooks/01_Exploratory/1.3-rp-hcad-data-view-extra_features.ipynb | RafaelPinto/hcad_pred |
Data Analysis Project In our data project, we use data imported directly from the World Bank. We have chosen to focus on nine different countries: Brazil, China, Denmark, India, Japan, Nigeria, Spain, Turkmenistan and the US. These countries are chosen because they are relatively different, which makes the analysis more interesting. The variables of interest are: GDP per Capita, GDP (current US $), Total Population, Urban Population in %, Fertility Rate and Literacy Rate. The notebook is organized as follows 1. Data Cleaning and Structuring - Setup - Download Data directly from World Bank - Overview of the Data and Adaption - Detection of Missing Data - Cleaned Data Set 2. Data Analysis and Visualisations - Interactive GDP per Capita Plot - World Map Displaying GDP per Capita - Data Visualization on Fertility Rate 3. Regression Data Cleaning and Structuring Setup | import pandas as pd
import numpy as np | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
**We import the packages** we need. If we do not have the packages, we have to install them. Therefore, install:>`pip install pandas-datareader`>`pip install wbdata` | import pandas_datareader
import datetime | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We import the tools needed to download data directly from the World Bank: | from pandas_datareader import wb | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
Download Data directly from the World Data Bank We define the countries for the download:China, Japan, Brazil, U.S., Denmark, Spain, Turkmenistan, India, Nigeria. | countries = ['CN','JP','BR','US','DK','ES','TM','IN','NG'] | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We define the indicators for the download:GDP per capita, GDP (current US $), Population total, Urban Population in %, Fertility Rate, Literacy rate. | indicators = {"NY.GDP.PCAP.KD":"GDP per capita", "NY.GDP.MKTP.CD":"GDP(current US $)", "SP.POP.TOTL":"Population total",
"SP.URB.TOTL.IN.ZS":"Urban Population in %", "SP.DYN.TFRT.IN":"Fertility Rate", "SE.ADT.LITR.ZS": "Literacy rate, adult total in %" } | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We download the data and have a look at the table. | data_wb = wb.download(indicator= indicators, country= countries, start=1990, end=2017)
data_wb = data_wb.rename(columns = {"NY.GDP.PCAP.KD":"gdp_pC","NY.GDP.MKTP.CD":"gdp", "SP.POP.TOTL":"pop", "SP.URB.TOTL.IN.ZS":"urban_pop%",
"SP.DYN.TFRT.IN":"frt", "SE.ADT.LITR.ZS":"litr"})
data_wb = data_wb.reset_index()
data_wb.head(-5) | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We save the data file as an Excel sheet in the same folder as the current file. | writer = pd.ExcelWriter('pandas_simple.xlsx', engine='xlsxwriter') # note: this writer object is created but not used by the export below
data_wb.to_excel(r"./data_wb1.xlsx") | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
Overview of the Data and Adaption | #Tonje
data_wb.dtypes | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
In order to ease the reading of the tables, we add a thousands separator to all floats for the rest of the notebook. Afterwards, we round the numbers to two decimals. | pd.options.display.float_format = '{:,}'.format
round(data_wb.head(),2) | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
Since the gdp is inconvenient to work with, we create a new variable gdp_in_bil showing the gdp in billions of US $ and add it to the dataset. We have a look at the table to check whether it worked out. | data_wb['gdp_in_bil'] = data_wb['gdp']/1000000000
round(data_wb.head(),2) #just to check | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We delete the variable gdp since we will continue working exclusively with the variable gdp_in_bil. | del data_wb['gdp']
round(data_wb.head(),2) #just to check | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We have a look at the shape of the dataset in order to get an overview of the observations and variables. | data_wb.shape | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We perform a summary statistics to get an overview of our dataset. | round(data_wb.describe(),2) | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
Detection of Missing Data We count the missing data: | data_wb.isnull().sum().sum() | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We have a look at how many observations each variable has: | data_wb.count() | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We search for the number of missing values of each variable. (Same step as before, only the other way around.) | data_wb.isnull().sum() | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We drop the literacy rate, because this variable has nearly no data. | data_wb.drop(['litr'], axis = 1, inplace = True) | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We search for the nine missing values of fertility rate. It seems like there is no data for the fertility rate for the year 2017. | round(data_wb.groupby('year').mean(),2) | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We look whether every country misses the data for the fertility rate for the year 2017. | round(data_wb.loc[data_wb['year'] == '2017', :].head(-1),2) | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We drop the year 2017. | I = data_wb['year'] == "2017"
data_wb.drop(data_wb[I].index, inplace = True) | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
Cleaned data set We perform a summary statistic of our cleaned dataset. | round(data_wb.describe(),2) | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
And we check the number of observations and variables. | data_wb.shape | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We check whether the dataset is balanced. | data_wb.count() | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
The data set is balanced. Data Analysis and Visualisations We use the average level of every variable for each single country.The overview shows that countries with a high gdp per capita have a low fertility rate. Countries with a high gdp per capita have a huge share of urban population. We can start to think about the relations between the variables. | round(data_wb.groupby('country').mean(),2) | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
Interactive plot Now, we want to make an interactive plot which displays the development of GDP per capita over time for the different countries. First, we import the necessary packages and tools: **Import the packages** we need. If we do not have the packages, we have to install them. Therefore, install:>`pip install matplotlib`>`pip install ipywidgets` | import matplotlib.pyplot as plt
%matplotlib inline
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
| _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
Then, we define the relevant variables in a way which simplifies the coding: | country=data_wb["country"]
year=data_wb["year"]
gdp_pC=data_wb["gdp_pC"]
| _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We create a function constructing a figure: | def interactive_figure(country, data_wb):
"""define an interactive figure that uses countries and the dataframe as inputs """
data_country = data_wb[data_wb.country == country]
year = data_country.year
gdp_pC = data_country.gdp_pC
fig = plt.figure(dpi=100)
ax = fig.add_subplot(1,1,1)
ax.plot(year, gdp_pC)
ax.set_xlabel("Years")
ax.set_ylabel("GDP per Capita")
plt.xticks(rotation=90)
plt.gca().invert_xaxis()
| _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We make it interactive with a drop down menu: | widgets.interact(interactive_figure,
year = widgets.fixed(year),
data_wb = widgets.fixed(data_wb),
country=widgets.Dropdown(description="Country", options=data_wb.country.unique()),
gdp_pC=widgets.fixed(gdp_pC)
); | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We can see that the overall trend for the selected countries is increasing GDP per capita. However, for the Western countries and Japan we can see the trace of the 2008 financial crisis. For Spain, one of the countries that suffered most from this crisis, the dip is particularly visible. It is also worth noticing that China fared better than most industrial nations during this crisis. This is partly due to China's closed nature, which made it less vulnerable to financial friction in the world economy. World Map After having a look at the first visualisations, we want to get an insight into the data by plotting it on a world map. This way we can easily compare and see whether countries in certain areas of the world have similar values in the variables we are interested in.First, we import the necessary package: **Import the package** we need. If we do not have the package, we have to install it. Therefore, install:>`pip install folium` | import folium | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
Our goal is to visualize the data on a world map using markers. In order to define the location of the markers, we add the coordinates of the countries. Therefore, we add the variable 'Lat' for latitude and 'Lon' for longitude of the respective country to each observation in our data set. | # country coordinates as (latitude, longitude)
coordinates = {'Brazil': (-14.2350, -51.9253),
               'China': (33.5449, 103.149),
               'Denmark': (56.2639, 9.5018),
               'Spain': (40.4637, -3.7492),
               'India': (20.5937, 78.9629),
               'Japan': (36.2048, 138.2529),
               'Nigeria': (9.0820, 8.6753),
               'Turkmenistan': (38.9697, 59.5563),
               'United States': (37.0902, -95.7129)}
# look up each observation's country and attach its latitude and longitude
data_wb['Lat'] = data_wb['country'].map(lambda c: coordinates[c][0])
data_wb['Lon'] = data_wb['country'].map(lambda c: coordinates[c][1])
round(data_wb.head(),4) #just to check | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
Now, we want to create the map. 1. We define the variables year (selectedyear) and variable (selectedvariable) we want to display. 2. We have to create an empty map. Since our countries are located all over the world, we have to display the whole world. 3. In order to run the loop later on, we create an overview of the data we are interested in based on the year and variable we defined in step 1. This overview is called year_overview. 4. Now, we run a for loop over every observation in our year_overview. In the loop, we: - create a marker on the map corresponding to the coordinates (location). - define the radius for the marker. It is important to adjust it depending on the variable chosen: - gdp_pC : 15 - urban_pop% : 8000 - frt : 200000 - gdp_in_bil : 150 - set the color to green. - decide on a filling for the circle. | # Definition of variables of interest
selectedyear = 2010
#select the year you are interested in
selectedvariable = 'gdp_pC'
##select the variable you are interested in
# Creation of an empty map
map = folium.Map(location=[0,0], tiles="Mapbox Bright", zoom_start=2)
#Creation of an overview data set displaying only the selected year
year_overview = data_wb.loc[data_wb['year']== str(selectedyear)]
# Run of the for loop in order to add a marker one by one on the map
for i in range(0,len(year_overview)):
folium.Circle(
location=[year_overview.iloc[i]['Lat'], year_overview.iloc[i]['Lon']],
radius=year_overview.iloc[i][selectedvariable]*15, #the smaller the original number, the higher the radius should be chosen
color='green',
fill=True
).add_to(map)
#calling the map
map | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
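The radius scaling factor above is hard-coded for gdp_pC. As an optional refinement — an addition to the original notebook, not part of it — the scaling factors listed in the steps above can be stored in a dict so the radius always matches the selected variable:
# suggested scaling factors per variable (taken from the list above)
radius_scale = {'gdp_pC': 15, 'urban_pop%': 8000, 'frt': 200000, 'gdp_in_bil': 150}
# inside folium.Circle, one could then use:
# radius=year_overview.iloc[i][selectedvariable]*radius_scale[selectedvariable]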
Looking at the gdp per capita in the year 2010, we can see at one glance that developed countries have a substantially higher gdp per capita than emerging and developing countries. Mapping has the advantage of getting an overview and possible correlation of locations at one glance. We save the map in the same folder as the file we are currently working on. | map.save('./map.py') | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We drop the variables for the coordinates since they are no longer needed. | data_wb.drop(['Lat','Lon'], axis = 1, inplace = True) | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
Fertility Rate per Country The average annual fertility rate gives an overview of the fertility rate per country and shows that Japan and Spain have the lowest fertility rates, while Nigeria has the highest. | ax = data_wb.groupby('country').frt.mean().plot(kind='bar')
ax.set_ylabel('Avg. annual fertility rate') | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
The following graph presents the annual growth rate of the fertility rate for each country. We observe that Denmark is the only country with a negative growth rate. The leading country is India with a growth rate of 0.020 over the years. Surprisingly, Nigeria and the US have almost the same growth rate. | def annual_growth(x):
x_last = x.values[-1]
x_first = x.values[0]
num_years = len(x)
growth_annualized = (x_last/x_first)**(1/num_years) - 1.0
return growth_annualized
ax = data_wb.groupby('country')['frt'].agg(annual_growth).plot(kind='bar')
ax.set_ylabel('Annual growth (fertility rate) from first to last year'); | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
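The annualized growth used above is g = (x_last/x_first)**(1/num_years) - 1. As a quick illustration with made-up numbers (not taken from the data): a rate moving from 2.0 to 2.2 over 27 observations grows by about 0.35% per year:
print((2.2/2.0)**(1/27) - 1) # approximately 0.0035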
We look at what kind of variables we have. Year should be a numeric variable for the next graph, but it is an object (string). | data_wb.dtypes | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We convert year into a float variable. | data_wb['year'] = data_wb.year.astype(float) | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We verify what we have done. | data_wb.dtypes | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
Fertility Rate per Country from 1990 until 2016 | data_wb = data_wb.set_index(["year", "country"])
#plot fertility rate over the years
data_wb.unstack('country')['frt'].plot() | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
The fertility rate declines continuously in most countries. An exception is Turkmenistan: in this country the fertility rate seems to oscillate. The US had a little peak in 2007, but since then the fertility rate has been declining. Correlation Table Before we proceed with a regression, we want to have a look at the correlations between the variables. This can be done with a heatmap: | import seaborn as sns
fig = plt.subplots(figsize = (10,10))
sns.set(font_scale=1.5)
sns.heatmap(data_wb.corr(),square = True,cbar=True,annot=True,annot_kws={'size': 10})
plt.show() | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
This gives a good indication of what to expect from the regression. In the following regression we are interested in the fertility rate, and we can see in this table that the fertility rate is negatively correlated with GDP, urban population and population in general (although the latter effect is small). Panel Regression We want to perform a regression with fertility rate as the dependent variable and gdp per capita, population and urban population as independent variables.**Import the packages** we need. If we do not have the packages, we have to install them. Therefore, install>`pip install linearmodels` | from linearmodels.panel import PooledOLS
from linearmodels.panel import RandomEffects
from linearmodels import PanelOLS
import statsmodels.api as sm | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
For year and country, check whether these variables are set as index. | print(data_wb.head()) | gdp_pC pop urban_pop% frt \
year country
2,016.0 Brazil 10,868.6534435352 207652865 86.042 1.726
2,015.0 Brazil 11,351.5657481703 205962108 85.77 1.74
2,014.0 Brazil 11,870.1484076345 204213133 85.492 1.753
2,013.0 Brazil 11,915.4170541095 202408632 85.209 1.765
2,012.0 Brazil 11,673.7705356922 200560983 84.923 1.777
gdp_in_bil
year country
2,016.0 Brazil 1,793.98904840929
2,015.0 Brazil 1,802.21437374132
2,014.0 Brazil 2,455.99362515937
2,013.0 Brazil 2,472.80691990167
2,012.0 Brazil 2,465.1886744150297
| MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
We can see that they are set as indexes. For the following regressions, we need "year" to be the second index for the regression to work. Therefore, we temporarily reset the index: | data_wb.reset_index(inplace = True )
print(data_wb.head())
data_wb = data_wb.set_index(["country","year"], append=False) | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
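A quick sanity check — an added line, not part of the original notebook — that the panel index is ordered the way linearmodels expects, entity first and time second:
print(data_wb.index.names) # expected: ['country', 'year']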
Pooled OLS-Regression For the first regression, we do a pooled-OLS. We have nine entities (countries) and 27 years. | exog_vars = ['gdp_pC', 'pop', 'urban_pop%']
exog = sm.add_constant(data_wb[exog_vars])
mod = PooledOLS(data_wb.frt, exog)
pooled_res = mod.fit()
print(pooled_res) | PooledOLS Estimation Summary
================================================================================
Dep. Variable: frt R-squared: 0.6796
Estimator: PooledOLS R-squared (Between): 0.7154
No. Observations: 243 R-squared (Within): -0.0943
Date: Thu, Apr 04 2019 R-squared (Overall): 0.6796
Time: 09:25:38 Log-likelihood -292.85
Cov. Estimator: Unadjusted
F-statistic: 168.98
Entities: 9 P-value 0.0000
Avg Obs: 27.000 Distribution: F(3,239)
Min Obs: 27.000
Max Obs: 27.000 F-statistic (robust): 168.98
P-value 0.0000
Time periods: 27 Distribution: F(3,239)
Avg Obs: 9.0000
Min Obs: 9.0000
Max Obs: 9.0000
Parameter Estimates
==============================================================================
Parameter Std. Err. T-stat P-value Lower CI Upper CI
------------------------------------------------------------------------------
const 7.2719 0.2561 28.391 0.0000 6.7674 7.7765
gdp_pC -2.143e-06 4.424e-06 -0.4843 0.6286 -1.086e-05 6.573e-06
pop -2.113e-09 1.417e-10 -14.914 0.0000 -2.392e-09 -1.834e-09
urban_pop% -0.0638 0.0045 -14.023 0.0000 -0.0727 -0.0548
==============================================================================
| MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
The results are questionable. For example, gdp per capita seems to have no effect on the fertility rate. Moreover, the effects of gdp per capita and population are implausibly small. Therefore, we have a look at our dependent variable. It seems that Python reads the variable correctly and the indexes are also correct. Therefore, we try to run another regression with the same data. Panel OLS-regression | data_wb.frt | _____no_output_____ | MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
Now, we run a Panel OLS regression, where we control for entity effects and time effects. | exog_vars = ['gdp_pC', 'pop', 'urban_pop%']
exog = sm.add_constant(data_wb[exog_vars])
mod = PanelOLS(data_wb.frt, exog, entity_effects=True, time_effects=True)
pooled_res = mod.fit()
print(pooled_res) | PanelOLS Estimation Summary
================================================================================
Dep. Variable: frt R-squared: 0.6726
Estimator: PanelOLS R-squared (Between): -5.3319
No. Observations: 243 R-squared (Within): -1.1795
Date: Thu, Apr 04 2019 R-squared (Overall): -5.1484
Time: 09:14:51 Log-likelihood 152.75
Cov. Estimator: Unadjusted
F-statistic: 140.39
Entities: 9 P-value 0.0000
Avg Obs: 27.000 Distribution: F(3,205)
Min Obs: 27.000
Max Obs: 27.000 F-statistic (robust): 140.39
P-value 0.0000
Time periods: 27 Distribution: F(3,205)
Avg Obs: 9.0000
Min Obs: 9.0000
Max Obs: 9.0000
Parameter Estimates
==============================================================================
Parameter Std. Err. T-stat P-value Lower CI Upper CI
------------------------------------------------------------------------------
const -0.3192 0.2853 -1.1191 0.2644 -0.8817 0.2432
gdp_pC 8.134e-05 5.22e-06 15.581 0.0000 7.105e-05 9.163e-05
pop -1.577e-09 2.282e-10 -6.9131 0.0000 -2.027e-09 -1.128e-09
urban_pop% 0.0266 0.0035 7.5169 0.0000 0.0196 0.0335
==============================================================================
F-test for Poolability: 230.04
P-value: 0.0000
Distribution: F(34,205)
Included effects: Entity, Time
| MIT | dataproject/dataProject.ipynb | NumEconCopenhagen/projects-2019-tba |
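The imports above also include RandomEffects, which the notebook never uses. For completeness, a random-effects specification could be fitted the same way — a sketch under the assumption that `exog` is still defined as above, not part of the original analysis:
re_mod = RandomEffects(data_wb.frt, exog)
re_res = re_mod.fit()
print(re_res)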
Auto Encoder This notebook was created by Camille-Amaury JUGE in order to better understand autoencoder principles and how they work (it follows the exercises proposed by Hadelin de Ponteves on Udemy: https://www.udemy.com/course/le-deep-learning-de-a-a-z/). Imports | import numpy as np
import pandas as pd
# pytorch
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim as optim
import torch.utils.data
from torch.autograd import Variable
import sys
import csv | _____no_output_____ | CNRI-Python | Exercises/Auto Encoder/Auto Encoder.ipynb | camilleAmaury/DeepLearningExercise |
Data preprocessing Same process as for the Boltzmann machine (see that notebook for more details). | df_movies = pd.read_csv("ml-1m\\movies.dat", sep="::", header=None, engine="python",
encoding="latin-1")
users = pd.read_csv("ml-1m\\users.dat", sep="::", header=None, engine="python",
encoding="latin-1")
ratings = pd.read_csv("ml-1m\\ratings.dat", sep="::", header=None, engine="python",
encoding="latin-1")
df_train = pd.read_csv("ml-100k\\u1.base", delimiter="\t", header=None)
df_test = pd.read_csv("ml-100k\\u1.test", delimiter="\t", header=None)
_users = list(set(np.concatenate((df_train[df_train.columns[0]].value_counts().index,
df_test[df_test.columns[0]].value_counts().index),
axis=0)))
_movies = list(set(np.concatenate((df_train[df_train.columns[1]].value_counts().index,
df_test[df_test.columns[1]].value_counts().index),
axis=0)))
def createMatrix(df, users, movies):
matrix = []
movies_nb = len(movies)
user_nb = len(users)
df_array = np.array(df, dtype="int")
for i,user in enumerate(users):
filtered_movies = df_array[df_array[:,0] == user, 1]
filtered_ratings = df_array[df_array[:,0] == user, 2]
ratings = np.zeros(movies_nb)
for j in range(len(filtered_movies)):
ratings[filtered_movies[j] - 1] = filtered_ratings[j]
matrix.append(ratings)
sys.stdout.write("\r Loading State : {} / {}".format(i+1,user_nb))
sys.stdout.flush()
return matrix
matrix_train = createMatrix(df_train, _users, _movies)
matrix_test = createMatrix(df_test, _users, _movies)
train = torch.FloatTensor(matrix_train)
test = torch.FloatTensor(matrix_test)
train.shape | _____no_output_____ | CNRI-Python | Exercises/Auto Encoder/Auto Encoder.ipynb | camilleAmaury/DeepLearningExercise |
Model | class SparseAutoEncoder(nn.Module):
def __init__(self, input_dim):
super(SparseAutoEncoder, self).__init__()
        # encoder layers (input -> 20 -> 10) followed by decoder layers (10 -> 20 -> input)
self.fully_connected_hidden_layer_1 = nn.Linear(input_dim, 20)
self.fully_connected_hidden_layer_2 = nn.Linear(20, 10)
self.fully_connected_hidden_layer_3 = nn.Linear(10, 20)
self.fully_connected_hidden_layer_4 = nn.Linear(20, input_dim)
self.activation = nn.Sigmoid()
self.optimizer = optim.RMSprop(self.parameters(), lr=0.01, weight_decay=0.5)
self.loss = nn.MSELoss()
def forward(self, X):
return self.fully_connected_hidden_layer_4(
self.activation(self.fully_connected_hidden_layer_3(
self.activation(self.fully_connected_hidden_layer_2(
self.activation(self.fully_connected_hidden_layer_1(X)))))))
def train_(self, X, epoch):
self.X_train = X
for i in range(epoch):
print("Epoch => {}/{}".format(i+1,epoch))
train_loss = 0
s = 0.
for j in range(self.X_train.shape[0]):
batch = Variable(self.X_train[j]).unsqueeze(0)
target = batch.clone()
if torch.sum(target.data > 0) > 0:
output = self(batch)
                    target.requires_grad = False # the reconstruction target must not receive gradients
output[target == 0] = 0
temp_loss = self.loss(output, target)
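                    # upscale the loss by (total movies / movies this user actually rated) so sparsely rated users weigh comparably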
mean_corrector = self.X_train.shape[1] / (float(torch.sum(target.data > 0)) + 1e-10)
temp_loss.backward()
train_loss += np.sqrt(temp_loss.item() * mean_corrector)
s+=1.
                    self.optimizer.step()
                    self.optimizer.zero_grad() # reset gradients so they do not accumulate across users
print(" => Loss : {}".format((train_loss/s)))
def test_(self, X):
test_loss = 0
s = 0.
sys.stdout.write("\r Processing")
sys.stdout.flush()
for j in range(self.X_train.shape[0]):
batch = Variable(self.X_train[j]).unsqueeze(0)
target = Variable(X[j]).unsqueeze(0)
if torch.sum(target.data > 0) > 0:
output = self(batch)
target.require_grad = False
output[target == 0] = 0
temp_loss = self.loss(output, target)
mean_corrector = self.X_train.shape[1] / (float(torch.sum(target.data > 0)) + 1e-10)
test_loss += np.sqrt(temp_loss.item() * mean_corrector)
s+=1.
sys.stdout.write("\r Test Set => Loss : {}".format((test_loss/s)))
sys.stdout.flush()
sae = SparseAutoEncoder(train.shape[1])
sae.train_(train, 20)
sae.test_(test) | Test Set => Loss : 1.0229144248873956 | CNRI-Python | Exercises/Auto Encoder/Auto Encoder.ipynb | camilleAmaury/DeepLearningExercise |
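Once trained, the model can also reconstruct the rating vector of a single user. An illustrative addition, not part of the original notebook (the index 0 is a hypothetical example):
user_id = 0
predicted_ratings = sae(Variable(train[user_id]).unsqueeze(0)) # predicted scores for every movie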
Sentiment Analysis with an RNNIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. >Using an RNN rather than a strictly feedforward network is more accurate since we can include information about the *sequence* of words. Here we'll use a dataset of movie reviews, accompanied by sentiment labels: positive or negative. Network ArchitectureThe architecture for this network is shown below.>**First, we'll pass in words to an embedding layer.** We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the Word2Vec lesson. You can actually train an embedding with the Skip-gram Word2Vec model and use those embeddings as input, here. However, it's good enough to just have an embedding layer and let the network learn a different embedding table on its own. *In this case, the embedding layer is for dimensionality reduction, rather than for learning semantic representations.*>**After input words are passed to an embedding layer, the new embeddings will be passed to LSTM cells.** The LSTM cells will add *recurrent* connections to the network and give us the ability to include information about the *sequence* of words in the movie review data. >**Finally, the LSTM outputs will go to a sigmoid output layer.** We're using a sigmoid function because positive and negative = 1 and 0, respectively, and a sigmoid will output predicted, sentiment values between 0-1. We don't care about the sigmoid outputs except for the **very last one**; we can ignore the rest. We'll calculate the loss by comparing the output at the last time step and the training label (pos or neg). --- Load in and visualize the data | import numpy as np
# read data from text files
with open('data/reviews.txt', 'r') as f:
reviews = f.read()
with open('data/labels.txt', 'r') as f:
labels = f.read()
print(reviews[:2000])
print()
print(labels[:20]) | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
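Before building the real thing, here is a minimal sketch of the architecture described above (embedding → LSTM → sigmoid). All names and sizes in it are illustrative assumptions, not the notebook's final implementation:
import torch
import torch.nn as nn

class SentimentRNNSketch(nn.Module):
    def __init__(self, vocab_size, embedding_dim=400, hidden_dim=256, n_layers=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim) # word ids -> dense vectors
        self.lstm = nn.LSTM(embedding_dim, hidden_dim, n_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, 1)                       # one sentiment score per step
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        embeds = self.embedding(x)      # (batch, seq_len, embedding_dim)
        lstm_out, _ = self.lstm(embeds) # (batch, seq_len, hidden_dim)
        out = self.sigmoid(self.fc(lstm_out))
        return out[:, -1]               # keep only the very last time step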
Data pre-processingThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.You can see an example of the reviews data above. Here are the processing steps, we'll want to take:>* We'll want to get rid of periods and extraneous punctuation.* Also, you might notice that the reviews are delimited with newline characters `\n`. To deal with those, I'm going to split the text into each review using `\n` as the delimiter. * Then I can combined all the reviews back together into one big string.First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words. | from string import punctuation
print(punctuation)
# get rid of punctuation
reviews = reviews.lower() # lowercase, standardize
all_text = ''.join([c for c in reviews if c not in punctuation])
# split by new lines and spaces
reviews_split = all_text.split('\n')
all_text = ' '.join(reviews_split)
# create a list of words
words = all_text.split()
words[:30] | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
Encoding the wordsThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.> **Exercise:** Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers **start at 1, not 0**.> Also, convert the reviews to integers and store the reviews in a new list called `reviews_ints`. | # feel free to use this import
from collections import Counter
## Build a dictionary that maps words to integers
word_counts = Counter(words)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
vocab_to_int = {word: i for i, word in enumerate(sorted_vocab, 1)} # start at 1; 0 is reserved for padding
## use the dict to tokenize each review in reviews_split
## store the tokenized reviews in reviews_ints
reviews_ints = []
for review in reviews_split:
    reviews_ints.append([vocab_to_int[word] for word in review.split()])
| _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
**Test your code**As a test that you've implemented the dictionary correctly, print out the number of unique words in your vocabulary and the contents of the first, tokenized review. | # stats about vocabulary
print('Unique words: ', len((vocab_to_int))) # should ~ 74000+
print()
# print tokens in first review
print('Tokenized review: \n', reviews_ints[:1]) | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
Encoding the labelsOur labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.> **Exercise:** Convert labels from `positive` and `negative` to 1 and 0, respectively, and place those in a new list, `encoded_labels`. | # 1=positive, 0=negative label conversion
encoded_labels = np.array([1 if label == 'positive' else 0 for label in labels.split('\n')]) | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
Removing OutliersAs an additional pre-processing step, we want to make sure that our reviews are in good shape for standard processing. That is, our network will expect a standard input text size, and so, we'll want to shape our reviews into a specific length. We'll approach this task in two main steps:1. Getting rid of extremely long or short reviews; the outliers2. Padding/truncating the remaining data so that we have reviews of the same length.Before we pad our review text, we should check for reviews of extremely short or long lengths; outliers that may mess with our training. | # outlier review stats
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens))) | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. We'll have to remove any super short reviews and truncate super long reviews. This removes outliers and should allow our model to train more efficiently.> **Exercise:** First, remove *any* reviews with zero length from the `reviews_ints` list and their corresponding label in `encoded_labels`. | print('Number of reviews before removing outliers: ', len(reviews_ints))
## remove any reviews/labels with zero length from the reviews_ints list.
non_zero_idx = [i for i, review in enumerate(reviews_ints) if len(review) != 0]
reviews_ints = [reviews_ints[i] for i in non_zero_idx]
encoded_labels = np.array([encoded_labels[i] for i in non_zero_idx])
print('Number of reviews after removing outliers: ', len(reviews_ints)) | _____no_output_____ | MIT | sentiment-rnn/Sentiment_RNN_Exercise.ipynb | MiniMarvin/pytorch-v2 |
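The padding/truncation step described above can be written as a small helper. This is a sketch of the usual approach — left-pad short reviews with zeros and truncate long ones — offered as a hedged suggestion, not the notebook's official solution:
def pad_features(reviews_ints, seq_length):
    # each row is left-padded with 0s or truncated so that it has exactly seq_length entries
    features = np.zeros((len(reviews_ints), seq_length), dtype=int)
    for i, row in enumerate(reviews_ints):
        features[i, -len(row):] = np.array(row)[:seq_length]
    return features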